panoptic-toolbox
I cannot download the dataset.
```
curl: (28) Failed to connect to domedb.perception.cs.cmu.edu port 80: Timed out
rm: cannot remove 'kinectVideos/kinect_50_03.mp4': No such file or directory
rm: cannot remove 'kinectVideos/kinect_50_04.mp4': No such file or directory
```
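While the server is intermittently unreachable, a retry-friendly curl invocation is one possible workaround: `--retry` re-attempts a flaky transfer instead of failing on a single timeout. This is only a sketch; the demo below fetches a local `file://` URL (with hypothetical filenames) so it runs without the CMU server, and you would substitute the real `domedb.perception.cs.cmu.edu` URL in practice.

```shell
# Hypothetical workaround sketch: curl's built-in retry flags keep
# re-attempting a transfer rather than aborting on one timeout.
# The stand-in file and file:// URL exist only so this runs offline.
printf 'stand-in video bytes\n' > /tmp/demo_kinect.mp4
curl --silent --retry 5 --retry-delay 1 \
     -o /tmp/fetched_kinect.mp4 "file:///tmp/demo_kinect.mp4"
cmp /tmp/demo_kinect.mp4 /tmp/fetched_kinect.mp4 && echo "download verified"
```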
Is it possible to mirror the data on a stable open data service like https://zenodo.org/ ?
Server is back. Can you retry?
@jugdemon Yeah, good point. The challenge is that our dataset is quite big, and I am not sure whether it can be hosted elsewhere.
The server is back and I can access the files. Thank you!
@jhugestar Zenodo is a free service offered by CERN. They have, for all intents and purposes, infinite storage (mainly because their particle accelerators produce so much data that our dataset is minute by comparison). While a standard user can only upload 50 GB in a single submission, you can contact them to make larger submissions if required (https://help.zenodo.org/). They guarantee to host the data for as long as CERN runs, and they even provide a versioned DOI (you can also give your GitHub repo a DOI through them, if that is of interest).
I don't know how expensive it would be for you to rearrange the data, but in theory you could give every scene its own DOI and store it as a separate file on Zenodo, or split the dataset into chunks of 50 GB.
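The split-into-chunks idea above can be sketched with standard coreutils. All filenames here are hypothetical, and the demo uses a tiny stand-in file with small parts so it runs quickly; the real archives would be split with `-b 50G` to fit under Zenodo's per-submission limit.

```shell
# Sketch of split-and-reassemble (hypothetical filenames).
# A 1 MB stand-in file and 300 KB parts keep the demo fast.
dd if=/dev/urandom of=/tmp/demo_scene.tar bs=1024 count=1024 2>/dev/null
split -b 300K /tmp/demo_scene.tar /tmp/demo_scene.tar.part_
# Glob expansion is lexically sorted, so parts concatenate in order.
cat /tmp/demo_scene.tar.part_* > /tmp/reassembled.tar
cmp /tmp/demo_scene.tar /tmp/reassembled.tar && echo "parts reassemble cleanly"
```

A checksum file published alongside the parts would let downloaders verify the reassembled archive the same way.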
I'd be happy to chat more about Zenodo if you are interested.
P.S. That was me not noticing that I was logged in with another account.
I cannot download the dataset today. I have been trying for three hours. This site can't be reached.