BuildingDetectors_Round2
Documentation
Hi,
Is there more documentation on how to run this? I'm having trouble with training from scratch. Also, is "kohei-solution-20170612_4.zip" available somewhere?
Thanks
Hello,
The software code associated with kohei-solution-20170612_4.zip is in this directory.
https://github.com/SpaceNetChallenge/BuildingDetectors_Round2/tree/master/1-XD_XD
The zip file isn't currently available for download, but the instructions there can be followed to reproduce it.
This will require retraining.
Thank you, Dave
Dave,
Yeah, I've tried following the training steps from the "Instruction Manual" document, but I get errors like FileNotFoundError: File b'/data/working/models/v9s/AOI_5_Khartoum_val_evalhist_th.csv' does not exist
as well as for the other files in the /data/working dir. Isn't training supposed to create those files?
Also, am I right to assume that I'm supposed to mount a "data" directory like the one below into the Docker container?

```
/data
├── test
│   ├── AOI_2_Vegas_Test
│   ├── AOI_3_Paris_Test
│   ├── AOI_4_Shanghai_Test
│   └── AOI_5_Khartoum_Test
└── train
    ├── AOI_2_Vegas_Train
    ├── AOI_3_Paris_Train
    ├── AOI_4_Shanghai_Train
    └── AOI_5_Khartoum_Train
```
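For context, this is roughly how I created the container in the first place; the image name and host path are placeholders on my end, not something taken from the instructions:

```bash
# Mount the host directory holding train/ and test/ as /data inside the container
sudo nvidia-docker run -it --name kohei-container \
    -v /path/to/spacenet-data:/data \
    kohei-image /bin/bash
```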
Thanks for the help
Hello,
What was the training command?
The working directory has large files that are created from the dataset. They are too large to host on Github. The training process is supposed to recreate them.
Are there any other errors above the FileNotFoundError? Usually this happens because an error occurred earlier in the data processing steps.
Thank you, Dave
Dave,
I followed these steps:
- Run container (container name: kohei-container)

```
$ sudo nvidia-docker start -a kohei-container -i
```

- Training

```
root@xxxxxxxx:~$ ./train.sh \
    /data/train/AOI_2_Vegas_Train \
    /data/train/AOI_3_Paris_Train \
    /data/train/AOI_4_Shanghai_Train \
    /data/train/AOI_5_Khartoum_Train
```
Also, here is more of the output I got:
```
>>>>>>>>>> v17.py
2017-11-28 18:31:43,247 INFO Evaluate fscore on validation set: AOI_5_Khartoum
2017-11-28 18:31:43,247 INFO import modules
Using Theano backend.
WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be removed in the next release (v0.10). Please switch to the gpuarray backend. You can get more information about how to switch at this URL: https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
Using gpu device 0: Tesla K80 (CNMeM is disabled, cuDNN 5110)
2017-11-28 18:31:45,346 INFO Prediction phase
Traceback (most recent call last):
  File "v17.py", line 623, in
```
Is that the entire output?
v17.py is one of the last things to be printed. '/data/working/models/v9s/AOI_5_Khartoum_val_evalhist_th.csv' should be produced in the preprocessing steps.
Usually there are errors higher up in the output that indicate the real problem.
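If it helps, you can see what (if anything) preprocessing managed to write by listing the working directory; the exact layout below is just my guess based on the path in your error:

```bash
# Check what the preprocessing steps actually produced
ls -R /data/working/
ls /data/working/models/v9s/
```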
Thank you, Dave
Here's the first part:
```
root@*****:~# ./train.sh \
    /data/train/AOI_2_Vegas_Train \
    /data/train/AOI_3_Paris_Train \
    /data/train/AOI_4_Shanghai_Train \
    /data/train/AOI_5_Khartoum_Train
CLEAN UP
rm -rf /data/working
PREPROCESSING STEP
python v5_im.py preproc_train /data/train/AOI_2_Vegas_Train
2017-11-28 18:29:49,663 INFO Preproc for training on AOI_2_Vegas
2017-11-28 18:29:49,663 INFO Generate IMAGELIST csv
Traceback (most recent call last):
  File "v5_im.py", line 853, in
    cli()
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "v5_im.py", line 745, in preproc_train
    prep_valtrain_valtest_imagelist(area_id)
  File "v5_im.py", line 568, in prep_valtrain_valtest_imagelist
    df = _load_train_summary_data(area_id)
  File "v5_im.py", line 502, in _load_train_summary_data
    df = pd.read_csv(fn)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/pandas/io/parsers.py", line 646, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/pandas/io/parsers.py", line 389, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/pandas/io/parsers.py", line 730, in __init__
    self._make_engine(self.engine)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/pandas/io/parsers.py", line 923, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/pandas/io/parsers.py", line 1390, in __init__
    self._reader = _parser.TextReader(src, **kwds)
  File "pandas/parser.pyx", line 373, in pandas.parser.TextReader.__cinit__ (pandas/parser.c:4184)
  File "pandas/parser.pyx", line 667, in pandas.parser.TextReader._setup_parser_source (pandas/parser.c:8449)
FileNotFoundError: File b'/data/train/AOI_2_Vegas_Train/summaryData/AOI_2_Vegas_Train_Building_Solutions.csv' does not exist
python v12_im.py preproc_train /data/train/AOI_2_Vegas_Train
2017-11-28 18:29:50,556 INFO Preproc for training on AOI_2_Vegas
Traceback (most recent call last):
  File "v12_im.py", line 629, in
    cli()
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/envs/py35/lib/python3.5/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "v12_im.py", line 564, in preproc_train
    prefix=prefix)).exists()
AssertionError
python v16.py preproc_train /data/train/AOI_2_Vegas_Train
Using Theano backend.
WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be removed in the next release (v0.10). Please switch to the gpuarray backend. You can get more information about how to switch at this URL: https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
Using gpu device 0: Tesla K80 (CNMeM is disabled, cuDNN 5110)
2017-11-28 18:30:00,984 INFO Serialize OSM subset
2017-11-28 18:30:00,984 INFO Loading raster...
Traceback (most recent call last):
  File "v16.py", line 1772, in
TRAINING v9s model
```
Thanks
OK, it looks like it is having a problem reading the labels CSV with pandas.
Do you have a CSV file in summaryData? You can check whether it's located correctly.
Do you have a file at /data/train/AOI_2_Vegas_Train/summaryData?
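Something like this will show whether the labels are where the scripts expect them; the expected file name comes straight from the error in your output:

```bash
# The preprocessing scripts look for the building-footprint labels here
ls /data/train/AOI_2_Vegas_Train/summaryData/
# Expected to contain AOI_2_Vegas_Train_Building_Solutions.csv
```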
I don't have any CSV files, nor do I have the summaryData subdirs in the AOI folders.
Hello,
The CSV files stored in summaryData were used as the training labels for the data. It looks like you may have to re-download the data. The solutions in this repository are designed specifically to use the SpaceNet data as packaged for the TopCoder competition.
See [SpaceNet Building Detector Round 2 on TopCoder](https://community.topcoder.com/longcontest/?module=ViewProblemStatement&rd=16892&pm=14551) for more information about the contest and how to use the data.
There are instructions on how to download the data on that website.
For even more information on accessing the data from AWS, please visit https://spacenetchallenge.github.io/
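If it helps, downloading from the public SpaceNet bucket looks roughly like this. The bucket is requester-pays, and the exact prefix for the Round 2 building packages may differ from what I show, so treat the paths as illustrative and check the SpaceNet site for the current layout:

```bash
# Browse the SpaceNet bucket (requester-pays: your AWS account is billed for the transfer)
aws s3 ls s3://spacenet-dataset/ --request-payer requester

# Example: copy one training package locally (the prefix here is a placeholder)
aws s3 cp s3://spacenet-dataset/<round2-prefix>/AOI_2_Vegas_Train.tar.gz . --request-payer requester
```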
Thank you, Dave
Dave,
Thanks, I will try this
> The zip file isn't currently available for download, but the instructions there can be followed to reproduce it.
Rather than posting a large file to GitHub (I assume file size is the issue), could you put it on S3? This would help avoid the relatively large cost of downloading the entire dataset and training from scratch. It might also fix #5 for many users who just want to replicate the results.
Hello @eong93, how did you get the OSM data? Could you share how, if it's convenient?
Still wondering how one could get the OSM data for training... the link to it is broken now.
Found this website that also hosts OSM data; I don't know whether it'll be helpful: https://www.interline.io/osm/extracts/
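For what it's worth, if you grab a regional extract from a site like that, something along these lines should clip it down to a single AOI. The osmium tool, the input file name, and the placeholder bounding box are my own suggestion, not part of the original pipeline:

```bash
# Clip a larger OSM extract down to the area covered by one AOI
# (replace the bbox with the actual lon/lat bounds of the AOI: left,bottom,right,top)
osmium extract --bbox <left>,<bottom>,<right>,<top> nevada-latest.osm.pbf -o AOI_2_Vegas.osm.pbf
```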