dsb-2017
How to run evaluation/prediction after training?
Is there a script in the current repository that does this? Ideally, it would be nice if index.py generated three CSV files: train, validation, and test sets. That way we could fine-tune hyperparameters on the validation set and get an idea of final performance on the test set. Could you please advise how to do this, especially the prediction part?
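For reference, here is a minimal sketch of the kind of split I have in mind, assuming one file or directory per scan under the data folder and a single `id` column; both of these are assumptions, not what index.py currently does:

```python
# Hypothetical sketch: split sample IDs into train/validation/test CSVs.
# The directory layout and the "id" column name are assumptions.
import csv
import os
import random

def write_split(sample_dir, out_dir, val_frac=0.1, test_frac=0.1, seed=0):
    ids = sorted(os.listdir(sample_dir))   # one entry per scan (assumed)
    random.Random(seed).shuffle(ids)

    n_val = int(len(ids) * val_frac)
    n_test = int(len(ids) * test_frac)
    splits = {
        "test": ids[:n_test],
        "validation": ids[n_test:n_test + n_val],
        "train": ids[n_test + n_val:],
    }

    for name, members in splits.items():
        with open(os.path.join(out_dir, f"{name}.csv"), "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id"])         # assumed column name
            writer.writerows([[m] for m in members])

# Example usage with the data directory from run.sh:
write_split("dsb-2017/vids/", ".")
```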
The way this code is written, the evaluation is done by picking 128 mostly random chunks out of each sample in the validation set. If you are trying to classify an entire sample as positive/negative, you will need to slide a 3D window over the entire sample, possibly with overlapping strides. This code doesn't include that functionality.
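Roughly, the sliding-window evaluation would look something like the sketch below. The chunk size, stride, and the max-over-chunks aggregation rule are assumptions, and `predict_chunk` stands in for whatever per-chunk classifier the training code produces:

```python
# Minimal sketch of overlapping sliding-window inference over a full 3D volume.
# chunk=64, stride=32 and the max aggregation are assumptions, not repo defaults.
import numpy as np

def classify_volume(volume, predict_chunk, chunk=64, stride=32):
    """Slide an overlapping 3D window over `volume` and return the max chunk score."""
    scores = []
    zs, ys, xs = volume.shape
    for z in range(0, max(zs - chunk, 0) + 1, stride):
        for y in range(0, max(ys - chunk, 0) + 1, stride):
            for x in range(0, max(xs - chunk, 0) + 1, stride):
                window = volume[z:z + chunk, y:y + chunk, x:x + chunk]
                scores.append(predict_chunk(window))
    # A single high-scoring chunk is enough to flag the sample (assumed rule).
    return max(scores) if scores else 0.0
```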
Thanks for the information! On a different note, how do I submit a training job to a GPU queue? Is there anything I should change in the following commands?
in run.sh:
python ./run.py -e 2 -w dsb-2017/vids/ -r 0 -v -eval 1 -z 16 -s model.pkl
in grid_run.sh:
qsub -V -b yes -cwd -l h_vmem=20G -l gpu=true,gpu.num=1 -N dsb_train -o gpu_train.log -j yes train.sh
When I train on only one subset out of the 10 in LUNA16, one epoch seems to take ~10 hrs on a CPU queue. Is this reasonable? How much speedup could I expect on a GPU? Please advise. Thank you!