Chris Choy
You can feed the data directly into lib/datasets/preprocessing/stanford.py after updating the paths in the file. Are you having trouble running the code?
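For reference, a minimal sketch of the kind of edit meant here; the variable names are illustrative (check the actual names in lib/datasets/preprocessing/stanford.py) and the paths are placeholders:

```python
# Illustrative only: the actual path variable names in
# lib/datasets/preprocessing/stanford.py may differ. The idea is to point the
# input path at the raw Stanford (S3DIS) dump and the output path at an empty
# directory for the preprocessed point clouds.
STANFORD_3D_IN_PATH = '/path/to/Stanford3dDataset_v1.2/'      # raw dataset root
STANFORD_3D_OUT_PATH = '/path/to/stanford_preprocessed/'      # output directory
```

After editing the paths, run the script itself (e.g. `python lib/datasets/preprocessing/stanford.py`) and point the training config at the preprocessed output directory.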
Hi, these are the weights I trained on the ScanNet v2 official train split with batch_size 10 for 120k iterations. (This did not use all the augmentations that I added on...
Ah sorry, there is a problem with the weights. If you load them, you will see that the final iteration they were trained to is 27k, not 120k. The training died at 27k and...
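If you want to verify this yourself, you can load the checkpoint and look at its stored iteration counter. A minimal sketch, assuming a standard PyTorch checkpoint dict with keys like 'iteration' and 'state_dict' (the filename and key names are assumptions):

```python
import torch

# Load the checkpoint on CPU and inspect its metadata.
ckpt = torch.load('weights.pth', map_location='cpu')   # hypothetical filename
print(ckpt.keys())                                     # e.g. dict_keys(['iteration', 'state_dict', ...])
print('trained iterations:', ckpt.get('iteration'))    # expect ~27k for this file, not 120k
```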
Hmm, it looks pretty normal to me with indoor.py. I am pretty sure you did not load the weights correctly. Download the indoor.py that works: [indoor.py](https://gist.github.com/chrischoy/b68c426362ae28a96ad22183d0e2b174) ``` python indoor.py...
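A frequent cause of incorrectly loaded weights is passing the whole checkpoint dict to `load_state_dict` instead of its 'state_dict' entry, or silently ignoring key mismatches. A minimal sketch of the check, with a trivial placeholder standing in for the network that indoor.py actually builds (the filename and key names are assumptions):

```python
import torch
import torch.nn as nn

# Placeholder: replace with the network constructed in indoor.py.
model = nn.Sequential(nn.Linear(3, 20))

ckpt = torch.load('weights.pth', map_location='cpu')    # hypothetical filename
state = ckpt.get('state_dict', ckpt)                    # unwrap if the weights are nested
missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys:', len(missing), '| unexpected keys:', len(unexpected))
# Both counts should be (near) zero when the weights actually match the model.
model.eval()
```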
After 120k iterations on the training set only, without hue-saturation data augmentation, I get Score: 89.145, mIoU: 72.219, mAP: 75.612, mAcc: 80.402, without any rotation averaging. The weights are available at...
Sorry for the late reply. I haven't measured the entire training time, as the server kicks me out after SLURM's maximum wall time. However, each iteration takes...
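For what it's worth, per-iteration time is easy to log yourself; a minimal sketch, assuming a standard loop over a data loader (the loop structure is a placeholder, not the trainer in this repo):

```python
import time

def timed(iterable):
    """Wrap a data loader to print wall-clock seconds per iteration."""
    t0 = time.time()
    for i, batch in enumerate(iterable):
        yield i, batch
        t1 = time.time()
        print(f'iter {i}: {t1 - t0:.3f} s')
        t0 = t1

# usage: for i, batch in timed(data_loader): ...run one training step on batch...
```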
I posted an entire training log at https://github.com/chrischoy/SpatioTemporalSegmentation/issues/8. In sum, training started 09/11 14:59:47 and ended 09/16 15:34:33 for 60k iterations, with ScanNet v2 validation every 1k iterations, which takes about 7 min each...
I have not used these datasets, nor do I have pretrained weights for them. If the point cloud density differs between training and test, it is unlikely you would get state-of-the-art performance.
It is possible that the JSON file is empty, or that the process cannot load it properly and the network fails. Try writing an isolated test function like https://github.com/chrischoy/3D-R2N2/blob/master/lib/data_process.py#L216 and...
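A minimal sketch of such an isolated test, assuming the annotation is plain JSON (the path is a placeholder; adjust the validity checks to your data):

```python
import json
import os

def test_json_file(json_path):
    """Isolated check that a single JSON file exists, parses, and is non-empty."""
    assert os.path.isfile(json_path), f'missing file: {json_path}'
    with open(json_path) as f:
        data = json.load(f)          # raises json.JSONDecodeError if the file is corrupt
    assert data, f'empty JSON content: {json_path}'
    print(json_path, 'OK,', len(data), 'top-level entries')
    return data

if __name__ == '__main__':
    test_json_file('/path/to/annotation.json')   # hypothetical path
```

Running this outside the data loader quickly tells you whether the failure is in the file itself or in the training pipeline.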
Hi, this issue was resolved in #15. Please pull from master again.