Train the model with a custom dataset (different image size)
I'd like to train the M3D model on my custom dataset. I rewrote imdb_util.py and it generates the imdb correctly. However, the images in my dataset (W=3384, H=2710) are larger than the KITTI images. As a first step, I preprocess each image by cropping off the top region, which is a part I can ignore; the image then becomes W=3384, H=1855. I also changed the config file to use the parameters below:
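One thing to watch when cropping the top of the images: the 2D boxes and the camera projection matrix must be shifted by the same amount, or the 3D targets will no longer match the pixels. Here is a minimal sketch of that bookkeeping; the label/calibration names (`y1`, `y2`, a KITTI-style 3x4 `P2`) are assumptions about the dataset format, not M3D-RPN's actual API:

```python
import numpy as np

def crop_top(image, labels, p2, crop_px):
    """Remove the top crop_px rows and shift annotations/calibration to match.

    image:  HxWxC numpy array
    labels: list of dicts with 'y1'/'y2' (2D box top/bottom, pixels)
    p2:     3x4 camera projection matrix (KITTI-style)
    """
    cropped = image[crop_px:, :, :]

    new_labels = []
    for lb in labels:
        lb = dict(lb)
        lb['y1'] = max(lb['y1'] - crop_px, 0.0)
        lb['y2'] = lb['y2'] - crop_px
        if lb['y2'] > 0:  # drop boxes that fall entirely in the removed strip
            new_labels.append(lb)

    # shifting image rows by -crop_px is a left-multiplication of P2,
    # which subtracts crop_px times the homogeneous row from the v-row
    p2 = p2.copy()
    p2[1, :] -= crop_px * p2[2, :]
    return cropped, new_labels, p2
```

Without the `P2` adjustment, reprojected 3D centers land `crop_px` pixels too low, which alone can make the 3D regression diverge.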
```python
conf.test_scale = 1344
conf.crop_size = [1344, 3200]
```
However, training runs out of CUDA memory; it seems this size is too big for my GPU.
Then I reduced the parameters, e.g. conf.test_scale = 1344 and conf.crop_size = [832, 2048],
but I got acc (bg: 1.00, fg: 0.00, iou: nan), loss (bbox_3d: inf, cls: 2000.0002, iou: nan), misc (ry: inf, z: inf), dt: 1.70, eta: 20.7h. Is this because the anchor sizes, whose initialization depends on crop_size, no longer fit my data?
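One cheap thing to rule out: if the crop size is not a multiple of the backbone's downsampling factor, the feature map and anchor grid can end up misaligned with the image. A small helper to snap a requested crop to the stride (stride=32 here is an assumption; check the actual downsampling of the backbone you use):

```python
def snap_crop(h, w, stride=32):
    """Round a requested crop size (h, w) down to multiples of the
    network stride, so the anchor grid tiles the image exactly.
    The default stride=32 is an assumption, not M3D-RPN's value."""
    return (h // stride) * stride, (w // stride) * stride
```

For example, `snap_crop(1855, 3384)` gives `(1824, 3360)`, while `(832, 2048)` is already stride-aligned and passes through unchanged.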
So what should I do to train on my dataset correctly without upgrading my GPU?
I'd really appreciate it if you could do me a favor!
Hi @kaixinbear I am also having the same problem as you are. I have yet to have a solution, but if you have solved this let me know!
I have also encountered this problem. I am using the Ko-per Intersection dataset, with image size 656x494.
Hello, I also faced the same problem (nan in the loss). I created the dataset from CARLA using the KITTI format, but training reports:

```
bbox_means: [[0.002, 0.003, 0.163, -0.218, 0.034, -0.025, 2.312, nan, inf, inf, -0.234]]
bbox_stds:  [[0.131, 0.106, 0.211, 0.203, 0.136, 0.108, 5.789, nan, nan, nan, 1.777]]
iter: 250, acc (bg: 0.16, fg: 0.43, iou: 0.59), loss (bbox_3d: nan, cls: 2.1787, iou: 0.5432), misc (ry: 1.80, z: 3.67), dt: 0.63, eta: 8.7h
```
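The nan/inf columns in bbox_means/bbox_stds suggest that some fields in the generated label files are themselves nan/inf, or degenerate (e.g. zero box dimensions, whose log-based regression targets blow up), and those values then poison the normalized 3D targets for every sample. A quick sanity check over the label files, assuming the field layout of the KITTI devkit (class name followed by 14 floats):

```python
import math

def find_bad_labels(label_lines):
    """Flag KITTI-format label lines whose numeric fields would poison
    the bbox statistics: any nan/inf value, or non-positive 3D
    dimensions (h, w, l are the 8th-10th numeric fields)."""
    bad = []
    for i, line in enumerate(label_lines):
        parts = line.split()
        vals = [float(v) for v in parts[1:15]]
        if any(math.isnan(v) or math.isinf(v) for v in vals):
            bad.append((i, 'nan/inf field'))
        elif min(vals[7:10]) <= 0:
            bad.append((i, 'non-positive dimension'))
    return bad
```

Running this over every generated label file and either fixing or dropping the flagged objects before the imdb/statistics step should restore finite bbox_means and bbox_stds.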