Abhinav Kumar | अभिनव कुमार

14 comments by Abhinav Kumar | अभिनव कुमार

Hi @makaveli10 Thank you for showing interest in our work. Here are a few things I would try: - Please try training on the KITTI dataset first and see if...

This is pretty strange. > I have used the custom dataset with mmdetection3d and it gives expected results. Our DEVIANT codebase is essentially a fork of the [GUPNet](https://github.com/SuperMHP/GUPNet) codebase, which is not...

> I met the same issue when training KITTI... - Since I am not able to reproduce your issue on our servers, could you paste the training log here? -...

> I can successfully run the inference code. That is great. > BTW, I only modified the training and validation data in the folder of ImageSets. I also see that...

> Thanks much for your reply. Did your problem get solved? In other words, are you able to train your model on a different KITTI split?

Hi @Zillurcuet > I downloaded the pre-trained weights. I would like to use the KITTI weights to run on my raw video/webcam and get the output with 3D box and...

I just found out the answer to my question. Both **Mean** and **Mean@0.1** seem to be very closely related. If we look into the file [mpii.py](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/blob/master/lib/dataset/mpii.py), the **Mean** is indeed...
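Roughly, the relation can be sketched as follows. This is a minimal NumPy sketch of the standard PCKh-style computation, not the exact `mpii.py` code; the `pckh` helper and the array shapes are hypothetical:

```python
# Sketch of how MPII-style "Mean" and "Mean@0.1" relate: both average the
# per-joint PCKh values, only the distance threshold differs (0.5 vs 0.1).
import numpy as np

def pckh(pred, gt, headsize, threshold):
    """pred, gt: (N, J, 2) joint coordinates; headsize: (N,) per-image normalizers."""
    dist = np.linalg.norm(pred - gt, axis=-1) / headsize[:, None]  # (N, J)
    per_joint = (dist <= threshold).mean(axis=0)                   # PCKh per joint
    return per_joint.mean()                                        # averaged over joints

# Hypothetical example data
rng = np.random.default_rng(0)
pred = rng.normal(size=(8, 16, 2))
gt = pred + 0.05 * rng.normal(size=(8, 16, 2))
headsize = np.ones(8)

mean_05 = pckh(pred, gt, headsize, threshold=0.5)  # reported as "Mean"
mean_01 = pckh(pred, gt, headsize, threshold=0.1)  # reported as "Mean@0.1"
print(mean_05, mean_01)
```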

Hi @JacobKan > I don't have v1.0-trainval#number_blobs_camera.tgz and v1.0-trainval01_blobs_lidar.tgz and many other directories in this .sh file. These files are available from the [nuscenes website](https://www.nuscenes.org/nuscenes/). Here are the steps: -...
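For reference, here is a hypothetical Python sketch of unpacking the downloaded blobs into one data root before running the conversion script; the archive names, download folder, and `data_root` location are assumptions, so adjust them to your setup:

```python
# Hypothetical sketch: extract the downloaded nuScenes .tgz archives into a
# single data root. Paths below are assumptions, not verified repo paths.
import tarfile
from pathlib import Path

data_root = Path.home() / "DEVIANT" / "data" / "nusc_kitti" / "nuscenes"
data_root.mkdir(parents=True, exist_ok=True)

for archive in sorted((Path.home() / "Downloads").glob("v1.0-trainval*_blobs_*.tgz")):
    print(f"Extracting {archive.name} ...")
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=data_root)  # the blobs share the samples/sweeps layout
```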

> Hi, For the paths in [convert_nuscenes_to_kitti_format_and_evaluate.sh](https://github.com/abhi1kumar/DEVIANT/blob/main/data/nusc_kitti/convert_nuscenes_to_kitti_format_and_evaluate.sh), do I need to change the `/home/abhinavkumar` to my own $HOME path? Yes, you have to replace `/home/abhinavkumar` with your own `$HOME`. >...
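If you prefer not to edit the script by hand, here is a small sketch of doing the substitution programmatically; the checkout location assumed in `script` is hypothetical:

```python
# Sketch: rewrite the hard-coded /home/abhinavkumar prefix in the conversion
# script with your own home directory. Run from the repository root (assumed).
import os
from pathlib import Path

script = Path("data/nusc_kitti/convert_nuscenes_to_kitti_format_and_evaluate.sh")
text = script.read_text()
script.write_text(text.replace("/home/abhinavkumar", os.path.expanduser("~")))
```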

> Hi, yesterday I trained the model and it works now. The reason is batch_size needs to be divisible by the number of training sets, otherwise it will calculate loss...
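For context, a minimal PyTorch sketch of the divisibility issue: when the number of training samples is not a multiple of `batch_size`, the final batch comes out smaller, and code that assumes full batches can miscompute the loss. Dropping the incomplete last batch is one common workaround; DEVIANT's actual dataloader configuration may differ:

```python
# Sketch of the batch-size divisibility issue with a toy dataset of 103 samples.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(103, 3), torch.randint(0, 2, (103,)))
loader = DataLoader(dataset, batch_size=8, shuffle=True, drop_last=True)

for x, y in loader:
    # With drop_last=True every batch is full; the 7 leftover samples are skipped,
    # so per-batch loss averaging never sees a partial batch.
    assert x.shape[0] == 8
```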