Tae Ha "Jeff" Park
Hello, Thank you for pointing it out. You are correct that it is an error. The CSV files that I used to run the experiments have correct bounding box labels....
Hello, I just updated the repo with a bug fix, and also updated the README with a link to the binary masks. Please give them a try and let me...
Hi Mohsi, It seems the issue is with the `val_loader` [here](https://github.com/tpark94/spnv2/blob/abc2210a72e2fad77e7fe47b9f3f7e7a085a25e1/tools/odr.py#L123), which sets `load_labels=True`; due to the way the code is written, this requires the bounding box information for augmentations...
Hey Mohsi, So, it seems that the augmentations are indeed being built due to `split=='train'` [here](https://github.com/tpark94/spnv2/blob/abc2210a72e2fad77e7fe47b9f3f7e7a085a25e1/core/dataset/build.py#L30), but they are simply not being used because `cfg.AUGMENT.P` is set to 0 for...
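The gating described above can be sketched in a few lines. This is a minimal illustration, not the repo's actual augmentation code: it assumes only that each transform is applied with some probability `p`, so a pipeline built with `p = 0` (as when `cfg.AUGMENT.P` is 0) is effectively a no-op.

```python
import random

def maybe_apply(transform, image, p):
    """Apply `transform` with probability p; with p == 0 it never fires."""
    if random.random() < p:
        return transform(image)
    return image

# With p = 0, the input passes through unchanged even though the
# transform object itself was built.
out = maybe_apply(lambda x: x * -1, 5, p=0.0)
print(out)  # → 5
```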
Hi Mohsi, I just updated the README with a link to the binary masks. Please give it a try for ODR and let me know if any issues come up.
Hello, You can find the coordinates of 11 keypoints we used in `src/utils/tangoPoints.mat`. You can take the max/min values of these points along each dimension to get its size.
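The max/min computation above can be sketched as follows. The variable name inside `tangoPoints.mat` and its array orientation are assumptions (inspect the file to confirm); the demo below therefore uses synthetic keypoints instead of the actual file.

```python
import numpy as np
# from scipy.io import loadmat  # needed to read the actual .mat file

def model_size_from_keypoints(kpts):
    """Return the extent (max - min) along each axis of an (N, 3) keypoint array."""
    kpts = np.asarray(kpts, dtype=float)
    return kpts.max(axis=0) - kpts.min(axis=0)

# With the repo file (variable name hypothetical):
# kpts = loadmat("src/utils/tangoPoints.mat")["tango3Dpoints"]
# kpts = kpts.T if kpts.shape[0] == 3 else kpts  # ensure (N, 3)

# Synthetic example: corner points of a 0.8 x 0.6 x 0.3 box
demo = np.array([[0.0, 0.0, 0.0],
                 [0.8, 0.0, 0.0],
                 [0.0, 0.6, 0.0],
                 [0.0, 0.0, 0.3],
                 [0.8, 0.6, 0.3]])
print(model_size_from_keypoints(demo))  # → [0.8 0.6 0.3]
```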
Hello, I believe the second element of `output = model(image)` is the regressed pose according to the setting in the YAML file. So, the camera intrinsics file shouldn't play a...
I see that you seem to have implemented your own dataset class, as shown by `from core.dataset.MyDataset import SPEEDPLUSDataset`. Please do note that if you follow a different set of...
Could you please check if you are using the data transformations as defined in `core/dataset/transforms/build.py`? In line 59, there is a line that goes `transforms += [A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229,...
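The `A.Normalize` step above scales pixels to [0, 1] and then standardizes per channel. A minimal numpy sketch of that behavior, so a custom dataset class can be checked against it: the mean values come from the comment above, while the remaining std values (0.224, 0.225) are truncated in this thread and assumed here to be the standard ImageNet statistics.

```python
import numpy as np

# Statistics matching A.Normalize in core/dataset/transforms/build.py
# (std partially assumed to be the standard ImageNet values)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize(image_uint8):
    """Replicate A.Normalize's default: x / 255, then (x - mean) / std per channel."""
    img = image_uint8.astype(np.float32) / 255.0
    return (img - MEAN) / STD

# A flat mid-gray image as a quick sanity check
img = np.full((4, 4, 3), 128, dtype=np.uint8)
out = normalize(img)
print(out[0, 0])
```

If a custom dataset skips this normalization, the pretrained weights will see inputs on a very different scale, which typically degrades results noticeably.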
Hello, Thanks for your question. Looking back at the code, it seems `MODEL.BACKBONE.NAME` is irrelevant since the backbone is created using the `EFFICIENTDET_PHI` parameter, e.g., ``` # Regular with BN...