XJay18
Hi, basically, you need to: 1. Prepare a dataset with images and camera information; 2. Obtain the corresponding point cloud data from LiDAR sensors; 3. Preprocess the point cloud data (e.g., denoising and downsampling);...
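For step 3, a common downsampling approach is voxel-grid filtering. The sketch below is a minimal numpy illustration of the idea (quantize points to voxels, keep one centroid per voxel); it is not the preprocessing code used in this project, and the voxel size is an arbitrary example value.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    # Quantize each point (N, 3) to an integer voxel index, then keep
    # one representative point (the centroid) per occupied voxel.
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inverse, points)  # accumulate points per voxel
    return sums / counts[:, None]     # centroid of each voxel
```

In practice a library such as Open3D provides equivalent (and faster) voxel downsampling and statistical outlier removal out of the box.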
Hi, I think you're on the right track. However, you should set `Output Render` to `rgb` in the right panel to see the rendered images. Once the training progress is...
Hi, if you are using multiple GPUs, you should modify the `--nproc_per_node` parameter in the training scripts. For example, for training with 2 GPUs: ```bash CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2...
`path/to/config.yaml` should be replaced with the path to an actual config file, e.g., `config/Recce.yml`
I retrained the model on FF++ (c23) and you can access the model parameters via [this link](https://sjtueducn-my.sharepoint.com/:u:/g/personal/junyicao_sjtu_edu_cn/EZ0r0ej1QGBCl7OxyPqxvPYBzcKYNaDL69bLK6WK4Ppi9w). (Password: `gn4Tzil#`)
Hi, the inputs to the network contain both real and fake images. The main idea is to compute the reconstruction loss for real images only, aiming to learn the common...
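The masking idea described above can be sketched as follows. This is a minimal numpy illustration of computing a reconstruction loss over real samples only, not the project's actual implementation; the label convention (0 = real, 1 = fake) and the use of MSE are assumptions for the example.

```python
import numpy as np

def real_only_recon_loss(recon, target, labels):
    # Mean squared reconstruction error restricted to real samples.
    # labels: 0 = real, 1 = fake (assumed convention for this sketch).
    mask = labels == 0
    if not mask.any():
        return 0.0  # no real samples in the batch
    diff = recon[mask] - target[mask]
    return float(np.mean(diff ** 2))
```

Fake images still pass through the network, but they contribute nothing to the reconstruction term, so the model learns the common appearance of real faces only.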
Hi, the previous sharing link expired. You can access the re-trained FF++ weights via [this link](https://sjtueducn-my.sharepoint.com/:f:/g/personal/junyicao_sjtu_edu_cn/EjRzA7P7WlVAtGCn7F7w_8IBYoW2omsVDiC_zJjdTCBs0A). (Password: `7v+MRf8L`)
Hi, I think sampling only one frame from each video for testing may result in high variance. On average, we use about 50 frames per sequence for testing. Frame-level performance is considered...
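A simple way to draw ~50 frames per video is evenly spaced sampling across the sequence. The sketch below is one plausible strategy, not necessarily the exact protocol used here; evenly spaced (rather than random) sampling is an assumption of this example.

```python
import numpy as np

def sample_frames(num_frames_in_video, num_samples=50):
    # Return up to `num_samples` evenly spaced frame indices
    # covering the whole video, including the first and last frame.
    n = min(num_samples, num_frames_in_video)
    return np.linspace(0, num_frames_in_video - 1, n).round().astype(int)
```

Frame-level metrics (e.g., AUC) are then computed over all sampled frames pooled across the test videos.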
Closing due to inactivity. Please feel free to reopen this issue if you still have related problems.
Thank you for raising the issue. It turns out I had edited some files in the NeRFStudio project locally. I have reorganized our code to include those implementations. Please pull...