Train the stage 1 NeRF with a custom dataset
Hi guys, I rendered my own rings dataset with 32 views. For every view I have an RGBA image (where A is the mask), a normal image, and a depth image, and I also saved the camera pose as a 4x4 RT matrix. The size of the dataset is 8000. I train with this command:
python train.py --base configs/instant-nerf-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
After step = 7000, I checked the train images, as below: in the third row, the two objects have the same shape.
Even when I input multi-view images of a yellow duck, the model outputs a ring like the one in the third row.
Can anyone help? Thanks. That's really weird.
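For reference, a 32-view RGBA + pose dump like the one described above can be produced with a Blender script along these lines (a minimal sketch, not the exact script used for this dataset; the view count, camera radius, and output layout are placeholder assumptions, and the normal/depth passes are left out for brevity):

```python
# render_views.py -- hypothetical sketch, run with:  blender -b scene.blend -P render_views.py
# Renders 32 RGBA views of the object already loaded in the scene and saves each
# camera pose as a 4x4 world (RT) matrix. Normal/depth passes would be enabled
# separately via view-layer passes and the compositor.
import math
import os

import bpy
import numpy as np

NUM_VIEWS = 32           # assumption: evenly spaced views on a circle
RADIUS = 2.5             # assumption: camera distance from the origin
OUT_DIR = "/tmp/render"  # assumption: output directory

os.makedirs(OUT_DIR, exist_ok=True)

scene = bpy.context.scene
scene.render.film_transparent = True                 # transparent background -> alpha channel = object mask
scene.render.image_settings.file_format = "PNG"
scene.render.image_settings.color_mode = "RGBA"

cam = scene.camera                                    # assumes the .blend already has an active camera

# Make the camera always look at the world origin via a track-to constraint on an empty.
target = bpy.data.objects.new("LookAtTarget", None)   # an empty at (0, 0, 0)
scene.collection.objects.link(target)
track = cam.constraints.new(type="TRACK_TO")
track.target = target
track.track_axis = "TRACK_NEGATIVE_Z"
track.up_axis = "UP_Y"

for i in range(NUM_VIEWS):
    angle = 2.0 * math.pi * i / NUM_VIEWS
    cam.location = (RADIUS * math.cos(angle), RADIUS * math.sin(angle), 0.8)
    bpy.context.view_layer.update()                   # refresh matrix_world after moving the camera

    scene.render.filepath = os.path.join(OUT_DIR, f"{i:03d}_rgba.png")
    bpy.ops.render.render(write_still=True)

    # 4x4 camera-to-world RT matrix for this view
    np.save(os.path.join(OUT_DIR, f"{i:03d}_pose.npy"), np.array(cam.matrix_world))
```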
I'm sorry, I have no idea about this problem.
@hopeliu20160622 Hello, how did you render your dataset with 32 views? Did you use Python code? I am interested in it!
Thanks. Can you share a training sample? I think there is something wrong with my render script; maybe I can compare the details to find the difference. Thanks. [email protected]
Hello, I am interested in the render script for preparing the training dataset. Could you share it with me? Thanks, [email protected]
@hopeliu20160622 Hi, I have already made the dataset, but I don't know how to add my own dataset in 'instant-nerf-large-train.yaml'. Can you share your config file? For your convenience, my dataset has the following layout: pose contains the npz files for the 32 viewpoints, and rgba contains the 32 views as PNGs. Looking forward to your reply, [email protected]
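For concreteness, a generic loader for that layout might look like the sketch below (hypothetical only, not the repository's dataset class; the directory names, the "poses" key inside the npz, and the image naming are assumptions). It would still need to be wired into whatever dataset class the YAML config points to.

```python
# Hypothetical loader for the layout described above: one folder per object with
#   rgba/000.png ... rgba/031.png   (RGBA, alpha = mask)
#   pose/poses.npz                  (assumed to hold a (32, 4, 4) array under the key "poses")
# This is NOT the repository's own dataset class -- just a sketch of the data it would need.
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class RingViewsDataset(Dataset):
    def __init__(self, root, num_views=32):
        self.object_dirs = sorted(
            os.path.join(root, d) for d in os.listdir(root)
            if os.path.isdir(os.path.join(root, d))
        )
        self.num_views = num_views

    def __len__(self):
        return len(self.object_dirs)

    def __getitem__(self, idx):
        obj_dir = self.object_dirs[idx]

        # (num_views, 4, 4) camera-to-world RT matrices
        poses = np.load(os.path.join(obj_dir, "pose", "poses.npz"))["poses"]

        images, masks = [], []
        for i in range(self.num_views):
            rgba = Image.open(os.path.join(obj_dir, "rgba", f"{i:03d}.png")).convert("RGBA")
            rgba = np.asarray(rgba, dtype=np.float32) / 255.0     # (H, W, 4) in [0, 1]
            images.append(rgba[..., :3])
            masks.append(rgba[..., 3:])

        return {
            "images": torch.from_numpy(np.stack(images)),          # (V, H, W, 3)
            "masks": torch.from_numpy(np.stack(masks)),            # (V, H, W, 1)
            "poses": torch.from_numpy(poses.astype(np.float32)),   # (V, 4, 4)
        }
```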
Hello, I'm also interested in the rendering scripts for preparing the training dataset. [email protected]. Looking forward to your reply.
Can anybody post the rendering scripts here? It would help a lot of people. Thanks!
Hi @hopeliu20160622, may I know whether you loaded the OpenLRM checkpoint when training the instant NeRF? It seems like there are several tensor shape mismatches here.
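If the mismatches show up when loading that checkpoint, a quick generic way to list exactly which tensors disagree is something like this (plain PyTorch; the function and paths are placeholders, not this repo's loading code):

```python
# Generic PyTorch snippet to report which checkpoint tensors do not match the model,
# e.g. when loading an OpenLRM checkpoint into a differently shaped NeRF model.
# "model" and "ckpt_path" are placeholders for whatever you are actually loading.
import torch


def report_shape_mismatches(model, ckpt_path):
    state = torch.load(ckpt_path, map_location="cpu")
    state = state.get("state_dict", state)          # unwrap Lightning-style checkpoints if present
    model_state = model.state_dict()

    for name, tensor in state.items():
        if name not in model_state:
            print(f"unexpected key: {name}")
        elif model_state[name].shape != tensor.shape:
            print(f"shape mismatch: {name}: ckpt {tuple(tensor.shape)} vs model {tuple(model_state[name].shape)}")

    for name in model_state:
        if name not in state:
            print(f"missing key: {name}")

    # Load whatever does match; mismatched/missing tensors keep their initialization.
    matching = {k: v for k, v in state.items()
                if k in model_state and model_state[k].shape == v.shape}
    model.load_state_dict(matching, strict=False)
```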
Did you figure out this problem? I am now fine-tuning the NeRF model, but it cannot run due to 'CUDA out of memory'. I set batch size = 1 and use 2x L40 GPUs with 48 GB VRAM.
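Not an answer to the shape issue above, but for the OOM: the usual generic levers are mixed precision and gradient accumulation. In plain PyTorch that pattern looks roughly like the sketch below (placeholder model/loader/optimizer/loss, not this repository's training loop):

```python
# Generic memory-saving training loop: fp16 autocast + gradient accumulation.
# model / loader / optimizer / loss_fn are passed in -- this is just the standard
# PyTorch pattern for fitting a larger effective batch into limited VRAM.
import torch


def train_one_epoch(model, loader, optimizer, loss_fn, accum_steps=4, device="cuda"):
    scaler = torch.cuda.amp.GradScaler()
    model.train()
    optimizer.zero_grad(set_to_none=True)

    for step, batch in enumerate(loader):
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.cuda.amp.autocast():
            loss = loss_fn(model(batch)) / accum_steps   # average gradients over the accumulation window
        scaler.scale(loss).backward()

        if (step + 1) % accum_steps == 0:
            scaler.step(optimizer)    # unscales gradients, then optimizer.step()
            scaler.update()
            optimizer.zero_grad(set_to_none=True)
```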