
How to preprocess data to train on my own datasets?

loongofqiao opened this issue 2 years ago · 12 comments

loongofqiao · Mar 24 '22 13:03

I have the same question. Did you solve it, or are you still waiting for the author to reply?

LeeHW-THU · Mar 25 '22 18:03

Hi, you can follow the instructions in the NSVF repo, then set the dataset_name to 'tankstemple'.
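Concretely, dataset_name is a line in the per-scene config file under configs/; the excerpt below is only an illustration, and the datadir is an example path, not a prescribed one:

```
# illustrative config excerpt (check configs/ in the repo for the actual keys and paths)
dataset_name = tankstemple
datadir = ./data/TanksAndTemple/Truck
```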

apchenstu · Mar 26 '22 15:03

Hi, I ran into some problems when building my own dataset, so I downloaded the Tanks&Temples dataset and used the Truck scene to test my data-preparation process. I followed the instructions in the NSVF repo:

1. I use the images in the Truck dataset as my rgb folder.
2. I run imgs2poses.py on the rgb folder to get cameras.bin, images.bin and points3D.bin.
3. I use colmap model_converter to convert the .bin files to .txt.
4. I use colmap2mvsnet.py to convert the COLMAP output into the pose dir (I modified the script so that each txt file contains only the extrinsics).
5. For bbox.txt, I wrote a script that scans the (x, y, z) coordinates in points3D.txt and outputs x_min y_min z_min x_max y_max z_max (I am not sure how to obtain initial_voxel_size, so I simply copied the value from the original dataset and set it to 0.16); see the sketch at the end of this comment.
6. In the end I have the pose dir, the rgb dir, intrinsics.txt and bbox.txt, but when I replace Truck with My_Truck and train TensoRF on it, I get completely different results.

Is there something wrong with my steps?
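The bbox script in step 5 is roughly the following (a minimal sketch, assuming a standard COLMAP points3D.txt export with one "POINT3D_ID X Y Z R G B ERROR TRACK[...]" record per line):

```python
# Minimal sketch of the bbox.txt step (assumes a standard COLMAP points3D.txt export).
import numpy as np

def bbox_from_points3d(path):
    pts = []
    with open(path) as f:
        for line in f:
            vals = line.split()
            if not vals or line.startswith("#"):
                continue
            pts.append([float(v) for v in vals[1:4]])  # X, Y, Z columns
    pts = np.array(pts)
    return pts.min(axis=0), pts.max(axis=0)

lo, hi = bbox_from_points3d("points3D.txt")
# NSVF-style bbox.txt: x_min y_min z_min x_max y_max z_max initial_voxel_size
with open("bbox.txt", "w") as f:
    f.write(" ".join(f"{v:.6f}" for v in (*lo, *hi)) + " 0.16\n")
```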

LeeHW-THU · Mar 28 '22 08:03

I have updated the readme, please try again with the new instructions :)

apchenstu · Mar 30 '22 13:03


Hello, thank you very much for your outstanding work. I used colmap2nerf.py to build my dataset successfully, but I got anomalous results while experimenting, and there was no real improvement after I modified scene_bbox and near_far. Could you please share your experience on how these two parameters should be defined?

LeeHW-THU · Apr 02 '22 02:04

Hi

I generated the transforms.json for the lego dataset to start understanding how to create my own dataset. I was able to generate the .json successfully, but upon training I obtained completely different results. It was mentioned that scene_bbox and near_far should be modified, but I couldn't find where I can alter these parameters. Could anyone help me out with how to alter scene_bbox and near_far?
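I assume they live somewhere in the dataset classes under dataLoader/ and look roughly like the placeholder values below, but I couldn't pin down the exact place to change them:

```python
# Rough sketch of the two quantities I'm looking for (placeholder values, not
# the repo's defaults): scene_bbox should enclose the scene in world coordinates,
# near_far bounds the sampling range along each ray.
import torch

scene_bbox = torch.tensor([[-1.5, -1.5, -1.5],   # x_min, y_min, z_min
                           [ 1.5,  1.5,  1.5]])  # x_max, y_max, z_max
near_far = [0.1, 6.0]                            # near and far plane distances

print(scene_bbox.shape, near_far)
```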

Another question that I have is why colmap2nerf.py does not process all images. It currently only processes between 18 and 20 images.

Thanks in advance

best regards

supdhn · Jul 27 '22 13:07

@LeeHW-THU @supdhn Were you able to successfully train on your custom data?

When I run colmap2nerf.py, it only generates transform.json and not 'transform_train.json' and 'transform_test.json'. How can I get these files?

basit-7 · Aug 16 '22 07:08

Hi @basit-7

You will have to run colmap on your test images as well. Just rename the generated transform.json to transform_train.json, and do the same renaming (to transform_test.json) once you have generated the transform.json for the test images.
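In other words, run colmap2nerf.py once per split and then rename the outputs, e.g. (the paths are just examples):

```python
# Example of the renaming step (adjust the paths to wherever colmap2nerf.py
# wrote its transform.json for each split).
from pathlib import Path

Path("train/transform.json").rename("train/transform_train.json")
Path("test/transform.json").rename("test/transform_test.json")
```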

Best regards

supdhn · Aug 16 '22 12:08

Hello @supdhn

Thank you for responding; I figured it out eventually. I successfully trained it on one of the scans of the DTU dataset, but the results are not too great. There are 49 images in total: 16 for testing and 33 for training.

Would you suggest something that could improve the results?

basit-7 · Aug 16 '22 12:08

Hi @basit-7

Increasing the number of images is of great help :D ... One thing that I noticed was that COLMAP was not generating the 4D array for all images, which hurt the results, so just check whether all of your images are in the json (I haven't figured out why COLMAP sometimes skips images and sometimes doesn't); a quick way to check is sketched below the quote. Also, the authors mention something regarding the config file when getting abnormal results:

Calibrating images with the script from NGP: python dataLoader/colmap2nerf.py --colmap_matcher exhaustive --run_colmap, then adjust the datadir in configs/your_own_data.txt. Please check the scene_bbox and near_far if you get abnormal results.

But I haven't messed with that
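For the json check I mentioned above, something along these lines works (file and folder names are just examples):

```python
# Quick sanity check: COLMAP sometimes fails to register images, and those
# frames simply never show up in the json (file/folder names are examples).
import json, os

with open("transform.json") as f:
    frames = json.load(f)["frames"]

in_json = {os.path.basename(fr["file_path"]) for fr in frames}
on_disk = {n for n in os.listdir("images") if not n.startswith(".")}
print(f"{len(in_json)}/{len(on_disk)} images made it into the json")
print("missing:", sorted(on_disk - in_json))
```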

Have you tried Instant-NGP or DVGOv2? I am currently investigating these approaches in parallel (trying to get DVGOv2 to run :/).

supdhn · Aug 16 '22 13:08

@supdhn Thanks! Will try your suggestions.

Yes, I have tried Instant-NGP and DVGO (not v2). Could you share the link to the GitHub repo for v2, or is it the same repository as v1?

What datasets do you use for testing these methods?

basit-7 · Aug 16 '22 14:08

Hello, I'd like to ask you some questions. Do you understand the NGP data processing method in the readme document? I processed my self-made dataset with the NGP method in TensoRF, but it could not be run with the instant-ngp code: the image files could not be found. I wonder if you have had the same experience?

newforests · Dec 19 '22 14:12