
Tips for Preparing Datasets

Open ackd06 opened this issue 2 years ago • 5 comments

Hello, Jad: Thank you for your previous answer. I have run into some problems while preparing the dataset. I took about 120 images of a large object, but when I trained the model, the 3D model did not render well. Do you have any shooting tips or suggestions for collecting a dataset? Thank you.

ackd06 avatar Mar 22 '23 06:03 ackd06

Hey! Yes, I found that ARKit without lidar can be a little off and has errors in its poses. Unfortunately, NeRFs are quite sensitive to pose errors, and that might explain why your training is not going well. When capturing the dataset, make sure the origin stays in place (having good texture in the scene helps). Another source of error is the change in exposure and focus in the camera (admittedly, I should have an option in the app to disable auto exposure but haven't gotten around to it). The new instant-ngp has a setting that trains per-camera latents, which may help. You may also want to try optimizing the camera extrinsics, although I have had little success with it myself. Lastly, you may want to adjust the scale of the unit box so that background objects that fall outside the unit box do not create floaters within your model.
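ARKit pose errors of the kind mentioned above often show up as sudden jumps between consecutive camera positions. A rough sanity check over the `frames` list of a saved `transforms.json` can catch the worst of them (a sketch: `flag_pose_jumps` and the `0.5` threshold are illustrative, and the right threshold is scene-dependent):

```python
import math

def flag_pose_jumps(frames, max_jump=0.5):
    """Flag suspiciously large jumps between consecutive camera
    positions -- a rough heuristic for tracking glitches. `frames`
    is the "frames" list from a transforms.json; each entry holds a
    4x4 camera-to-world transform_matrix whose last column is the
    camera position."""
    positions = [tuple(row[3] for row in fr["transform_matrix"][:3])
                 for fr in frames]
    return [(i, math.dist(positions[i - 1], positions[i]))
            for i in range(1, len(positions))
            if math.dist(positions[i - 1], positions[i]) > max_jump]

# Three poses translated along x: a small step, then an implausible jump.
def pose(x):
    return {"transform_matrix": [[1, 0, 0, x], [0, 1, 0, 0],
                                 [0, 0, 1, 0], [0, 0, 0, 1]]}

print([i for i, _ in flag_pose_jumps([pose(0.0), pose(0.1), pose(5.0)])])  # → [2]
```

Frames flagged by a check like this are good candidates for removal before training, since a handful of bad poses can dominate the reconstruction error.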


jc211 avatar Mar 22 '23 06:03 jc211

Hello, Jad: In the instant-ngp project there is an adjustable parameter, aabb_scale. Can aabb_scale be adjusted in the NeRFCapture project? At the moment the results of our trained model are very bad, so we want to improve the final 3D model by adjusting parameters. If there are adjustable parameters in the NeRFCapture project, are there any other recommended parameters (in the mobile app or the training program) that we should tune? Thank you very much!

ackd06 avatar Mar 22 '23 10:03 ackd06

I don't think you need to adjust anything in the NeRFCapture app. You can adjust everything directly in nerfcapture2nerf.py, around line 90: you can give the testbed whatever scale you like there. Alternatively, you can save the dataset without using --stream and then adjust the transforms.json file of your dataset. At the moment the app doesn't have any adjustable parameters. I should add some in the future, including changing the resolution, setting the exposure time, and fixing the camera focus.
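If you go the non-streaming route, bumping aabb_scale is a one-line edit to the saved transforms.json. A minimal sketch (the demo file path is illustrative; in practice you would point it at your dataset's transforms.json):

```python
import json
import os
import tempfile

def set_aabb_scale(path, scale):
    """Set instant-ngp's aabb_scale (a power of two, up to 128)
    in an existing transforms.json, preserving the other keys."""
    with open(path) as f:
        data = json.load(f)
    data["aabb_scale"] = scale
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

# Demo on a throwaway file standing in for a real dataset's transforms.json.
path = os.path.join(tempfile.mkdtemp(), "transforms.json")
with open(path, "w") as f:
    json.dump({"frames": []}, f)

set_aabb_scale(path, 4)
with open(path) as f:
    print(json.load(f)["aabb_scale"])  # → 4
```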


jc211 avatar Mar 22 '23 11:03 jc211

Hello, Jad: I am trying to reconstruct a 3D model of a human arm or other human body parts, and I have a few questions. First, is there a limit to the size of the object being photographed? Second, are there any requirements for the background behind the object when shooting? Third, should I press the reset button every time an image is taken? Finally, I have linked the dataset images I am currently shooting and the resulting 3D model. If you have free time, could you check whether there is a problem with my shooting method? Thank you very much! https://drive.google.com/drive/folders/1N1_0HZXwkF9VNuMOBWsRsflfSucSeHN7?usp=sharing https://drive.google.com/drive/folders/1Cc_HtV1A5IExTxqPTT2J2yr7eE-tE54H?usp=sharing

ackd06 avatar Mar 23 '23 06:03 ackd06

1. You can always change aabb_scale to fit a larger or smaller object.
2. This is a difficult question. If you have the ability to mask the background, go ahead and do so: you can add masks to the dataset and instant-ngp will ignore the background. If you keep the background and the unit box is not big enough to include where the background items actually are, you end up with floaters that corrupt your 3D reconstruction. Depth supervision can also help here.
3. You do not need to press the reset button every time, although training is sensitive to initial conditions, so if you capture 100 images, reset, and then train, you will likely get a different result from training online as you capture.
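To see why background geometry outside the box produces floaters: rays through background pixels terminate on content the box cannot represent, so that density gets dumped somewhere inside the box instead. A rough box-membership sketch, assuming instant-ngp's convention that the training box is centered at (0.5, 0.5, 0.5) with side length aabb_scale:

```python
def in_training_box(point, aabb_scale=1):
    """Check whether a 3D point in normalized scene coordinates lies
    inside the training box. Assumes instant-ngp's convention: the box
    is centered at (0.5, 0.5, 0.5) with side length aabb_scale."""
    half = aabb_scale / 2.0
    return all(0.5 - half <= c <= 0.5 + half for c in point)

# A background wall 1.2 units behind the scene center falls outside the
# default unit box but is covered once aabb_scale is raised to 4.
wall = (0.5, 0.5, 1.7)
print(in_training_box(wall, aabb_scale=1))  # → False
print(in_training_box(wall, aabb_scale=4))  # → True
```

The trade-off is that a larger box spreads the same hash-grid capacity over a bigger volume, so use the smallest aabb_scale that still encloses everything the cameras see.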

jc211 avatar Mar 24 '23 06:03 jc211