Run on non-light-field data
Hi,
I am trying to run NeRF on my own synthetic dataset, which consists of an object observed from 50 viewpoints roughly sampled on a hemisphere around it, and I already have the calibration. (I haven't tried, but I doubt the llff code would run correctly on this data.) I multiplied the second row of my extrinsic matrices by -1 so that the z axis points in the right direction, saved everything into poses_bounds.npy together with the focal length and image dimensions, and set the dataset type to 'llff'. The code runs, but it doesn't converge. I also set the background to white in all my images, but that doesn't solve the problem. What steps should I take to get it to run on my data?
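Concretely, this is roughly how I build the file (just a sketch; the N x 17 layout, the 3x5 pose-plus-hwf block, and the [down, right, backwards] column order are my understanding of the LLFF convention, so please correct me if that's wrong):

```python
import numpy as np

def build_poses_bounds(c2w_mats, H, W, focal, near, far):
    """Assemble an (N, 17) poses_bounds.npy-style array from camera-to-world matrices."""
    rows = []
    for c2w in c2w_mats:
        c2w = np.asarray(c2w, dtype=np.float64)[:3, :4]  # drop a homogeneous row if present
        # Assumption: the llff loader expects rotation columns ordered [down, right, backwards],
        # while an OpenGL-style camera-to-world matrix has columns [right, up, backwards].
        c2w = np.concatenate([-c2w[:, 1:2], c2w[:, 0:1], c2w[:, 2:]], axis=1)
        hwf = np.array([H, W, focal], dtype=np.float64).reshape(3, 1)  # height, width, focal (pixels)
        pose_hwf = np.concatenate([c2w, hwf], axis=1)     # 3x5 block per image
        rows.append(np.concatenate([pose_hwf.ravel(), [near, far]]))  # 15 pose values + 2 depth bounds
    return np.stack(rows)                                 # shape (N, 17)

# Dummy identity poses just to make the sketch runnable; replace with your real camera-to-world matrices.
my_c2w_list = [np.eye(4) for _ in range(50)]
poses_bounds = build_poses_bounds(my_c2w_list, H=800, W=800, focal=1111.1, near=2.0, far=6.0)
np.save('poses_bounds.npy', poses_bounds)
```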
Have I done something wrong with the poses, or is there something else I should be doing? What is simplices.npy, and what should I put in it? Are my viewpoints too sparse?
Thanks for your help
I have a similar issue. I am using 100 images exported directly from a Blender script I made, and training does not converge. I tried changing the near and far planes and I am starting to see roughly the right colors appear in the renders, but it still doesn't converge.
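In case it's relevant, this is roughly how I pick the bounds now, assuming the object sits near the world origin and `poses` holds camera-to-world matrices (`estimate_near_far` and `object_radius` are just names for this sketch, not anything from the repo):

```python
import numpy as np

def estimate_near_far(poses, object_radius=1.0):
    """poses: (N, 3, 4) camera-to-world matrices; object assumed near the world origin."""
    cam_positions = poses[:, :3, 3]                   # camera centres in world coordinates
    dists = np.linalg.norm(cam_positions, axis=-1)    # distance from each camera to the origin
    near = max(0.1, 0.9 * (dists.min() - object_radius))
    far = 1.1 * (dists.max() + object_radius)
    return near, far
```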
I don't know if it's related, but in my case NeRF produced mostly white images. The issue was that my object's background was white rather than transparent; fixing this solved my problem. Note that a transparent background is only necessary for the Blender dataset format, not for LLFF ones.
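For context, my understanding is that with RGBA renders and `--white_bkgd` set, the blender loader composites the alpha channel onto white, roughly like this (a sketch, not the repo's exact code):

```python
import numpy as np

def composite_on_white(imgs):
    """imgs: (N, H, W, 4) floats in [0, 1] from transparent-background Blender renders."""
    rgb, alpha = imgs[..., :3], imgs[..., -1:]
    # Alpha-blend onto a white background: fully transparent pixels become pure white.
    return rgb * alpha + (1.0 - alpha)
```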
https://github.com/bmild/nerf/issues/119#issue-983144214