nerf
How to get the spatial location (x,y,z) and viewing direction (θ,φ) of a set of images?
Hi, this project is so cool and amazing. Great work! I want to generate a model like the lego model in your project, but all I have is a set of images. Could you please tell me how to get the spatial location (x, y, z) and viewing direction (θ, φ) of those images? Looking forward to your reply!
Are you talking about poses? You can generate poses (the relative position and rotation of the camera for each image) by following here. The xyz and viewing direction are defined per pixel (per ray), not for the whole image.
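For reference, here is a minimal numpy sketch of how per-pixel ray origins and viewing directions fall out of a single camera-to-world pose. It mirrors the usual NeRF-style ray generation, but the function name and the OpenGL-style camera convention are assumptions for illustration, not a quote of the repo's code:

```python
import numpy as np

def get_rays(H, W, focal, c2w):
    """Per-pixel ray origins and viewing directions for one image.

    H, W  : image height and width in pixels
    focal : focal length in pixels
    c2w   : 3x4 camera-to-world pose matrix for that image
    """
    i, j = np.meshgrid(np.arange(W, dtype=np.float32),
                       np.arange(H, dtype=np.float32), indexing='xy')
    # Camera-space directions (x right, y up, camera looking down -z)
    dirs = np.stack([(i - W * 0.5) / focal,
                     -(j - H * 0.5) / focal,
                     -np.ones_like(i)], axis=-1)
    # Rotate into world space; every pixel shares the camera origin
    rays_d = dirs @ c2w[:3, :3].T                       # (H, W, 3)
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)  # (H, W, 3)
    # The normalized direction is what the (theta, phi) viewing direction encodes
    viewdirs = rays_d / np.linalg.norm(rays_d, axis=-1, keepdims=True)
    return rays_o, viewdirs
```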
Thanks for your reply~ How can I extract geometry from a NeRF and generate a mesh file when training on an llff dataset rather than a synthetic dataset? It seems that extract_mesh.ipynb is designed for synthetic datasets.
See this issue. The method he takes for synthetic objects is to predefine a fixed grid volume and predict whether each grid point is occupied or not. This method is practically inapplicable to real forward-facing scenes because you'd need to define a HUGE volume to cover the whole scene, most of which is empty... it is not only time-consuming but also ineffective. So currently there is no off-the-shelf way to extract a mesh from a real scene. I was trying to extract a point cloud instead here, but am still experimenting.
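For reference, a minimal sketch of the depth-to-point-cloud idea, assuming the volume renderer also returns a per-pixel expected depth alongside the RGB map (the function and argument names here are placeholders, not the code linked above):

```python
import numpy as np

def depth_to_pointcloud(rays_o, rays_d, depth, rgb=None):
    """Back-project a rendered depth map into a world-space point cloud.

    rays_o, rays_d : per-pixel ray origins / directions, shape (H, W, 3)
    depth          : per-pixel expected termination depth, shape (H, W)
    rgb            : optional rendered color map, shape (H, W, 3)
    """
    pts = rays_o + rays_d * depth[..., None]   # point where each ray terminates
    pts = pts.reshape(-1, 3)
    if rgb is not None:
        return pts, rgb.reshape(-1, 3)         # keep per-point color if available
    return pts
```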
Thanks again!
Actually I want to reconstruct from a set of images with a white background rather than a real scene. I've been training on these data for a couple of hours. I'll try to extract a mesh from the model when it's ready.
Your experiment is also cool~ It would be much appreciated if you could share the result with me :)
The extract_mesh notebook is not specific to a synthetic object -- if you train a real scene that has 360 degrees of views all around an object and do NOT use NDC coordinates (probably should use the "spherify" arg instead), the central object should be normalized in the same way. The main thing you'd need to change would be the line
t = np.linspace(-1.2, 1.2, N+1)
which controls which region of the NeRF network is densely queried to get the volume that is converted into a mesh. For synthetic objects the cube [-1.2,1.2]^3 is a good bounding box. For your own real data, you might have to experiment.
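For concreteness, here is a minimal sketch of that dense-grid query followed by marching cubes. The query_density callable, the sigma threshold, and the use of skimage/trimesh are assumptions for illustration, not the notebook's exact code:

```python
import numpy as np
from skimage import measure   # marching cubes
import trimesh                # mesh container / export

def extract_mesh(query_density, N=256, bound=1.2, sigma_threshold=50.0):
    """Densely query the trained NeRF on a cube and run marching cubes.

    query_density : callable mapping (M, 3) world points -> (M,) sigma values
                    (in practice it should batch its inputs to fit GPU memory)
    """
    t = np.linspace(-bound, bound, N + 1)                 # same grid as the notebook
    xyz = np.stack(np.meshgrid(t, t, t, indexing='ij'), -1).reshape(-1, 3)
    sigma = query_density(xyz).reshape(N + 1, N + 1, N + 1)
    # Surface where density crosses the threshold; tune the level per scene
    verts, faces, _, _ = measure.marching_cubes(sigma, level=sigma_threshold)
    verts = verts / N * 2 * bound - bound                 # index -> world coordinates
    return trimesh.Trimesh(verts, faces)
```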
Yes, it's also applicable to real inward-facing scenes, but definitely not to forward-facing scenes, where the space to cover is HUGE.
That's right. For forward-facing scenes you would get a horrible mesh full of cracks anyway since so much of the occluded content is not observed -- you'd be much better off extracting a depth map (or a set of depth maps as in "layered depth images") than a mesh in that case.
I used the extract_mesh notebook to obtain a DAE file of the lego successfully; it's white and looks perfect, but I want to know how to generate a mesh or other 3D model with its original color, just like the lego example shown in the output mp4 video.
It's really great work; maybe many of us want to use this to generate a 3D model from real 2D images, and the output mp4 video is not that useful in this situation. I'll be very happy to get your reply.
The color depends on the viewing direction; I think the author left the 3D model uncolored to highlight this fact. You might be able to define the mesh color from a certain viewing direction, but some of the mesh cubes could be occluded and get strange colors, so you might need to fuse the colors from different directions.
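As a rough illustration of that idea, here is a minimal sketch that queries the trained model for RGB at each mesh vertex under a few viewing directions and averages them. query_rgb and the choice of directions are placeholders, and occlusion is ignored:

```python
import numpy as np

def color_vertices(query_rgb, verts, dirs=None):
    """Average NeRF-predicted colors at mesh vertices over a few view directions.

    query_rgb : callable mapping ((V, 3) points, (V, 3) unit viewdirs) -> (V, 3) RGB
    verts     : mesh vertices in world coordinates, shape (V, 3)
    """
    if dirs is None:
        # Six axis-aligned viewing directions as a crude multi-view "fusion"
        dirs = np.array([[1, 0, 0], [-1, 0, 0],
                         [0, 1, 0], [0, -1, 0],
                         [0, 0, 1], [0, 0, -1]], dtype=np.float32)
    colors = np.zeros((len(verts), 3), dtype=np.float32)
    for d in dirs:
        viewdirs = np.broadcast_to(d, verts.shape)
        colors += query_rgb(verts, viewdirs)
    # Naive average; a better fusion would weight by visibility from each direction
    return colors / len(dirs)
```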
Thanks! I've extracted the mesh from my scene successfully; however, it's white instead of colored. Could you please tell me how to add color to it?
I wrote code to generate a colored mesh in my implementation, and will add a video explanation in the coming days.