
Setting up a scene: passing vertices and faces directly instead of loading object from disk

Open hakanyi opened this issue 4 years ago • 16 comments

I'd like to adopt Mitsuba 2 for a research project, but have only just started. I am generating meshes (vertices & faces) using a generative model and would like to avoid saving to and loading from disk every time I sample a mesh from that model. Is there a way to feed the model output to a scene directly? Thanks a lot in advance!

hakanyi avatar Jul 13 '20 11:07 hakanyi

Hi @hakanyi ,

Using the Python bindings, it is possible to create a Mesh object directly in Python like this:

    import mitsuba
    mitsuba.set_variant("scalar_rgb")  # a variant must be set before importing mitsuba.render
    from mitsuba.render import Mesh

    m = Mesh("MyMesh", 3, 2)  # 3 vertices, 2 faces
    m.vertex_positions_buffer()[:] = [0.0, 0.0, 0.0, 1.0, 0.2, 0.0, 0.2, 1.0, 0.0]
    m.faces_buffer()[:] = [0, 1, 2, 1, 2, 0]
    m.parameters_changed()

Have a look at test_mesh.py for more examples.

You can also use the traverse(scene) function to access the vertex_positions_buffer of the meshes in your scene directly. You will need to check out the (currently WIP) pathreparam-optix7 branch for this.

Have a look here for instance: https://github.com/mitsuba-renderer/mitsuba2/pull/157/files#diff-ff12a4872e035ffe98173208cc6cc132
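
For orientation, here is a minimal sketch of that workflow. It assumes a differentiable variant and that your scene contains a mesh whose parameters are exposed under a key such as "OBJMesh.vertex_positions_buf" (the exact key depends on your scene and branch):

    import mitsuba
    mitsuba.set_variant("gpu_autodiff_rgb")  # differentiable variant assumed

    from mitsuba.core.xml import load_file
    from mitsuba.python.util import traverse

    scene = load_file("test/scene.xml")
    params = traverse(scene)
    print(params)  # lists the editable parameters, e.g. "OBJMesh.vertex_positions_buf"

    # Overwrite a mesh's vertex buffer (key and buffer size depend on your scene):
    # params["OBJMesh.vertex_positions_buf"] = new_flat_vertex_buffer
    # params.update()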

Speierers avatar Jul 14 '20 06:07 Speierers

Thanks, @Speierers ! Given that I have already loaded a scene (e.g. scene = load_file('test/scene.xml')), how would I go about adding a Mesh to it and deleting the Mesh after rendering?

I have the following use case: In the same scene (consisting of a light and a camera), I'd like to render several objects (either from .obj or directly constructed using mitsuba.render.Mesh) one after another without having to load an entire scene from disk or Python dict for each of those renders.

So the ideal pipeline would look like this (given that I already have a scene with camera and light):

a.1) create a Mesh from .obj (not clear how to do that -- I have not found a way to do that except for .xml or Python dict), or
a.2) create a Mesh directly (what you showed above)
b) add the Mesh to the scene (not clear how to do that)
c) render the scene (that's clear)
d) delete the Mesh from the scene (not clear how to do that)
e) change the camera position (that's clear)

Thanks in advance for your help! I'm a total beginner at Mitsuba2, but excited to use it more frequently :)

hakanyi avatar Jul 27 '20 15:07 hakanyi

Currently it isn't possible to add a new Mesh to an existing scene.

One "hacky" solution would be to have a scene with placeholder meshes (e.g. a mesh with a very tiny triangle placed right behind the camera). Adding a new mesh would then simply mean taking one of those placeholders and updating its vertex and face buffers.
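
A hedged sketch of that idea (the "placeholder" key, new_vertices_flat, and new_faces_flat are hypothetical names; the new data must have exactly the size the placeholder buffers were allocated with):

    from mitsuba.python.util import traverse

    params = traverse(scene)  # the scene already contains a tiny placeholder mesh

    # Overwrite the placeholder's geometry in place; the buffers keep their
    # original size, which is the limitation discussed further below.
    params["placeholder.vertex_positions_buf"][:] = new_vertices_flat  # x0, y0, z0, x1, ...
    params["placeholder.faces_buf"][:] = new_faces_flat                # i0, j0, k0, i1, ...
    params.set_dirty("placeholder.vertex_positions_buf")
    params.set_dirty("placeholder.faces_buf")
    params.update()  # triggers Mesh.parameters_changed() internally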

Would this help in your case?

Speierers avatar Aug 10 '20 08:08 Speierers

Just to make sure I understand correctly:

  1. If I want to replace an object that is already in the scene, all I need to do is access (and replace) its vertex_positions_buffer as well as its faces_buffer, correct?

  2. My setup is basically: Pytorch Tensor (low-dimensional) --> Model --> Pytorch Tensors vertices + faces --> Pytorch Tensor image (using Mitsuba's render_torch function). Does replacing the buffers by Pytorch tensors keep the pipeline differentiable? I'm aiming to propagate the loss back to the low-dimensional Pytorch tensor.

  3. I am looking through the example you sent before (https://github.com/mitsuba-renderer/mitsuba2/pull/157/files#diff-ff12a4872e035ffe98173208cc6cc132) and trying to adopt it for my use case. I'm sure there is a docs section on its way that explains what exactly is happening in there -- do you know when that'll be online?

  4. What is the purpose of the Mesh object, other than testing, when there's no way to interact with a scene?

hakanyi avatar Aug 12 '20 18:08 hakanyi

  1. Indeed. You will need to call params.update() so that the Mesh::parameters_changed() method gets called internally.

  2. I doubt it will be as simple as that. You will need to convert the PyTorch buffer into an Enoki array first, which will then update the PyTorch graph during backpropagation in Enoki (see the rough sketch after this list). Please take a look at the Enoki documentation on autodiff and PyTorch interoperability for that.

  3. Here it is: https://github.com/mitsuba-renderer/mitsuba2/pull/157/files#diff-047f8c37b37a6d045472e24b7f60e9c4 You can also check out this branch and compile the documentation yourself; it would be easier to read ;)

  4. You could use Mesh to create a mesh directly in Python and then write a PLY file to disk, for instance.
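
Regarding point 2, here is a rough sketch of the bridge, loosely following the custom torch.autograd.Function pattern from the Enoki documentation. render_with_vertices is a placeholder for whatever Enoki/Mitsuba computation consumes the vertex buffer, and the type names may differ on your branch:

    import torch
    import enoki as ek
    from enoki.cuda import Float32 as FloatC
    from enoki.cuda_autodiff import Float32 as FloatD

    class VertsToImage(torch.autograd.Function):
        # Bridges a PyTorch vertex tensor and an Enoki-side rendering computation.

        @staticmethod
        def forward(ctx, vertices):
            ctx.input_shape = vertices.shape
            ctx.verts = FloatD(vertices.flatten())       # torch -> enoki (autodiff type)
            ek.set_requires_gradient(ctx.verts)
            ctx.image = render_with_vertices(ctx.verts)  # placeholder: write buffers + render
            return ctx.image.torch()                     # enoki -> torch

        @staticmethod
        def backward(ctx, grad_output):
            ek.set_gradient(ctx.image, FloatC(grad_output.flatten()))
            FloatD.backward()                            # reverse-mode pass over the Enoki graph
            grad = ctx.verts.gradient().torch().reshape(ctx.input_shape)
            del ctx.verts, ctx.image
            return grad

In practice, the render_torch() helper mentioned later in this thread wraps this kind of bridge for scene parameters, so you may not need to write it yourself.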

Speierers avatar Aug 13 '20 06:08 Speierers

This really helps, @Speierers, thank you! I followed your advice and the tutorials in the docs (which I successfully built with your help) to attempt a) replacing a placeholder mesh while b) keeping the PyTorch -> Enoki (-> PyTorch) pipeline fully differentiable.

I faced some difficulties doing that and was hoping you could give some (hopefully final) guidance on this matter.

The following code snippet worked in terms of a) converting the PyTorch tensors into Enoki arrays and b) replacing the vertex_positions_buf and faces_buf. However, scattering the new vertex buffer (of size N) onto the old one (of size 18) only copies the first 18 entries of the new buffer.

    params = traverse(self.scene)
    # convert torch tensors into enoki arrays
    verts = ravel(FloatD(new_verts.flatten()))      # verts is of type enoki.cuda_autodiff.Vector3f
    faces = ravel_int(UInt32(new_faces.flatten()))  # faces is of type enoki.cuda_autodiff.Vector3u
    # replace the old buffers and update the scene
    unravel(verts, params["OBJMesh.vertex_positions_buf"])
    unravel(faces, params["OBJMesh.faces_buf"])
    params.set_dirty("OBJMesh.vertex_positions_buf")
    params.set_dirty("OBJMesh.faces_buf")
    params.update()

Writing to the param entries directly (see below) did not work, as params.update() just aborts the program altogether without any error or warning. Without the params.set_dirty() calls, params.update() works but has no effect, i.e. the placeholder mesh remains in the scene.

    params = traverse(self.scene)
    verts = FloatD(new_verts.flatten())  # verts is of type enoki.cuda_autodiff.Float32
    faces = UInt32(new_faces.flatten())  # faces is of type enoki.cuda_autodiff.UInt32
    params["OBJMesh.vertex_positions_buf"][:] = verts
    params["OBJMesh.faces_buf"][:] = faces
    params.set_dirty("OBJMesh.vertex_positions_buf")
    params.set_dirty("OBJMesh.faces_buf")
    params.update()

hakanyi avatar Aug 25 '20 10:08 hakanyi

Hi @hakanyi ,

If I understand correctly you are trying to "resize" the vertex/face buffers of a mesh and assign new values. This is something we have never tried.

I guess you will need to resize the buffer before assigning the new values. You could try ek.set_slices(params["OBJMesh.vertex_positions_buf"], new_size), although I am not sure this will work.

Note that you might also need to change the "vertex_count" and "face_count" parameters of the mesh.

Speierers avatar Aug 26 '20 06:08 Speierers

Thanks @Speierers! Correct, that's what I'm trying to do. It seems like allocated variables cannot be resized though: ek.set_slices(params["OBJMesh.vertex_positions_buf"], 20) results in

    terminate called after throwing an instance of 'std::runtime_error'
      what():  cuda_var_set_size(): attempted to resize variable 674 which was already allocated (current size = 18, requested size = 5)

Do you know how to reallocate memory to params["OBJMesh.vertex_positions_buf"]? Any other ideas are, of course, much appreciated.

Also, it'd be interesting to know why params["OBJMesh.vertex_positions_buf"][:] = verts with subsequent params.update() doesn't work.

Thanks a lot!

hakanyi avatar Aug 26 '20 13:08 hakanyi

I am not really sure why params["OBJMesh.vertex_positions_buf"][:] = verts isn't working. You are probably writing out of range, causing the program to crash.

Resizing meshes might actually be a bit more complex than that. One thing I would try is to add a method to Mesh, something like Mesh::resize_foo(size_t size) and bind it. It might be that the way we access the buffer via params[...] in ek.set_slices(params["OBJMesh.vertex_positions_buf"], 20) is confusing enoki.

Otherwise you could try to allocate a very large mesh (assuming you know some upper bound on the number of vertices) and make sure Mitsuba/Optix only uses a portion of the buffers.
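
To illustrate the padded-buffer idea with a small, self-contained sketch (pure NumPy bookkeeping; MAX_VERTS / MAX_FACES are assumed upper bounds, and unused face slots are filled with degenerate zero-area triangles so they should not contribute to the image):

    import numpy as np

    MAX_VERTS, MAX_FACES = 10000, 20000  # assumed upper bounds for the generated meshes

    def pad_mesh(vertices, faces):
        # Pad a (V, 3) vertex array and an (F, 3) face array to fixed sizes so the
        # Mitsuba buffers never need to be resized. Unused face rows reference
        # vertex 0 three times, i.e. degenerate triangles.
        v = np.zeros((MAX_VERTS, 3), dtype=np.float32)
        v[:len(vertices)] = vertices
        f = np.zeros((MAX_FACES, 3), dtype=np.uint32)
        f[:len(faces)] = faces
        return v.ravel(), f.ravel()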

Again, this is a bit hacky. We should eventually support this operation, although we are already busy with a massive refactoring of the codebase, so I won't have the time to look into this soon unfortunately.

Speierers avatar Aug 26 '20 13:08 Speierers

Hi @Speierers, I am trying to do something similar (i.e. have a PyTorch model predict the vertex positions of a mesh). Do I need the pathreparam integrator for this? If I just assign the output of a PyTorch model to the vertex buffer of the mesh (after converting the PyTorch tensor to an Enoki array, of course), then render the scene with other integrators (e.g. path/direct) and finally use the render in a PyTorch loss function which is backpropagated, will everything still be differentiable? What is the purpose of the pathreparam integrator, actually?

I would prefer to avoid using the pathreparam-optix7 branch if possible, as that branch does not have the projector emitter that I need, and I failed to merge the projector into that branch. Apologies for being so naive, but I really need to understand this stuff and any help will be hugely appreciated.

mehrab2603 avatar Mar 16 '21 08:03 mehrab2603

Optimizing the vertex positions of a mesh using differentiable rendering requires proper handling of the visibility discontinuities in the differentiated integrand. Currently the only available solution in Mitsuba 2 is to use the pathreparam integrator.
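
For reference, selecting that integrator in the scene description looks roughly like this. This is a hedged sketch: the plugin name matches the pathreparam branch, but the version string and additional reparameterization parameters may need adjusting:

    from mitsuba.core.xml import load_string

    # assumes mitsuba.set_variant("gpu_autodiff_rgb") was called on the pathreparam branch
    integrator = load_string("""
        <integrator type="pathreparam" version="2.0.0">
            <integer name="max_depth" value="2"/>
        </integrator>
    """)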

Besides that, the pipeline you describe above should work. Please carefully follow the different steps from the PyTorch <-> Mitsuba tutorial for this 😉

Speierers avatar Mar 16 '21 13:03 Speierers

Hi @Speierers, I am loading a mesh from an .obj file into the scene, but it seems like the ordering of the nodes in the vertex buffer is arbitrary. So the output from my PyTorch model, which has nodes sorted nicely according to their x and y coordinates (which don't change; only z does), does not correspond to the points in the vertex buffer, causing garbage output even when I provide ground-truth node locations. Is there any pattern to the order of the nodes in the vertex buffer? I tried sorting the points in the .obj file itself before loading, but the vertex buffer is still unsorted.

mehrab2603 avatar Mar 22 '21 09:03 mehrab2603

Internally, the obj.cpp implementation relies on a hash table to build a vertex map, so the order of the vertices might differ from the order in the input file. That said, I am not sure I understand how this could produce "garbage output", as the face indices are remapped accordingly. Also, how is this related to PyTorch? Do you write your "PyTorch mesh" into an .obj file before loading it into Mitsuba for rendering?

Speierers avatar Mar 22 '21 11:03 Speierers

My PyTorch mesh is an HxWx3 tensor that corresponds to a 2D plane of height H and width W. The .obj file I load into the scene is also a 2D plane. I flatten this PyTorch tensor and write it directly to the vertex buffer. The order of the nodes in this flattened tensor does not match that of the vertex buffer.

mehrab2603 avatar Mar 22 '21 11:03 mehrab2603

Could you write to the face index buffer as well then? I suppose you are still using the one you get from the OBJ constructor?

Speierers avatar Mar 22 '21 13:03 Speierers

Ah yes, writing the correct faces to the face buffer (and also populating the texture coordinate buffer) produces a reasonable result. Thanks for the suggestion. Let's just hope all this is differentiable and converges. I'll report back later if anything goes wrong.
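
For anyone hitting the same issue, here is a hedged sketch of building the face buffer for an H x W vertex grid by hand (row-major flattening assumed, i.e. the vertex at row r and column c sits at index r * W + c):

    import numpy as np

    def grid_faces(H, W):
        # Two triangles per grid cell; vertex indices follow a row-major HxW layout.
        idx = np.arange(H * W, dtype=np.uint32).reshape(H, W)
        a = idx[:-1, :-1]  # top-left corner of each cell
        b = idx[:-1, 1:]   # top-right
        c = idx[1:, :-1]   # bottom-left
        d = idx[1:, 1:]    # bottom-right
        tris = np.concatenate([np.stack([a, b, c], axis=-1),
                               np.stack([b, d, c], axis=-1)], axis=0)
        return tris.reshape(-1, 3)  # shape (2 * (H-1) * (W-1), 3)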

mehrab2603 avatar Mar 23 '21 20:03 mehrab2603