
Wisp supports joint rendering of various pipelines (NeRF, SDFs, meshes)

Open 3a1b2c3 opened this issue 2 years ago • 8 comments

Very curious if there are any examples of combining pipelines. Do I just add multiple pipelines to scene_graph.neural_pipelines?

3a1b2c3 avatar Sep 01 '22 10:09 3a1b2c3

Hi @3a1b2c3 , correct, adding objects to WispState's scene_graph.neural_pipelines should do the trick. We'll add an example to showcase that soon.

Be aware that for now, the neural fields we optimize are assumed to exist in the normalized range of -1 to 1 (support for per-instance model matrices is on our short-term roadmap).
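For readers wondering what that -1 to 1 normalization looks like in practice, here's a minimal sketch in plain PyTorch (not a wisp API; points is a hypothetical (N, 3) tensor of object coordinates):

import torch

def normalize_to_unit_cube(points: torch.Tensor) -> torch.Tensor:
    """Rescale an (N, 3) point set to fit inside the [-1, 1] cube."""
    lo, hi = points.min(dim=0).values, points.max(dim=0).values
    center = (lo + hi) / 2
    half_extent = (hi - lo).max() / 2  # uniform scale preserves aspect ratio
    return (points - center) / half_extent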

orperel avatar Sep 01 '22 20:09 orperel

Great, what template would I use for adding a mesh for rendering? Does that mean treating the mesh as an SDF?

3a1b2c3 avatar Sep 01 '22 20:09 3a1b2c3

We don't have renderable meshes yet ;)

  1. This is not mesh rendering per se, but a quick workaround - we have a PR under review for rendering kaolin's Structured Point Clouds as a neural field, which should be integrated very soon. That would entail converting your mesh to an SPC (a rough sketch follows below).

  2. The correct way to go about it would be to implement a RasterizedRenderer for rendering meshes -> RenderBuffer. For example, kaolin's DIB-R rasterizer could be used as an actual implementation for such a RasterizedRenderer.
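A rough sketch of that mesh-to-SPC conversion, assuming a recent kaolin release (the ops below - sample_points, quantize_points, unbatched_points_to_octree - are kaolin API calls whose exact signatures may vary between versions; mesh_vertices / mesh_faces are hypothetical tensors, with vertices already normalized to [-1, 1]):

import torch
import kaolin.ops.mesh
import kaolin.ops.spc

level = 7  # octree depth: 2^7 voxels per axis
# Densely sample the mesh surface (kaolin's batched API expects a batch dim)
points, _ = kaolin.ops.mesh.sample_points(
    mesh_vertices.unsqueeze(0), mesh_faces, num_samples=2 ** 20)
# Quantize [-1, 1] coordinates to integer voxel coordinates at this level
quantized = kaolin.ops.spc.quantize_points(points.squeeze(0), level)
# Build the SPC octree from the occupied voxels
octree = kaolin.ops.spc.unbatched_points_to_octree(quantized, level)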

orperel avatar Sep 01 '22 21:09 orperel

Ugly code, but I got it into this state: [screenshot]. The OptimizerApp won't allow multiple inputs for the training function though, so I'm curious to see an example. I am trying to move stuff around a bit to see what I can get.

So the DIB-R rasterizer does the camera-space magic. Can I use a RasterizedRenderer and a NeRF renderer at the same time (with these masks)? Like render a picture with a NeRF and a mesh?

3a1b2c3 avatar Sep 02 '22 11:09 3a1b2c3

Would love to see a roadmap for this project at some point?

3a1b2c3 avatar Sep 03 '22 04:09 3a1b2c3

Small win: I'm getting the NeRF and SDF to train. Still fighting transforms a bit. Would you normalize the SDF in the first place if that's not the final result I want? Can SDFs do textures?

[screenshot]

3a1b2c3 avatar Sep 10 '22 11:09 3a1b2c3

Hi @3a1b2c3 , great to see you've got some progress there!

For your questions -

  1. "OptimizerApp wont allow multiple inputs for training function" - did you end up creating 2 separate trainers for nerf / sdf? If so you could try:
def joint_train_step():
   nerf_trainer.iterate()
   sdf_trainer.iterate()

and pass trainer_step_func=joint_train_step to the OptimizerApp.

  1. "Can I use RasterizedRenderer and Nerf renderer at the same time?" - yup, that's the idea behind the renderers. They each represent an object in your scene graph so you can have multiple - a mesh and some nerf at the same time (even if one is ray traced and the other is rasterized). I've just added some explanations and diagrams about the renderers, you can check them out here: https://github.com/NVIDIAGameWorks/kaolin-wisp/tree/main/wisp/renderer

  3. For a roadmap of missing capabilities: right now we're still setting the course - but if you're worried about implementing something from our future plans, please don't be :) We're always happy to accept external contributions! (e.g. if you get support for meshes working).

  4. Re: normalization - yes, the scale you set during optimization is assumed to be -1 to 1 (that's just convenient). What you're missing is an object transform that takes the rendered object and translates / rotates / scales it. I actually implemented it, but I still need to iron out some bugs before I can push that one. See attachment:

https://user-images.githubusercontent.com/9556101/190154914-0ce0024c-f6e9-4213-ba58-0d0b7c2018d2.mp4
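For completeness, a hedged sketch of wiring the joint step from item 1 into the app. The trainer_step_func parameter name comes from the comment above; the rest of the OptimizerApp constructor and the run() call are assumptions that may differ across wisp versions:

# Hypothetical wiring; joint_train_step is defined in item 1 above
app = OptimizerApp(wisp_state=wisp_state, trainer_step_func=joint_train_step)
app.run()  # enters the render loop, invoking joint_train_step between frames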

orperel avatar Sep 14 '22 12:09 orperel

Having the transform exposed in the GUI is way nicer than in the config like I did it :). I grabbed some code from PyTorch3D for that. Transforming a NeRF is slightly harder... Do you apply the transform to the grid, or to the resulting data during training? Normalization again: will training fail when data is bigger than the -1 to 1 space? Love your renderer drawings.

What is your take on the best way to load several configs from the command line? I just hard-coded the SDF one into my main script for now.

I have to check my code for the training details; basically I allow an array of hooks to be called by the OptimizationApp, but your suggestion requires fewer code changes. There is some problem with a StopIteration exception now, I think. Thanks for your help.

3a1b2c3 avatar Sep 14 '22 13:09 3a1b2c3

Just trying the new version after being away in Europe... lovely. Joint rendering would still be mostly parallel, with no expectation of different pipelines occluding each other correctly, etc.? Would it make sense to add it to the config system? [screenshot]

3a1b2c3 avatar Oct 08 '22 12:10 3a1b2c3

So I could do either a frankenrender approach (2 renders needed) or a frankennerf approach (merge the trained NeRF and mesh, use 1 renderer). Might be worth trying both?

3a1b2c3 avatar Oct 10 '22 20:10 3a1b2c3

Nerfstudio's renderer seems to be dealing with that scene slightly better (they also have controllable clipping planes). [screenshot]

3a1b2c3 avatar Oct 28 '22 07:10 3a1b2c3

Hi @3a1b2c3 ! Just to let you know that ObjectTransforms are now finally pushed: https://github.com/NVIDIAGameWorks/kaolin-wisp/pull/65/files

How to use them: you should see them right there in the GUI. You could also define a transform programmatically like so:

from wisp.renderer.core.api import add_to_scene_graph, remove_from_scene_graph
from wisp.core import ObjectTransform

t = ObjectTransform()
# Attach the transform when registering the object on the scene graph
add_to_scene_graph(wisp_state, name=your_object_name, obj=your_object_pipeline, transform=t)
# The transform can be manipulated afterwards:
t.translate(...)
t.rotate(...)

Sorry for the delay, to address some of your questions:

  • Transforms are applied in 2 different ways: ray traced objects (including NeRFs) actually apply the inverse transform on the rays (see the first sketch after this list). For rasterized objects, the model matrix from the transform object is applied in conjunction with the camera view matrix (for example, for data layers, that happens when the shaders are invoked, i.e. in the PrimitivesPainter).

  • Transforms are applied separately from training. Training should always take place in normalized object coordinates (-1 to 1, for example). Unless you have a good reason, my suggestion would be to stick with those bounds and transform your object later for visualization purposes.

  • Loading several configs: we've revamped the mains / config parser to make them easier to customize with your logic. This should be pushed soon; I plan to follow up with an example showing how to load objects via a main and view them side by side.

  • Occlusions of different neural fields / pipelines: we handle this right now by predicting a depth channel, so assuming your SDF and NeRF both produce RenderBuffers with depth channels (they should by default, according to the behavior of our default tracers), you should be fine - see the depth compositing sketch after this list. Some quality issues like z-fighting or properly blending transparent areas from the 2 fields could be tricky to get right. The depth precision you get in wisp could also account for differences in scene quality compared to other software (tuning hyperparameters and using regularizers could mitigate that).

  • Merging together various object types: I'd argue this is still an open research problem. The solution we have for multiple pipelines is sort of a "frankenrender"; it works, but of course it's not optimal.
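A minimal sketch of the ray-traced path from the first bullet above: rather than transforming the object, the rays are moved into the object's local frame with the inverse model matrix. This is plain PyTorch, not wisp's API; model_matrix stands in for the 4x4 matrix an ObjectTransform would produce:

import torch

def rays_to_object_space(ray_orig, ray_dir, model_matrix):
    """Map (N, 3) ray origins and directions into an object's local frame."""
    inv = torch.inverse(model_matrix)  # inverse 4x4 model matrix
    # Origins are points: apply rotation/scale, then translation
    orig_local = ray_orig @ inv[:3, :3].T + inv[:3, 3]
    # Directions are vectors: only the linear part applies
    dir_local = ray_dir @ inv[:3, :3].T
    return orig_local, dir_local

And a sketch of the depth-based occlusion handling from the occlusions bullet, assuming two RenderBuffer-like outputs with rgb and depth channels (the field names are illustrative, not wisp's exact RenderBuffer API):

def composite_by_depth(rgb_a, depth_a, rgb_b, depth_b):
    """Per-pixel pick of the nearer surface; rgb is (H, W, 3), depth is (H, W)."""
    a_nearer = depth_a < depth_b
    rgb = torch.where(a_nearer.unsqueeze(-1), rgb_a, rgb_b)
    depth = torch.minimum(depth_a, depth_b)
    return rgb, depth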

orperel avatar Oct 31 '22 13:10 orperel

Fantastic update, keen to try transforms. I am interested in the problem because it's open :). Interested in opinions and approaches of all kinds.

3a1b2c3 avatar Nov 01 '22 08:11 3a1b2c3

[screenshot: Screenshot_20221018-222519_Docs.jpg] The depth channel in my test looks systematically incorrect in ways other NeRF renderers don't seem to show (see above). Would you consider this an expected result?

This is the interactive renderer and a minimally modified dataset from the original paper https://github.com/bmild/nerf

and a new paper on the idea using Instant NGP, https://dl.acm.org/doi/10.1145/3564533.3564565, which uses a frankenrender with a "depth harmonizer"

[screenshot]

3a1b2c3 avatar Nov 02 '22 01:11 3a1b2c3

Instant NGP depth channel: [screenshot]

3a1b2c3 avatar Nov 26 '22 02:11 3a1b2c3

I think this is good to close, thanks everybody for the help.

3a1b2c3 avatar Dec 15 '22 05:12 3a1b2c3

@3a1b2c3 (or @orperel ) would it be easy to elaborate a bit on how it is possible to have multiple pipelines rendered at the same time? For example, I would be interested in having multiple SDFs rendered/trained at the same time; is that possible?

Thanks.

ttsesm avatar Jul 04 '23 15:07 ttsesm

Oh that was a while ago. Maybe there are better ways now

3a1b2c3 avatar Jul 04 '23 20:07 3a1b2c3

> Oh that was a while ago. Maybe there are better ways now

Maybe @orperel can give some feedback on this then.

ttsesm avatar Jul 05 '23 08:07 ttsesm

@ttsesm You can add multiple objects to the scene graph like this:

from wisp.framework import WispState
from wisp.models import Pipeline  # import path assumed per recent kaolin-wisp
from wisp.renderer.core.api import add_to_scene_graph

wisp_state = WispState() # Your global shared info here.. Created once per app
nerf_pipeline1 = Pipeline(nef=nef1, tracer=tracer1) # See main_nerf.py for an example of creating a Pipeline..
nerf_pipeline2 = Pipeline(nef=nef2, tracer=tracer2) # nef1/nef2, tracer1/tracer2 are placeholders

# Optional NeRF args are args that NeuralRadianceFieldPackedRenderer.__init__ takes as input:
# https://github.com/NVIDIAGameWorks/kaolin-wisp/blob/main/wisp/renderer/core/renderers/radiance_pipeline_renderer.py#L26
# batch_size is an optional setup arg here which hints to the visualizer how many rays can be processed at once,
# i.e. this is the pipeline's batch size used at inference time
nerf_specific_args = dict(batch_size=2**14)

# Add object to scene graph: if interactive mode is on, this will make sure the visualizer can display it.
add_to_scene_graph(state=wisp_state, name="My NeRF", obj=nerf_pipeline1, **nerf_specific_args)
add_to_scene_graph(state=wisp_state, name="Another NeRF", obj=nerf_pipeline2, **nerf_specific_args)

Each object has its own ObjectTransform, so you can control its orientation, dimensions and location around the scene; see the sketch below.
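A hedged sketch of attaching an explicit transform, reusing the transform= argument and the ObjectTransform API shown earlier in this thread; the translation argument's exact form is an assumption:

import torch
from wisp.core import ObjectTransform

# Variant of the second add_to_scene_graph call above, with an explicit transform:
t2 = ObjectTransform()
add_to_scene_graph(state=wisp_state, name="Another NeRF", obj=nerf_pipeline2,
                   transform=t2, **nerf_specific_args)
t2.translate(torch.tensor([2.0, 0.0, 0.0]))  # assumed arg form: xyz offset tensor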

Some more explanation about the scene graph and NeuralRadianceFieldPackedRenderer is available in the docs: https://kaolin-wisp.readthedocs.io/en/latest/pages/renderer.html#the-scenegraph

Keep in mind that the app runs in an infinite loop; add_to_scene_graph is an async request. The object actually gets added to the scene graph when the next frame is drawn.

If you have further questions please open a separate issue, it makes it easier for me to track :)

orperel avatar Jul 05 '23 12:07 orperel

> If you have further questions please open a separate issue, it makes it easier for me to track :)

Ok, I will open a new issue

ttsesm avatar Jul 05 '23 13:07 ttsesm