
[❔ other question] Implementing Emissive Medium Rendering

Open Microno95 opened this issue 3 years ago • 18 comments

Summary

I would like to implement rendering of an emissive medium (that is also possibly absorbing/scattering), but I'm not sure what the best place to start is. From what I can tell, the volpath integrator assumes that mediums are attenuating/scattering and the base class medium is implemented with this assumption.

My understanding is that, at a minimum, the medium class would need to be updated to account for emission and a new volumetric integrator would need to be included that can correctly sample emissive volumes. I don't know if you have anyone working on this feature, but I would be really grateful if you could point me towards any resources or aspects of the code that might help with implementing this feature.

Microno95 avatar Nov 03 '20 13:11 Microno95

Hi,

Yes, you would have to add support for emission to the medium class and implement a new integrator. You could use the volpath integrator as a starting point. You would probably also want to implement some kind of importance sampling scheme for the emissive volumes. We don't have anyone working on this right now, as it is quite a bit of work with limited use for us at this point.

At some point it would also be interesting to see if the emissive volumes could be integrated into volpathmis integrator (which implements https://cs.dartmouth.edu/~wjarosz/publications/miller19null.html). Potentially the null scattering path integral formulation could be useful for MIS with emissive volumes, but I haven't really looked into it more.

Delio

dvicini avatar Nov 03 '20 14:11 dvicini

So I've been looking into this over the past few days and have essentially modified the volpathmis integrator by adding volume emitter sampling and an emission contribution for rays passing through a medium, although I'm not sure I've got MIS correctly implemented (I will need to go over the code). The current state allows me to create renderings like the following.

[image: torus - best]

I am 60% sure that energy conservation is respected and the emissive medium contribution is correctly sampled (aside from scaling of units). I'm looking into ways to test the correctness of the implementation, and I'm open to suggestions, as I've hit a bit of a dead end. My best idea thus far is to photograph a neon bulb using a DSLR and attempt to recreate it using the emission spectrum of Ne gas. But that might require handling line emission spectra, which I don't think is a feature of the current spectral rendering setup.

Microno95 avatar Nov 07 '20 21:11 Microno95

That looks cool!

I wouldn't try to match a photograph yet, that's usually quite tricky. You would need to correctly convert the DSLR pixel values to radiance, which is non-trivial itself.

I would just make sure that your results are consistent between not doing any emitter sampling, using emitter sampling, and using full MIS. These kinds of consistency checks usually reveal a lot of issues already. You could also try checking against a renderer that does support emissive media.

Delio

dvicini avatar Nov 08 '20 11:11 dvicini

I see now how photo matching might be tricky :sweat_smile:

Regarding with and without emitter sampling, are you referring to rendering without the emitter in the scene or to disabling the sampling in the code path? I have tried the former: I rendered two images, one with only volume emission and one with only the square emitter as a light source. Summing the two images (using Photoshop's Add blending mode) results in the same image as the render with both light sources enabled.

Microno95 avatar Nov 08 '20 12:11 Microno95

What I meant is the following: I assume that your code does explicit sampling of a position in the volume to sample the volumetric light source and do next event estimation, right? But you can of course also sample the light source by just hitting it randomly after BSDF/phase function sampling. These two sampling strategies should give you the same result in the end.

dvicini avatar Nov 09 '20 12:11 dvicini

I've now tried doing that and have set up a test scene with a pure volume emitter and a pure surface emitter enclosed inside a sphere with an attached irradiancemeter, in order to see if I've correctly implemented the emission aspect. Turns out, I have not. Thus I've moved to a far simpler volume path tracer that does neither NEE nor MIS and have tried my hand at simply getting that to work, which so far it does not.

In the current integrators, the absorption and scattering coefficients are merged into a single extinction and treated as such, so there are only two possible ray outcomes at a collision: scattering and null. Incorporating emission requires three possible outcomes: absorption/emission, scattering, and null. This is the point where I've hit a dead end. Either I am incorrectly understanding what the current integrator is doing, or I am implementing null scattering with emission incorrectly.
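As a sanity check on my understanding, here is a minimal sketch of the three-event free-flight loop (homogeneous medium, hypothetical names, not the actual Mitsuba API):

```python
import math
import random

def delta_track(mu_a, mu_s, mu_n, d_max, rng):
    """One free-flight sample with three collision types:
    absorption/emission (terminate and collect emission),
    scattering, and null (continue tracking unchanged)."""
    mu_bar = mu_a + mu_s + mu_n  # majorant / combined extinction
    t = 0.0
    while True:
        # exponential step with the majorant ("Beer's law" sampling)
        t -= math.log(1.0 - rng.random()) / mu_bar
        if t >= d_max:
            return ("surface", d_max)  # left the medium without a real event
        xi = rng.random()
        if xi < mu_a / mu_bar:
            return ("absorb_emit", t)  # emission would be collected here
        elif xi < (mu_a + mu_s) / mu_bar:
            return ("scatter", t)
        # else: null collision, keep tracking
```

With mu_n = 0, the fraction of samples that reach the surface should converge to exp(-(mu_a + mu_s) * d_max), which is an easy check that the event probabilities are consistent.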

Thus I have a few questions:

  1. The current integrators appear to be using some form of delta tracking where the transmittance (throughput) is tracked for every bounce/volume interaction until either an emitter is encountered or max_depth is reached. Is this correct?
  2. Are there any papers that would be useful for understanding how spectral sampling has been implemented? I am having issues where the volume rendering seems okay in RGB mode, but then the result is roughly 100 times more energetic in spectral mode.
  3. Are there any references used for implementing the MIS that is used in the current integrators?

I hope that's not too many questions.

Microno95 avatar Nov 13 '20 00:11 Microno95

I am not sure how a volumetric and a surface emitter relate. There is probably a way to check the correctness of an emissive volume by comparing to a surface emitter, but I would have to think about it. It's not clear to me what the easiest test case would be here.

I would suggest doing what you now described, comparing the different versions ("No NEE" / "Just NEE" / "MIS") and checking that they are consistent.

Are you looking at volpath or volpathmis? I would strongly suggest first making it work in volpath and then extending it to volpathmis. The latter implements https://cs.dartmouth.edu/~wjarosz/publications/miller19null.html, but this is quite complicated and requires tracking quite a few quantities along the path. The volpath integrator is much simpler and more "conventional". There should only be a difference when using a spectrally varying extinction. I would also suggest first implementing a homogeneous emissive volume correctly; then you don't have to worry about null interactions.

To answer your other questions:

  1. Yes, or Russian roulette terminates the path
  2. For spectral rendering, I would check the Mitsuba 2 paper (https://rgl.epfl.ch/publications/NimierDavidVicini2019Mitsuba2) and https://rgl.epfl.ch/publications/Jakob2019Spectral, which describes the spectral upsampling from RGB coefficients. I would try to make your method work in RGB first and then look at the spectral case.
  3. https://cs.dartmouth.edu/~wjarosz/publications/miller19null.html

I hope that helps

dvicini avatar Nov 13 '20 17:11 dvicini

Thank you so much! Those resources were really helpful and let me incorporate the emission term into volpath. I tested the no-emission case in both scalar_rgb and scalar_spectral modes, and it gives identical results to both volpath and volpathmis on the same scene. Furthermore, I rendered all the scenes in resources/data/tests/scenes/participating_media, and in comparison to the references in the repository all of them look near-identical. This suggests to me that the absorption, scattering, and null-scattering terms are mostly correct, in that they are being sampled with the correct weights.

Regarding a test case that can be automated: a sphere with homogeneous volumetric emission would emit a fixed amount of light per unit time. Could the irradiancemeter with a spherical surface be used to measure the radiant flux from the emitting volume, which could then be compared to an analytically derived result under the assumption that there is no scattering?
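As a sketch of what the analytic side of such a test could look like (assuming a sphere of radius R, homogeneous absorption mu_a, emitted radiance L_e, and no scattering; a ray exiting at angle theta to the normal traverses a chord of length 2*R*cos(theta)):

```python
import math

def radiant_exitance(L_e, mu_a, R, steps=2000):
    """Radiant exitance of a homogeneous, non-scattering emitting sphere.
    The outgoing radiance at exit angle theta is
    L_e * (1 - exp(-mu_a * 2 * R * cos(theta))); integrate it,
    cosine-weighted, over the outgoing hemisphere (midpoint rule
    in cos(theta))."""
    total = 0.0
    for i in range(steps):
        c = (i + 0.5) / steps  # cos(theta), midpoint rule
        L = L_e * (1.0 - math.exp(-2.0 * mu_a * R * c))
        total += L * c          # cosine-weighted
    return 2.0 * math.pi * total / steps
```

Multiplying by the surface area 4*pi*R^2 would give a total radiant flux to compare against the measurement. A quick sanity check: for mu_a*R -> infinity this should approach pi * L_e, the opaque Lambertian-emitter limit.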

Microno95 avatar Nov 14 '20 17:11 Microno95

You could try rendering a test case where you set the albedo to zero; that disables scattering, and the medium would only be absorbing. I am not sure it's that easy to then derive an analytic formula, but it might be possible. We opted to use mostly rendering tests, such as the ones you already ran. If your implementation is consistent between the different sampling techniques, that is also a strong indication that it is correct.

dvicini avatar Nov 16 '20 09:11 dvicini

I've run all the test scenes in the resources/data/tests/scenes/participating_media directory, and in all but one case the results look near-identical (aside from differences in noise).

The one case that fails is test_hetero_sigmat_albedo, when rendering in scalar_spectral mode with MIS. The volume with the emissive formulation is darker than the one without in regions like the one inside the cyan circle. This is not the case for the non-MIS version of the emissive volpath renderer.

[image: comparison_of_renders]

I'm not sure how to actually debug this, as it only happens in spectral mode and not in RGB mode. I have narrowed it down to either a weird interaction with the difference in how colour channels are sampled (sampling R, G and B vs. spectral MIS + hero wavelength sampling) or something related to the second issue below, concerning path termination.

In the conventional null-scattering formulation, paths have three possible events: absorption/emission, scattering, and null scattering, and upon encountering an absorption/emission event the path is terminated. In the default Mitsuba integrators, since the emission event is neglected, this path termination does not occur; instead, paths are either scattered or null-scattered depending on the combined extinction (or delta tracking continues without either event). This leads to less noise, since paths aren't abruptly terminated within the volume and are instead traced all the way to the light sources in the scene.

What I am wondering is if there is a way to continue paths upon encountering an absorption/emission event such that we can also accumulate the contribution of other emitters along that path (volume or otherwise).

I'm posting this as an update, but if you have any ideas I'd be happy to try them.

This is the RGB mode rendering for comparison: [image: comparison_of_renders]

Microno95 avatar Dec 02 '20 17:12 Microno95

And I have fixed that issue as well: when using hero wavelength sampling, in the cases where the albedo is 0 for the hero wavelength, the probability of a scattering event becomes 0 (as the events have now been split into three instead of two, with the probability of a scatter proportional to the transmission coefficient of that wavelength instead of the scattering coefficient). To verify that the renderer works as intended, I compared not only against Tungsten, but also against a raymarching integrator (with and without spectral MIS) that I implemented as well. The latter incorporates the analytic solution of the non-scattering emission + absorption VRE in the homogeneous case, which I have used as a baseline.
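For reference, the closed form I am using as that baseline is the standard emission-absorption solution of the VRE (my notation, assuming the "natural" emission convention where the source term is sigma_a * L_e):

```latex
% Non-scattering VRE:  \frac{dL}{ds} = -\sigma_a\, L(s) + \sigma_a\, L_e
% Homogeneous solution over distance t with incoming radiance L_0:
L(t) = L_0\, e^{-\sigma_a t} + L_e \left(1 - e^{-\sigma_a t}\right)
```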

To automate some of these consistency tests, I was curious how I should go about incorporating these new integrators into the test suite, since they are currently not tested by CI. Is there a particular coding convention or structure that I should follow?

Microno95 avatar Feb 12 '21 15:02 Microno95

Hi, sorry for the delay. That is some good progress!

We currently only test the volume integrator(s) by rendering images and comparing them to the references using the src/librender/tests/test_renders.py script. This script renders the test scenes in participating_media and compares to the existing reference images in the repository.

How different is your emissive implementation now from my volpathmis integrator? Could we potentially port the required changes into that integrator or does the increase in complexity warrant having a special integrator for the less common use case of emissive media?

dvicini avatar Feb 16 '21 08:02 dvicini

Hi! I'll look into creating some reference emissive renders and adding emissive media testing.

As for porting the changes, most of the original volpathmis integrator is still intact, so incorporating the changes should not be too difficult, although it's not as easy as just merging the branches (partly because my branch has other modifications to accommodate raymarching and some experimentation).

The changes are:

  • volpathmis.cpp:
    • Medium interaction is sampled after computing the next ray intersection (enables uniform sampling of optically thin, emissive media)
    • Additional branch + probability calculations for sampling the emission term
  • medium.h/medium.cpp:
    • Added has_emission and is_optically_thin flags
    • Added is_natural_emission flag to differentiate between the case where radiance is μ_a×L_e (mediums where emission and absorption spectra are the same) and the case where the radiance is simply L_e (plasma, optically thin mediums where μ_a is almost zero but you still have very strong emission like candle flames)
    • get_radiance method to get the radiance for a given MediumInteraction.
  • homogeneous.cpp/heterogeneous.cpp:
    • m_radiance attribute to store the radiance
    • updates to constructor and methods to deal with the above additions to Medium
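To illustrate the is_natural_emission distinction, a tiny sketch (hypothetical signature, not the actual get_radiance implementation):

```python
def get_radiance(mu_a, L_e, natural_emission):
    """Hypothetical sketch of the proposed Medium::get_radiance().
    With 'natural' emission the radiated quantity is mu_a * L_e
    (emission tied to the absorption spectrum); otherwise the medium
    radiates L_e directly (e.g. an optically thin plasma where
    mu_a is almost zero but emission is strong)."""
    return mu_a * L_e if natural_emission else L_e
```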

To ease some of that, I made changes to constant3d.cpp so that max is implemented instead of throwing a NotImplementedError, and changes to grid3d.cpp so that the max computation is detached from the enoki gradient tree (apparently hmax does not have gradients in enoki, and this seemed the simplest way to resolve some of the gradient descent issues I had).

I can create a branch with the modifications that you'd like in volpathmis, test it and create a pull request for porting it over.

Microno95 avatar Feb 16 '21 12:02 Microno95

Hi,

Okay, that sounds good. Just some comments:

  • The reason to compute the ray intersection after sampling the medium interaction was that this allows significantly speeding up homogeneous media, e.g. for subsurface scattering, by tracing "short rays" with the maximum intersection distance set from the medium sampling distance. It would be nice to keep this. I haven't benchmarked it in a while, but in some experiments I did back with Mitsuba 1 this gave up to a 3x performance boost on SSS.

  • Are you sure we need to distinguish between is_natural_emission true/false? Shouldn't there be a formulation which always holds? Or is this just about making the parameters more user friendly? The fewer extra flags we can introduce, the better.

  • Why do we need an implementation for max of a constant grid? Wouldn't it be better to switch to a homogeneous medium in that case? The maximum should only be used to get the maximum density for delta tracking. It doesn't hurt to implement the missing method, I am just wondering if it's really needed. Detaching the maximum from the gradients is good - it doesn't really make sense to propagate gradients through that (for now at least).

Please create a branch and pull request such that I can have a look at the code. Note that it might take a bit for me to incorporate it: we are currently in the process of (once again) changing large parts of the code base, and the master branch is really outdated at this point. The latest version of the code is now on the next branch.

dvicini avatar Feb 18 '21 13:02 dvicini

Okay, I will start working on that, hopefully can have the PR up in ~1 week.

To answer your questions:

  • I see how that can speed up short rays. I will look at making it so that only optically thin media have a pre-intersection; that way, optically dense media like subsurface scattering should keep the performance boost (which makes sense, since they are the two extremes of ray length)
  • From a computational perspective, the distinction has the advantage that, for natural emission, uniform sampling can be turned off for optically thin media and the analogue probabilities for the null-scattering events can be used (otherwise a separate weight is required for the emission). I would have to do further tests to see how this impacts variance, but it should reduce it in the general case. From a user perspective, there's also the semantic distinction between emitting media where the emission is proportional to the absorption spectrum and media where it is not.
  • The max of a constant grid is useful when a heterogeneous medium has only a varying emission or albedo but a constant optical density, or when the optical density varies but the albedo is constant. There's also the computational upside that the maximum value can be used to determine whether the medium is scattering/absorbing/emitting by checking if the max is greater than zero. For example, I use this to turn off the absorption event in the null-scattering formulation, thereby avoiding early terminations when the emission is zero.

I can also look at building the PR based on the next branch if that's preferable.

Microno95 avatar Feb 18 '21 13:02 Microno95

  • So the pre-intersection is really necessary for optically thin media? I am not sure I fully understand what's going on: aren't we still just sampling the transmittance, with the emission sampling done in the emitter sampling / NEE routine? I am just concerned about adding even more complex conditional logic to the already hardly readable volpathmis integrator
  • From a user's point of view, isn't the distinction between the two cases simply handled by converting to the other representation if needed? I.e., we could specify that Mitsuba always uses one form, and users can convert their data into that form by multiplication/division. By the same reasoning, we only support volume parameters specified as albedo & sigma_t and do not support sigma_s & sigma_a. It's better if the renderer supports just one version and the user is responsible for providing the correct data.
  • If the density is not varying, it would be better to use the homogeneous medium. It should already support a spatially varying albedo. Do you think it would make sense to add spatially varying emission to the homogeneous medium? We could also add a function is_spatially_varying to the Volume class, analogous to the one for Textures.

Yes, it would be good to base the pull request on next. Not much should have changed with regards to volpathmis there anyway, so hopefully it doesn't imply much extra work.

dvicini avatar Feb 18 '21 14:02 dvicini

  • In the null-scattering events, the absorption/emission event terminates the ray and samples the emission at the sampled ray distance t. In optically dense media, the sampling of this distance is proportional to Beer's Law (using the majorant), but when the medium is optically thin (i.e. μ ≈ 0), the sampled distance will almost always be very large (i.e. exceeding si.t). If the medium is strongly emissive, then the very few samples that do fall within the volume will have very large contributions, leading to huge variance (regardless of whether the emission is proportional to absorption or not). In this regime, the sampling has to be proportional to the emission, which reduces to uniform sampling of distances; and to uniformly sample distances, I need to know the distance to the next surface, which makes the pre-intersection necessary. This image shows how Beer's Law sampling fails for strong emission at low densities: [image: comparison of sampling]. "Switched sampling" refers to the fact that both Beer's Law sampling and uniform sampling are used, with the medium majorant and whether the medium is emissive determining which one is selected. (The increase in brightness is due to there being less attenuation of the emission by the medium itself; otherwise the emission is identical between all images.)
  • Yes, it could definitely be done, but the flag is still useful for setting the correct probabilities for the null-scattering events. The analogue probabilities are conventionally P_a = μ_a/μ_t, P_s = μ_s/μ_t, P_n = μ_n/μ_t (where I've dropped the overbar on the majorant μ_t). When the emission is decoupled from the absorption, the probability of sampling an emission event needs to be increased to account for the fact that the likelihood of a ray terminating is determined not by a reduction in throughput but by the increased probability of having encountered a light source.
  • Yes, that actually sounds like a better idea, since emission is a property like albedo. The only other case where max is useful in constant3d is that it allows detecting whether the medium is emissive, which informs the type of ray-distance sampling to use (if the medium isn't emissive, then Beer's Law works perfectly fine for sampling distances).
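To illustrate the switched strategy from the first bullet, a rough sketch (hypothetical helper returning a distance and its pdf, not the actual integrator code):

```python
import math

def sample_distance(mu_bar, d_max, emissive_thin, u):
    """Pick a distance t in [0, d_max] and its pdf, given a uniform
    random number u. Optically thin, emissive media use uniform
    sampling (proportional to the emission along the ray); otherwise
    exponential ("Beer's law") sampling with the majorant mu_bar."""
    if emissive_thin:
        t = u * d_max
        pdf = 1.0 / d_max
    else:
        t = -math.log(1.0 - u) / mu_bar
        if t >= d_max:
            # passed through: discrete probability of reaching the surface
            return d_max, math.exp(-mu_bar * d_max)
        pdf = mu_bar * math.exp(-mu_bar * t)
    return t, pdf
```

Note that the uniform branch needs d_max, i.e. the distance to the next surface, which is exactly why the pre-intersection is required.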

There were also two things that I wanted to ask about specifically:

  1. Currently, the grid3d texture only supports 3-channel or 1-channel volume data; is there a way to load spectral data in the grid?
  2. There is also an issue with the magnitude of the radiance in rgb vs. spectral mode, where the latter is ~50 times more energetic/brighter. Am I correct in thinking that the conversion from RGB triples to spectra does not necessarily preserve energy when it comes to values pertaining to the emission of light?

Microno95 avatar Feb 18 '21 15:02 Microno95

Okay, thanks for the explanatory image. I can see how one could basically always break the exponential sampling strategy. I wonder how important that use case is, though? I've never rendered emissive volumes, so I don't know what common parameters would be. Do you expect that a flame should be modeled as a medium which does not absorb or scatter light but only emits? Generally speaking, it's almost always possible to break rendering algorithms in one way or another. The question for me is which cases we need to support for the implementation to be useful (I assume you have some specific use in mind?). From my point of view, I am fine with introducing some extra complexity if it makes the integrator practical, but I also don't want to make the integrator too complex for a very rare use case.

Also, are you doing MIS between the uniform and exponential sampling strategies? I am further wondering: are you also adding up emission when a null-scattering event is sampled? This would be similar in spirit to ratio tracking ("Rao-Blackwellization" would be the technical term). It does not solve the issue you are mentioning, but can potentially reduce variance in other cases. There is no need to make a discrete decision on whether to consider emission or not.
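Schematically, such a collision-based emission estimator might look like this (just a sketch: homogeneous medium, absorption + null only, no NEE, hypothetical names):

```python
import math
import random

def ratio_track_emission(mu_a, mu_n, L_e, d_max, rng):
    """Accumulate emission at every tentative collision, weighted by
    mu_a / mu_bar, instead of terminating the path on a discrete
    absorption event (ratio-tracking / Rao-Blackwellized spirit).
    Returns (accumulated emission, transmittance weight)."""
    mu_bar = mu_a + mu_n
    t, Tr, L = 0.0, 1.0, 0.0
    while True:
        t -= math.log(1.0 - rng.random()) / mu_bar
        if t >= d_max:
            return L, Tr                  # Tr weights the surface radiance
        L += Tr * (mu_a / mu_bar) * L_e   # emission contribution, path goes on
        Tr *= mu_n / mu_bar               # null-collision throughput update
```

Both outputs are unbiased: over many samples L should converge to L_e * (1 - exp(-mu_a * d_max)) and Tr to exp(-mu_a * d_max), matching the homogeneous emission-absorption solution.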

As for the parameters: I am not sure I understand what you mean. Once we take a decision on what the meaning of the parameters is, that's just what the integrator will assume, and users will have to adapt their parameters correspondingly. So there is no more reason to switch between different interpretations.

For the last two questions

  1. We never tried to load a spectral grid; we simply did not have any data. What you can do is render an RGB grid in spectral mode. I am curious about spectral volume rendering, as I've never really looked into it. The volume data format is identical to the one used by Mitsuba 1, so you can (for now) check its documentation for the details of the binary format
  2. Generally speaking, you should not see such a huge difference. So you are specifying emission as RGB and then get a much brighter image in spectral mode? Does the same also happen for the emission of light sources?

dvicini avatar Feb 18 '21 16:02 dvicini