[WIP] Added a render sampler for external lighting
PR Summary
So, it turns out that the Cython render code has a sampler option for volume rendering a field under external lighting. I copied the VolumeRenderSampler Python method and made a sampler for the lighting. I've marked this as WIP because I am unsure how you all would want this implemented in the code base, but with this pull request you can turn on the lighting render through the following code:
```python
import yt

# ds is assumed to be an already-loaded dataset; 'qc' is the field to render
sc = yt.create_scene(ds, 'qc')
source = sc[0]
...
cam = sc.camera
# select the new lighting sampler and rebuild it for this camera
source.sampler_type = 'light-render'
source.set_sampler(cam)
```
Whether or not this is the correct way to expose this option to the user is something I would like to leave up to the maintainers to discuss. Additionally, there is currently no user-exposed control over which direction the lighting comes from. This could likely be an attribute added to the camera class and passed through to the sampler (see the rough sketch below).
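Purely as a hypothetical sketch of that idea, continuing from the snippet above: the `light_direction` attribute name and the way the sampler would consume it are assumptions, not something this PR implements.

```python
import numpy as np

# Hypothetical only: expose the light direction on the camera so the sampler
# can read it when it is (re)built. Neither this attribute nor the plumbing
# exists in the current PR.
cam = sc.camera
cam.light_direction = np.array([1.0, 0.0, 0.0])  # assumed attribute name

source.sampler_type = 'light-render'
source.set_sampler(cam)  # the sampler would pick up cam.light_direction here
```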
Below are two examples of the rendering, one with the lighting sampler and one with the default volume sampler. I have yet to fully dig into what's going on and whether the fidelity of the result can be improved.
PR Checklist
- [ ] Code passes flake8 checker
- [ ] New features are documented, with docstrings and narrative docs
- [ ] Adds a test for any bugs fixed. Adds tests for new features.
Wow!
Ping @samskillman
This seems like a reasonable way to select this rendering method.
Maybe call it light_source_volume_render?
This'll need docs and some tests before it can be merged. Let me know if you need a hand getting the tests passing.
@ngoldbaum I'll sheepishly admit I don't have much experience when it comes to tests, but I'd be more than happy to do the work if you can point me in the right direction regarding what needs to be done. I'll fill in what documentation I can within the new methods I created.
For adding new tests, take a look at the existing volume rendering tests in yt/visualization/volume_rendering/tests/. There are also instructions on how to write tests and run the test suite in the documentation: http://yt-project.org/doc/developing/testing.html.
Basically you'll be writing a Python function that makes a volume rendering using your new sampler and checks that whatever you're doing doesn't crash the code and generates a correct answer. For things like volume rendering, where the answer is an image, it's sometimes not straightforward to write a test that checks whether the answer is correct without directly comparing images, but if you can think of a way to do that, that would be best. It's also possible to save reference images and compare against those using the answer testing infrastructure; some of the volume rendering tests do that. It's also OK (but not great) to just write smoke tests that make sure the code runs without crashing and do no further verification that the answer is correct or unchanged.
If possible, try to write your test using fake in-memory data (e.g. maybe take a look at yt.testing.fake_vr_orientation_ds as well as the tests that use it).
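As a purely illustrative sketch of the smoke-test style described above (the test name, the use of fake_random_ds, and the field choice are assumptions; the sampler is selected the same way as in the PR description):

```python
import numpy as np

import yt
from yt.testing import fake_random_ds


def test_light_render_sampler_smoke():
    # Build a scene from a small fake in-memory dataset (illustrative choice).
    ds = fake_random_ds(32)
    sc = yt.create_scene(ds, ("gas", "density"))
    source = sc[0]
    cam = sc.camera

    # Switch to the lighting sampler proposed in this PR and rebuild it.
    source.sampler_type = "light-render"
    source.set_sampler(cam)

    # Smoke check only: rendering should complete and produce a finite image.
    im = sc.render()
    assert np.all(np.isfinite(im))
```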
This is awesome. Glad that it was re-discovered and amazed it works :).
Hi @keltonhalbert, I see that this has not been updated in a while! Would you like some help getting this finished and merged?
pre-commit.ci autofix
I did some house cleaning (merge conflict resolution + lint), and CI seems to be green again, but we would still require new tests (and docs) in order to merge this.
@keltonhalbert, I am sorry this has lingered for so long. Do you ever wish to come back to it, or should we look for someone else to take over?
Hi @neutrinoceros - sorry for the slow response! Unfortunately, I'm currently in the process of starting up new employment while also finishing up my thesis. This is not likely something I have the time to add to my plate right now, so please feel free to bring on somebody else to finish this up!
Thank you for taking the time to respond; there is no urgency on our end. Good luck with your defense!