THREE.js-PathTracing-Renderer

More samples per frame option

Open · tom-adsfund opened this issue 3 years ago · 63 comments

On high-end graphics cards, a bottleneck is the 60fps cap. As part of the "more abstractions", it would be good to have an option to adjust the number of samples calculated per frame. This could then be experimented with, maybe making it adaptive depending on attained frame rate.
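For illustration, the adaptive version could be as simple as this sketch (all names here are hypothetical, not from the repo):

```js
// purely illustrative sketch: nudge the samples-per-frame count up or down
// each frame to hold a ~60fps target
let samplesPerFrame = 4;

function adaptSampleCount(lastFrameTimeMs) {
	const targetMs = 1000 / 60;
	if (lastFrameTimeMs > targetMs * 1.2 && samplesPerFrame > 1)
		samplesPerFrame--;   // falling behind: trade quality for speed
	else if (lastFrameTimeMs < targetMs * 0.8 && samplesPerFrame < 20)
		samplesPerFrame++;   // headroom to spare: spend it on more samples
}
```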

tom-adsfund avatar Feb 16 '22 08:02 tom-adsfund

Hi @tom-adsfund This is actually something I think that could be added without too much trouble. From the beginning of this project, I've had limited system specs (laptop with integrated graphics, mobile devices, etc.). I'm actually ok with continuing to use these underpowered devices to develop on - it makes me think outside the box and come up with not-so-obvious solutions to rendering/real time problems so that most everyone, no matter what hardware they're on, can enjoy real time path tracing in the browser.

However, for users like yourself who have more modern GPUs, the 60fps cap of WebGL2 doesn't allow the full potential of path tracing on more powerful hardware. In addition to an SPF (samples per frame) option, I would also like to add max specular and max diffuse bounce options.

The ultimate goal here would be to have something like the Blender Cycles side panel, where users can use the sliders to adjust the rendering quality vs speed for their particular device.

The only problem I can foresee at the moment with adding this sort of option is that I don't want to add 'if' statements to my shaders in an attempt to make them more generalized and abstracted. GPUs don't like divergence, and I already have it in spades, ha (necessary for any path tracer) - I don't want to bog it down further, especially in the super-tight bounces for-loop in each shader.

So that means we would have to devise a pre-compilation system (basically #defines and #ifdefs) that would build the shader with the user's specifications up front. This is what three.js does with its normal WebGL renderer, but I've never really studied it closely enough to imitate it.
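For what it's worth, three.js's ShaderMaterial already exposes a defines map that does exactly this kind of compile-time injection. A minimal sketch (the uniform/source variable names are placeholders, not actual repo code):

```js
// three.js turns each entry in 'defines' into a '#define KEY value' line
// prepended to the shader, so the bounces loop needs no runtime branching
const pathTracingMaterial = new THREE.ShaderMaterial({
	uniforms: pathTracingUniforms,          // assumed to be set up elsewhere
	defines: {
		SAMPLES_PER_FRAME: 4,           // becomes '#define SAMPLES_PER_FRAME 4'
		MAX_SPECULAR_BOUNCES: 3,
		MAX_DIFFUSE_BOUNCES: 2
	},
	vertexShader: pathTracingVertexShaderSource,
	fragmentShader: pathTracingFragmentShaderSource
});

// changing a define later costs a one-time shader recompile:
pathTracingMaterial.defines.SAMPLES_PER_FRAME = 8;
pathTracingMaterial.needsUpdate = true;
```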

If we can bypass those concerns, I see no problem adding your suggestion, and even more fine tuning options.

erichlof avatar Feb 16 '22 15:02 erichlof

Given the limits of the shader language, I think the best option would be to have a shader generator in JavaScript, and that way you can compose a spec in some higher-level way and then produce the shader code from that spec. This allows a powerful decoupling where you can also test the specs for viability and give users feedback etc. Even more advanced would then be to generate specs depending on client machines.
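Something like this minimal sketch (purely illustrative names): validate the higher-level spec in JavaScript first, give the user feedback if it's not viable, and only then emit the GLSL header from it.

```js
// sketch of a spec-to-shader-header generator
function compileSpec(spec) {
	const errors = [];
	if (spec.samplesPerFrame < 1 || spec.samplesPerFrame > 32)
		errors.push('samplesPerFrame must be between 1 and 32');
	if (spec.maxBounces < 1)
		errors.push('maxBounces must be at least 1');
	if (errors.length > 0)
		return { ok: false, errors };   // feedback for the user / UI

	// spec is viable: produce the compile-time constants for the shader
	const glslHeader = [
		`#define SAMPLES_PER_FRAME ${spec.samplesPerFrame}`,
		`#define MAX_BOUNCES ${spec.maxBounces}`
	].join('\n') + '\n';
	return { ok: true, glslHeader };
}
```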

I find there's a benefit to working both with high powered and low powered machines, because the higher powered allows rapid testing and exploration. Also, today's "high end" hardware rapidly becomes more available, and there are at least as many benefits to catering to the high end market (more powerful apps).

tom-adsfund avatar Feb 16 '22 15:02 tom-adsfund

I've just been testing the latest version with some upweighting controls, and while everything is extremely good, this frame rate limit makes it essentially impossible to determine whether upweighting improves on the current situation. I'm left waiting as the frames cycle for the updates.

Hopefully this gives an impression of the speed (unfortunately Github only allows up to 10MB videos, so had to scale down):

https://user-images.githubusercontent.com/3634745/154503753-5d41afa1-5abf-4611-9d63-38a79ac44f0b.mp4

tom-adsfund avatar Feb 17 '22 14:02 tom-adsfund

@tom-adsfund

Yes, that looks pretty smooth - although it's hard to tell whether the up-weighting scheme is making a big impact or not.

I don't know if I mentioned it, or you may have seen my comments in other threads, but this demo is the only one in the entire repo not made by me. It was submitted years ago by n2k3. I don't believe he's working on it anymore, but I've tried my best to update it and maintain it, as there have been tons of changes to my project, dependencies, and path tracing algos since then.

Since it appears you've already been working with that demo, I'm hesitant to say it, but I would suggest testing with one of my own demos instead (I don't know how invested you are in that particular demo at this juncture) - that demo doesn't follow my usual pattern of file and dependency organization and init code.

I say this also because I don't know how many more revisions down the line I can keep going in and fixing errors that crop up with every change to my repo. For instance, with the recent start-at-black fix, my own demos automatically just worked repo-wide, but I had to go into his source code and fix the errors by hand; otherwise, this demo would have stopped working entirely years ago.

Lastly, I haven't had the time or motivation to go in and add by hand all the recent real-time denoising efforts that just work on all of my own demos. I think those might help with perceived smoothness and noise suppression.

If you're needing a glTF model demo, any of my BVH or HDRI demos have a hopefully clear and consistent loading, processing, and rendering pattern across the board.

erichlof avatar Feb 17 '22 15:02 erichlof

Yeah, I was highlighting how it's not really possible to know without the frame rate issue being "fixed". I don't know if it's obvious why that is, but having the faster sampling would highlight the improvement when moving, for example.

I only started with that demo because it was one of the few with a control for the pixel ratio(!!)

I'll happily move to any other one. But I do need a control to allow more sampling per frame.

tom-adsfund avatar Feb 17 '22 15:02 tom-adsfund

I'm currently working on looping multisamples per frame and when I get something working, I will make a special test demo for you on the repo (but it won't have a public-facing clickable link, like all the other demos). Multi-sampling is a little tricky, simply because my denoiser has been set up with 1 SPP per frame in mind. But I'm confident I can get a little test scene for you to experiment with.
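The core of it is just a loop around the existing per-sample function. A shader-side sketch, shown here as the GLSL string a generator might splice in (CalculateRadiance is assumed to be the per-sample path tracing entry point; the exact signature, taking a sample index, is hypothetical):

```js
const multiSampleLoopGLSL = /* glsl */ `
	vec3 accumulatedColor = vec3(0);
	for (int i = 0; i < SAMPLES_PER_FRAME; i++)
	{
		// each iteration must advance the random seed,
		// otherwise every sample traces the identical path
		accumulatedColor += CalculateRadiance(rayOrigin, rayDirection, i);
	}
	// average the samples (this frame's Monte Carlo estimate)
	vec3 currentPixelColor = accumulatedColor / float(SAMPLES_PER_FRAME);
`;
```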

erichlof avatar Feb 17 '22 15:02 erichlof

As promised, here is a new custom demo that lets you dynamically change the pixel resolution (pixelRatio in three.js), as well as dynamically change the number of samples per frame: MultiSamples Per Frame Demo

I chose the Geometry Showcase demo as a scene starting point, partly because it loads super-quickly for fast developer/tester iteration times, and partly because its collection of shapes (curves and straight edges), lights (multiple area lights), and materials (the most common materials encountered in the wild) makes it a good representative of a generalized scene and setup. Lastly, I chose this scene over a BVH one because the amount of code is significantly reduced. This way, you can quickly navigate to a part of the code that I added or that you are interested in, and you should be able to immediately see how I did it. The 3 files of this demo are MultiSamples_Per_Frame.html (just a shell), MultiSamples_Per_Frame.js (setup / GUI handling), and MultiSamples_Per_Frame_Fragment.glsl (the heart of the path tracing demo).

The Pixel Resolution slider can go anywhere from 0.3 (pretty chunky) to 1.0 (glorious full resolution). In the past I've used 0.5 as the default for my demos, but I recently found that 0.75 offers a little better quality (less noticeable noise patterns) and is still able to keep the frame rate up somewhat. 0.75 is now the page-load default across the repo. As everyone's GPUs and mobile devices get faster in the future, I would like 1.0 to be the ultimate goal and default.

As far as multi-samples per frame goes, I included a similar GUI slider to choose between 1 and 20 samples per pixel, per frame. 1 to 2 samples is fast but too noisy to be usable without my custom denoiser (which was unfortunately designed with 1 SPP per frame in mind). 6 to 10 seems like a nice balance between quality and performance. At 10 to 20, we start to see the curse of Monte Carlo diminishing returns, as I can't see much of a difference between 16 and 20. However, the difference between 16 and 20 is noticeable in the drop in frame rate, at least on my humble laptop with integrated graphics. I have to shrink the browser window down to postage-stamp size (ha) to run 20 SPP at 1.0 full resolution. But boy does it look good though! 😄
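For reference, the two sliders boil down to something like this dat.gui-style sketch (onWindowResize and pathTracingMaterial are assumed to come from the demo's init code):

```js
const settings = { pixelRatio: 0.75, samplesPerFrame: 6 };
const gui = new dat.GUI();

gui.add(settings, 'pixelRatio', 0.3, 1.0).step(0.05).onChange(value => {
	renderer.setPixelRatio(value);   // fewer pixels per frame -> higher fps
	onWindowResize();                // accumulation targets must match the new size
});

gui.add(settings, 'samplesPerFrame', 1, 20).step(1).onChange(value => {
	// SAMPLES_PER_FRAME is a compile-time constant, so this triggers a recompile
	pathTracingMaterial.defines.SAMPLES_PER_FRAME = value;
	pathTracingMaterial.needsUpdate = true;
});
```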

Interested to see what kind of performance you can get on your setup. -Erich

erichlof avatar Feb 17 '22 20:02 erichlof

I've always loved that demo!!

Awesome, I'll work on it now.

tom-adsfund avatar Feb 17 '22 20:02 tom-adsfund

It's hard to show the quality with a 10MB limit... but it's amazing.

https://user-images.githubusercontent.com/3634745/154574622-f1ea54db-0bf5-44d1-8e3c-77510fb3037a.mp4

tom-adsfund avatar Feb 17 '22 21:02 tom-adsfund

Notice the sample counts in these screenshots:

[Screenshots: multi-sample-upweight, multi-sample-upweight2]

tom-adsfund avatar Feb 17 '22 21:02 tom-adsfund

So,

I've found the limits of the Tesla hardware: realistically, you're looking at roughly 1080p with 6 samples per frame, as in the images above (which are slightly bigger than 1080p).

And I think the upweighting makes a strong, perceptible difference in quality. Without it, you get a mushiness to the image at first.

I thought that demo was the moving one... if you can set that one up I'll try it.

tom-adsfund avatar Feb 17 '22 21:02 tom-adsfund

Whoo hoo! That video looks like each frame was pre-rendered offline - except that it wasn't and you only had to wait a fraction of a second for each frame to finish! (lol). Thanks for posting the example pics and videos. It really helps communication here on a GitHub thread.

If you don't mind me asking, what are your system specs? CPU/GPU? And can you get 30-60 fps even with higher sample counts? 0.75-1.0 resolution?

erichlof avatar Feb 17 '22 21:02 erichlof

One NVIDIA Tesla V100 16GB, 90GB of regular memory.

This is last-generation hardware; the A100 is the latest. With that, I assume you'd be able to do 1440p in real time.

The framerate definitely goes down with higher settings. I really want to see the animated demo with the settings I showed in video.

Also, just to say: I had to struggle with the movement controls again...!!

tom-adsfund avatar Feb 17 '22 22:02 tom-adsfund

Hi, is it possible to remove the echo effect during camera movement? I'm trying to think of a different way to mix the two different positions while avoiding the echo, especially from the yellow light reflection (I know the frames are merged to avoid a black refresh, but...). Many thanks for your hard work!!

passariello avatar Feb 17 '22 22:02 passariello

@passariello My guess is that the echo will easily be removed by tweaking the parameters. There will be many improvements like that to make.

tom-adsfund avatar Feb 17 '22 22:02 tom-adsfund

@passariello Yes that echo (I think the traditional CG name for it is 'motion blur', or 'ghosting') comes from the previous animation frame being blended with the current animation frame. There are 2 ways around excessive motion blur. The first is to simply have enough FPS (like 50 to 60 preferably), where the previous image gets cleared so fast that the eye cannot see the ghosting before the renderer has drawn the next frame. In this case, a simple half/half blend will be perfect. For example, finalPixelColor = previousPixelColor * 0.5 + currentPixelColor * 0.5;

If the frame rate cannot be kept at those speeds, the 0.5 strategy above will still result in ghosting (as seen in the video), so the 0.5 blending weights need to be adjusted. The more previousPixelColor you have, the more ghosting but the less noisy the image will be. On the other hand, the more currentPixelColor you have, the faster the screen updates, but it might show more distracting noise. Since the weights need to add up to 1.0, something like finalPixelColor = previousPixelColor * 0.3 + currentPixelColor * 0.7; might do the trick. It is very much a subjective decision, and up to each user's taste which weights look good to them.

Personally speaking, I prefer just a little blur when moving the camera fast, as this mimics real physical cameras that can't keep their shutter speeds up with the quick movements. But too much motion blur can be as distracting as the raw noise. It really takes some experimentation on a personal basis.

On that note, I will soon try to add a slider for both of the weights that control the frame blending. Then the end users can simply dial in the animated look that they want.

erichlof avatar Feb 18 '22 03:02 erichlof

@tom-adsfund Thanks for the specs and reports - and yes, I'll be happy to add a similar multi-sample version of GameEngine_PathTracer.html. I believe this is what you were referring to when you said 'the scene that moves'. It contains the exact same shapes, but a handful of them move around.

Just a note about that: this will require further fine-tuning of the 2 weights discussed in my previous reply. This is because, unlike in all of the static scenes such as Geometry_Showcase (which you were just experimenting with), the progressive samples never quite settle down and converge. There has to be a steady stream of incoming currentPixelColor samples on every frame; otherwise, major ghosting occurs on the moving objects over time. I tend to go with 0.8/0.2 or maybe 0.7/0.3 and just live with the slight ghosting. When you leave the camera still, this helps settle down the room's background diffuse walls, floor, and ceiling, which are being continuously sampled to achieve global illumination.

A more sophisticated approach, used in custom real-time path traced shaders for games like Minecraft RTX, is one where any surface that has been sitting still for even a couple of frames is made to settle down, and no new samples are taken - therefore, no noise. The third-person player's character that is always moving, though, has to be handled with a different strategy, kind of like my edge-detecting Gaussian blur and noise filter on this repo. I would like to adopt the more sophisticated code/algos someday, but theirs is proprietary and closed-source. NVIDIA's approach even uses deep-learning-trained denoising/image reconstruction to achieve real-time sample noise suppression on dynamic objects. It is pure sorcery!

Will be back soon with the dynamic scene demo for you!

P.s. By the way, sorry for the late replies - Sometimes there might be a lag between when you post a question and when I respond. I promise to always respond, but this is after all my passion hobby, and life happens, and I must tend to various things. I will respond eventually though! 🙂

erichlof avatar Feb 18 '22 03:02 erichlof

@tom-adsfund Oh I forgot - sorry that you're having issues with my controls. If I may ask, what is it that you are running up against when using my control scheme? Is it something that doesn't work correctly, or is it something that you would like to add, or maybe something that you would like a slider/setting for more fine control? I'll be glad to take a look at my controls and keyboard/mouse handling and see if there's anything that would make them more useful or satisfying.

erichlof avatar Feb 18 '22 04:02 erichlof

Yeah, given what you've said, I think I can solve that ghosting problem in a more robust way. I'll do it as part of testing the moving demo (which is the one you said).

The controls I'm talking about are the mouse controls, which on desktop go crazy if you go past a certain distance from where you started. And so I spend almost a minute trying to get the view back to something of any interest. Having sliders would be much better generally for fine control, as you say.

tom-adsfund avatar Feb 18 '22 08:02 tom-adsfund

Less echo:

https://user-images.githubusercontent.com/3634745/154685595-838d07a2-9c8d-4dae-a0dc-77f62247b294.mp4

I do it using the distance between the two pixel colors, which in the shader language is just distance(pix1, pix2)!
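In sketch form, the idea might look something like this (a guess at the approach, not the actual code; previousPixelColor, currentPixelColor, and a base history weight are assumed to be available):

```js
const adaptiveBlendGLSL = /* glsl */ `
	// scale the history (previous frame) weight by how similar the two pixel
	// colors are, so stale history fades fast wherever the image is changing
	float colorDistance = distance(previousPixelColor, currentPixelColor);
	// identical pixels keep full history; very different pixels keep almost none
	float w = basePreviousWeight * (1.0 - clamp(colorDistance, 0.0, 1.0));
	vec3 finalPixelColor = mix(currentPixelColor, previousPixelColor, w);
`;
```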

tom-adsfund avatar Feb 18 '22 12:02 tom-adsfund

@erichlof I'm probably going to wait until the port to WebGPU and Node Materials until I put more into all of this. I'd be interested in the motion demo, but my main interest will be when we can use this renderer with general Three.js scenes.

tom-adsfund avatar Feb 18 '22 13:02 tom-adsfund

@tom-adsfund Ok I'll look into providing some camera fine controls. In the meantime, here is the Dynamic moving test scene you requested: MultiSPF Dynamic Scene Demo

The noise is a lot more pesky on this demo, simply because the diffuse surfaces can never settle down completely (ghosting issue talked about before).

I added another slider so you can control the previousFrameBlendWeight amount directly. Adjusting this inversely scales the current pixel color: currentPixelColor *= (1.0 - previousFrameBlendWeight). This ensures that the weights add up to 1.0, as required by Monte Carlo-style integration.
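In shader terms, the blend is just a mix() with the slider value as the third argument. A sketch of how it might appear in the screen-output pass (the texture and uniform names here are illustrative):

```js
const screenOutputBlendGLSL = /* glsl */ `
	uniform sampler2D tPreviousTexture;       // last frame's blended result
	uniform sampler2D tPathTracedTexture;     // this frame's fresh samples
	uniform float uPreviousFrameBlendWeight;  // the new GUI slider value
	in vec2 vUv;
	out vec4 fragColor;

	void main()
	{
		vec3 previousColor = texture(tPreviousTexture, vUv).rgb;
		vec3 currentColor  = texture(tPathTracedTexture, vUv).rgb;
		// mix(a, b, w) = a * (1.0 - w) + b * w, so the two weights sum to 1.0
		fragColor = vec4(mix(currentColor, previousColor, uPreviousFrameBlendWeight), 1.0);
	}
`;
```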

Interested to see what kind of quality you can get on your better hardware setup. Enjoy! -Erich

erichlof avatar Feb 18 '22 18:02 erichlof

@tom-adsfund Regarding controls, I have 2 global variables in place that scale the speed of movement and rotation of the camera. These are camFlightSpeed and cameraRotationSpeed. Both should be defined/adjusted on a per-scene basis because one scene might be a tiny room, while another might be an entire mountain range.

I just realized that camFlightSpeed and cameraRotationSpeed are inconsistent: camFlightSpeed is defined with the keyword 'let' in each and every demo's js init file, while cameraRotationSpeed is also defined with 'let', but globally, just once, in the large InitCommon.js file. I wasn't aware of this - apologies for the confusion. I will address this right away; it's just that I need to update that 1 line across the dozens of demos' respective js init files (which is easy, but annoying, lol).

I think defining both of these variables once in the common init file that all demos/scenes use would be the best plan and the least amount of code. As proof of concept, I will retroactively place 2 new sliders in your 2 test demos, so you can see how you like it and whether that fixes the camera wackiness on your system. Just a side hint: if you zoom in quite a bit with the mouse wheel, to an FOV of 30 and below, the camera rotation does become very, very sensitive. I've always just dealt with it, but I realize some users like yourself might be trying to capture a certain small part of the scene and want the camera to stay smooth, even at high zoom levels. Hopefully the new global camera-control variables, combined with sliders for each, will solve the problem.

Will be back with those changes soon!

erichlof avatar Feb 18 '22 22:02 erichlof

So I won't make videos trying to show the real quality: it's not really worth the time trying to fit something into 10MB.

But here's a clip of my version that highlights that it's blending all the time, including when the camera moves, and avoids all noise (pretty hard to see with the video):

https://user-images.githubusercontent.com/3634745/154769909-01c684ea-88b4-4e4c-bdf1-d01c4d8354ef.mp4

My summary after playing with it for about an hour is:

Most importantly: with the current WebGL setup and that card, you can get very high quality, but only at an impractically small render size.

If you can live with noise, you can run at a large size and see lighting effects you wouldn't get elsewhere - there is a level of quality in that, but it's not really end-user friendly.

There's plenty of room to tweak the settings on my distance-based setup to presumably get exactly the desired level of clarity and motion blur, but I won't do that until the general Three.js integration.

I'd be very interested to see the performance gains from WebGPU which are supposed to be great. Maybe that will make the Tesla GPU practical for good sizes.

tom-adsfund avatar Feb 18 '22 22:02 tom-adsfund

And to give an impression of what that video should look like:

[Screenshot from 2022-02-18 22-47-51]

tom-adsfund avatar Feb 18 '22 22:02 tom-adsfund

@tom-adsfund Wow it looks very nice on your system - thanks for posting!

In an effort to give users more fine-grained camera controls, I have made cameraFlightSpeed and cameraRotationSpeed more consistent with each other: they are now both defined once, globally, in InitCommon.js. If end users want different values than the defaults (as defined in InitCommon.js), they can simply go into the accompanying js file for each demo/scene (or their own custom scene) and set these variables to their desired values.
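So the override pattern is simply the following (the default values here are illustrative, not the actual numbers in InitCommon.js):

```js
// in InitCommon.js - defined once, shared by every demo:
let cameraFlightSpeed = 300;
let cameraRotationSpeed = 1;

// in a particular demo's accompanying .js init file - override as needed,
// e.g. for a small room scene where the defaults feel too fast:
cameraFlightSpeed = 60;
cameraRotationSpeed = 0.5;
```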

For your test scenes, I went back and added these variables as sliders in the GUI, on both the MultiSamples_Per_Frame test demo as well as the MultiSPF_Dynamic_Scene test demo. I tried to give the sliders a wide but useful range. Hopefully this will allow you very fine-grained control over the camera/mouse manipulation. Let me know if this helps in that department.

Thanks!

erichlof avatar Feb 19 '22 06:02 erichlof

@erichlof I've made a 2.4GB recording of some trials I was doing, do you know a good way I can share that with you?

tom-adsfund avatar Feb 19 '22 12:02 tom-adsfund

@erichlof Trailer (lol):

https://user-images.githubusercontent.com/3634745/154801648-9a686703-7967-4395-9ab8-63e764a23e9e.mp4

tom-adsfund avatar Feb 19 '22 12:02 tom-adsfund

> @passariello Yes that echo (I think the traditional CG name for it is 'motion blur', or 'ghosting') comes from the previous animation frame being blended with the current animation frame. [... full comment quoted above ...]

To reduce the echo (ghosting and motion blur are usually z-depth-based options; I think 'echo' is the more appropriate term when it comes from blending baked frames), we probably need a "time exposure" or "sampling exposure timer" for animation. Blur is not a good thing to have in the final render; usually a velocity channel is used for it in post-production. Baking is the key.

:)

passariello avatar Feb 19 '22 19:02 passariello

Also, some camera options like f-stop and exposure are probably necessary. Usually blur and depth are done in post-processing. A z-channel would also be very welcome in the future, for professional use when exporting from high-end apps to the web. Channels:

  1. velocity Map
  2. Zmap (or depth)
  3. Normal Map
  4. Fake occlusion and cavity
  5. ID map

My suggestion is to focus first on architecture, design, and prototyping, to have a product (like a React component) for use in web productions. An embed system, like an iframe, would be very useful and would help bring life, money, and interest to your project. Please let me know if you'd like to discuss - I have some ideas and I really want to help you. [email protected]

passariello avatar Feb 19 '22 19:02 passariello