THREE.js-PathTracing-Renderer

You're a legend! Baking results feature

Open DVLP opened this issue 6 years ago • 16 comments

This is what I've been looking for. How do you bake global illumination into textures? I'm building a game engine and a modified editor based on three.js, and after setting up a scene in the editor I want to be able to bake AO and GI into textures. http://editor.bad.city (nick is enough to enter)

DVLP avatar Feb 02 '18 10:02 DVLP

Hi @DVLP, that's a really cool editor! It looks like it will be a great tool for your team! About the baking, that's an interesting idea that I hadn't thought about before: specifically, using my path tracer to compute global illumination and, instead of rendering in real time to the browser window, rendering offline to textures to be used later for light mapping. Do I have that right; is this what you were requesting?

I must admit I haven't attempted this yet with my project. I will have to review online how other path tracers do it, like Blender Cycles, or the new Octane for Unity's editor that recently came out on the Unity store. In the meantime, you might want to check tools such as those that are already mature and have been designed with baking in mind. These software packages will have all the tools and final-quality options necessary for light map baking. My project, I'm afraid, doesn't really have any tools like that at the moment - it was built with raw speed in mind, rendering in real time in the browser so the user can explore path traced worlds with keyboard, mouse, or mobile devices. But once I figure out how to render offline to a .png texture, for example, maybe you could go from there.

One thing that immediately confuses me is: how do you specify where one texture ends and another begins when you are baking lighting for a giant map like the one in your editor? I have no idea how that even works, but like I said I will see if I can find some help or resources online on my own. Also, I've mentioned in other issues (the 'Multiple .obj models' issue on this repo) that I don't have a functioning acceleration structure for ray tracing yet because I'm waiting on WebGL 2.0 support from the three.js library. I can quickly render simple shapes like spheres, boxes, and cones, but if I tried to set my project to path trace one of your editor's scenes containing thousands of triangles using brute force, it would probably take all day! So, as mentioned above, a mature baking tool designed from the ground up with this as its goal might serve you better in the meantime. For instance, you could load your entire completed three.js scene into Unity, then use the brand new Octane (from OTOY) light map baking tool, all for free! Unity video and Blender vs. Unity video

Thanks again! -Erich

erichlof avatar Feb 05 '18 05:02 erichlof

Hi Erich, thanks for your answer! You understood correctly: I'm talking about baking. I guess this is the brute force method you're talking about https://threejs.org/examples/webgl_simple_gi.html and indeed it takes a long time to colour the vertices. Actually, no offline rendering is needed, just rendering to a canvas as during regular rendering. Once an image is in a canvas, it's trivial to save it as an image file.

Your renderer is very fast and it's awesome for realistic scenes like architectural visualisations. For games, though, it's better to precompute and bake. The whole point of this editor is to prepare an entire level in a browser and launch a server from there, so baking in external tools is not an option.

DVLP avatar Feb 05 '18 23:02 DVLP

@DVLP Ahh, understood about not being able to use external tools, thanks for the clarification.

Thanks for the link to the simple GI demo - I had somehow missed that while browsing the three.js demos over the years! That's a very clever technique: it essentially renders the scene (thousands of times by the time it's completed) from a clone camera positioned at each vertex, looking out in the direction of that vertex's normal, then averages each render's pixels and uses the result as the final material color for that particular vertex on the mesh.
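
Just to write down my understanding in code form, here is a rough sketch of that technique (not the actual example code, and untested; it assumes a mesh with an indexed BufferGeometry and an existing WebGLRenderer):

```js
import * as THREE from 'three';

const SIZE = 32;                                          // tiny render target per vertex
const rt = new THREE.WebGLRenderTarget(SIZE, SIZE);
const pixels = new Uint8Array(SIZE * SIZE * 4);
const giCamera = new THREE.PerspectiveCamera(90, 1, 0.01, 100);

function bakeVertexGI(renderer, scene, mesh) {
  const pos = mesh.geometry.attributes.position;
  const nrm = mesh.geometry.attributes.normal;
  const colors = new Float32Array(pos.count * 3);

  for (let i = 0; i < pos.count; i++) {
    // position the clone camera at this vertex, looking out along its normal
    const p = mesh.localToWorld(new THREE.Vector3().fromBufferAttribute(pos, i));
    const n = new THREE.Vector3().fromBufferAttribute(nrm, i).transformDirection(mesh.matrixWorld);
    giCamera.position.copy(p).addScaledVector(n, 0.001);   // nudge off the surface
    giCamera.lookAt(p.clone().add(n));

    // render what this vertex "sees", then average all of the rendered pixels
    renderer.setRenderTarget(rt);
    renderer.render(scene, giCamera);
    renderer.setRenderTarget(null);
    renderer.readRenderTargetPixels(rt, 0, 0, SIZE, SIZE, pixels);

    let r = 0, g = 0, b = 0;
    for (let j = 0; j < pixels.length; j += 4) { r += pixels[j]; g += pixels[j + 1]; b += pixels[j + 2]; }
    const texelCount = pixels.length / 4;
    colors.set([r / texelCount / 255, g / texelCount / 255, b / texelCount / 255], i * 3);
  }

  mesh.geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
  mesh.material.vertexColors = true;                        // display the baked vertex colors
  mesh.material.needsUpdate = true;
}
```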

This is a good technique for small indoor scenes with a couple of objects, but quickly gets out of hand as the scene grows. In your case where there is a large outdoor environment, it makes more sense to do real raycasting like what I do on this repo, and average those results into textures; essentially lightmap baking.

Another reason the simple three.js GI demo with the torus knot object in the small room works well is that it just colors the vertices of the knot mesh; no textures are involved, only vertex colors. You don't need UVs for a lightmap applied to the hundreds of triangles making up the torus knot object. But when we move to the real ray-cast baking feature that your game engine would need, the problem of UV mapping comes up.

Let's say you have a building that you want to apply GI to. Just thinking this all the way through (for my benefit too): for each lightmap I would create a clone camera that looks at your building from some angle. Then we set the path tracer in action; what the little clone camera sees in its viewport is rendered to a texture, which can be saved as a .png, for example, using the canvas toDataURL() feature. Then we take that resulting lightmap texture and apply it back to the building. But how to apply it? We must 'project' that texture from where the clone camera was looking. And for that we need the entire building mesh to have one big UV layout (covering all of its triangles), so that if you applied the default checkerboard texture that comes with Blender as a single texture over the whole mesh, all the checkers would look acceptable, not squished or stretched too much. If you looked at our path traced .png texture by itself, it would look odd because it is a flattened perspective from the clone camera's point of view, but when you apply (unproject) it onto the mesh correctly using the mesh's UV map and then rotate the player camera in-game, it should look correct.
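
At least the 'save it as a .png' step seems straightforward; something roughly like this should do it (a sketch, assuming the path traced result is already on the renderer's canvas):

```js
// Note: the WebGLRenderer may need { preserveDrawingBuffer: true }, or this must
// be called immediately after renderer.render(), for the canvas to still hold the image.
function saveCanvasAsPNG(renderer, filename = 'lightmap.png') {
  const dataURL = renderer.domElement.toDataURL('image/png'); // canvas -> base64 PNG
  const link = document.createElement('a');
  link.href = dataURL;
  link.download = filename;
  link.click();                                               // triggers the browser download
}
```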

That's my initial guess as to how to go about this. I haven't yet seen online how the pro companies do lightmap baking. The angle at which to render from still confuses me - I'm not sure how you decide the clone lightmap camera position (is it positioned above the mesh looking down at it in Ortho mode? is it looking from the light source, like the sun, the way a shadow map is rendered? etc.). I'm not sure about the basic approach as far as lightmap cameras and projecting UVs are concerned. I'll keep researching.

If you have any ideas or resources, please send them my way. Thanks!

erichlof avatar Feb 06 '18 17:02 erichlof

👍 Yes, it would be great to have a bake-to-texture capability. I don't have much knowledge, but you might get some ideas from these resources:

  • https://github.com/ands/lightmapper
  • https://www.joshbeam.com/articles/dynamic_lightmaps_in_opengl/

Ben-Mack avatar Oct 04 '18 02:10 Ben-Mack

@erichlof This is one truly amazing repo!

I'm also very interested in this feature! Regarding your question about the angle at which to render: for a lightmap it's essential that every piece of surface has a unique place on the texture. For three.js, the UVs related to the lightmap are in a secondary UV set. I've just created a topic on the three.js Discourse which explains this here

In theory I expect that when baking lightmaps:

  • You know where any triangle is.
  • You know its UV locations on the texture map

I'm wondering, would it be possible to create a planar camera per surface normal and have it sample the rendered pixels on the surface? Of course, presetting the size of the texture is required.
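
For what it's worth, the three.js side of hooking up a finished lightmap is simple once that secondary UV set exists; roughly like this (lightmapUVs and bakedTexture are assumed to already exist):

```js
// 'uv2' is the secondary UV set that three.js samples lightMap / aoMap with.
// lightmapUVs must give every triangle its own non-overlapping patch of the texture.
mesh.geometry.setAttribute('uv2', new THREE.BufferAttribute(lightmapUVs, 2));
mesh.material.lightMap = bakedTexture;        // e.g. a THREE.CanvasTexture of the baked result
mesh.material.lightMapIntensity = 1.0;
mesh.material.needsUpdate = true;
```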

I'll be following this repo!

MaartenBreeedveld avatar Apr 15 '19 12:04 MaartenBreeedveld

@MaartenBreeedveld Hi, thank you for the compliment! I must admit I don't yet understand how lightmap baking works under the hood, as far as saving the path-traced lighting to textures is concerned. I am currently going back and studying the above link about dynamic lightmaps in OpenGL, because there isn't too much code to sift through in that project and I can more easily follow along with the code and work out what it's doing.

Yes, I think your idea of a camera for every normal makes sense, at least intuitively, but actually implementing it is something different and requires a deeper understanding (which I don't yet have, unfortunately). I will try to watch some instructional videos on lightmapping just to gain a more solid understanding.

Believe me when I say if I knew better the subject matter, I would dive in right away and implement this feature for you guys - I know it is something that many people might want from a path tracer in three.js. If I make any discoveries I'll be sure to share them here on this issue, as well as try to work toward implementing just a basic ambient occlusion version (without all the bells and whistles) that at least works correctly.

Thanks again, -Erich

erichlof avatar Apr 17 '19 04:04 erichlof

Hey @erichlof, I agree that it's wonderful work! I'd love a feature for baking to texture too... Here's a link that explains the process; it's pretty old but everything is detailed: https://www.flipcode.com/archives/Light_Mapping_Theory_and_Implementation.shtml

There are 3 main steps:

1. Calculating / retrieving light map texture co-ordinates.
2. Calculating the world position and normal for every pixel in every light map.
3. Calculating the final color for every pixel.

I understand that the last part is what would be done by your path tracer, but I have trouble figuring out exactly how. Would that mean moving a camera to each pixel location, rendering through path tracing, and averaging all rendered pixels?
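
To make my question concrete, this is the kind of thing I'm imagining for a single texel (a naive CPU-side sketch with made-up names, direct light only; a real baker would presumably do this on the GPU with multiple bounces):

```js
const raycaster = new THREE.Raycaster();

// worldPos / normal come from step 2; light is e.g. a PointLight; sceneMeshes
// is the list of occluders to test against.
function shadeTexel(worldPos, normal, light, sceneMeshes, samples = 64) {
  let sum = 0;
  for (let s = 0; s < samples; s++) {
    // jitter the light position slightly to approximate an area light (soft shadows)
    const target = light.position.clone().addScaledVector(randomUnitVector(), 0.25);
    const toLight = target.sub(worldPos);
    const dist = toLight.length();
    const dir = toLight.normalize();
    const nDotL = Math.max(normal.dot(dir), 0);
    if (nDotL === 0) continue;                               // light is behind the surface

    raycaster.set(worldPos.clone().addScaledVector(normal, 1e-3), dir);
    raycaster.far = dist;
    const blocked = raycaster.intersectObjects(sceneMeshes, true).length > 0;
    if (!blocked) sum += nDotL;                              // unoccluded: accumulate Lambert term
  }
  return sum / samples;                                      // averaged value for this texel
}

function randomUnitVector() {
  return new THREE.Vector3().randomDirection();              // available in newer three.js
}
```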

Also, FYI, there's a lightmapper implementation for PlayCanvas that could be worth a look: https://github.com/playcanvas/engine/blob/master/src/scene/lightmapper.js

FlorentMasson avatar Apr 04 '20 14:04 FlorentMasson

Hello @FlorentMasson Thank you for the links! I don't mind that the first link is old - it is very detailed, like you mentioned, and it will help me get started in understanding this process. Your question about the last part and how it is done is still a mystery to me too. Although I suspect that rather than moving the camera to each pixel, the camera must somehow be placed/centered on or projected onto each texture, and then each texel would be like a pixel of a normal rendering, as seen from that particular texture's point of view. I will have to read your first link in its entirety.

And thank you for the second PlayCanvas source code link - it appears that rather than path trace the image, they somehow do a shadowmap-type rendering from each light's point of view and use that in the final set of textures (or lightmaps). If I'm correct, this would be the reverse process: in the first link, the old-fashioned way is to originate rays from the camera, while in the second link the rendering originates from each light source and is accumulated into the final texture. I will investigate further, but these links will help!

Thanks again, -Erich

erichlof avatar Apr 05 '20 03:04 erichlof

I dug through the code of the PlayCanvas lightmapper and I found this blog post from the author: https://ndotl.wordpress.com/2018/08/29/baking-artifact-free-lightmaps/ He is actually using a clever trick in a shader where he swaps every vertex position for its UV position on screen and immediately gets a (rough) lightmapped texture. He then explains what he does with raytracing, which is out of my reach for now.
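
If I read the post correctly, the trick looks something like this (my own rough sketch as a three.js ShaderMaterial; the names and the placeholder output are mine, not the author's code):

```js
const uvSpaceMaterial = new THREE.ShaderMaterial({
  vertexShader: /* glsl */`
    attribute vec2 uv2;                 // the lightmap UV set on the geometry
    varying vec3 vWorldPos;
    varying vec3 vWorldNormal;
    void main() {
      vWorldPos = (modelMatrix * vec4(position, 1.0)).xyz;
      vWorldNormal = normalize(mat3(modelMatrix) * normal);   // assumes uniform scale
      // the trick: rasterize each triangle at its lightmap UV location
      // instead of its projected screen position
      gl_Position = vec4(uv2 * 2.0 - 1.0, 0.0, 1.0);
    }`,
  fragmentShader: /* glsl */`
    varying vec3 vWorldPos;
    varying vec3 vWorldNormal;
    void main() {
      // every fragment now corresponds to one lightmap texel, but we still know
      // its world position and normal, so lighting (or ray tracing) can be
      // computed here and written straight into the lightmap render target
      gl_FragColor = vec4(vWorldNormal * 0.5 + 0.5, 1.0);     // placeholder output
    }`
});
```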

FlorentMasson avatar Apr 05 '20 12:04 FlorentMasson

@FlorentMasson Ahh ok, thanks for that link as well. I will check it out. It's funny - now that I have been working on this path tracing project for 5+ years, I understand the ray tracing parts fairly well, but it's the vertex positioning and light mapping / shadow mapping that I'm really rusty on - lol. But hopefully I can learn from the author's blog post. Thanks again!

erichlof avatar Apr 05 '20 18:04 erichlof

There's a new lightmap generation example in three.js that you might be interested in: github.com/mrdoob/three.js/issues/14048#issuecomment-792212707

Ben-Mack avatar Mar 10 '21 03:03 Ben-Mack

@Ben-Mack Thank you for the heads-up! I will be following this issue and zalo's contributions and (hopefully) PR soon!

erichlof avatar Mar 10 '21 15:03 erichlof

@erichlof PR's Here if you're curious: https://github.com/mrdoob/three.js/pull/21435

If there were UV-mapped Mesh objects underlying your path tracing, then two simple modifications would connect these two techniques together:

  • Instead of using gl_FragCoord.xy, project the worldPosition back into the camera to get the synthetic camera frag coord: vec4 syntheticFragCoord = projectionMatrix * viewMatrix * worldPosition
  • Swap in your material here: https://github.com/mrdoob/three.js/blob/e16c5e145ffc4d955e2935133708f4b46ed25e79/examples/jsm/misc/ProgressiveLightMap.js#L41

That would apply your path tracer to all of the meshes in the scene, and average out the results in texture-space.
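
In fragment-shader terms, that first bullet would look roughly like this (just a sketch; worldPosition, viewMatrix, and projectionMatrix are assumed to be supplied to the shader):

```js
const fragmentSnippet = /* glsl */`
  // project the world position back through the scene camera, rather than
  // relying on gl_FragCoord.xy of the UV-space rasterization
  vec4 clip = projectionMatrix * viewMatrix * worldPosition;
  vec2 syntheticFragCoord = (clip.xy / clip.w) * 0.5 + 0.5;  // 0..1 screen coords
  // ...then sample / accumulate the path traced result at syntheticFragCoord...
`;
```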

As an aside, it's eerie how closely my implementation resembles the ndotl post at the beginning, but my demo just averages directional-light shadows over time in texture space (rather than doing any sort of ray/path tracing).

Spiritually, my version has more in common with: https://vimeo.com/227284722 https://vimeo.com/227146227 https://web.archive.org/web/20180317181435/http://thomasdiewald.com/blog/?p=2099

zalo avatar Mar 10 '21 20:03 zalo

Hello @zalo Wow that is an awesome PR and contribution to three.js! I very much enjoyed playing with your cool demo that showcases the new features. And thank you for the links - I actually just got lost inside Thomas Diewald's blog posts; inspiring work and so much to study!

Which brings me to my comment on this topic, and a proposition. I have had to travel a long road of 6+ years to get my path tracer working correctly and efficiently on any commodity hardware with a browser. Only now do I feel comfortable with the way my path tracer works, and I know most of the rendering code intimately. That being said, lightmapping, texture baking, and UV wrapping/unwrapping are outside my realm of knowledge and outside my comfort zone (there is so much to study in the world of computer graphics!).

Since it is just me working on the demos, the engine, and maintaining the ever-growing codebase, it will be hard for me to devote time to studying this new topic and will therefore take several months if not an entire year to be able to get a working lightmapper up and running. So here is my proposal:

Would it be possible for you to consider working on a similar PR for my repo as well? You have clearly done the legwork and have a good grasp on the subject of light mapping (whereas I would be starting from 0 and flailing about). Of course I would be available to answer any questions (no matter how detailed) about how my system works, every step of the way. I envision starting out with the simplest setup: a Cornell box with a cube or sphere on the floor and a single light. It would be cool to render not only the soft shadows to textures, but everything else as well - diffuse color bleeding, reflected/refracted caustics, etc. That part of the equation I know exactly how to do. What I would need help with is getting the render textures, object materials, and UV parameters all set up so that the path tracer can do its magic.

Would you consider working with me on this? I think everyone following this issue will agree: it will be much faster and much more beneficial to everyone if someone like yourself gets the lightmapper off the ground. I realize this would very much be a side project for you, and there's no rush or time pressure on my end. I think it would be a great learning opportunity for me, and who knows - you could conceivably take my path tracing code and add it to your light mapper for three.js! I know mrdoob and the three.js core dev team would very much welcome that addition!

Please let me know your thoughts on this subject. If it is doable, I look forward to discussing further and possibly collaborating on this feature! -Erich

erichlof avatar Mar 11 '21 07:03 erichlof

By my estimation, it doesn't make much sense to integrate the two unless the useful parts we want from the path tracer are entirely mesh-based. Do you feel that your BVH system is powerful and fast enough to incorporate an arbitrary number of meshes (as in a typical three.js scene)? At the moment, it looks like even the BVH scenes are hand-constructed from verbose manual ray tracing definitions... or is it just that the whole ray tracing engine is duplicated for each scene?

One of the goals with my lightmapper is to get existing scenes up and running with as little modification as possible. Ideally a function to initialize, a function to add mesh objects and lights to the lighting simulation, and a function to update. Do you believe it's possible to abstract all of that in your system? I'll consider it if you can demonstrate that it's simple to add and manage a number of normal three.js meshes in the path-tracing world...
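
Something shaped roughly like this is what I have in mind (the names are invented for illustration, not an existing API):

```js
class PathTracedLightmapper {
  constructor(renderer, resolution = 1024) { /* allocate the lightmap render targets */ }
  addObjects(meshes, lights)               { /* build/extend the BVH and UV atlases   */ }
  update(camera, blendWindow = 100)        { /* accumulate a few more samples per frame */ }
}

// usage, roughly:
// const lightmapper = new PathTracedLightmapper(renderer);
// lightmapper.addObjects(scene.children.filter(o => o.isMesh), [sunLight]);
// function animate() { lightmapper.update(camera); renderer.render(scene, camera); }
```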

FWIW, at the moment I'm working on an extension to mine that adds indirect illumination, based on this paper. It's still in a nascent state, but this is what I've got so far for rendering the first bounce of indirect illumination.

zalo avatar Mar 11 '21 09:03 zalo

@zalo

Thank you for the reply. Your observations are correct that my BVH system, in its current state, has to be custom tuned for each demo. On issue #37 it was asked if any three.js scene could be loaded in and path traced. As I replied to that issue, unfortunately as of now, the answer is no. Allow me to explain further:

I have figured out how to get a single gltf/glb model of up to a million triangles loaded and displayed inside a browser path traced scene (which was a monumental effort taking years, having to deal with WebGL / OpenGL ES limitations and commodity hardware, with no RTX to count on). So I am proud of the achievement of being able to path trace, in real time (30-60 fps), an arbitrary single model of many thousands of polys right in your browser, with no specialized ray tracing cards required. But being able to load in an entire arbitrary three.js scene, consisting of any number of models, each with its own BVH, UV mapping, and associated textures (and paths to those files on disk/server that might or might not be there), would require a monumental effort.

You are correct that my path tracing core gets loaded for each demo/scene. Per mrdoob's suggestion, I have implemented an 'include' system in the path tracing fragment shaders that pulls in the necessary geometry ray casters for that particular scene (e.g., IntersectSphere, IntersectBox, IntersectTriangle, etc.). Then I just handle each material (Diffuse, Metal, Dielectric, clearCoat, etc.) inside the tight 'bounces' loop in each fragment shader. Admittedly, this material code is duplicated unnecessarily from demo to demo, but I just haven't found the time to turn those material handlers into generalized 'includes' as well.
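
To illustrate the general idea, here is a simplified sketch (not the actual chunk names or code from this repo):

```js
// register a reusable piece of ray-intersection GLSL as a named shader chunk;
// three.js substitutes it wherever '#include <...>' appears in a material's shader
THREE.ShaderChunk['pathtracing_sphere_intersect'] = /* glsl */`
  float SphereIntersect(float rad, vec3 pos, vec3 rayOrigin, vec3 rayDirection) {
    vec3 L = rayOrigin - pos;
    float b = dot(rayDirection, L);
    float c = dot(L, L) - rad * rad;
    float h = b * b - c;
    if (h < 0.0) return -1.0;            // ray misses the sphere
    return -b - sqrt(h);                  // near root (negative if behind the ray origin)
  }`;

// each demo's fragment shader then pulls in only what its scene needs:
const fragmentShader = /* glsl */`
  #include <pathtracing_sphere_intersect>
  // ...scene-specific materials and the tight 'bounces' loop go here...
  void main() { gl_FragColor = vec4(0.0); }
`;
```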

To all reading this issue, I think it's safe to say my webgl BVH system (although I'm proud of the progress made in browser path tracing technology) is not ready at this time to support a generalized light mapping solution for three.js.

That being said, for anyone out there who needs this capability in the near future, I would recommend following @zalo 's excellent work and his solution. Rather than try to retrofit an arbitrary scene loading system on top of my path tracing engine in order to bake GI, it would be better in my honest opinion to use, and work in conjunction with, the existing strengths of the current three.js scene loading management system, and then bake the results using @zalo 's already-functioning technology.

Normally I don't like just referring people to another source (I usually prefer rolling my own code), but in this instance, judging by the large requirements and prerequisites necessary to implement a light mapping/baking solution that is robust and generalized, and that works with the grain of three.js' already-functional mesh management system, I must suggest that you use the system implemented by zalo.

That being said, @zalo, instead of using the research papers you linked to, if you would like to discuss possibly using any or all of my efficient, already-battle-tested path tracing Monte Carlo algos, my ever-growing ray intersection library, or my custom importance-sampling material BRDF routines to bake all GI components into the light maps, I would be more than happy to help in any way I can! In other words, if you can get the geometry and mesh textures/UV mapping up and running, you could conceivably run parts of my path tracing code for an arbitrary, user-defined number of bounces and end up with a system similar to Unreal and Unity (maybe even a little better, because mine can do reflected/refracted caustics in addition to the normal diffuse GI baking). ;-D

Thank you all for your time and insight.

Cheers, -Erich

erichlof avatar Mar 11 '21 17:03 erichlof