raytracing.github.io
TheNextWeek final scene perlin rendering along wrong axis
The Perlin noise sphere at the end of The Next Week isn't rendering any noise. It could be due to my render being low-spp, but it appears that the noise texture is missing and the sphere is being rendered as a flat Lambertian.
Figured it out. No easy answers.
The final picture in TheNextWeek has a very nice marble-looking texture, very similar to the marbling in Chapter 5: Perlin Noise.
When I rendered the final_scene from our src, the Perlin noise wasn't noticeable. I assumed that the noise texture was being dropped somehow.
I was wrong. The Perlin noise is being rendered, and correctly at that.
The problem is:
```cpp
virtual vec3 value(double u, double v, const vec3& p) const {
    return vec3(1,1,1)*0.5*(1 + sin(scale*p.z() + 10*noise.turb(p)));
}
```
Enhance!
`scale*p.z()`
The noise texture is being split into bands along the z-axis: the `sin(scale*p.z())` term produces planes of constant z.
The camera in the final_scene in TheNextWeek is pointed along the z-axis, whereas the camera for the perlin_spheres is pointed along the x-axis.
I couldn't see the Perlin noise in final_scene because I was looking at the banding head-on (imagine viewing the storms of Jupiter from one of its poles).
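For concreteness, the two camera setups differ roughly like this (values quoted from memory of the repo, so treat them as illustrative rather than exact):

```cpp
// two_perlin_spheres: the camera looks roughly down the -x axis toward the
// origin, so the sin(scale*p.z()) bands sweep across the visible face of the
// sphere and the marbling is obvious.
vec3 lookfrom(13, 2, 3);
vec3 lookat(0, 0, 0);

// final_scene: the camera looks roughly down the +z axis, so the same
// z-banding is viewed nearly head-on and washes out.
vec3 lookfrom_final(478, 278, -600);
vec3 lookat_final(278, 278, 0);
```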
In order to get nice marbling from the Perlin sphere in final_scene, one of two things must be done:
- Change `scale*p.z()` to `scale*p.x()` so that we're looking at the banding from a different angle (see the sketch after this list). Making this explicit in the text would be necessary.
- Rotate the camera (and scene) of the final_scene by 90 degrees so that we're looking along the +x axis.
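For the first option, the change is a single call in noise_texture::value(). A sketch against the class quoted above (scale and noise are the texture's existing members, and the book's texture and perlin headers are assumed):

```cpp
class noise_texture : public texture {
    public:
        // Band along x instead of z, so the final_scene camera (which looks
        // roughly along +z) sees the marbling side-on instead of head-on.
        virtual vec3 value(double u, double v, const vec3& p) const {
            return vec3(1,1,1) * 0.5 * (1 + sin(scale*p.x() + 10*noise.turb(p)));
        }

        perlin noise;
        double scale;
};
```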
@hollasch This is required before I can start drawing new pictures for book2
I think there is a second difference between the "final scene" image and the one generated by the code (as described in the book and in the repo) for the sphere with Earth image texture. It appears the published image uses a noise generator to "swirl" the image before mapping it onto the sphere (or is using another very different image map).
I also noticed the problem with the noise-textured sphere. I think another way to consider the problem is that the noise texture is dependent on the size and position of the object (since it uses the intersection point location). So, if you scale up the "world" keeping everything else the same (i.e. scaling the object sizes and positions and moving the camera), you get very different textures. One way to resolve this is to use the intersection normal instead of the point. In effect, this brings the texture generation back to the unit sphere (though it does not generate a pattern similar to the one shown in the "final image", it does generate one closer to the original Perlin-mapped spheres).
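A minimal sketch of that idea, assuming the texture is handed the unit normal in addition to the hit point; the book's value() only receives u, v, and p, so the extra parameter here is hypothetical:

```cpp
// Hypothetical signature: n is the unit surface normal at the hit point
// (for a sphere, n = (p - center) / radius). Evaluating the noise on n
// instead of p makes the pattern independent of the object's size and
// world-space position, at the cost of a different look from the book's
// "final image".
virtual vec3 value(double u, double v, const vec3& p, const vec3& n) const {
    return vec3(1,1,1) * 0.5 * (1 + sin(scale*n.z() + 10*noise.turb(n)));
}
```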
The camera for the final scene needs to be explicitly outlined in the text
Bumping this. Here is the NoiseTexture::value() function that works well for me:
```cpp
vec3(0.5) * (1 + std::sin((_scale * p.x) + (5 * _noise->turb(_scale * p))))
```
I think the code should be changed to this, for two reasons: one, it is more pleasing to see in the final scene, and two, it would match the cover of the book.
If you want to see what it looks like, check the second render here: https://gitlab.com/define-private-public/PeterShirley-RayTracing-Nim/-/tree/master
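Translated into the book's noise_texture style, that suggestion would read roughly as follows (a sketch; the x-axis banding, the 5x turbulence weight, and scaling the turbulence input are the changes relative to the code quoted at the top of the thread):

```cpp
virtual vec3 value(double u, double v, const vec3& p) const {
    return vec3(1,1,1) * 0.5 * (1 + sin(scale*p.x() + 5*noise.turb(scale*p)));
}
```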
Fixed in #1176