NextFace
Reproducing the paper results
Hi Abdallah,
Thanks for open-sourcing your amazing work. I am currently trying to replicate the results from "Practical Face Reconstruction via Differentiable Ray Tracing". The teaser image on GitHub (I get the same result when I run the code) is different from the teaser in the paper. In particular, there is a lot of shading baked into the albedo map, and possibly poor separation between the diffuse and specular/roughness maps. Are there any settings that can be changed to get closer to the results from the paper?
Hi, and sorry for my late response.
NextFace does not reproduce the lighting model used in the paper, which is based on a virtual light stage that can model point lights and capture high-frequency light variations. NextFace uses spherical harmonics (SH), which cannot approximate point lights; that is why you are not getting the same results. However, within NextFace you can increase the texture regularizers (symmetry and consistency) in optimConfig.ini to obtain a better separation, although this will trade off albedo details.
There is a special configuration in NextFace (optimConfigShadows.ini) that you can try, which can produce better results. The regularizers you need to increase are the following (please also refer to the README on this):
weightDiffuseSymmetryReg, weightDiffuseConsistencyReg, weightDiffuseSmoothnessReg
weightSpecularSymmetryReg, weightSpecularConsistencyReg, weightSpecularSmoothnessReg
weightRoughnessSymmetryReg, weightRoughnessConsistencyReg, weightRoughnessSmoothnessReg
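For illustration, here is what increasing them could look like in optimConfig.ini. The key names come from the list above; the numeric values are placeholders of mine that only indicate "larger than your current defaults", not tuned settings:

```ini
# Illustrative values only: raise these relative to your current
# optimConfig.ini defaults. Higher weights bake less shading into the
# maps but also smooth away albedo detail.
weightDiffuseSymmetryReg      = 500.0
weightDiffuseConsistencyReg   = 500.0
weightDiffuseSmoothnessReg    = 0.05
weightSpecularSymmetryReg     = 500.0
weightSpecularConsistencyReg  = 500.0
weightSpecularSmoothnessReg   = 0.05
weightRoughnessSymmetryReg    = 500.0
weightRoughnessConsistencyReg = 500.0
weightRoughnessSmoothnessReg  = 0.05
```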
Thanks for the reply. If I wanted to re-implement the virtual light stage, where would I start? Alternatively, if I wanted to provide a point-light location and optimize only its intensity, what would I need to change? Looking at the code, I'd probably create a small area light instead of the environment map at line 127 of renderer.py. Is there anything else I'd need to take care of?
Hi Dazinovic,
The light stage used in the paper is composed of 20 area lights sampled on an icosahedron, where each face represents one area light. You can build this light stage, for instance, with MeshLab. Next, you give some degrees of freedom to each face (light): in the paper, each face was parametrized by its surface (area), its position with respect to the origin (the reconstructed face sits at the origin), and its light intensity, so the optimization optimizes the surface/distance/intensity of each light. Think about it as a light stage where you can adjust each spotlight. Another way to think about it is as an environment map with more degrees of freedom: an environment map represents light at infinity where only the intensity can change, while the virtual light stage has more degrees of freedom, which means it can approximate various types of light sources (both far and close ones).
One thing to take care of with this light stage: in general, when you change the position/surface of a light, the intensity it delivers will also change. What you need to ensure is that these parameters stay as 'orthogonal' as possible. Please refer to the "Virtual Light Stage" section of the paper for more details.
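To make this parametrization concrete, here is a minimal PyTorch sketch; it is my own illustration, not the paper's code. The icosahedron is the standard golden-ratio construction, the log-parametrized distance/area/intensity variables and their initial values are assumptions, and the decoupling below is one possible answer to the 'orthogonality' concern (area scales each triangle about its own centroid; distance only moves the centroid radially):

```python
import torch

# Base icosahedron: 12 vertices from the golden ratio, 20 triangular
# faces, projected onto the unit sphere. Each face is one area light.
phi = (1.0 + 5.0 ** 0.5) / 2.0
verts = torch.tensor([
    [-1.0,  phi, 0.0], [ 1.0,  phi, 0.0], [-1.0, -phi, 0.0], [ 1.0, -phi, 0.0],
    [ 0.0, -1.0,  phi], [ 0.0,  1.0,  phi], [ 0.0, -1.0, -phi], [ 0.0,  1.0, -phi],
    [ phi, 0.0, -1.0], [ phi, 0.0,  1.0], [-phi, 0.0, -1.0], [-phi, 0.0,  1.0]])
verts = verts / verts.norm(dim=1, keepdim=True)
faces = torch.tensor([
    [0, 11, 5], [0, 5, 1], [0, 1, 7], [0, 7, 10], [0, 10, 11],
    [1, 5, 9], [5, 11, 4], [11, 10, 2], [10, 7, 6], [7, 1, 8],
    [3, 9, 4], [3, 4, 2], [3, 2, 6], [3, 6, 8], [3, 8, 9],
    [4, 9, 5], [2, 4, 11], [6, 2, 10], [8, 6, 7], [9, 8, 1]])
n_lights = faces.shape[0]  # 20 area lights

# Per-light free parameters (log-parametrized to stay positive); the
# initial values are guesses, not the paper's.
log_distance  = torch.zeros(n_lights, requires_grad=True)     # radial distance
log_area      = torch.zeros(n_lights, requires_grad=True)     # surface scale
log_intensity = torch.zeros(n_lights, 3, requires_grad=True)  # RGB intensity

def light_stage():
    """Returns per-light triangles (n_lights, 3, 3) and intensities (n_lights, 3)."""
    tri = verts[faces]                        # (20, 3, 3) base triangles
    centroid = tri.mean(dim=1, keepdim=True)  # (20, 1, 3)
    # Decouple area from distance: scale each triangle about its own
    # centroid, then translate only the centroid radially.
    scaled = centroid + (tri - centroid) * torch.exp(log_area)[:, None, None]
    moved = scaled + centroid * (torch.exp(log_distance)[:, None, None] - 1.0)
    return moved, torch.exp(log_intensity)
```

Each returned triangle, together with its intensity, would then be handed to the renderer as an area light, and the three parameter tensors simply join the face parameters in the same optimizer, e.g. torch.optim.Adam([log_distance, log_area, log_intensity, ...]).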
If you want to use a fixed area light where you only optimize the intensity, you need to remove the spherical-harmonics lighting model and use your area light instead. The rendering part needs to be adjusted as well (buildScenes in renderer.py): you should remove the environment map (obtained from the SH coefficients) and pass your area light instead.
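As a rough starting point, below is a self-contained sketch in that direction using redner's object-based Python API (pyredner), which renderer.py builds on. Everything here is illustrative: a sphere stands in for the face mesh, the quad placement and all numeric values are invented, and the exact calls should be checked against your pyredner version. The point is only the structure: a scene with an area light and no envmap, where the light's RGB intensity is the single optimized lighting parameter:

```python
import torch
import pyredner

pyredner.set_use_gpu(False)  # CPU keeps this sketch portable

# Stand-in for the face mesh (in NextFace, the morphable-model mesh).
vertices, indices, uvs, normals = pyredner.generate_sphere(32, 64)
face = pyredner.Object(
    vertices=vertices, indices=indices, uvs=uvs, normals=normals,
    material=pyredner.Material(diffuse_reflectance=torch.tensor([0.7, 0.5, 0.4])))

camera = pyredner.Camera(position=torch.tensor([0.0, 0.0, -4.0]),
                         look_at=torch.tensor([0.0, 0.0, 0.0]),
                         up=torch.tensor([0.0, 1.0, 0.0]),
                         fov=torch.tensor([45.0]),
                         resolution=(128, 128))

# Fixed light geometry (a small quad); only the intensity is optimized.
light_verts = torch.tensor([[-0.5, 1.5, -2.0], [0.5, 1.5, -2.0],
                            [0.5, 2.0, -2.0], [-0.5, 2.0, -2.0]])
light_faces = torch.tensor([[0, 1, 2], [0, 2, 3]], dtype=torch.int32)
intensity = torch.tensor([5.0, 5.0, 5.0], requires_grad=True)

def render():
    # An Object with light_intensity set acts as an area light in redner;
    # two-sided emission sidesteps triangle-winding issues in this sketch.
    # Note there is no envmap on the Scene: the SH environment map is gone.
    light = pyredner.Object(vertices=light_verts, indices=light_faces,
                            material=pyredner.Material(
                                diffuse_reflectance=torch.zeros(3)),
                            light_intensity=intensity,
                            light_two_sided=True)
    scene = pyredner.Scene(camera=camera, objects=[face, light])
    return pyredner.render_pathtracing(scene, num_samples=(16, 4), max_bounces=1)

# Fabricate a target with a known intensity, then try to recover it.
with torch.no_grad():
    start = intensity.clone()
    intensity.copy_(torch.tensor([12.0, 9.0, 6.0]))
    target = render().detach()
    intensity.copy_(start)

optimizer = torch.optim.Adam([intensity], lr=0.2)
for it in range(50):
    optimizer.zero_grad()
    loss = (render() - target).abs().mean()  # photometric L1 loss
    loss.backward()
    optimizer.step()
```

Inside NextFace itself, the analogous change would live in buildScenes: drop the envmap built from the SH coefficients and append such a light object to each scene instead.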
I hope that makes sense.
Let me know if you have more questions.