Wenbin Lin
Your equation is right. The inverse depth is `1 / depth`; since the background can contain very large depth values, working with the inverse depth keeps them in a stable range....
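As a minimal sketch of that conversion (the function name and the `eps` clamp are illustrative, not taken from the released code):

```python
import numpy as np

def to_inverse_depth(depth, eps=1e-6):
    """Convert a metric depth map to inverse depth (1 / depth).

    Large background depths map to values near zero, which keeps the
    supervision well scaled; clamping by `eps` avoids division by zero.
    """
    return 1.0 / np.clip(depth, eps, None)
```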
We are sorry, but we lost the training log. We are retraining the RGB-D-based optical flow model, and when the training is done we will share the evaluation results...
We use PhySG's code to convert HDR environment maps to SGs (spherical Gaussians); see https://github.com/Kai-46/PhySG/blob/master/code/envmaps/fit_envmap_with_sg.py. There is also a separate script for rotating the lighting: https://github.com/Kai-46/PhySG/blob/master/code/envmaps/rotate_lightsg.py.
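For reference, PhySG represents the environment map as a mixture of spherical Gaussian lobes of the standard form `G(v; xi, lambda, mu) = mu * exp(lambda * (v . xi - 1))`. A minimal evaluation sketch of that representation (function and argument names are illustrative, not PhySG's API):

```python
import numpy as np

def eval_sg(dirs, lobe_axis, sharpness, amplitude):
    """Evaluate one spherical Gaussian G(v) = mu * exp(lambda * (v . xi - 1)).

    dirs:       (N, 3) unit directions v
    lobe_axis:  (3,)   unit lobe axis xi
    sharpness:  scalar lambda
    amplitude:  (3,)   RGB amplitude mu
    """
    cos = dirs @ lobe_axis                                        # (N,)
    return np.exp(sharpness * (cos - 1.0))[:, None] * amplitude   # (N, 3)

def eval_sg_mixture(dirs, lobes, sharpnesses, amplitudes):
    """Sum of K SG lobes approximating an HDR environment map."""
    out = np.zeros((dirs.shape[0], 3))
    for xi, lam, mu in zip(lobes, sharpnesses, amplitudes):
        out += eval_sg(dirs, xi, lam, mu)
    return out
```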
1. We found that directly optimizing the roughness makes it hard to get good results. The idea of using a basis is from [this paper](https://arxiv.org/pdf/2203.12909); a sketch of the idea is shown below. 2. Thanks for noticing it. It seems...
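A minimal sketch of the basis idea from point 1, assuming the network predicts softmax weights over a fixed set of roughness values; the basis values and layer sizes below are illustrative, not the ones used in the paper or the referenced work:

```python
import torch
import torch.nn as nn

class RoughnessBasis(nn.Module):
    """Predict roughness as a convex combination of fixed basis values.

    Instead of regressing roughness directly, the head outputs logits over
    K preset roughness levels, and the final roughness is their
    softmax-weighted average.
    """
    def __init__(self, feat_dim, basis=(0.02, 0.05, 0.13, 0.34, 0.9)):
        super().__init__()
        self.register_buffer("basis", torch.tensor(basis))  # (K,)
        self.head = nn.Linear(feat_dim, len(basis))

    def forward(self, feat):                                 # feat: (N, feat_dim)
        weights = torch.softmax(self.head(feat), dim=-1)     # (N, K)
        return (weights * self.basis).sum(dim=-1, keepdim=True)  # (N, 1)
```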
The ZJU-MoCap dataset downloaded from https://github.com/zju3dv/animatable_nerf/blob/master/INSTALL.md has already been preprocessed. For training, just change the cfg_file to the one corresponding to your data; the configuration files can be found at https://github.com/wenbin-lin/RelightableAvatar/tree/main/configs. For example:...