multinerf
How do you edit diffuse color and roughness in Ref-NeRF?
To whom it may concern,
Hello! I'm very interested in the Scene Editing section of Ref-NeRF. It says one can manipulate the κ values used in the IDE (integrated directional encoding) to edit objects' surfaces. I checked the render_image() function and found that render_eval_pfn, state.params, and the ray placeholder are fed directly into functools.partial. I don't see how to change any of the trained weights here in order to edit the diffuse color or roughness. Would you please explain this in more detail? Thank you!
Good question! For making the figures in the paper, we defined a modified roughness function which maps any 3D position to the desired roughness value at that position. Then we replaced lines 522-523 in models.py, where roughness is computed using the MLP, with values computed by the modified function evaluated at the sample locations `means`. This just overrides the MLP and injects whatever values you'd like. I hope this is clear!
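For concreteness, here is a minimal sketch of what such an override could look like. The names `means` and `roughness` come from internal/models.py, but the surrounding lines are only paraphrased here; treat this as illustrative rather than the exact code used for the paper figures.

```python
# Sketch only: a hand-written roughness field evaluated at the Gaussian
# sample centers `means`, used in place of the MLP's roughness prediction
# inside internal/models.py.
import jax.numpy as jnp

def edited_roughness_fn(positions):
  """Maps 3D sample positions [..., 3] to the desired roughness [..., 1]."""
  # Example edit: make everything above the z = 0 plane very rough and
  # leave the rest nearly mirror-like.
  return jnp.where(positions[..., 2:3] > 0.0, 100.0, 1e-4)

# Inside the MLP, instead of something like
#   raw_roughness = dense_layer(1)(x)
#   roughness = self.roughness_activation(raw_roughness + self.roughness_bias)
# you would use
#   roughness = edited_roughness_fn(means)
```

The same trick works for the diffuse color: define a function of position (or even of the bottleneck features) and substitute it where the MLP output would normally be used.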
Hi @dorverbin, sorry to bother you. I want to confirm a simple question: the directional MLP width in your article is 256, right? But I found that the default directional MLP width used in the code is 128, and it is not overridden in blender_refnerf.gin. After I changed the width to 256, the number of parameters to be optimized is 1208590. Is this consistent with the implementation in your paper?
Apologies for the inconvenience, but I have a few questions based on your previous comment. Would it be possible to train the Ref-NeRF model using the default configuration and then adjust roughness_bias (I guess) at rendering time to simulate a rough object? If so, what value should I use to get a completely rough object?
Yes, simply changing roughness_bias should also work. The higher it is, the rougher the result, so you can just set it to a very large number that doesn't NaN out. This effectively multiplies all the spherical harmonic values by 0:
https://github.com/justin871030/refnerf-pl/blob/1528264347fde5f828bde03f1a4526c7b378247f/internal/ref_utils.py#L155
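The reason is that inside the IDE each spherical-harmonic component of degree l is attenuated by roughly exp(-l(l+1)/2 · roughness), so a huge roughness drives every view-dependent component to zero and only the diffuse color survives. A small sketch of that attenuation (names and the exact set of levels are simplified; see the linked ref_utils.py for the real implementation):

```python
# Sketch of the attenuation applied inside the integrated directional
# encoding: each SH component of degree l is scaled by
# exp(-0.5 * l * (l + 1) * roughness).
import jax.numpy as jnp

def ide_attenuation(roughness, levels=(1, 2, 4, 8, 16)):
  """Per-level attenuation; the IDE uses power-of-two degrees up to 2**(deg_view - 1)."""
  l = jnp.array(levels, dtype=jnp.float32)
  return jnp.exp(-0.5 * l * (l + 1) * roughness)

print(ide_attenuation(1e-4))  # near 1: reflections pass through almost untouched.
print(ide_attenuation(1e3))   # effectively 0: the encoding becomes constant over directions.
```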
Oops, I missed this question. In case it helps, it's a mistake in the paper's text; the config file is consistent with the paper's results. The layer width we used was 128, not 256.
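If you want to rule out any ambiguity in your own runs, you can pin the width explicitly. A sketch, assuming the binding name `NerfMLP.net_width_viewdirs` from multinerf's internal/models.py (double-check it against your checkout):

```python
# Hypothetical: pin the directional MLP width via a gin binding instead of
# relying on the code default.
import gin
from internal import models  # noqa: F401  (importing registers the gin configurables)

gin.parse_config([
    'NerfMLP.net_width_viewdirs = 128',  # 128 matches the released results; 256 in the paper text is a typo.
])
```

Equivalently, the same line could be added to blender_refnerf.gin alongside the other NerfMLP bindings.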
Thank you so much for your quick response! However, I am still confused about the roughness. Here are my results:
Though training hasn't finished yet, does this look correct? The object does not seem diffuse with roughness_bias=1000; if anything, it looks much better than with the default roughness_bias=-1. PS: I used roughness_bias: float = -1. for training and roughness_bias: float = 1000 for evaluation to get the rough result.
Do you know if geometry is recovered correctly in this region, i.e., are the normals correct? This scene is challenging for our model, and using the default parameters the model doesn't converge to the correct geometry everywhere. When geometry is off, specularities can be "faked" using emitters below the surface. Those tend to have low-frequency view-dependent appearance, and are therefore not affected by the roughness parameter.
Now I understand it. Thank you so much for your kind reply.