Could you tell me about auxiliary inputs in hash encodings?
Hi, I'm a student who just started learning computer vision. I'm very impressed by your research and grateful for the PyTorch bindings for your work. While using your hash encoding for NeRF, I noticed that there are no auxiliary inputs (as described in the paper) in your hash encoding. I know that auxiliary inputs vary from task to task, but due to my limited understanding of encodings and Python, I can't figure out how to implement them as explained in your paper (T. Müller et al., "Instant Neural Graphics Primitives with a Multiresolution Hash Encoding"). I wonder if you could tell me some details about the implementation of auxiliary inputs. Thank you so much!
Hi, you can compose multiple encodings (and thereby achieve auxiliary inputs) using the Composite encoding in the JSON config passed to tiny-cuda-nn.
Here's the encoding config we used for neural radiance caching:
"encoding": {
"otype": "Composite",
"nested": [
{
"n_dims_to_encode": 3, # Position
"otype": "HashGrid",
"per_level_scale": 2.0,
"log2_hashmap_size": 15,
"base_resolution": 16,
"n_levels": 16
},
{
"n_dims_to_encode": 5, # Interesting conditionals
"otype": "OneBlob",
"n_bins": 4
},
{
"n_dims_to_encode": 6, # Linear conditionals that should be identity encoded
"otype": "Identity"
}
]
},
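Since you're using the PyTorch bindings, here's a minimal sketch of feeding such a config to tinycudann. This is an illustrative example under my assumptions, not code from the paper: the tensor names, batch size, and the use of random inputs are placeholders, and the input dimensions (3 + 5 + 6 = 14) simply mirror the config above.

import torch
import tinycudann as tcnn

# Same Composite config as above, expressed as a Python dict.
encoding_config = {
    "otype": "Composite",
    "nested": [
        {"n_dims_to_encode": 3, "otype": "HashGrid", "per_level_scale": 2.0,
         "log2_hashmap_size": 15, "base_resolution": 16, "n_levels": 16},
        {"n_dims_to_encode": 5, "otype": "OneBlob", "n_bins": 4},
        {"n_dims_to_encode": 6, "otype": "Identity"},
    ],
}

# The Composite encoding splits the 14 input dims across the nested encodings
# in order: first 3 -> HashGrid, next 5 -> OneBlob, last 6 -> Identity.
encoding = tcnn.Encoding(n_input_dims=14, encoding_config=encoding_config)

batch = 4096
positions = torch.rand(batch, 3, device="cuda")      # hash-encoded position, in [0, 1]
aux_nonlinear = torch.rand(batch, 5, device="cuda")  # OneBlob-encoded auxiliaries (placeholder values)
aux_linear = torch.rand(batch, 6, device="cuda")     # identity-encoded auxiliaries (placeholder values)

# Concatenate in the same order as the nested encodings, then encode.
features = encoding(torch.cat([positions, aux_nonlinear, aux_linear], dim=1))
print(features.shape)  # (4096, encoding.n_output_dims)

So the auxiliary inputs are simply extra columns concatenated onto the position before the encoding; the Composite config decides how each slice of columns is encoded.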
Your reply helped me a lot. Thank you!