
Enable 5D grids

Open dmholtz opened this issue 1 year ago • 2 comments

Recently, I worked successfully with 5D hash grids in tinycudann by uncommenting the line at https://github.com/NVlabs/tiny-cuda-nn/blob/2ec562e853e6f482b5d09168705205f46358fb39/include/tiny-cuda-nn/encodings/grid.h#L1178

To streamline the setup process, I would like to get rid of my private fork and have 5D grids directly enabled in the official tinycudann codebase.

@Tom94 do you have any objections to this? If not, I will prepare a PR that uncomments the line and updates the documentation accordingly.

dmholtz avatar Mar 19 '24 07:03 dmholtz

Hi, the reason some of the dimensionalities are commented out is that they increase the compile time and memory cost by quite a lot -- for a feature that most people are not going to use.

But I hear your preference for upstreaming this. How about making the enabled hash encoding dimensions a preprocessor define? I am thinking TCNN_HASH_MIN_DIM (with default 2) and TCNN_HASH_MAX_DIM (with default 4). Those defines could then be overridden via CMake options -- see the top of CMakeLists.txt for how existing options are defined and then turned into preprocessor defines via target_compile_definitions.
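As a rough sketch of what that could look like (the option names, defaults, and target name here are assumptions for illustration, not the project's actual CMake code):

```cmake
# Hypothetical sketch: expose the enabled hash grid dimensions as
# cache variables, then forward them as preprocessor defines.
set(TCNN_HASH_MIN_DIM 2 CACHE STRING "Smallest supported hash grid input dimension")
set(TCNN_HASH_MAX_DIM 4 CACHE STRING "Largest supported hash grid input dimension")

# Forward the values to the compiler so grid.h can guard its
# per-dimension instantiations with #if checks.
target_compile_definitions(tiny-cuda-nn PUBLIC
	TCNN_HASH_MIN_DIM=${TCNN_HASH_MIN_DIM}
	TCNN_HASH_MAX_DIM=${TCNN_HASH_MAX_DIM}
)
```

A user wanting 5D grids could then configure with `-DTCNN_HASH_MAX_DIM=5` instead of maintaining a fork, while everyone else keeps the smaller default build.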

For the Python bindings, we could have an additional setup.py argument, similar to how --no-networks is currently handled, that similarly gets turned into TCNN_HASH_MIN_DIM and TCNN_HASH_MAX_DIM.
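A minimal sketch of that argument handling might look as follows (the flag names `--hash-min-dim`/`--hash-max-dim` and the helper functions are hypothetical, chosen only to mirror the proposal above):

```python
# Hypothetical sketch: strip custom flags from setup.py's argv and turn
# them into preprocessor defines for the extension build, similar in
# spirit to how --no-networks is handled.
import sys

def extract_dim_flag(argv, flag, default):
    """Remove '<flag>=N' from argv (in place) and return N, or default if absent."""
    for i, arg in enumerate(argv):
        if arg.startswith(flag + "="):
            del argv[i]
            return int(arg.split("=", 1)[1])
    return default

def hash_dim_defines(argv):
    """Build the -D flags to pass to the compiler for the extension."""
    min_dim = extract_dim_flag(argv, "--hash-min-dim", 2)
    max_dim = extract_dim_flag(argv, "--hash-max-dim", 4)
    return [
        f"-DTCNN_HASH_MIN_DIM={min_dim}",
        f"-DTCNN_HASH_MAX_DIM={max_dim}",
    ]
```

Running `python setup.py install --hash-max-dim=5` would then build the bindings with 5D grids enabled, with the flags removed from argv before setuptools sees them.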

Tom94 avatar Mar 19 '24 08:03 Tom94

Thank you very much for your reply. I agree that preprocessor defines would be the cleanest approach to balance feature scope and compile time for all users of tcnn.

In principle, I would be interested in making such a contribution. However, I cannot promise at the moment when I will find time to look into this in more detail. Until then, I will stick to the fork solution and revisit this issue once I start refactoring my own code.

dmholtz avatar Mar 25 '24 07:03 dmholtz