Data exceeds int32 range
```
RuntimeError: /io/build/temp.linux-x86_64-cpython-37/spconv/build/core_cc/src/cumm/conv/main/ConvMainUnitTest/ConvMainUnitTest_matmul_split_Simt_f32f32f32_0.cu(222)
int64_t(N) * int64_t(C) * tv::bit_size(algo_desp.dtype_a) / 8 < int_max assert faild. your data exceed int32 range. this will be fixed in cumm + nvrtc (spconv 2.2/2.3).
```
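For context, the failing assertion is essentially checking that the flattened feature buffer (number of active points × channels × dtype size in bytes) still fits in a 32-bit index. A minimal sketch of the equivalent check in Python (the sizes `N`, `C`, and the dtype below are made-up values for illustration, not from my data):

```python
import numpy as np

# Hypothetical sizes: N active input points, C input channels, float32 features.
N = 300_000_000
C = 16
dtype = np.float32

int32_max = np.iinfo(np.int32).max  # 2**31 - 1
buffer_bytes = N * C * np.dtype(dtype).itemsize

# The prebuilt kernels index this buffer with 32-bit integers, so the
# RuntimeError above fires when this product does not fit in int32.
print(buffer_bytes, "bytes:",
      "exceeds int32 range" if buffer_bytes >= int32_max else "fits in int32")
```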
Is there any way to resolve this issue?
Has this problem been resolved?
I hope the author can take some time to address this issue.
@traveller59 Is there any solution?
same issue...
same issue +1.
Tried the latest released version: no help. Built a wheel from the latest code: no help.
Same +1. Tried the latest released version: no help.
I solved this issue by building the wheels from source:

- Download the cumm source, for example tag/v0.7.11.
- Change the definition of `int_max` to `int64_t` in `cumm/gemm/nvrtc_code.py` and `cumm/conv/nvrtc_code.py`:

  ```cpp
  constexpr int64_t int_max = std::numeric_limits<int64_t>::max();
  ```

- Change the `TV_ASSERT_RT_ERR` checks to `int64_t` in `cumm/conv/main.py` (8 occurrences in total):

  ```cpp
  TV_ASSERT_RT_ERR(int64_t(N) * int64_t(C) * {ker.dtype_b.bitsize()} / 8 < std::numeric_limits<int64_t>::max(), "your data exceed int32 range. this will be fixed in cumm + nvrtc (spconv 2.2/2.3).")
  ```

- Build && install cumm:

  ```bash
  export CUMM_CUDA_VERSION="11.8"
  export CUMM_DISABLE_JIT="1"
  python setup.py bdist_wheel
  pip install dist/xxx.whl
  ```

- Download the spconv source, for example tag/v2.3.8.
- Build && install spconv:

  ```bash
  export SPCONV_DISABLE_JIT="1"
  python setup.py bdist_wheel
  pip install dist/xxx.whl
  ```
But I'm not sure if there are other issues involved.
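Not part of the steps above, but after installing the patched wheels, a small smoke test like the following can confirm that the build at least imports and runs (a minimal sketch with made-up tensor sizes; it assumes `spconv.pytorch` and a CUDA-capable GPU, and does not by itself exercise the large-data path that triggers the overflow):

```python
import torch
import spconv.pytorch as spconv

# Tiny made-up sparse tensor: 2 active points, 16 channels, 8x8x8 grid.
indices = torch.tensor([[0, 0, 0, 0], [0, 1, 1, 1]], dtype=torch.int32).cuda()  # (batch, z, y, x)
features = torch.randn(2, 16).cuda()
x = spconv.SparseConvTensor(features, indices, spatial_shape=[8, 8, 8], batch_size=1)

conv = spconv.SubMConv3d(16, 32, kernel_size=3, indice_key="subm0").cuda()
print(conv(x).features.shape)  # expected: torch.Size([2, 32])
```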
I fixed this using a simple skipping method. While basic, it works well during inference, so I believe it may be suitable for that purpose. To apply this fix, you should first thoroughly uninstall both cumm and spconv. Then, clone the following repositories: cumm-int32 and spconv-int32. Next, install cumm in editable mode first, followed by spconv. For example:
```bash
cd submodules/cumm-int32; pip install -e .
cd submodules/spconv-int32; pip install -e .
```
This fix is not official, but it has been verified to work well when the mesh is very large.
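One way to confirm the editable installs actually took effect and that no leftover site-packages copies are being picked up (a small sketch I'm adding here; the `submodules/...` paths are just the ones from the commands above):

```python
import cumm
import spconv

# After the editable installs, both modules should resolve to the cloned
# repositories rather than a previously installed copy.
print(cumm.__file__)    # expect a path under submodules/cumm-int32
print(spconv.__file__)  # expect a path under submodules/spconv-int32
```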