Building cumm for spconv 2x Issue
Hello and thanks for the great work.
I am trying to build spconv from source following the official instructions, but I am running into some problems.
When I run

git clone https://github.com/FindDefinition/cumm
cd ./cumm
pip install -e .

I hit the same issue as #8.
As you suggested there, I checked out the v0.2.8 tag and was able to build cumm successfully; import cumm then runs without errors in Python.
However, when I then install spconv I get the following error:
user@workstation:~/workspace/project/external_libs/spconv2/spconv$ python
Python 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10)
[GCC 10.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import spconv
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/piroli/workspace/projects/SprayNet/external_libs/spconv2/spconv/spconv/__init__.py", line 15, in <module>
from . import build as _build
File "/home/piroli/workspace/projects/SprayNet/external_libs/spconv2/spconv/spconv/build.py", line 21, in <module>
from .constants import PACKAGE_NAME, PACKAGE_ROOT, DISABLE_JIT
File "/home/piroli/workspace/projects/SprayNet/external_libs/spconv2/spconv/spconv/constants.py", line 19, in <module>
from cumm.gemm.constants import NVRTCMode
ImportError: cannot import name 'NVRTCMode' from 'cumm.gemm.constants' (/home/piroli/workspace/projects/SprayNet/external_libs/spconv2/cumm/cumm/gemm/constants.py)
I noticed that constants.py (https://github.com/FindDefinition/cumm/blob/main/cumm/gemm/constants.py) at the v0.2.8 tag does not contain the NVRTCMode class, while the main branch does.
Is there a way to compile the latest version?
Encountered the same problem
I am facing the same problem. CY
Mark!
Encountered the same problem. constants.py on the main branch seems to be as below:
# Copyright 2021 Yan Yan
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import enum
WARP_SIZE = 32
OPTIM_ACCESS = 16
OPTIM_ACCESS_BITS = OPTIM_ACCESS * 8
SMEM_BANK_SIZE_BITS = 128 * 8
class NVRTCConstants:
    SIZEOF_KEY = "kSizeOfParams"
    SMEM_KEY = "kSmemSize"
    NUM_THREADS_KEY = "kNumThreads"
    CONSTANT_PARAM_KEY = "params_raw"

class NVRTCMode(enum.Enum):
    """nvrtc mode for *NON-STATIC* gemm kernels.
    kernel params of gemm contain different init code that needs jit
    if we don't provide a static init function.
    another solution is to implement all param calculation in c++/python.
    that method requires additional time to maintain code, so I
    have no interest in it.
    """
    Disabled = 0
    # calc params directly in kernel. VERY SLOW.
    Direct = 1
    # launch a kernel, calculate params, then launch gemm kernel
    # in that kernel.
    # greatly slower than KernelAndCPU/ConstantMemory, I don't know why.
    DynamicParallism = 2
    # run init kernel first to generate params, copy params to cpu, then use
    # that param to launch kernel in host.
    KernelAndCPU = 3
    # similar to KernelAndCPU, don't need dev to cpu copy, copy result to
    # Constant Memory instead.
    # fastest way, but DOESN'T SUPPORT MULTIPLE STREAMS. the
    # constant memory is allocated once when the nvrtc
    # module is created. so create a new nvrtc module
    # for every stream.
    ConstantMemory = 4
    # static mode, only supports implemented input iterators. NOT IMPLEMENTED
    Static = 5
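For reference, the integer values and enum members above round-trip through plain enum. This is a minimal replication of the class from the listing, not an import from cumm:

```python
import enum

class NVRTCMode(enum.Enum):
    # Replicated from the constants.py listing above.
    Disabled = 0
    Direct = 1
    DynamicParallism = 2
    KernelAndCPU = 3
    ConstantMemory = 4
    Static = 5

# The raw value 4 maps back to ConstantMemory, which is why the integer
# can stand in for the enum member.
print(NVRTCMode(4))  # NVRTCMode.ConstantMemory
```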
So it can be solved by setting SPCONV_NVRTC_MODE = 4 instead of SPCONV_NVRTC_MODE = NVRTCMode.ConstantMemory.
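If you'd rather not hard-code the integer unconditionally, a guarded import can fall back to the raw value only when the installed cumm predates NVRTCMode. This is a sketch of that workaround, not spconv's actual code:

```python
# Sketch of the workaround: prefer the enum when the installed cumm has it,
# otherwise fall back to the raw value of NVRTCMode.ConstantMemory.
try:
    from cumm.gemm.constants import NVRTCMode
    SPCONV_NVRTC_MODE = NVRTCMode.ConstantMemory
except ImportError:
    SPCONV_NVRTC_MODE = 4  # raw value of ConstantMemory on the main branch
```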
However, it seems that "TensorOp" in cumm/gemm/algospec/core.py also doesn't exist at the v0.2.8 tag. How can I solve this problem?
I want to run the command python create_data.py kitti_data_prep --data_path=KITTI_DATASET_ROOT from second.pytorch.