TVMError: Binary was created using cuda but a loader of that name is not registered
My earlier Docker implementation downloaded and built TVM with the CUDA flags enabled:
# install tvm
RUN git clone --recursive https://github.com/apache/incubator-tvm tvm && \
    cd tvm && \
    git reset --hard 338940dc5044885412f9a6045cb8dcdf9fb639a4 && \
    git submodule init && \
    git submodule update && \
    mkdir ./build && \
    cd build && \
    cmake -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_CUBLAS=ON -DUSE_THRUST=ON -DUSE_LLVM=ON .. && \
    make -j$(nproc) && \
    cd ../python && \
    python3.8 setup.py install && \
    cd ../.. && rm -rf tvm
Now I'm generating a TVM wheel instead, then installing and using it in another Docker container where CUDA and cuDNN are also installed. Current code:
# TVM wheel generation from another place
RUN git clone --recursive https://github.com/apache/incubator-tvm tvm && \
    cd tvm && \
    git reset --hard 338940dc5044885412f9a6045cb8dcdf9fb639a4 && \
    git submodule init && \
    git submodule update && \
    mkdir ./build && \
    cp cmake/config.cmake build && \
    cd build && \
    cmake -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_CUBLAS=ON -DUSE_THRUST=ON -DUSE_LLVM=ON .. && \
    make -j$(nproc) && \
    cd ../python && $PYTHON_VERSION setup.py bdist_wheel
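Before installing the wheel in the other container, it can help to confirm that the compiled native library was actually packaged into it. A wheel is just a zip archive, so a small helper (the function name here is mine, not a TVM API) can list the shared objects inside:

```python
import zipfile

def list_native_libs(wheel_path):
    # A wheel is a zip archive; return every shared object it packages.
    with zipfile.ZipFile(wheel_path) as wf:
        return [name for name in wf.namelist() if name.endswith(".so")]

# Example:
# list_native_libs("tvm-0.8.dev1452+g338940dc5-cp38-cp38-linux_x86_64.whl")
```

If no libtvm shared object appears in the list, the CUDA-enabled runtime never made it into the wheel, which would explain the missing cuda loader in the second container.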
Install wheel in Docker:
RUN python3.8 -m pip install tvm-0.8.dev1452+g338940dc5-cp38-cp38-linux_x86_64.whl
But it throws: TVMError: Binary was created using cuda but a loader of that name is not registered
Full tvm logs:
2022-12-12 15:25:54
File "/usr/lib64/python3.8/site-packages/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  6: TVMFuncCall
  5: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void tvm::runtime::TypedPackedFunc<tvm::runtime::Module (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>::AssignTypedLambda<tvm::runtime::Module (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>(tvm::runtime::Module (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  4: tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
  3: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
  2: tvm::runtime::CreateModuleFromLibrary(tvm::runtime::ObjectPtr<tvm::runtime::Library>)
  1: tvm::runtime::ProcessModuleBlob(char const*, tvm::runtime::ObjectPtr<tvm::runtime::Library>, tvm::runtime::Module*, tvm::runtime::ModuleNode**)
  0: tvm::runtime::LoadModuleFromBinary(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, dmlc::Stream*)
  File "/tmp/tvm/src/runtime/library_module.cc", line 116
TVMError: Binary was created using cuda but a loader of that name is not registered. Available loaders are GraphRuntimeFactory, metadata, GraphExecutorFactory, VMExecutable. Perhaps you need to recompile with this runtime enabled.
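The last line is the key: TVM keeps a registry of binary loaders keyed by module type ("cuda", "metadata", and so on), and the loader for "cuda" is only registered when the runtime in the installed package was compiled with CUDA support. A minimal Python sketch of that registry pattern (hypothetical, not TVM's actual code, which lives in C++) shows why the error fires:

```python
# Sketch of a loader registry: each module type key must have a
# deserializer registered before a binary of that type can be loaded.
LOADERS = {}

def register_loader(type_key):
    def deco(fn):
        LOADERS[type_key] = fn
        return fn
    return deco

@register_loader("GraphExecutorFactory")
def load_graph_executor(blob):
    # Hypothetical stand-in for a real deserializer.
    return ("GraphExecutorFactory", blob)

# In a CUDA-enabled build, an equivalent of this registration would also
# run at startup:
# @register_loader("cuda")
# def load_cuda(blob): ...

def load_module(type_key, blob):
    if type_key not in LOADERS:
        raise RuntimeError(
            f"Binary was created using {type_key} but a loader of that "
            f"name is not registered. Available loaders are "
            f"{', '.join(LOADERS)}."
        )
    return LOADERS[type_key](blob)
```

Since the error lists only GraphRuntimeFactory, metadata, GraphExecutorFactory, and VMExecutable as available, the libtvm that got installed from the wheel apparently does not have the CUDA runtime compiled in, even though the build flags were passed to cmake.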
Dear friend, have you fixed this issue? I'm hitting the same error.
same issue +1