
[Bug] relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

Open marquicus opened this issue 10 months ago • 5 comments

🐛 Bug

I got a number-of-arguments error: TVMError: ... relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

The error occurs while initializing the ChatModule in a sample Python script that tests the TVM & MLC compilation, taken from this post.

To Reproduce

Steps to reproduce the behavior as described here https://blog.mlc.ai/2023/08/09/GPU-Accelerated-LLM-on-Orange-Pi:

  1. Install ubuntu from here https://github.com/Joshua-Riek/ubuntu-rockchip/releases/tag/v1.33
  2. Compile TVM with latest https://github.com/mlc-ai/relax#mlc branch
  3. Compile MLC with latest https://github.com/mlc-ai/mlc-llm@main branch
  4. Follow the steps to build TVM & MLC with redpajama model
  5. Export TVM & MLC environment variables and make sure this command works python3 -c "import tvm; print(tvm._ffi.base._LIB)"
  6. Run the sample Python script; the error occurs while initializing the ChatModule:

         from mlc_llm import ChatModule
         from mlc_llm.callback import StreamToStdout

         cm = ChatModule(
             model="dist/prebuilt/RedPajama-INCITE-Chat-3B-v1-q4f16_1-MLC",
             model_lib_path="dist/prebuilt/lib/RedPajama-INCITE-Chat-3B-v1/RedPajama-INCITE-Chat-3B-v1-q4f16_1-mali.so",
             device="opencl",
         )
  7. Log output given: TVMError: Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

Expected behavior

Response from model

Environment

  • Platform: OpenCL
  • Operating system: Ubuntu 22.04.3 LTS
  • Device: Orange Pi 5, 8 cores, 8 GB RAM
  • How you installed MLC-LLM: source git clone --recursive https://github.com/mlc-ai/mlc-llm.git
  • How you installed TVM-Unity (pip, source): source git clone --recursive https://github.com/mlc-ai/relax -b mlc
  • Python version: Python 3.10.12 from distro
  • GPU driver version (if applicable): GL_RENDERER = Mali-G610 (Panfrost) GL_VERSION = 3.3 (Compatibility Profile) Mesa 23.0.0-devel GL_VENDOR = Panfrost
  • CUDA/cuDNN version: No
  • TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models): USE_NVTX: OFF USE_GTEST: AUTO SUMMARIZE: OFF TVM_DEBUG_WITH_ABI_CHANGE: OFF USE_IOS_RPC: OFF USE_MSC: OFF USE_ETHOSU: OFF CUDA_VERSION: NOT-FOUND USE_LIBBACKTRACE: AUTO DLPACK_PATH: 3rdparty/dlpack/include USE_TENSORRT_CODEGEN: OFF USE_THRUST: OFF USE_TARGET_ONNX: OFF USE_AOT_EXECUTOR: ON BUILD_DUMMY_LIBTVM: OFF USE_CUDNN: OFF USE_TENSORRT_RUNTIME: OFF USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF USE_CCACHE: AUTO USE_ARM_COMPUTE_LIB: OFF USE_CPP_RTVM: OFF USE_OPENCL_GTEST: /path/to/opencl/gtest USE_MKL: OFF USE_PT_TVMDSOOP: OFF MLIR_VERSION: NOT-FOUND USE_CLML: OFF USE_STACKVM_RUNTIME: OFF USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF ROCM_PATH: /opt/rocm USE_DNNL: OFF USE_VITIS_AI: OFF USE_MLIR: OFF USE_RCCL: OFF USE_LLVM: OFF USE_VERILATOR: OFF USE_TF_TVMDSOOP: OFF USE_THREADS: ON USE_MSVC_MT: OFF BACKTRACE_ON_SEGFAULT: OFF USE_GRAPH_EXECUTOR: ON USE_NCCL: OFF USE_ROCBLAS: OFF GIT_COMMIT_HASH: ae057a2e74e895a846df958c19ff342505131a65 USE_VULKAN: OFF USE_RUST_EXT: OFF USE_CUTLASS: OFF USE_CPP_RPC: OFF USE_HEXAGON: OFF USE_CUSTOM_LOGGING: OFF USE_UMA: OFF USE_FALLBACK_STL_MAP: OFF USE_SORT: ON USE_RTTI: ON GIT_COMMIT_TIME: 2024-04-12 17:11:33 -0400 USE_HEXAGON_SDK: /path/to/sdk USE_BLAS: none USE_ETHOSN: OFF USE_LIBTORCH: OFF USE_RANDOM: ON USE_CUDA: OFF USE_COREML: OFF USE_AMX: OFF BUILD_STATIC_RUNTIME: OFF USE_CMSISNN: OFF USE_KHRONOS_SPIRV: OFF USE_CLML_GRAPH_EXECUTOR: OFF USE_TFLITE: OFF USE_HEXAGON_GTEST: /path/to/hexagon/gtest PICOJSON_PATH: 3rdparty/picojson USE_OPENCL_ENABLE_HOST_PTR: OFF INSTALL_DEV: OFF USE_PROFILER: ON USE_NNPACK: OFF LLVM_VERSION: NOT-FOUND USE_MRVL: OFF USE_OPENCL: ON COMPILER_RT_PATH: 3rdparty/compiler-rt RANG_PATH: 3rdparty/rang/include USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF USE_OPENMP: none USE_BNNS: OFF USE_FLASHINFER: USE_CUBLAS: OFF USE_METAL: OFF USE_MICRO_STANDALONE_RUNTIME: OFF 
USE_HEXAGON_EXTERNAL_LIBS: OFF USE_ALTERNATIVE_LINKER: AUTO USE_BYODT_POSIT: OFF USE_HEXAGON_RPC: OFF USE_MICRO: OFF DMLC_PATH: 3rdparty/dmlc-core/include INDEX_DEFAULT_I64: ON USE_RELAY_DEBUG: OFF USE_RPC: ON USE_TENSORFLOW_PATH: none TVM_CLML_VERSION: USE_MIOPEN: OFF USE_ROCM: OFF USE_PAPI: OFF USE_CURAND: OFF TVM_CXX_COMPILER_PATH: /usr/bin/c++ HIDE_PRIVATE_SYMBOLS: OFF
  • Any other relevant information:

Additional context

Following the RK3588 board OpenCL driver setup, I found that libmali-g610 and mali_csffw were already present, so I didn't download the libraries. I suspect the issue is a compatibility mismatch between MLC and TVM. I did try other TVM versions (v0.15.0, v0.14.0, v0.13.0, etc.) from here, but MLC wouldn't compile against them.

marquicus avatar Apr 18 '24 17:04 marquicus

Thank you @marquicus for reporting. Would you mind sharing a fuller backtrace of the error

TVMError: Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

so that we can take a closer look?

MasterJH5574 avatar Apr 19 '24 21:04 MasterJH5574

I am running into the same issue trying to run the pre-built mlc-chat.apk on my Pixel phone, downloaded from the following link.

https://github.com/mlc-ai/binary-mlc-llm-libs/releases/download/Android/mlc-chat.apk

relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided. 
Stack trace: 
File "Users/kartik/mlc/tvm/include/tvm/runtime/packed_func.h",  line 1908

leondotle avatar Apr 22 '24 23:04 leondotle

Thank you @marquicus for reporting. Would you mind sharing a fuller backtrace of the error

TVMError: Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

so that we can take a closer look?


For sure, here's the complete output:

[2024-04-29 19:19:58] INFO auto_device.py:76: Found device: opencl:0
[2024-04-29 19:19:58] INFO chat_module.py:379: Using model folder: /home/orangepi/Repos/mlc-llm/dist/prebuilt/RedPajama-INCITE-Chat-3B-v1-q4f16_1-MLC
[2024-04-29 19:19:58] INFO chat_module.py:380: Using mlc chat config: /home/orangepi/Repos/mlc-llm/dist/prebuilt/RedPajama-INCITE-Chat-3B-v1-q4f16_1-MLC/mlc-chat-config.json
[2024-04-29 19:19:58] INFO chat_module.py:529: Using library model: dist/prebuilt/lib/RedPajama-INCITE-Chat-3B-v1/RedPajama-INCITE-Chat-3B-v1-q4f16_1-mali.so
[2024-04-29 19:19:59] INFO model_metadata.py:96: Total memory usage: 1581.34 MB (Parameters: 1491.34 MB. KVCache: 0.00 MB. Temporary buffer: 90.00 MB)
[2024-04-29 19:19:59] INFO model_metadata.py:105: To reduce memory usage, tweak prefill_chunk_size, context_window_size and sliding_window_size
arm_release_ver: g13p0-01eac0, rk_so_ver: 3
Traceback (most recent call last):
  File "/home/orangepi/Repos/mlc-llm/test.py", line 4, in <module>
    cm = ChatModule(
  File "/home/orangepi/Repos/mlc-llm/python/mlc_llm/chat_module.py", line 797, in __init__
    self._reload(self.model_lib_path, self.model_path, user_chat_config_json_str)
  File "/home/orangepi/Repos/mlc-llm/python/mlc_llm/chat_module.py", line 1017, in _reload
    self._reload_func(lib, model_path, app_config_json)
  File "/home/orangepi/Repos/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 239, in __call__
    raise_last_ffi_error()
  File "/home/orangepi/Repos/tvm/python/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
  File "/home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc", line 964, in operator()
    self->InvokeClosurePacked(clo, args, rv);
  File "/home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc", line 558, in tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
    clo->impl.CallPacked(TVMArgs(values.data(), tcodes.data(), args.size() + 1), rv);
  File "/home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc", line 632, in operator()
    *rv = static_cast<VirtualMachineImpl*>(ctx_ptr)->InvokeBytecode(gf_idx, inputs);
  File "/home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc", line 689, in tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, std::vector<tvm::runtime::TVMRetValue, std::allocator<tvm::runtime::TVMRetValue> > const&)
    RunLoop();
  File "/home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc", line 814, in tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()
    this->RunInstrCall(curr_frame, instr);
  File "/home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc", line 767, in tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*, tvm::runtime::relax_vm::Instruction)
    this->InvokeClosurePacked(func_pool_[instr.func_idx], args, &ret);
  File "/home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc", line 540, in tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
    packed->CallPacked(args, rv);
tvm._ffi.base.TVMError: Traceback (most recent call last):
  9: 0x0000ffff9e1fdf9b
  8: 0x0000ffff9e1fde63
  7: 0x0000ffff9e1fc377
  6: operator()
        at /home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc:964
  5: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
        at /home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc:558
  4: operator()
        at /home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc:632
  3: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, std::vector<tvm::runtime::TVMRetValue, std::allocator<tvm::runtime::TVMRetValue> > const&)
        at /home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc:689
  2: tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()
        at /home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc:814
  1: tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*, tvm::runtime::relax_vm::Instruction)
        at /home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc:767
  0: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
        at /home/orangepi/Repos/tvm/src/runtime/relax_vm/vm.cc:540
  File "/home/orangepi/Repos/tvm/include/tvm/runtime/packed_func.h", line 1908
TVMError: Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

marquicus avatar Apr 30 '24 01:04 marquicus

I am running into the same issue trying to run the pre-built mlc-chat.apk on my Pixel phone, downloaded from the following link.

https://github.com/mlc-ai/binary-mlc-llm-libs/releases/download/Android/mlc-chat.apk

relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided. 
Stack trace: 
File "Users/kartik/mlc/tvm/include/tvm/runtime/packed_func.h",  line 1908

Same problem on my Pixel 7 Pro

Progaros avatar May 03 '24 17:05 Progaros

Same on Poco F3 with RedPajama. Entire trace:

MLCChat failed

Stack trace:
org.apache.tvm.Base$TVMError: TVMError: Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.
Stack trace:
  File "/Users/kartik/mlc/tvm/include/tvm/runtime/packed_func.h", line 1908

	at org.apache.tvm.Base.checkCall(Base.java:173)
	at org.apache.tvm.Function.invoke(Function.java:130)
	at ai.mlc.mlcllm.ChatModule.reload(ChatModule.java:46)
	at ai.mlc.mlcchat.AppViewModel$ChatState$mainReloadChat$1$2.invoke(AppViewModel.kt:648)
	at ai.mlc.mlcchat.AppViewModel$ChatState$mainReloadChat$1$2.invoke(AppViewModel.kt:646)
	at ai.mlc.mlcchat.AppViewModel$ChatState.callBackend(AppViewModel.kt:548)
	at ai.mlc.mlcchat.AppViewModel$ChatState.mainReloadChat$lambda$3(AppViewModel.kt:646)
	at ai.mlc.mlcchat.AppViewModel$ChatState.$r8$lambda$CXL6v4mjTu_Sr5Pk2zFDcus0R-8(Unknown Source:0)
	at ai.mlc.mlcchat.AppViewModel$ChatState$$ExternalSyntheticLambda2.run(Unknown Source:8)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
	at java.lang.Thread.run(Thread.java:923)


Error message:
TVMError: Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.
Stack trace:
  File "/Users/kartik/mlc/tvm/include/tvm/runtime/packed_func.h", line 1908

EDIT: Llama 3 works

xslendix avatar May 05 '24 10:05 xslendix

Same on Llama q0f16, Platform: Metal, iOS app:

Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

0x0000000100f4905c tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<void tvm::runtime::TypedPackedFunc<tvm::runtime::relax_vm::AttentionKVCache (tvm::runtime::ShapeTuple, long long, long long, long long, long long, int, double, double, tvm::runtime::NDArray, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::PackedFunc, tvm::runtime::Optional<tvm::runtime::PackedFunc>)>::AssignTypedLambda<tvm::runtime::relax_vm::$_1>(tvm::runtime::relax_vm::$_1, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>)::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)>>::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) + 2488

mlc-llm @ 9998076153d5309ec87dc32c373e1759813ee84e tvm@ https://github.com/mlc-ai/relax/tree/ce58d63453ff83b930fa2be665647621b2eec4d2

@MasterJH5574

Update: I figured out how to solve this:

  1. Delete the build folder under ios/MLCChat
  2. Rerun mlc_llm package

textmony avatar May 20 '24 00:05 textmony

Same error on Firefly's RK3588 platform:

[2024-05-21 12:52:51] INFO auto_device.py:79: Found device: opencl:0
[2024-05-21 12:52:51] INFO chat_module.py:379: Using model folder: /home/firefly/mlc-llm/dist/prebuilt/llama2_7b_q4f16
[2024-05-21 12:52:51] INFO chat_module.py:380: Using mlc chat config: /home/firefly/mlc-llm/dist/prebuilt/llama2_7b_q4f16/mlc-chat-config.json
[2024-05-21 12:52:51] INFO chat_module.py:529: Using library model: dist/prebuilt/lib/Llama-2-7b-chat-hf/Llama-2-7b-chat-hf-q4f16_1-mali.so
[2024-05-21 12:52:52] INFO model_metadata.py:96: Total memory usage: 4077.14 MB (Parameters: 3615.13 MB. KVCache: 0.00 MB. Temporary buffer: 462.01 MB)
[2024-05-21 12:52:52] INFO model_metadata.py:105: To reduce memory usage, tweak prefill_chunk_size, context_window_size and sliding_window_size
arm_release_ver of this libmali is 'g6p0-01eac0', rk_so_ver is '7'.
Traceback (most recent call last):
  File "/home/firefly/mlc-llm/chat.py", line 4, in <module>
    cm = ChatModule(
  File "/home/firefly/mlc-llm/python/mlc_llm/chat_module.py", line 795, in __init__
    self._reload(self.model_lib, self.model_path, user_chat_config_json_str)
  File "/home/firefly/mlc-llm/python/mlc_llm/chat_module.py", line 1015, in _reload
    self._reload_func(lib, model_path, app_config_json)
  File "/home/firefly/tvm_unity/python/tvm/_ffi/_ctypes/packed_func.py", line 239, in __call__
    raise_last_ffi_error()
  File "/home/firefly/tvm_unity/python/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
  File "/home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc", line 964, in operator()
    self->InvokeClosurePacked(clo, args, rv);
  File "/home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc", line 558, in tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
    clo->impl.CallPacked(TVMArgs(values.data(), tcodes.data(), args.size() + 1), rv);
  File "/home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc", line 632, in operator()
    *rv = static_cast<VirtualMachineImpl*>(ctx_ptr)->InvokeBytecode(gf_idx, inputs);
  File "/home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc", line 689, in tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, std::vector<tvm::runtime::TVMRetValue, std::allocator<tvm::runtime::TVMRetValue> > const&)
    RunLoop();
  File "/home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc", line 814, in tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()
    this->RunInstrCall(curr_frame, instr);
  File "/home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc", line 767, in tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*, tvm::runtime::relax_vm::Instruction)
    this->InvokeClosurePacked(func_pool_[instr.func_idx], args, &ret);
  File "/home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc", line 540, in tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
    packed->CallPacked(args, rv);
tvm._ffi.base.TVMError: Traceback (most recent call last):
  9: 0x0000007f8c999fe7
  8: 0x0000007f8c999dbb
  7: 0x0000007f8c9981ab
  6: operator()
        at /home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc:964
  5: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
        at /home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc:558
  4: operator()
        at /home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc:632
  3: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, std::vector<tvm::runtime::TVMRetValue, std::allocator<tvm::runtime::TVMRetValue> > const&)
        at /home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc:689
  2: tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()
        at /home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc:814
  1: tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*, tvm::runtime::relax_vm::Instruction)
        at /home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc:767
  0: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
        at /home/firefly/tvm_unity/src/runtime/relax_vm/vm.cc:540
  File "/home/firefly/tvm_unity/include/tvm/runtime/packed_func.h", line 1908
TVMError: Function vm.builtin.paged_attention_kv_cache_create_reduced(0: runtime.ShapeTuple, 1: int64_t, 2: int64_t, 3: int64_t, 4: int64_t, 5: int, 6: double, 7: double, 8: runtime.NDArray, 9: runtime.PackedFunc, 10: runtime.PackedFunc, 11: runtime.PackedFunc, 12: runtime.PackedFunc, 13: runtime.PackedFunc, 14: runtime.PackedFunc, 15: runtime.PackedFunc, 16: runtime.PackedFunc, 17: runtime.PackedFunc, 18: runtime.PackedFunc) -> relax.vm.AttentionKVCache expects 19 arguments, but 18 were provided.

dwwu avatar May 21 '24 12:05 dwwu

When you see an error like this, it is likely due to a stale prebuilt binary. Please remove the prebuilt binary:

  • android, ios: use the latest instructions in mlc_llm package
  • other platforms (e.g. Orange Pi): remove the model_lib field and allow MLC to auto-JIT a new library
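The reason the auto-JIT path avoids this class of error is that a library compiled on the spot always matches the installed runtime. A rough sketch of the resolution logic (hypothetical helper names, not the actual mlc_llm code):

```python
# Hypothetical sketch (not the real mlc_llm internals): resolve the
# model library either from a pinned prebuilt path or by JIT-compiling
# one against the currently installed runtime.
def resolve_model_lib(config: dict, jit_compile, load_prebuilt):
    path = config.get("model_lib")
    if path is not None:
        # Pinned prebuilt library: may be stale relative to the runtime,
        # which is how the 18-vs-19-argument mismatch arises.
        return load_prebuilt(path)
    # No pin: build a fresh library, guaranteed to match the runtime.
    return jit_compile(config["model"])

# Usage sketch with stub callbacks standing in for the real loaders:
lib = resolve_model_lib(
    {"model": "RedPajama-INCITE-Chat-3B-v1-q4f16_1-MLC"},
    jit_compile=lambda model: f"jit:{model}",
    load_prebuilt=lambda path: f"prebuilt:{path}",
)
```

In the real ChatModule, dropping the model_lib / model_lib_path argument triggers the JIT branch, as the chat_module.py log line "Now compiling model lib on device..." later in this thread shows.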

tqchen avatar May 21 '24 13:05 tqchen

When you see an error like this, it is likely due to a stale prebuilt binary. Please remove the prebuilt binary:

  • android, ios: use the latest instructions in mlc_llm package
  • other platforms (e.g. Orange Pi): remove the model_lib field and allow MLC to auto-JIT a new library

Still some errors when auto-JITing a new library:

python chat.py
[2024-05-21 13:07:35] INFO auto_device.py:79: Found device: opencl:0
[2024-05-21 13:07:35] INFO chat_module.py:379: Using model folder: /home/firefly/mlc-llm/dist/prebuilt/llama2_7b_q4f16
[2024-05-21 13:07:35] INFO chat_module.py:380: Using mlc chat config: /home/firefly/mlc-llm/dist/prebuilt/llama2_7b_q4f16/mlc-chat-config.json
[2024-05-21 13:07:35] INFO chat_module.py:781: Now compiling model lib on device...
Traceback (most recent call last):
  File "/home/firefly/mlc-llm/chat.py", line 4, in <module>
    cm = ChatModule(
  File "/home/firefly/mlc-llm/python/mlc_llm/chat_module.py", line 782, in __init__
    from mlc_llm.interface import jit  # pylint: disable=import-outside-toplevel
  File "/home/firefly/mlc-llm/python/mlc_llm/interface/jit.py", line 17, in <module>
    from mlc_llm.model import MODELS
  File "/home/firefly/mlc-llm/python/mlc_llm/model/__init__.py", line 2, in <module>
    from .model import MODELS, Model
  File "/home/firefly/mlc-llm/python/mlc_llm/model/model.py", line 6, in <module>
    from tvm.relax.frontend import nn
  File "/home/firefly/tvm_unity/python/tvm/relax/__init__.py", line 68, in <module>
    from .op.base import (
  File "/home/firefly/tvm_unity/python/tvm/relax/op/__init__.py", line 21, in <module>
    from . import _op_gradient, builtin, ccl, distributed, grad, image, memory, nn, op_attrs
  File "/home/firefly/tvm_unity/python/tvm/relax/op/_op_gradient.py", line 129, in <module>
    @register_gradient("relax.add")
  File "/home/firefly/tvm_unity/python/tvm/ir/op.py", line 241, in _register
    _ffi_api.RegisterOpAttr(op_name, attr_key, v, level)
AttributeError: module 'tvm.ir._ffi_api' has no attribute 'RegisterOpAttr'

dwwu avatar May 21 '24 13:05 dwwu

@dwwu please cross-check your TVM installation, and follow the instructions to rebuild TVM Unity

tqchen avatar May 21 '24 13:05 tqchen