
[Question] Is running on ARM64v8 supported?

Open Crystalxd opened this issue 1 year ago • 3 comments

❓ General Questions

After completing the compilation, I tried to run `./mlc_chat_cli --local-id ../../dist/vicuna-7b-all-v1.1-q3f16_0` and got the following error:

```
terminate called after throwing an instance of 'tvm::runtime::InternalError'
  what():  [06:45:08] /data/lxxin/mlc-llm/cpp/cli_main.cc:50: Cannot auto detect device-name
Stack trace:
  [bt] (0) /data/ps/mlc-llm/build/tvm/libtvm_runtime.so(tvm::runtime::Backtrace[abi:cxx11]+0x1c) [0xffff76d377ac]
  [bt] (1) ./mlc_chat_cli(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x54) [0xaaaac9aaee74]
  [bt] (2) ./mlc_chat_cli(+0x8cb4) [0xaaaac9aa8cb4]
  [bt] (3) ./mlc_chat_cli(+0x8ed4) [0xaaaac9aa8ed4]
  [bt] (4) ./mlc_chat_cli(+0x76b4) [0xaaaac9aa76b4]
  [bt] (5) /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8) [0xffff76890e18]
  [bt] (6) ./mlc_chat_cli(+0x8128) [0xaaaac9aa8128]

Aborted (core dumped)
```

Crystalxd avatar Jun 12 '23 07:06 Crystalxd

Please see the documentation here; you need to run `gen_cmake_config.py` so the build can detect the device.
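For reference, the reconfigure-and-rebuild flow looks roughly like this (the exact paths are assumptions based on a typical mlc-llm checkout, not taken from this thread):

```shell
# Regenerate the CMake config so the build picks up the right device
# backend, then rebuild the CLI. Paths are illustrative.
cd mlc-llm/build
python3 ../cmake/gen_cmake_config.py     # answers the device prompts
cmake .. && cmake --build . --parallel
```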

Hzfengsy avatar Jun 12 '23 08:06 Hzfengsy

Thanks a lot! I tried it, but I still got the same error. I suppose MLC does not support running on an ARM CPU (or perhaps any CPU) without a GPU.

Crystalxd avatar Jun 12 '23 10:06 Crystalxd

You are right. CPUs are too weak to run LLMs, so we focus only on GPU environments due to our limited bandwidth.

On the other hand, it's not hard to add CPU support if you are familiar with TVM. Happy to help if you are interested in contributing CPU support to the community.
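To make the "Cannot auto detect device-name" failure concrete, the detection logic could fall back to a CPU target after the GPU probes fail. Below is a minimal sketch in pure Python; `device_exists` is a hypothetical stand-in for the real runtime query (e.g. TVM's `tvm.device(name, 0).exist`), stubbed here so the example is self-contained, and is not the actual mlc-llm implementation:

```python
# Sketch of a device auto-detection loop with a CPU fallback.
GPU_BACKENDS = ("cuda", "vulkan", "metal", "opencl")

def detect_device(device_exists):
    """Return the first available GPU backend, else fall back to CPU."""
    for name in GPU_BACKENDS:
        if device_exists(name):
            return name
    # A CPU ("llvm") target is always available as a last resort,
    # instead of raising InternalError as the current CLI does.
    return "llvm"

# On a GPU-less ARM64 box every GPU probe fails, so we land on the CPU:
print(detect_device(lambda name: False))  # -> llvm
```

The key change versus the current behavior is that the loop returns a CPU target instead of aborting when no GPU runtime is present.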

Hzfengsy avatar Jun 12 '23 10:06 Hzfengsy

Thanks very much! I got it.

Crystalxd avatar Jun 13 '23 02:06 Crystalxd