Junru Shao
Should be fixed on HEAD
Please refer to this page for proper TVM installation: https://mlc.ai/mlc-llm/docs/install/tvm.html
@NullCodex I closed the issue because I believe it has been fixed. I was awaiting confirmation but there was no response, so I assume the issue is gone. I would expect...
@NullCodex the wheel you are looking for is `mlc_ai_nightly-0.12.dev1300-cp38-cp38-macosx_13_0_arm64.whl` in https://mlc.ai/wheels. I am not quite sure why pip didn't pick it up. Could you check: ```...
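As a side note on why pip might skip that wheel: pip only installs a wheel whose filename tags match the local interpreter, so a `cp38` / `macosx_13_0_arm64` wheel requires Python 3.8 on an arm64 macOS. A minimal sketch (stdlib only) for inspecting what the local interpreter reports:

```python
# Sketch: print the interpreter properties pip matches wheel tags against.
# A cp38-...-macosx_13_0_arm64 wheel needs Python 3.8 on an arm64 Mac.
import sys
import platform
import sysconfig

print(sys.version_info[:2])      # should be (3, 8) for a cp38 wheel
print(platform.machine())        # should be 'arm64' for an arm64 wheel
print(sysconfig.get_platform())  # e.g. 'macosx-13-arm64'
```

If any of these disagree with the wheel's tags (for example an x86_64 Python under Rosetta), pip will silently ignore the wheel.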
@NullCodex I have no clue, unfortunately :( Could you instead download this wheel and install it locally?
M2 is supported. Could you check your conda installation and see whether it's for x86 or arm? See also: https://mlc.ai/mlc-llm/docs/install/conda.html#validate-installation
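One quick way to check this (a sketch, assuming a Unix shell with the conda env activated): compare the hardware architecture with what the conda Python reports. On an M-series Mac both should print `arm64`; if Python prints `x86_64`, the conda installation is the Intel build running under Rosetta.

```shell
# Hardware architecture as seen by the OS:
uname -m
# Architecture of the Python inside the active conda env:
python -c "import platform; print(platform.machine())"
```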
I really have no clue what's going on on your end. I'm able to download and install on my own M2 Max without any issue. @NullCodex let's try this dry...
@NullCodex we have a detailed set of instructions if you decide to build it from source: https://mlc.ai/mlc-llm/docs/install/tvm.html#option-2-build-from-source
@NullCodex There is effectively no hardware requirement for building libtvm; it is simply running make/clang. Which command are you running that eats so much memory?
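One common cause worth ruling out (an assumption on my part, not something confirmed in this thread): a highly parallel build runs one clang job per core, and each job can use a fair amount of RAM, so the peak usage scales with the job count. Capping the job count trades build time for memory:

```shell
# If a parallel build exhausts memory, cap the number of parallel
# compiler jobs instead of using one job per core:
make -j2    # instead of: make -j"$(nproc)"
```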
@NullCodex it took 5 min to compile on my M2... Not sure what's going on.