Tianqi Chen
Thanks for reporting. If it is possible to get a minimal repro, that would be helpful. You can do so by dumping out the TVMScript before the transform, then minimizing it...
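As a hedged sketch of the dumping step (module and pass names here are illustrative, not from the original report; assumes a recent TVM build where `IRModule.script()` is available), saving the TVMScript of a module before a transform might look like:

```python
# Illustrative sketch: print/save an IRModule as TVMScript so the input to a
# failing transform can be saved and minimized by hand.
import tvm
from tvm.script import tir as T

@tvm.script.ir_module
class MyModule:  # placeholder module; substitute the one that triggers the bug
    @T.prim_func
    def add_one(A: T.Buffer((8,), "float32"), B: T.Buffer((8,), "float32")):
        for i in range(8):
            with T.block("B"):
                vi = T.axis.spatial(8, i)
                B[vi] = A[vi] + T.float32(1)

# TVMScript round-trips through Python source, so the dump itself is a repro.
with open("before_transform.py", "w") as f:
    f.write(MyModule.script())
```

You can then delete pieces of the dumped script until the smallest module that still reproduces the failure remains.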
Likely it depends on the JNI environment you are using. Previously, the Android JNI was triggered directly via https://github.com/apache/tvm/blob/main/jvm/native/src/main/native/org_apache_tvm_native_c_api.cc#L232, which passes in the address; a normal Java build should not have TVM4J_ANDROID...
This is due to your JAVA_HOME setting; we highly recommend pointing it to Android Studio's JDK. See also https://github.com/mlc-ai/mlc-llm/pull/2327
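As an illustration (the paths below are assumptions for typical Android Studio installs; adjust them to your machine), pointing JAVA_HOME at Android Studio's bundled JDK looks like:

```shell
# macOS: Android Studio ships its JDK ("jbr") inside the app bundle.
export JAVA_HOME="/Applications/Android Studio.app/Contents/jbr/Contents/Home"

# Linux: a common install location (adjust if you unpacked elsewhere).
# export JAVA_HOME="$HOME/android-studio/jbr"

# Verify which JDK is being picked up:
"$JAVA_HOME/bin/java" -version
```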
Check out the latest instructions here: https://llm.mlc.ai/docs/deploy/android.html
Passing in `android` would be fine, and it will invoke the Adreno GPU.
After reading through the code, I think @BitCircuit is right and we can safely remove this line. Do you mind sending a PR?
This seems to have to do with the Rust installation; consider reinstalling the Rust environment.
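A minimal sketch of reinstalling the Rust toolchain with rustup (the official installer; commands assume a Unix shell and will download from the network):

```shell
# Remove the existing rustup-managed toolchain, if any.
rustup self uninstall

# Reinstall rustup along with a default stable toolchain.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Verify the fresh installation.
rustc --version
cargo --version
```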
Likely this is due to an older generation of the GPU. You can try building TVM and MLC from source without FlashInfer/Thrust: https://llm.mlc.ai/docs/install/tvm.html
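As a hedged sketch (the flag names below are assumptions based on common TVM/MLC build conventions; check the `config.cmake` in your checkout for the exact option names), disabling FlashInfer and Thrust in a source build would look like:

```cmake
# In build/config.cmake, before running cmake:
set(USE_FLASHINFER OFF)  # skip FlashInfer kernels (assumed flag name)
set(USE_THRUST OFF)      # skip Thrust-based kernels (assumed flag name)
set(USE_CUDA ON)         # keep the plain CUDA code path
```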
We have now updated the latest version to disable FlashInfer when it is not available.
When you see an error like this, it is likely due to a stale prebuilt binary. Please remove the prebuilt binary - android, ios: use the latest instructions in `mlc_llm package` -...