[Question] error: package org.apache.tvm does not exist import org.apache.tvm.Device
While building an APK from this project https://github.com/mlc-ai/mlc-llm/tree/main/android/, I get the following error:
/home/<user_name>/Downloads/mlc-llm/android/library/src/main/java/ai/mlc/mlcllm/ChatModule.java:3: error: package org.apache.tvm does not exist import org.apache.tvm.Device;
How can I resolve this error?
Same problem here.
@shalinsirwani @daniele777
After running prepare_libs.sh, a JAR and an ARM lib are copied to the library/build/output folder in the Android project. (Note: the folder is output, singular, NOT outputs.)
That JAR contains the org.apache.tvm package.
The problem is that if you do "Clean/Rebuild Project", Gradle clears the build/output folder, so the package is lost!
I fixed this by copying the contents of the output folder to a new folder, libs (library/libs).
Then I went to the build.gradle in the library folder and changed build/output in both the jniLibs and dependencies entries, like this:
[build/output] ===> [libs]
Then I cleaned and rebuilt the project.
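For reference, the change described above might look roughly like this in android/library/build.gradle. This is only a sketch: the exact property names and the original paths are assumptions based on a typical Android library module setup, not verified against the current MLC-LLM repo.

```groovy
// android/library/build.gradle (sketch; exact entries may differ)
android {
    sourceSets {
        main {
            // was: jniLibs.srcDirs = ['build/output']
            // 'libs' survives a Gradle clean, unlike build/output
            jniLibs.srcDirs = ['libs']
        }
    }
}

dependencies {
    // was: fileTree(dir: 'build/output', include: ['*.jar'])
    implementation fileTree(dir: 'libs', include: ['*.jar'])
}
```

Before rebuilding, copy the generated artifacts once (e.g. `cp -r library/build/output/. library/libs/`) so the JAR and .so files are in the new location.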
I met the same problem and tried @RageshAntonyHM's method. After running the app in Android Studio or manually running prepare_libs.sh, I cannot find the output directory or the TVM JAR. Is there any other solution, or what did I miss?
@RageshAntonyHM:
I'm getting the following error when I'm running prepare_libs.sh
Could you please help me with what's possibly going wrong?
@NSTiwari
Sorry, I don't know .NET. That looks like an error from the .NET framework.
@RageshAntonyHM: In your case, did you use Linux? I suppose this is an issue specifically for Windows.
@NSTiwari No, I am using macOS (MacBook M2).
libs.zip — @NSTiwari @xiaotongyangTran, try this. These are my built libs.
@RageshAntonyHM: Thanks for sharing the libs. I copied the lib files you shared and the project built successfully. However, when I run the app using the gemma-2b-q4f16_1 model, I get the following error.
MLCChat failed
Stack trace:
org.apache.tvm.Base$TVMError: TVMError: Unknown conversation template: gemma_instruction
Stack trace:
File "/Users/rageshantonyd/Documents/working_prj/mlc-llm/cpp/conv_templates.cc", line 743
at org.apache.tvm.Base.checkCall(Base.java:173)
at org.apache.tvm.Function.invoke(Function.java:130)
at ai.mlc.mlcllm.ChatModule.reload(ChatModule.java:46)
at ai.mlc.mlcchat.AppViewModel$ChatState$mainReloadChat$1$2.invoke(AppViewModel.kt:648)
at ai.mlc.mlcchat.AppViewModel$ChatState$mainReloadChat$1$2.invoke(AppViewModel.kt:646)
at ai.mlc.mlcchat.AppViewModel$ChatState.callBackend(AppViewModel.kt:548)
at ai.mlc.mlcchat.AppViewModel$ChatState.mainReloadChat$lambda$3(AppViewModel.kt:646)
at ai.mlc.mlcchat.AppViewModel$ChatState.$r8$lambda$CXL6v4mjTu_Sr5Pk2zFDcus0R-8(Unknown Source:0)
at ai.mlc.mlcchat.AppViewModel$ChatState$$ExternalSyntheticLambda2.run(Unknown Source:8)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:487)
at java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:644)
at java.lang.Thread.run(Thread.java:1012)
Error message:
TVMError: Unknown conversation template: gemma_instruction
Stack trace:
File "/Users/rageshantonyd/Documents/working_prj/mlc-llm/cpp/conv_templates.cc", line 743
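A hedged note on the error above: the stack trace points at cpp/conv_templates.cc, where MLC-LLM registers conversation templates by name in the compiled C++ runtime. "Unknown conversation template: gemma_instruction" therefore suggests the runtime libs were built from a checkout that predates the gemma_instruction template, so updating mlc-llm and re-running prepare_libs.sh should register it. For illustration only (field name assumed from MLC-LLM conventions, not verified), the template name the runtime fails to resolve comes from the model's chat config, e.g.:

```json
{
  "conv_template": "gemma_instruction"
}
```

If rebuilding from a newer checkout is not possible, the runtime will only accept template names that its conv_templates.cc actually registers.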
@NSTiwari Sorry, I have no idea. I think that JAR is tied to my build/device. Maybe you should build it on your own.
@RageshAntonyHM , we are trying to run the shell script but it's not working. Could you please compile the tar file for us and share the generated build files? @NSTiwari
@AashiDutt
Is this okay? https://github.com/mlc-ai/binary-mlc-llm-libs/tree/main
@RageshAntonyHM The files I shared with you contain the fine-tuned model and we need help running the shell script for that.
@RageshAntonyHM: We need the corresponding JAR files for the Android .tar files that @AashiDutt has shared.
@NSTiwari @AashiDutt
Sorry, I haven't tried that. My manager tried compiling a fine-tuned model and got many errors.
So we are now trying MAID: https://github.com/Mobile-Artificial-Intelligence/maid
For now, it is available only for Android.
It is based on Flutter and supports GGUF models directly, without any conversion step like MLC-LLM requires.
@AashiDutt @NSTiwari
For Gemma, also try this
https://github.com/tensorflow/tensorflow/issues/63025#issuecomment-1993756859
@RageshAntonyHM: I see. However, could you please just give it a try considering that the shell script worked for you before?
It would really be a great help if you could take some time and see if this works. The fine-tuned model is nearly the same as the original Gemma 2b-it model, with only small differences.
@NSTiwari
You want the JAR files for your "Science Gemma"?
@RageshAntonyHM: Yes.
@NSTiwari But for that, do I need your model file as well?
@RageshAntonyHM: According to the documentation, the initial steps were to convert the model to sharded weights and then compile the model, which generates an Android tar file.
These steps are already done by us. If you need the model file, you can find it here: https://huggingface.co/NSTiwari/scigemma_fine_tuned_quantized/tree/main
We're only stuck at the last step, where the Android tar file needs to be compiled using the prepare_libs.sh
script, by putting the tar file in the prebuilt_libs folder inside the android/library folder.
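The last step described above can be sketched as shell commands. The tar file name below is hypothetical, and the exact folder layout is assumed from this thread rather than verified against the current repo docs:

```shell
# Sketch of the final packaging step (paths and file name assumed).
# 1. Place the compiled Android tar into the prebuilt libs folder:
cp scigemma-q4f16_1-android.tar mlc-llm/android/library/prebuilt_libs/

# 2. Build the JNI libraries and the TVM JAR:
cd mlc-llm/android/library
./prepare_libs.sh

# 3. If it succeeds, the artifacts land in build/output (singular),
#    including the JAR containing org.apache.tvm and the ARM .so files:
ls build/output
```

If step 2 fails, the error output from prepare_libs.sh is what the repo maintainers would need to diagnose the problem.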
@NSTiwari But aren't the JAR files common across models?
@RageshAntonyHM: No, they are not common. They differ from model to model. Remember, earlier in this same thread you provided me one of your JAR files? It didn't work.
For the Scigemma model, the generated JAR will be different.
@NSTiwari
Then I think you should try other alternatives.
In the MAID app mentioned above, I used a fine-tuned Airoboros model directly without any problem.
But I don't know whether Gemma is supported!
@NSTiwari @AashiDutt
Could you provide me the app_config.json?
@RageshAntonyHM: app-config.json
@NSTiwari @AashiDutt
Sorry, I also get the same error.
I think we need the repo devs' support.
I think Llama works, but not Gemma.
@RageshAntonyHM: Were you able to generate the JAR for the files we shared?