
[Question] error: package org.apache.tvm does not exist import org.apache.tvm.Device

Open shalinsirwani opened this issue 1 year ago • 47 comments

While building the APK from this project, https://github.com/mlc-ai/mlc-llm/tree/main/android/, I am getting the following error:

/home/<user_name>/Downloads/mlc-llm/android/library/src/main/java/ai/mlc/mlcllm/ChatModule.java:3: error: package org.apache.tvm does not exist import org.apache.tvm.Device;

How can I resolve this error?

shalinsirwani avatar Jan 05 '24 12:01 shalinsirwani

same problem

daniele777 avatar Jan 21 '24 21:01 daniele777

@shalinsirwani @daniele777

After running prepare_libs.sh, a JAR and an ARM native library are copied to the /library/build/output folder of the Android project. (Note: NOT outputs, it's output, singular.)

That JAR contains the org.apache.tvm package.

The problem is that if you "Clean/Rebuild Project", Gradle clears the build/output folder, so the package disappears.

I fixed this by copying the contents of the output folder to a new folder, libs (/library/libs).

Then I opened build.gradle in the library folder and changed build/output to libs in both the jniLibs source set and the dependencies:

[build/output] ===> [libs]

Then I cleaned and rebuilt the project.
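The build.gradle change described above might look roughly like this (a minimal sketch; the exact block layout in your library module may differ, so treat the structure as an assumption to adapt):

```groovy
// Hypothetical library/build.gradle fragment after moving the prebuilt
// artifacts from build/output (wiped by Clean) to a persistent libs/ folder.
android {
    sourceSets {
        main {
            // Load the prebuilt ARM .so files from libs/ instead of build/output
            jniLibs.srcDirs = ['libs']
        }
    }
}

dependencies {
    // Pick up the TVM JAR (containing org.apache.tvm) from libs/
    implementation fileTree(dir: 'libs', include: ['*.jar'])
}
```

Because libs/ is not inside the build/ directory, "Clean Project" no longer deletes the JAR and native libraries.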

RageshAntonyHM avatar Feb 08 '24 05:02 RageshAntonyHM

I met the same problem and tried @RageshAntonyHM's method. After running the app in Android Studio, and after manually running prepare_libs.sh, I cannot find the output directory or the TVM JAR. Is there any other solution, or what did I miss?

xiaotongyangTran avatar Feb 23 '24 03:02 xiaotongyangTran

@RageshAntonyHM: I'm getting the following error when I'm running prepare_libs.sh

(screenshots of the error attached)

Could you please help me with what's possibly going wrong?

NSTiwari avatar Feb 26 '24 14:02 NSTiwari

@NSTiwari

Sorry, I don't know .NET. That looks like an error from the .NET framework.

RageshAntonyHM avatar Feb 27 '24 07:02 RageshAntonyHM

@RageshAntonyHM: In your case, did you use Linux? I suppose this is an issue specifically for Windows.

NSTiwari avatar Feb 28 '24 05:02 NSTiwari

@NSTiwari No, I am using macOS (MacBook M2).

RageshAntonyHM avatar Feb 28 '24 05:02 RageshAntonyHM

libs.zip @NSTiwari @xiaotongyangTran Check with this. These are my built libs.

RageshAntonyHM avatar Feb 28 '24 06:02 RageshAntonyHM

@RageshAntonyHM: Thanks for sharing the libs. I copied the lib files you shared and the project built successfully. However, when I run the app using the gemma-2b-q4f16_1 model, I get the following error.

MLCChat failed

Stack trace:
org.apache.tvm.Base$TVMError: TVMError: Unknown conversation template: gemma_instruction
Stack trace:
  File "/Users/rageshantonyd/Documents/working_prj/mlc-llm/cpp/conv_templates.cc", line 743

	at org.apache.tvm.Base.checkCall(Base.java:173)
	at org.apache.tvm.Function.invoke(Function.java:130)
	at ai.mlc.mlcllm.ChatModule.reload(ChatModule.java:46)
	at ai.mlc.mlcchat.AppViewModel$ChatState$mainReloadChat$1$2.invoke(AppViewModel.kt:648)
	at ai.mlc.mlcchat.AppViewModel$ChatState$mainReloadChat$1$2.invoke(AppViewModel.kt:646)
	at ai.mlc.mlcchat.AppViewModel$ChatState.callBackend(AppViewModel.kt:548)
	at ai.mlc.mlcchat.AppViewModel$ChatState.mainReloadChat$lambda$3(AppViewModel.kt:646)
	at ai.mlc.mlcchat.AppViewModel$ChatState.$r8$lambda$CXL6v4mjTu_Sr5Pk2zFDcus0R-8(Unknown Source:0)
	at ai.mlc.mlcchat.AppViewModel$ChatState$$ExternalSyntheticLambda2.run(Unknown Source:8)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:487)
	at java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:644)
	at java.lang.Thread.run(Thread.java:1012)


Error message:
TVMError: Unknown conversation template: gemma_instruction
Stack trace:
  File "/Users/rageshantonyd/Documents/working_prj/mlc-llm/cpp/conv_templates.cc", line 743


NSTiwari avatar Feb 28 '24 07:02 NSTiwari

@NSTiwari Sorry, I have no idea. I think that JAR is tied to my device. Maybe you should build it on your own.

RageshAntonyHM avatar Feb 28 '24 08:02 RageshAntonyHM

@RageshAntonyHM , we are trying to run the shell script but it's not working. Could you please compile the tar file for us and share the generated build files? @NSTiwari

prebuilt_libs.zip

AashiDutt avatar Mar 19 '24 14:03 AashiDutt

@AashiDutt

Is this okay? https://github.com/mlc-ai/binary-mlc-llm-libs/tree/main

RageshAntonyHM avatar Mar 19 '24 16:03 RageshAntonyHM

@RageshAntonyHM The files I shared with you contain the fine-tuned model and we need help running the shell script for that.

AashiDutt avatar Mar 19 '24 16:03 AashiDutt

@RageshAntonyHM: We need the corresponding JAR files for the Android .tar files that @AashiDutt has shared.

NSTiwari avatar Mar 19 '24 16:03 NSTiwari

@NSTiwari @AashiDutt

Sorry, I didn't try that. My manager tried compiling a fine-tuned model and ran into many errors.

So we are now trying MAID: https://github.com/Mobile-Artificial-Intelligence/maid

For now it is available only for Android.

It is based on Flutter and supports GGUF models directly, without any conversion step like MLC-LLM requires.

RageshAntonyHM avatar Mar 20 '24 02:03 RageshAntonyHM

@AashiDutt @NSTiwari

For Gemma, also try this

https://github.com/tensorflow/tensorflow/issues/63025#issuecomment-1993756859

RageshAntonyHM avatar Mar 20 '24 03:03 RageshAntonyHM

@RageshAntonyHM: I see. However, could you please give it a try, considering that the shell script worked for you before?

It would be a great help if you could take some time and see if this works. The fine-tuned model is nearly the same as the original Gemma 2b-it model, with only small differences.

NSTiwari avatar Mar 20 '24 03:03 NSTiwari

@NSTiwari

You want the JAR files for your "Science Gemma" ?

RageshAntonyHM avatar Mar 20 '24 03:03 RageshAntonyHM

@RageshAntonyHM: Yes.

NSTiwari avatar Mar 20 '24 03:03 NSTiwari

@NSTiwari But for that, do I need your model file as well?

RageshAntonyHM avatar Mar 20 '24 03:03 RageshAntonyHM

@RageshAntonyHM: According to the documentation, the initial steps were to convert the model to sharded weights and then compile the model, which generates an Android tar file.

We have already done these steps. If you need the model file, you can find it here: https://huggingface.co/NSTiwari/scigemma_fine_tuned_quantized/tree/main

We're only stuck at the last step, where the Android tar file needs to be compiled using the prepare_libs.sh script, by placing the tar file in the prebuilt_libs folder inside the android/library folder.
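For reference, the end-to-end flow being described might be sketched like this (hedged: the subcommand names and flags are assumptions based on MLC-LLM documentation of that period and vary across versions; the model paths are purely illustrative):

```shell
# 1. Convert the fine-tuned model to quantized, sharded weights
#    (model path and quantization scheme are illustrative)
mlc_llm convert_weight ./scigemma --quantization q4f16_1 -o ./dist/scigemma-q4f16_1

# 2. Compile the model for Android, producing the model-library .tar
mlc_llm compile ./dist/scigemma-q4f16_1/mlc-chat-config.json \
    --device android -o ./dist/scigemma-q4f16_1-android.tar

# 3. Drop the .tar into android/library/prebuilt_libs, then build the
#    Java/JNI libraries (the step that produces the TVM JAR)
cp ./dist/scigemma-q4f16_1-android.tar ./android/library/prebuilt_libs/
cd android/library && ./prepare_libs.sh
```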

NSTiwari avatar Mar 20 '24 03:03 NSTiwari

@NSTiwari But aren't the JAR files common?

RageshAntonyHM avatar Mar 20 '24 03:03 RageshAntonyHM

@RageshAntonyHM: No, they are not common; they differ from model to model. Remember, earlier in this thread you provided me one of your JAR files? It didn't work.

For the Scigemma model, the generated JAR will be different.

NSTiwari avatar Mar 20 '24 03:03 NSTiwari

@NSTiwari

Then I think you should try other alternatives.

With the above-mentioned MAID, I directly used a fine-tuned Airoboros model without any problem.

But I don't know whether Gemma is supported!

RageshAntonyHM avatar Mar 20 '24 04:03 RageshAntonyHM

@NSTiwari @AashiDutt

Could you provide me the app_config.json?

RageshAntonyHM avatar Mar 20 '24 05:03 RageshAntonyHM

@RageshAntonyHM: app-config.json

NSTiwari avatar Mar 20 '24 06:03 NSTiwari

@NSTiwari @AashiDutt

Sorry, I also get the same error:

(screenshot of the error attached)

I think we need the repo devs' support.

RageshAntonyHM avatar Mar 20 '24 07:03 RageshAntonyHM

I think Llama works, but not Gemma.

RageshAntonyHM avatar Mar 20 '24 07:03 RageshAntonyHM

@RageshAntonyHM: Were you able to generate the JAR for the files we shared?

NSTiwari avatar Mar 20 '24 07:03 NSTiwari

Archive 2.zip

@NSTiwari

RageshAntonyHM avatar Mar 20 '24 07:03 RageshAntonyHM