Mediapipe LLM Inference for LoRA Fine-Tuned Gemma-2b-en Model with Keras
I have fine-tuned the Gemma-2b-en model with Keras, and I want to test the model on-device with MediaPipe LLM Inference. What is the procedure for doing this? I read that the Keras model downloaded from Kaggle and fine-tuned does not need conversion the way a Hugging Face model does, but I wanted to understand what merging the LoRA weights, converting the model to TFLite, and the subsequent inference steps look like.
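For reference, my fine-tuning setup follows the Keras Gemma LoRA tutorial fairly closely. A minimal sketch (the training data, rank, and file names below are placeholders, not my actual setup):

```python
import keras
import keras_nlp

# Load the base Gemma 2B preset from Kaggle.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Enable LoRA on the backbone (rank 4, as in the tutorial).
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.preprocessor.sequence_length = 512

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# Placeholder dataset: a list of instruction/response strings.
train_data = ["Instruction:\nWhat is Keras?\n\nResponse:\nKeras is a deep learning API."]
gemma_lm.fit(train_data, epochs=1, batch_size=1)

# As I understand it, Keras 3 merges the LoRA deltas into the base kernels
# when saving, so this checkpoint should already contain merged weights.
gemma_lm.save_weights("gemma_2b_lora_finetuned.weights.h5")
```

It is the step after this, getting from the .weights.h5 file to a TFLite model that the LLM Inference API accepts, that I am unsure about.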
Hi @rahul-exp,
If you are customizing the model, please follow the steps mentioned on the LLM Inference overview page to run the model. Please try this out and let us know your feedback.
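For reference, the core of that page is the checkpoint conversion step. A minimal sketch of it (all paths below are placeholders; for Gemma the docs use model_type "GEMMA_2B" and the tokenizer.model file shipped with the checkpoint):

```python
from mediapipe.tasks.python.genai import converter

# Placeholder paths for a Hugging Face-format Gemma 2B checkpoint.
config = converter.ConversionConfig(
    input_ckpt="gemma-2b/model.safetensors",
    ckpt_format="safetensors",   # the converter accepts "safetensors" or "pytorch"
    model_type="GEMMA_2B",
    backend="gpu",               # target backend for on-device inference: "cpu" or "gpu"
    output_dir="converted/",
    combine_file_only=False,
    vocab_model_file="gemma-2b/tokenizer.model",
    output_tflite_file="gemma_2b_gpu.bin",
)
converter.convert_checkpoint(config)
```

The resulting .bin file is what you pass as the model path to the LLM Inference API on the device.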
Thank you!!
Hi @kuaashish,
Thanks for responding. However, those steps apply only to LoRA fine-tuned Hugging Face models. What about models LoRA fine-tuned with Keras NLP? What is the procedure for converting these models and running inference on them? It would be really helpful if you could shed some light on this.
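For context, the LoRA section of the conversion docs adds a few parameters to the same ConversionConfig, but it appears to expect the LoRA checkpoint in Hugging Face PEFT safetensors form, so I assume the Keras NLP weights would first need to be exported into that layout. A sketch under that assumption (all paths and the rank are placeholders):

```python
from mediapipe.tasks.python.genai import converter

config = converter.ConversionConfig(
    input_ckpt="gemma-2b/model.safetensors",
    ckpt_format="safetensors",
    model_type="GEMMA_2B",
    backend="gpu",               # per the docs, LoRA conversion requires the GPU backend
    output_dir="converted/",
    combine_file_only=False,
    vocab_model_file="gemma-2b/tokenizer.model",
    output_tflite_file="gemma_2b_gpu.bin",
    # LoRA-specific parameters; the checkpoint here is assumed to be an
    # exported PEFT-style safetensors file, not the Keras .weights.h5.
    lora_ckpt="lora/adapter_model.safetensors",
    lora_rank=4,                 # must match the rank used during fine-tuning
    lora_output_tflite_file="gemma_2b_lora_gpu.bin",
)
converter.convert_checkpoint(config)
```

If that is the right direction, the piece I am missing is how to get the Keras NLP LoRA weights into that safetensors layout (or whether merged Keras weights can be converted directly).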
Thanks in advance!
Hi @rahul-exp,
Currently, we have limited bandwidth and cannot assist with this request. As we have not worked with Keras NLP, we recommend you explore this on your own. Unfortunately, we are unable to help at this time.
Thank you!!
This issue has been marked stale because it has had no recent activity in the past 7 days. It will be closed if no further activity occurs. Thank you.
This issue was closed due to a lack of activity after being marked stale for the past 7 days.