
MediaPipe LLM Inference for LoRA Fine-Tuned Gemma-2b-en Model with Keras

rahul-exp opened this issue 1 year ago

I have fine-tuned the Gemma-2b-en model with Keras. I want to test the model on-device with MediaPipe LLM Inference. What is the procedure for doing this? I read that a Keras model downloaded from Kaggle and fine-tuned does not need conversion the way a Hugging Face model does, but I wanted to understand what the merging, TFLite conversion, and subsequent inference steps for this model look like.
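For reference, my fine-tuning roughly follows the standard KerasNLP LoRA recipe. A minimal sketch (the dataset and hyperparameters here are placeholders, not my exact setup):

```python
import keras
import keras_nlp

# Load the pretrained Gemma 2B checkpoint from Kaggle via KerasNLP.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Enable LoRA on the backbone; only the low-rank adapter weights are trained.
gemma_lm.backbone.enable_lora(rank=4)

# Shorter sequences keep memory usage manageable during fine-tuning.
gemma_lm.preprocessor.sequence_length = 256

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# Placeholder training data: formatted prompt/response strings.
train_texts = [
    "Instruction:\nWhat is MediaPipe?\n\nResponse:\nA cross-platform ML framework.",
]
gemma_lm.fit(train_texts, epochs=1, batch_size=1)
```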

rahul-exp avatar Jun 11 '24 20:06 rahul-exp

Hi @rahul-exp,

If you are customizing the model, please follow the steps mentioned in the LLM overview page to run the model. Try this out and let us know your feedback.
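For reference, the conversion step documented there uses the `mediapipe.tasks.python.genai.converter` API and looks roughly like this (a sketch; all paths are placeholders, and the safetensors format assumes a Hugging Face style checkpoint):

```python
from mediapipe.tasks.python.genai import converter

# All paths below are placeholders; point them at your own checkpoint,
# tokenizer files, and desired output location.
config = converter.ConversionConfig(
    input_ckpt="/content/gemma-2b/",        # directory with the model weights
    ckpt_format="safetensors",              # format of the source checkpoint
    model_type="GEMMA_2B",
    backend="gpu",                          # target backend: "gpu" or "cpu"
    output_dir="/content/intermediate/",    # scratch space for conversion
    combine_file_only=False,
    vocab_model_file="/content/gemma-2b/",  # tokenizer/vocab location
    output_tflite_file="/content/gemma_2b.tflite",
)
converter.convert_checkpoint(config)
```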

Thank you!!

kuaashish avatar Jun 13 '24 08:06 kuaashish

Hi @kuaashish,

Thanks for responding. But these steps apply only to LoRA fine-tuned Hugging Face models. What about models that were LoRA fine-tuned using KerasNLP? What is the procedure for converting and running inference with those models? It would be really helpful if you could shed some light on this. For example, would saving the fine-tuned weights as sketched below be the right starting point before conversion?
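My (unverified) understanding is that Keras 3 merges the LoRA adapter weights into the base kernels when weights are saved, so the export step might look like this (the path is a placeholder):

```python
# `gemma_lm` is the LoRA fine-tuned GemmaCausalLM from the snippet above.
# Assumption: Keras 3 merges the LoRA adapter weights into the base kernels
# on save, so this checkpoint would already be "merged". I have not verified
# this, nor the subsequent TFLite conversion step.
gemma_lm.save_weights("/content/gemma_2b_lora_finetuned.weights.h5")
```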

Thanks in advance!

rahul-exp avatar Jun 17 '24 23:06 rahul-exp

Hi @rahul-exp,

Currently, we have limited bandwidth and cannot assist with this request. As we have not worked with KerasNLP, we recommend exploring this on your own. Unfortunately, we are unable to help at this time.

Thank you!!

kuaashish avatar Jul 01 '24 08:07 kuaashish

This issue has been marked stale because it has had no recent activity in the past 7 days. It will be closed if no further activity occurs. Thank you.

github-actions[bot] avatar Jul 09 '24 01:07 github-actions[bot]

This issue was closed due to lack of activity after being marked stale for the past 7 days.

github-actions[bot] avatar Jul 16 '24 01:07 github-actions[bot]
