
Support for Converting Other LLMs like GPT-2 & QWEN in MediaPipe

Open mayurmarvel opened this issue 1 year ago • 8 comments

Enhancement Request - Support for Additional LLM Types

Description:

After reviewing the MediaPipe documentation and the provided Google Colab notebook for converting LLMs to TFLite (.bin) format, it's evident that only a limited set of LLM types is currently supported: {"PHI_2", "FALCON_RW_1B", "STABLELM_4E1T_3B", "GEMMA_2B"}. Additionally, Phi-3 has been released, but there is no available method to utilize it.
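For context, the conversion Colab rejects anything outside that set up front. A minimal sketch of that gate (pure Python; the helper name and error wording are my own illustration, not MediaPipe's actual implementation):

```python
# Model types the MediaPipe conversion tooling currently accepts
# (set taken from the documentation quoted above).
SUPPORTED_MODEL_TYPES = {"PHI_2", "FALCON_RW_1B", "STABLELM_4E1T_3B", "GEMMA_2B"}


def validate_model_type(model_type: str) -> str:
    """Raise early if the requested model type is not supported.

    Hypothetical helper mirroring the up-front check the converter
    performs; anything outside the supported set fails immediately.
    """
    if model_type not in SUPPORTED_MODEL_TYPES:
        raise ValueError(
            f"Unsupported model type {model_type!r}; "
            f"expected one of {sorted(SUPPORTED_MODEL_TYPES)}"
        )
    return model_type
```

This is why Qwen, GPT-2, or Phi-3 checkpoints cannot currently be converted, regardless of how similar their architectures are to the supported ones.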

Request:

Given the diverse range of LLM models available, it would be advantageous to expand the conversion tool to support a wider array of LLM types, such as Qwen and GPT-2, which users may want to run with MediaPipe. While specific documentation may not exist for every LLM type, providing a generalized framework for importing most LLMs would be invaluable.

Additional Notes:

This enhancement would significantly broaden the tool's usefulness and cater to a wider user base interested in running various LLM models for inference tasks.

mayurmarvel avatar Apr 27 '24 14:04 mayurmarvel

Hi @mayurmarvel,

Thank you for suggesting the extension of the LLM model conversion feature. We have noted it as a feature request. Based on our discussions and future demand, we may consider implementing this feature. However, at this moment, we are unable to provide a specific timeline for its availability.

Thank you!!

kuaashish avatar Apr 30 '24 08:04 kuaashish

We also want to convert our own model and hope this feature will be supported, along with a tutorial.

zkh2016 avatar Apr 30 '24 09:04 zkh2016

This is on our roadmap.

schmidt-sebastian avatar Apr 30 '24 16:04 schmidt-sebastian

Yes, same requirement here. I would like to use 0.5B models in MediaPipe JS.

bil-ash avatar Jul 03 '24 00:07 bil-ash

@kuaashish I would also like to convert Phi-3 to TF Lite using MediaPipe. Currently it isn't possible using the Colab notebook with a GPU runtime: the backend crashes (while using 11.2 GB / 15 GB of VRAM and 3.2 GB / 12.7 GB of RAM) with the following app.log:

Timestamp Level Message
Jul 3, 2024, 11:39:10 AM WARNING WARNING:root:kernel e26eb005-e95a-40e5-a442-e8d44bc67d90 restarted
Jul 3, 2024, 11:39:10 AM INFO KernelRestarter: restarting kernel (1/5), keep random ports
Jul 3, 2024, 11:39:07 AM WARNING @ 0x59a819c9510e (unknown)
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da0ac4ed pybind11::cpp_function::dispatcher()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da7bbcc1 pybind11::cpp_function::initialize<>()::{lambda()#3}::_FUN()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da0364c4 odml::infra::gpu::GenerateTfLite()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da833bd8 ml_drift::LlmBuilder::CreateStackedTransformerModel()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da8193ee odml_byom::PhiBuilder::MakeLayer()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da81bfc2 odml_byom::LlmBaseBuilder::MakeNormalization()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da81bdfa odml_byom::LlmBaseBuilder::MakeLayerNormalization()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da838442 odml::infra::gpu::CachingTensorLoader::LoadFloat32()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da800891 odml::infra::gpu::(anonymous namespace)::LlmWritingTensorLoader::LoadFloat32()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1da035277 odml::infra::gpu::(anonymous namespace)::LlmWritingTensorLoader::WriteFile()
Jul 3, 2024, 11:39:07 AM WARNING @ 0x7ee1daf8b829 absl::log_internal::LogMessageFatal::~LogMessageFatal()
Jul 3, 2024, 11:39:07 AM WARNING *** Check failure stack trace: ***
Jul 3, 2024, 11:39:07 AM WARNING F0000 00:00:1719999547.547589 558 model_ckpt_util.cc:441] Check failed: optional Missing required tensor: params.lm.transformer.x_layers_0.pre_layer_norm.bias
Jul 3, 2024, 11:39:07 AM WARNING E0000 00:00:1719999547.547532 558 llm_file_tensor_loader.cc:110] Cannot open file! /content/intermediate/phi-2/params.lm.transformer.x_layers_0.pre_layer_norm.bias
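The failure above looks consistent with the converter's Phi builder asking for a LayerNorm bias tensor that the checkpoint does not contain (Phi-3 replaced Phi-2's biased LayerNorm with a bias-free normalization, so `pre_layer_norm.bias` never exists). A pre-flight scan of the checkpoint's tensor names could surface this before the kernel crashes; a minimal sketch (the helper and the example names are my own illustration, not MediaPipe code):

```python
def find_missing_tensors(checkpoint_names, required_names):
    """Return required tensor names absent from a checkpoint.

    checkpoint_names: iterable of tensor names present in the checkpoint
        (e.g. the keys of a safetensors or PyTorch state dict).
    required_names: names the converter's model builder will request.
    """
    present = set(checkpoint_names)
    return sorted(n for n in required_names if n not in present)


# The tensor the converter failed on, per the log above:
required = ["params.lm.transformer.x_layers_0.pre_layer_norm.bias"]
# A checkpoint with a bias-free norm exposes a scale but no bias
# (hypothetical name for illustration):
checkpoint = ["params.lm.transformer.x_layers_0.pre_layer_norm.scale"]
missing = find_missing_tensors(checkpoint, required)
# missing -> ['params.lm.transformer.x_layers_0.pre_layer_norm.bias']
```

Such a check would turn the hard `Check failed` crash into an actionable error message listing exactly which tensors the selected model_type expects but the checkpoint lacks.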

Could you please add support for converting Phi-3 model to TF Lite?

niutech avatar Jul 03 '24 09:07 niutech

Yes, I would like that too, especially Phi-3 and Mistral.

0wwafa avatar Jul 12 '24 16:07 0wwafa

Same error here: "Check failed: optional Missing required tensor: params.lm.transformer.x_layers_0.pre_layer_norm.bias". I am trying to convert xLAM-1b-fc-r to TFLite.

Ashoka74 avatar Jul 24 '24 04:07 Ashoka74

Can we use Qwen2 model with MediaPipe now?

FranzKafkaYu avatar Aug 28 '24 03:08 FranzKafkaYu