
Make mlx-vlm examples in Swift

Open davidkoski opened this issue 1 year ago • 2 comments

Consider porting some models from https://github.com/Blaizzy/mlx-vlm to swift

davidkoski avatar Sep 27 '24 17:09 davidkoski

e.g.

  • LLaVA: llava-hf/LLaVA-NeXT-Video-7B-hf
  • Qwen2 VL: Qwen/Qwen2-VL-2B-Instruct
  • Llama 3.2 Vision: meta-llama/Llama-3.2-11B-Vision-Instruct
  • Phi-3 Vision: microsoft/Phi-3-vision-128k-instruct
  • PaliGemma: google/paligemma-3b-mix-224

davidkoski avatar Sep 27 '24 17:09 davidkoski

Currently, I am working on porting Llama 3.2 VLM to Swift. It would be great if we could make the VLM a separate package so that people can easily pull it down as a dependency and integrate it into their applications, for example to add VLM support to ChatMLX.

mzbac avatar Sep 30 '24 00:09 mzbac
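For anyone who wants to try this once such a package exists, the dependency could look roughly like the sketch below. This is only a sketch: the MLXVLM product name and the branch are assumptions about how the code might be packaged, not a published layout.

```swift
// swift-tools-version: 5.9
// Sketch of depending on a split-out VLM library from an app package.
// The "MLXVLM" product name and the branch are assumptions, not a real,
// published package layout.
import PackageDescription

let package = Package(
    name: "MyVLMApp",
    platforms: [.macOS(.v14), .iOS(.v16)],
    dependencies: [
        .package(url: "https://github.com/ml-explore/mlx-swift-examples", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "MyVLMApp",
            dependencies: [
                .product(name: "MLXVLM", package: "mlx-swift-examples")
            ]
        )
    ]
)
```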

If someone can put together the basic pipeline for one vision model, I can probably port the others to Swift fairly quickly.

DePasqualeOrg avatar Nov 01 '24 10:11 DePasqualeOrg

I am working on it right now and have paligemma done (well, not debugged, but callable). I am figuring out how to structure the code with regard to the LLM library -- they should share code where possible.

I will try to put up the branch with what I have today. Next week will be busy, so it might be two weeks before it is really ready.

davidkoski avatar Nov 01 '24 15:11 davidkoski

Fantastic, thank you! Once that's in place, I'll start working on some of the other models (and will post here first to avoid duplication of work).

DePasqualeOrg avatar Nov 01 '24 15:11 DePasqualeOrg

OK, you can see what I have -- more work to be done but the eval loop is worked out.

#151

davidkoski avatar Nov 01 '24 23:11 davidkoski

This continues -- I have most of the refactoring done, and llm-tool has a hard-coded call to paligemma. I need to implement a second VLM (qwen2_vl) so I can make sure I have the right shape for the APIs.

As mentioned before, this will be a breaking change in the API (so I will do a major version bump), but it should be pretty easy to adopt: hopefully a new import and renaming a couple of things. I will produce a guide when it is ready.

davidkoski avatar Nov 13 '24 22:11 davidkoski
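For a sense of scale, the adoption change might be as small as swapping an import; the module names below are guesses pending the migration guide, not confirmed API:

```swift
// Before the refactor (sketch; name approximate):
// import LLM

// After the refactor (sketch; MLXLLM / MLXVLM module names are assumptions
// pending the migration guide):
import MLXLLM   // text-only models
import MLXVLM   // vision-language models
```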

Thanks @davidkoski, your work is much appreciated! Once the API is stable, I'll try to port some of the other VLMs.

DePasqualeOrg avatar Nov 13 '24 23:11 DePasqualeOrg

@davidkoski @DePasqualeOrg did either of you get Qwen2 VL working in Swift?

anishjain123 avatar Nov 21 '24 21:11 anishjain123

It is implemented in the branch right now but still lacks the image processor -- that is what I am starting on next.

davidkoski avatar Nov 21 '24 22:11 davidkoski

You are doing god's work, @davidkoski! If you need help, let me know! Also, do you know what would be necessary to go from image processing to video processing?

anishjain123 avatar Nov 22 '24 13:11 anishjain123

@davidkoski here is a PR from mlx-vlm that might help: https://github.com/Blaizzy/mlx-vlm/pull/97

anishjain123 avatar Nov 22 '24 15:11 anishjain123

Yes, this first version won't have it, but it should be straightforward to add. Qwen2VL treats an array of images and a video roughly the same but handles them slightly differently in the processor: a video ends up with a different t (temporal) value when it constructs the (t, h, w) array.

davidkoski avatar Nov 22 '24 16:11 davidkoski
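To make the image-vs-video distinction concrete, here is a minimal sketch of how a Qwen2-VL-style processor could derive the (t, h, w) grid. The patch size of 14 and temporal patch size of 2 match the published Qwen2-VL config, but the helper itself is hypothetical, not the mlx-vlm or Swift port code:

```swift
// Hypothetical sketch of (t, h, w) grid computation for a Qwen2-VL-style
// processor; not the actual mlx-vlm or mlx-swift-examples implementation.
struct THWGrid { let t: Int; let h: Int; let w: Int }

func thwGrid(frameCount: Int, height: Int, width: Int,
             patchSize: Int = 14, temporalPatchSize: Int = 2) -> THWGrid {
    // A single image is treated as a one-step "video"; a real video
    // contributes one temporal step per temporalPatchSize frames.
    let t = max(1, frameCount / temporalPatchSize)
    return THWGrid(t: t, h: height / patchSize, w: width / patchSize)
}

let image = thwGrid(frameCount: 1, height: 448, width: 448)   // t = 1
let video = thwGrid(frameCount: 16, height: 448, width: 448)  // t = 8
// Token count scales with t * h * w, which is why even short videos
// get expensive quickly.
```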

Yes, you're right about the array-of-images handling! I tried out a rough version of Qwen2VL, and the memory usage on any reasonably sized video is absurd!

Seems like this might not be the architecture to support practical on-device video processing...

By the way, @davidkoski, is there a way to set up an LLM API on MLX as is done with llama.cpp or tools like LM Studio? I have done this with llama.cpp but want the performance boost of MLX to see what's possible :)

Thanks again for all your great work! I know you have been really involved with MLX from the start!

anishjain123 avatar Nov 27 '24 15:11 anishjain123

> By the way, @davidkoski, is there a way to set up an LLM API on MLX as is done with llama.cpp or tools like LM Studio? I have done this with llama.cpp but want the performance boost of MLX to see what's possible :)

I am not sure what kind of API you mean -- certainly there is an API for preparing a prompt and generating tokens, but I think you mean something different.

Probably the answer is yes, but it might be something you would have to build, e.g. if you wanted a web service.

davidkoski avatar Nov 28 '24 15:11 davidkoski
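As a rough illustration of the "build it yourself" path, the sketch below fronts a generation call with an HTTP endpoint using Vapor (one arbitrary choice of server framework). The generateText function is a hypothetical stand-in for whatever MLX-backed generation you wire up; it is not an MLX or mlx-swift-examples API:

```swift
// Sketch: a minimal completion endpoint in front of MLX generation.
// Vapor is an arbitrary choice; generateText is a hypothetical stand-in.
import Vapor

struct CompletionRequest: Content { let prompt: String }
struct CompletionResponse: Content { let text: String }

// Stand-in for an MLX-backed generator (load model, tokenize, sample).
func generateText(prompt: String) async throws -> String {
    "generated text for: \(prompt)"
}

let app = Application()
defer { app.shutdown() }

app.post("v1", "completions") { req async throws -> CompletionResponse in
    let body = try req.content.decode(CompletionRequest.self)
    return CompletionResponse(text: try await generateText(prompt: body.prompt))
}

try app.run()
```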

Thanks for your work on the vlm branch, @davidkoski. Using llm-tool, I can get paligemma to work with the following flag: --model mlx-community/paligemma-3b-mix-448-8bit

but I can't get qwen2_vl to work using --model mlx-community/Qwen2-VL-2B-Instruct-4bit

Any assistance?

kunal732 avatar Dec 05 '24 04:12 kunal732

> Thanks for your work on the vlm branch, @davidkoski. Using llm-tool, I can get paligemma to work with the following flag: --model mlx-community/paligemma-3b-mix-448-8bit
>
> but I can't get qwen2_vl to work using --model mlx-community/Qwen2-VL-2B-Instruct-4bit
>
> Any assistance?

That is the version of the model I was using. What error do you see, or what output? I was using the prompt "describe the image in English" (because it often output Chinese text and this seemed pretty reliable in getting it to output English).

davidkoski avatar Dec 05 '24 05:12 davidkoski

Here are the flags I'm using: vlm --model mlx-community/Qwen2-VL-2B-Instruct-4bit --prompt "describe image in english" --image /Users/pathtoimage/image.png

I get this output when trying to use that model:

MLXNN/Module.swift:515: Fatal error: 'try!' expression unexpectedly raised an error: MLXNN.UpdateError.unableToCollectModulesFromContainer(base: "PatchMerger", key: "mlp")

@davidkoski

kunal732 avatar Dec 05 '24 14:12 kunal732

> Here are the flags I'm using: vlm --model mlx-community/Qwen2-VL-2B-Instruct-4bit --prompt "describe image in english" --image /Users/pathtoimage/image.png
>
> I get this output when trying to use that model:
>
> MLXNN/Module.swift:515: Fatal error: 'try!' expression unexpectedly raised an error: MLXNN.UpdateError.unableToCollectModulesFromContainer(base: "PatchMerger", key: "mlp")
>
> @davidkoski

Ah, that looks like: https://github.com/ml-explore/mlx-swift/pull/164

Make sure that your mlx-swift is using the 0.21.0 (or higher) tag. I wonder if you still have 0.18?

davidkoski avatar Dec 05 '24 15:12 davidkoski
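For reference, that means the mlx-swift entry in Package.swift (or the Xcode package settings) should resolve past the fix, e.g.:

```swift
// Require the mlx-swift release that contains the PatchMerger fix.
.package(url: "https://github.com/ml-explore/mlx-swift", from: "0.21.0")
```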

That was the issue! Thank you - it's working great!!

kunal732 avatar Dec 05 '24 17:12 kunal732

Closing this -- we have two models (qwen2-vl and paligemma). More can be added over time.

davidkoski avatar Dec 10 '24 19:12 davidkoski