Awni Hannun
It's possible but a bit involved. If someone wants to add it in that would be awesome. See how the [original whisper](https://github.com/openai/whisper/blob/main/whisper/transcribe.py#L90-L92) does it as a good place to start
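For anyone picking this up, the linked lines in `whisper/transcribe.py` implement a temperature-fallback loop: decode greedily at temperature 0, and only retry at higher temperatures when the output fails quality checks (an overly compressible, i.e. repetitive, transcript or a low average log-probability). A minimal sketch of that scheme, with threshold values and the `decode_fn` interface assumed for illustration:

```python
# Hedged sketch of whisper-style temperature fallback. The thresholds and
# the decode_fn result keys are illustrative assumptions, not MLX API.

COMPRESSION_RATIO_THRESHOLD = 2.4  # repetitive text compresses unusually well
LOGPROB_THRESHOLD = -1.0           # low mean log-prob => low-confidence decode

def decode_with_fallback(decode_fn, temperatures=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Try each temperature in turn; keep the first acceptable result."""
    result = None
    for t in temperatures:
        result = decode_fn(temperature=t)
        needs_fallback = (
            result["compression_ratio"] > COMPRESSION_RATIO_THRESHOLD
            or result["avg_logprob"] < LOGPROB_THRESHOLD
        )
        if not needs_fallback:
            break
    return result  # last attempt is returned even if all temperatures fail
```

The design point is that sampling with a bit of temperature often breaks the repetition loops greedy decoding can fall into, so escalating only on failure keeps the common case deterministic.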
Fixed in #201 !
Hi @mzbac, sorry for the delayed review here. Do you still want to merge this? I think given the non-standard size it wouldn't fit easily in our `hf_llm` example, but...
Sounds good, thank you!
Great point! We'd love to take a contribution for this. Any thoughts on what a good model to put in the examples is? Maybe something like [DETR](https://huggingface.co/docs/transformers/model_doc/detr) ?
So for Llama and Mistral, 32GB is plenty and 24GB is probably also fine. I measured the peak memory use at around 16 GB, so a 16GB machine would be...
Interesting. This could just be due to small numerical differences. We'll have to do some more extensive testing on an M3, as most of it was on an M2 or...
Yeah, that one's on our list of examples to add! Are you interested in contributing it? If so, which model would you use?
Could you be more specific about which packages were missing? I think it would be good to add them to `requirements.txt`. I think it could be nice to have a...
Thanks for the contribution! This looks really nice so far. I'm thinking about the best way to incorporate it into MLX. At this stage my thought is it belongs better...