Prince Canuma
Let me know if the issue persists with version 0.1.0
Try ``` pip list | grep mlx ```
Can you try running this in your terminal? ``` python -m mlx_vlm.generate --model mistral-community/pixtral-12b --max-tokens 100 --temp 0.0 --prompt "What animal is this?" ```
Please share the result of ``` pip list | grep mlx ```
Try this model and let me know if the issue persists. `mlx-community/pixtral-12b-8bit`
Something doesn't add up: your logs show the model loading with the llava architecture instead of pixtral.
I'll take a look.
Found the issue! This version's config points to the llava architecture. I patched it locally; don't worry, I will add a condition to fix this at load time. https://huggingface.co/mistral-community/pixtral-12b/blob/main/config.json
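The load-time condition could look roughly like this. This is only a minimal sketch, not the actual mlx-vlm patch; the `patch_model_type` helper is hypothetical, and the assumption that the nested `vision_config` carries a `"pixtral"` `model_type` is inferred from the linked Hugging Face config, not confirmed.

```python
import json


def patch_model_type(config_path: str) -> dict:
    """Hypothetical sketch: remap a mislabeled top-level model_type at load time."""
    with open(config_path) as f:
        config = json.load(f)
    # mistral-community/pixtral-12b ships a config.json whose top-level
    # model_type says "llava". If the vision tower identifies itself as
    # pixtral (an assumption about the config layout), remap the top-level
    # type so the pixtral architecture is selected instead.
    if (
        config.get("model_type") == "llava"
        and config.get("vision_config", {}).get("model_type") == "pixtral"
    ):
        config["model_type"] = "pixtral"
    return config
```

Patching the parsed dict at load time, rather than rewriting the file, leaves the upstream repo untouched and keeps the fix transparent to users.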
> Well this one doesn't just crash out, but it just spins, without producing an answer, either from the command line or via the script above. What are the specs...
Also, try the 4-bit version instead of the 8-bit one: `mlx-community/pixtral-12b-4bit`