Prince Canuma
I ran your script and it worked. Unfortunately, I can't replicate this issue. Code:

```python
model_path = "mlx-community/deepseek-vl2-8bit"
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import...
```
Are you running `transformers==4.47.1`?
Thanks! I will take a look at Pixtral and DeepSeek. That shouldn't happen.
Sure. Florence, no, but Florence-2, yes, it is converted 👌🏽
@jrp2014 I just tested it and it works. For Pixtral I would recommend using the one on the mlx-community hub: https://huggingface.co/mlx-community?search_models=pixtral However, I found a bug with the language-only responses...
> llava-v1.6-34b-8bit seems to need some attention in future.
> Dolphin seems to need some extra parameters.

Could you elaborate? I just tested it and it is working fine.
Pixtral and DeepSeek fix is here #165 and will be available as soon as tests pass.
> I'm just going by the transcript above.

What do you mean?
Thank you very much! Your evals do help me a lot. Please use the Florence-2 from the mlx-community repo; MLX-VLM only supports safetensors.
I will fix all of those. Regarding the warning, I wouldn't worry; it's a transformers warning that I will handle soon.