mlx-vlm
Idefics3Processor requires the PyTorch library but it was not found in your environment
I got this error running mlx-vlm like this:
uv run \
  --with mlx-vlm \
  python -m mlx_vlm.generate \
  --model mlx-community/SmolVLM-Instruct-bf16 \
  --max-tokens 500 \
  --temp 0.5 \
  --prompt "Describe this image in detail" \
  --image IMG_4414.JPG
Output:
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Fetching 12 files: 100%|█████████████████████| 12/12 [00:00<00:00, 29782.04it/s]
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/mlx_vlm/generate.py", line 111, in <module>
    main()
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/mlx_vlm/generate.py", line 80, in main
    model, processor, image_processor, config = get_model_and_processors(
                                                ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/mlx_vlm/generate.py", line 68, in get_model_and_processors
    model, processor = load(
                       ^^^^^
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/mlx_vlm/utils.py", line 292, in load
    processor = load_processor(model_path, processor_config=processor_config)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/mlx_vlm/utils.py", line 335, in load_processor
    processor = AutoProcessor.from_pretrained(model_path, **processor_config)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/transformers/models/auto/processing_auto.py", line 328, in from_pretrained
    return processor_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1651, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/Users/simon/.cache/uv/archive-v0/uKo-mbuWKgGQJAyh-vU5a/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1639, in requires_backends
    raise ImportError("".join(failed))
ImportError:
Idefics3Processor requires the PyTorch library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.
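The underlying problem is that the ephemeral environment uv builds for --with mlx-vlm does not include PyTorch, and the Transformers Idefics3Processor refuses to load without a torch backend. If you want to confirm that torch really is absent from that environment, a quick check with the standard library's importlib.util.find_spec (which prints None for packages that are not installed) looks like this:

uv run \
  --with mlx-vlm \
  python -c "import importlib.util; print(importlib.util.find_spec('torch'))"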
Adding --with torch to the above command fixes the error:
uv run \
  --with mlx-vlm \
  --with torch \
  python -m mlx_vlm.generate \
  --model mlx-community/SmolVLM-Instruct-bf16 \
  --max-tokens 500 \
  --temp 0.5 \
  --prompt "Describe this image in detail" \
  --image IMG_4414.JPG
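To double-check that the extra --with torch actually makes PyTorch importable in the environment uv resolves (the exact version it picks may vary), this sanity check should print a torch version instead of raising:

uv run \
  --with mlx-vlm \
  --with torch \
  python -c "import torch; print(torch.__version__)"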