
Docker Image missing dependencies

nicholasamiller opened this issue 8 months ago · 2 comments

Describe the issue as clearly as possible:

Running the official Docker image with `docker run -p 8000:8000 outlinesdev/outlines --model="microsoft/Phi-3-mini-4k-instruct"` fails immediately on startup: the image's Python entrypoint raises `ModuleNotFoundError: No module named 'torch'` while importing `outlines.models.exllamav2` (full traceback below).

Steps/code to reproduce the bug:

Follow instructions in documentation:


Alternative Method: Via Docker
You can install and run the server with Outlines' official Docker image using the command


docker run -p 8000:8000 outlinesdev/outlines --model="microsoft/Phi-3-mini-4k-instruct"

Expected result:

Container runs without error.

Error message:

```
docker run -p 8000:8000 outlinesdev/outlines --model="microsoft/Phi-3-mini-4k-instruct"
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/outlines/outlines/__init__.py", line 3, in <module>
    import outlines.generate
  File "/outlines/outlines/generate/__init__.py", line 2, in <module>
    from .cfg import cfg
  File "/outlines/outlines/generate/cfg.py", line 7, in <module>
    from outlines.models import LlamaCpp, OpenAI, TransformersVision
  File "/outlines/outlines/models/__init__.py", line 11, in <module>
    from .exllamav2 import ExLlamaV2Model, exl2
  File "/outlines/outlines/models/exllamav2.py", line 4, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
```
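A possible stopgap (a sketch, not an official fix, and assuming `pip` is available inside the published image) is to build a derived image that installs the missing dependency:

```dockerfile
# Hypothetical workaround image; assumes pip is on PATH in the base image.
FROM outlinesdev/outlines
RUN pip install torch
```

Build it with `docker build -t outlines-patched .` and run it with the same `docker run -p 8000:8000 outlines-patched --model=...` arguments as before.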

Outlines/Python version information:

Version information

``` (command output here) ```

Context for the issue:

No response

nicholasamiller · Mar 31 '25

Same

limaolin2017 · Apr 07 '25

Should be easy to add to a requirements list, no?

captn-hook · May 01 '25

The Docker image has been deprecated as you can now run vLLM on a server with Outlines as the backend for structured generation.

RobinPicard · Jun 20 '25
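As a sketch of the replacement path the maintainer describes (assuming vLLM's OpenAI-compatible server, which accepts a `guided_json` field in the request body for structured generation), a client request for schema-constrained output could be built like this; the model name and field layout here are illustrative only:

```python
import json

# JSON Schema we want the model's output to conform to.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Request body for a vLLM OpenAI-compatible /v1/chat/completions endpoint.
# "guided_json" is a vLLM-specific extension field; it is not part of the
# upstream OpenAI API.
payload = {
    "model": "microsoft/Phi-3-mini-4k-instruct",
    "messages": [{"role": "user", "content": "Give me a person as JSON."}],
    "guided_json": schema,
}

body = json.dumps(payload)
```

You would POST `body` to a running `vllm serve` instance; vLLM then uses its structured-generation backend to constrain decoding to the schema.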