
Stack start fails with ModuleNotFoundError: No module named 'openai'

Open · kavukcutolga opened this issue 7 months ago · 9 comments

System Info

  • macOS (Apple Silicon)
  • Conda

Information

  • [x] The official example scripts
  • [ ] My own modified scripts

🐛 Describe the bug

Stack startup fails when running:

llama stack run /Users/../.llama/distributions/base/base-run.yaml
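For context, the stack was set up with conda (per the system info above), so the server uses whichever Python environment is active when the command is issued. A quick sanity check of that environment (commands added here for illustration; they are not part of the original report) is:

which python
python -c "import llama_stack, sys; print(sys.prefix)"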

Error logs

INFO     2025-03-11 11:05:43,937 __main__:378 server: Run configuration:
INFO     2025-03-11 11:05:43,940 __main__:380 server: apis:
         - inference
         - safety
         - agents
         - vector_io
         - datasetio
         - scoring
         - eval
         - post_training
         - tool_runtime
         - telemetry
         benchmarks: []
         container_image: null
         datasets: []
         image_name: base
         metadata_store: null
         models: []
         providers:
           agents:
           - config:
               persistence_store:
                 db_path: /Users/.../.llama/distributions/base/agents_store.db
                 namespace: null
                 type: sqlite
             provider_id: meta-reference
             provider_type: inline::meta-reference
           datasetio:
           - config: {}
             provider_id: localfs
             provider_type: inline::localfs
           eval:
           - config: {}
             provider_id: meta-reference
             provider_type: inline::meta-reference
           inference:
           - config:
               api_key: '********'
               url: https://api.together.xyz/v1
             provider_id: together
             provider_type: remote::together
           post_training:
           - config: {}
             provider_id: torchtune
             provider_type: inline::torchtune
           safety:
           - config: {}
             provider_id: prompt-guard
             provider_type: inline::prompt-guard
           scoring:
           - config: {}
             provider_id: basic
             provider_type: inline::basic
           telemetry:
           - config:
               service_name: llama-stack
               sinks: console,sqlite
               sqlite_db_path: /Users/.../.llama/distributions/base/trace_store.db
             provider_id: meta-reference
             provider_type: inline::meta-reference
           tool_runtime:
           - config: {}
             provider_id: model-context-protocol
             provider_type: remote::model-context-protocol
           vector_io:
           - config:
               kvstore:
                 db_path: /Users/.../.llama/distributions/base/faiss_store.db
                 namespace: null
                 type: sqlite
             provider_id: meta-reference
             provider_type: inline::meta-reference
         scoring_fns: []
         server:
           port: 8321
           tls_certfile: null
           tls_keyfile: null
         shields: []
         tool_groups: []
         vector_dbs: []
         version: '2'

WARNING  2025-03-11 11:05:43,963 llama_stack.distribution.resolver:214 core: Provider `inline::meta-reference` for API
         `Api.vector_io` is deprecated and will be removed in a future release: Please use the `inline::faiss` provider
         instead.
Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 487, in <module>
    main()
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 388, in main
    impls = asyncio.run(construct_stack(config))
  File "/opt/anaconda3/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/opt/anaconda3/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/distribution/stack.py", line 219, in construct_stack
    impls = await resolve_impls(run_config, provider_registry or get_provider_registry(), dist_registry)
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/distribution/resolver.py", line 133, in resolve_impls
    return await instantiate_providers(sorted_providers, router_apis, dist_registry)
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/distribution/resolver.py", line 271, in instantiate_providers
    impl = await instantiate_provider(provider, deps, inner_impls, dist_registry)
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/distribution/resolver.py", line 356, in instantiate_provider
    impl = await fn(*args)
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/providers/remote/inference/together/__init__.py", line 17, in get_adapter_impl
    from .together import TogetherInferenceAdapter
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/providers/remote/inference/together/together.py", line 36, in <module>
    from llama_stack.providers.utils.inference.openai_compat import (
  File "/opt/anaconda3/lib/python3.10/site-packages/llama_stack/providers/utils/inference/openai_compat.py", line 11, in <module>
    from openai import AsyncStream
ModuleNotFoundError: No module named 'openai'
++ error_handler 158
++ echo 'Error occurred in script at line: 158'
Error occurred in script at line: 158
++ exit 1
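The import chain in the traceback shows the remote::together inference provider loading llama_stack.providers.utils.inference.openai_compat, which in turn imports the openai package, so the failure is a missing dependency in the interpreter the server runs under (/opt/anaconda3 in this trace). A possible workaround, offered as an assumption rather than a confirmed fix, is to install the package into that same environment and rerun the stack:

/opt/anaconda3/bin/pip install openai
llama stack run /Users/../.llama/distributions/base/base-run.yaml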

Expected behavior

Llama Stack starts successfully.

kavukcutolga · Mar 11 '25