
llama stack build on Windows

Open that-rahul-guy opened this issue 5 months ago • 4 comments

Hello

I'm trying to run `llama stack build` on my Windows 10 machine and I'm running into the following issue:

ModuleNotFoundError: No module named 'termios'

Full error trace:

Traceback (most recent call last):
  File "C:\programs\anaconda\envs\llama\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\programs\anaconda\envs\llama\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\programs\anaconda\envs\llama\Scripts\llama.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\programs\anaconda\envs\llama\lib\site-packages\llama_toolchain\cli\llama.py", line 44, in main
    parser.run(args)
  File "C:\programs\anaconda\envs\llama\lib\site-packages\llama_toolchain\cli\llama.py", line 38, in run
    args.func(args)
  File "C:\programs\anaconda\envs\llama\lib\site-packages\llama_toolchain\cli\stack\build.py", line 265, in _run_stack_build_command
    self._run_stack_build_command_from_build_config(build_config)
  File "C:\programs\anaconda\envs\llama\lib\site-packages\llama_toolchain\cli\stack\build.py", line 89, in _run_stack_build_command_from_build_config
    from llama_toolchain.distribution.build import ApiInput, build_image, ImageType
  File "C:\programs\anaconda\envs\llama\lib\site-packages\llama_toolchain\distribution\build.py", line 15, in <module>
    from llama_toolchain.distribution.utils.exec import run_with_pty
  File "C:\programs\anaconda\envs\llama\lib\site-packages\llama_toolchain\distribution\utils\exec.py", line 9, in <module>
    import pty
  File "C:\programs\anaconda\envs\llama\lib\pty.py", line 12, in <module>
    import tty
  File "C:\programs\anaconda\envs\llama\lib\tty.py", line 5, in <module>
    from termios import *
ModuleNotFoundError: No module named 'termios'

From a little googling, I understand that the termios module is specific to Unix-like operating systems such as Linux and macOS, so importing pty (which pulls in tty and termios) fails on Windows.
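For reference only (this is not the project's actual code, just a sketch of the usual workaround): the pty/termios modules simply don't exist on Windows, so any code path that imports them unconditionally fails at import time. A platform guard with a plain subprocess fallback would look something like this, where run_command is a hypothetical helper name:

import subprocess
import sys

def run_command(argv: list[str]) -> int:
    """Illustrative cross-platform runner (hypothetical, not llama_toolchain code)."""
    if sys.platform == "win32":
        # termios/pty don't exist on Windows, so skip the pty path entirely
        # and fall back to a regular subprocess call.
        return subprocess.run(argv).returncode
    import os
    import pty  # POSIX-only module; importing it on Windows raises ModuleNotFoundError
    status = pty.spawn(argv)          # returns the os.waitpid() status of the child
    return os.waitstatus_to_exitcode(status)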

Is there support for running Llama Stack natively on Windows, or do I have to switch to WSL?

that-rahul-guy · Sep 28 '24 10:09