
llm chat -m mistral-7b-instruct-v0 led to OSError: [Errno 9] Bad file descriptor

Open · raybellwaves opened this issue 1 year ago · 4 comments

llm chat -m mistral-7b-instruct-v0

Chatting with mistral-7b-instruct-v0
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> what should I do tomorrow?
 That's a question that only you can answer. What are your goals, interests, and responsibilities for tomorrow?
Exception ignored in: <_io.TextIOWrapper name=5 mode='w' encoding='UTF-8'>
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/llm/0.17.1/libexec/lib/python3.13/site-packages/llm_gpt4all.py", line 294, in __exit__
    sys.stderr = self.original_stderr
OSError: [Errno 9] Bad file descriptor
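For what it's worth, the traceback points at a context manager in llm_gpt4all.py that temporarily swaps sys.stderr out (presumably to silence the native library's logging) and restores it in __exit__. Below is a minimal, self-contained sketch of that general pattern, written from the traceback alone rather than from the actual plugin code, that reproduces the same "Exception ignored ... Bad file descriptor" message: the temporary wrapper's file descriptor is closed before the wrapper itself is dropped, so its final flush/close fails.

```python
import os
import sys


class SuppressOutput:
    """Sketch of a "silence fd 2, keep sys.stderr usable" context manager.
    It only illustrates the failure mode; it is not the llm_gpt4all code."""

    def __enter__(self):
        self.original_stderr = sys.stderr
        self.saved_fd = os.dup(2)                    # copy of the real stderr fd
        self.devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(self.devnull, 2)                     # C-level writes to fd 2 are silenced
        sys.stderr = os.fdopen(self.saved_fd, "w")   # Python-level writes use the copy
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        os.dup2(self.saved_fd, 2)                    # point fd 2 back at the terminal
        os.close(self.devnull)
        os.close(self.saved_fd)                      # closes the fd under sys.stderr
        # Reassigning sys.stderr drops the last reference to the temporary wrapper;
        # its flush/close then hits the already-closed fd, and Python reports
        # "Exception ignored in: <_io.TextIOWrapper ...> OSError: [Errno 9] Bad file descriptor"
        sys.stderr = self.original_stderr


with SuppressOutput():
    print("written while stderr is swapped", file=sys.stderr)
```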

raybellwaves · Nov 17, 2024

I'm currently seeing the same problem when using Python 3.13 along with llm 0.19.1 and llm-gpt4all 0.19.1:

$ llm 'What is an LLM?'
Exception ignored in: <_io.TextIOWrapper name=4 mode='w' encoding='UTF-8'>
Traceback (most recent call last):
  File "/Users/kehoste/venv-py3.13/lib/python3.13/site-packages/llm_gpt4all.py", line 294, in __exit__
    sys.stderr = self.original_stderr
OSError: [Errno 9] Bad file descriptor

boegel · Jan 2, 2025

There's an open pull request that should fix this issue: https://github.com/simonw/llm-gpt4all/pull/44
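I don't know exactly what that PR changes, but the general shape of a defensive fix would be to close the temporary wrapper explicitly and tolerate the failure before putting the original stream back. A sketch only (slotting into a SuppressOutput-style context manager like the one sketched above, not a tested patch):

```python
def __exit__(self, exc_type, exc_val, exc_tb):
    # Flush/close the temporary stderr wrapper ourselves and swallow the
    # EBADF its already-closed file descriptor can raise, then restore.
    try:
        sys.stderr.close()
    except OSError:
        pass
    sys.stderr = self.original_stderr
```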

boegel · Jan 2, 2025

Unfortunately, that pull request has been closed, but a fix would still be appreciated.

emanuelst · Apr 1, 2025

I had a seemingly similar issue, but in the end the real error turned out to be something else.

llm -m 'DeepSeek-R1-Distill-Qwen-7B-Q4_0' 'How are you doing?'
Exception ignored in: <_io.TextIOWrapper name=5 mode='w' encoding='UTF-8'>
Traceback (most recent call last):
  File "/home/fb/.local/share/uv/tools/llm/lib/python3.13/site-packages/llm_gpt4all.py", line 296, in __exit__
    sys.stderr = self.original_stderr
OSError: [Errno 9] Bad file descriptor

So I got no output from the model at all. I had to manually disable the SuppressOutput context manager in llm_gpt4all.py to see the actual error message, which was:

Failed to load libllamamodel-mainline-cuda.so: dlopen: libcudart.so.11.0: cannot open shared object file: No such file or directory
Failed to load libllamamodel-mainline-cuda-avxonly.so: dlopen: libcudart.so.11.0: cannot open shared object file: No such file or directory
llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen'
llama_load_model_from_file: failed to load model
LLAMA ERROR: failed to load model from /home/fb/.cache/gpt4all/DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf
LLaMA ERROR: prompt won't work with an unloaded model!
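For reference, the temporary change amounted to something like the following: turning the context manager into a no-op so that the llama.cpp/CUDA errors reach the terminal again (a sketch of the idea, not the exact diff):

```python
# In llm_gpt4all.py: neuter SuppressOutput while debugging, so output from
# the underlying native library is no longer suppressed.
class SuppressOutput:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass
```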

These errors were not available in the logs either. Is there any way to see these error messages without having to manually edit the Python package?

fbreitwieser · Aug 7, 2025