_process_messages not working in ollama.py
Configuration:
- Windows 10 Pro 20H2
- Python 3.10.10
- Ollama 0.5.7
- Model: DeepSeek-R1:14b
- UFO configured to run without visuals (`VISUAL_MODE: False`)
Issue: `chat_completion` in `ollama.py` fails with:

```
Error making API request: 'NoneType' object is not subscriptable
```
I've traced it to `_process_messages`, but I don't know the root cause yet and will keep debugging. A sketch of what I suspect is happening is below, after the repro steps.
Has anyone run into this before?
Steps to reproduce:
- Run `python -m ufo --task testTask`
- Enter any prompt and press Enter.
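
For reference, a minimal sketch of the failure mode I suspect. This is not UFO's actual code; the function name, endpoint, and field names are illustrative. The point is that if the Ollama call yields no usable body, subscripting the `None` result produces exactly this error:

```python
import requests

def chat_completion_sketch(messages: list[dict]) -> str:
    # Hypothetical stand-in for UFO's chat_completion/_process_messages path.
    resp = requests.post(
        "http://localhost:11434/api/chat",  # default local Ollama endpoint
        json={"model": "deepseek-r1:14b", "messages": messages, "stream": False},
        timeout=120,
    )
    data = resp.json() if resp.ok else None

    # Without this guard, data["message"] on a None value raises:
    # TypeError: 'NoneType' object is not subscriptable
    if data is None or "message" not in data:
        raise RuntimeError(
            f"Ollama returned no usable response: {resp.status_code} {resp.text[:200]}"
        )
    return data["message"]["content"]
```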
The current Ollama implementation is out of date. I made a new version based on the OpenAI API on my fork: https://github.com/nice-mee/UFO/tree/vyokky/dev. I have tested it with qwen3:30b and the API requests work fine. This change will be merged into the main repo soon.
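
If you want to try that approach in the meantime, here is a minimal sketch of talking to a local Ollama server through its OpenAI-compatible `/v1` endpoint (assumes the `openai` Python package is installed; the model name is just an example):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API at /v1 on its default port.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # required by the client, but ignored by Ollama
)

response = client.chat.completions.create(
    model="qwen3:30b",
    messages=[{"role": "user", "content": "Hello from UFO"}],
)
print(response.choices[0].message.content)
```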
There is another potential reason why you get `None` from Ollama. By default, Ollama accepts at most 2048 tokens of input, which is far too small for UFO. You need to change that by building a new model from a modified Modelfile; see https://github.com/ollama/ollama/blob/main/docs/modelfile.md for details. The parameter to change is `num_ctx`. UFO generally needs at least 20k tokens of context to run correctly, so I would recommend setting it to the model's maximum context length.
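
For example (a sketch following the Modelfile docs; pick a base model and context size that fit your hardware):

```
# Modelfile: derive a new model with a larger context window
FROM deepseek-r1:14b
PARAMETER num_ctx 32768
```

Then build it with `ollama create deepseek-r1-32k -f Modelfile` and point UFO's model config at `deepseek-r1-32k`.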
However, note that the DeepSeek-R1 series generally doesn't work well with UFO: it is prone to hallucination, which is fatal for an agent like UFO. Consider deploying qwen3:235b or qwen3:30b instead; they are generally more stable than DeepSeek-R1.