Bijan Bowen

4 comments by Bijan Bowen

@aminekhelif This is geared more towards Ollama and my own specific testing, but it may be of some help regardless. Try a simple test script like this one: ```python...
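
The script in the original comment is cut off by the preview. As a rough stand-in only, a minimal connectivity check against a local Ollama server might look like the sketch below; the endpoint, model tag, and prompt are assumptions, not taken from the original comment.

```python
# Hypothetical stand-in for the truncated test script: a bare-bones check that a
# local Ollama server responds and returns usable text. The model tag and
# endpoint are placeholders; adjust them to whatever you have pulled locally.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "qwen2.5:32b"                                # placeholder model tag

payload = json.dumps({
    "model": MODEL,
    "prompt": "Reply with the single word: pong",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=120) as resp:
    body = json.load(resp)

# A healthy setup should print a short completion here.
print(body.get("response", "<no response field>"))
```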

I experienced similar issues with the cognitive-state formatting in the LLM responses when using Qwen2.5-32b with Ollama. I agree that implementing the full capabilities would be too resource-intensive for local setups....
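
For illustration only (the field name and schema below are hypothetical, not the project's actual format), the formatting issue described above usually shows up as output that does not parse cleanly; one common workaround with local models is to ask Ollama for JSON-constrained output and fall back gracefully when parsing still fails:

```python
# Illustrative sketch: request JSON-constrained output from Ollama and tolerate
# malformed responses. "cognitive_state" is a hypothetical example field, not
# the project's real schema.
import json
import urllib.request

def query_state(prompt: str, model: str = "qwen2.5:32b") -> dict:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "format": "json",   # Ollama constrains the completion to valid JSON
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        raw = json.load(resp).get("response", "")
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Smaller local models still occasionally break the format; keep the
        # raw text so the caller can decide how to recover.
        return {"cognitive_state": None, "raw": raw}

state = query_state('Return JSON like {"cognitive_state": "<one word mood>"}')
print(state)
```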

That is my video posted above :) I have a semi-functional fork of this that works with ollama and was tested with llama-3.2-11b-vision. Here is a link to the repo:...

> > That is my video posted above :) I have a semi-functional fork of this that works with ollama and was tested with llama-3.2-11b-vision. Here is a link to...