Marco
Thanks @fxtoofaan. I decided to run crewAI directly with *ollama* without using LiteLLM, and it worked. I followed the [ollama docker image](https://hub.docker.com/r/ollama/ollama) instructions: * ``docker run -d -v ollama:/root/.ollama -p 11434:11434...``
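For reference, the full setup from the ollama Docker Hub page looks roughly like the commands below; the `llama3` model name is just an example, substitute whichever model crewAI should talk to:

```shell
# Start the ollama server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model inside the running container (model name is an example)
docker exec -it ollama ollama pull llama3

# The API is then reachable at http://localhost:11434, e.g.:
curl http://localhost:11434/api/tags
```

Once the container is up, pointing crewAI at `http://localhost:11434` as the model's base URL is all that should be needed.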