Jarkko Iso-kuortti
> Add this env to your container with Docker Desktop or Portainer:
>
> `OLLAMA_BASE_URL="http://host.docker.internal:11434"` rather than `"http://127.0.0.1:11434"`, so the container can reach Ollama running outside of it.
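For anyone setting this from the command line instead of Docker Desktop or Portainer, a minimal sketch; the image name and port mapping are placeholders, so check the project docs for your actual values:

```sh
# Hypothetical example; replace the image name and port mapping with your own setup.
# The --add-host line is only needed on Linux, where host.docker.internal
# is not defined by default.
docker run -d \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434" \
  --add-host=host.docker.internal:host-gateway \
  -p 50001:80 \
  agent0ai/agent-zero
```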
I can confirm this, using gpt-5 & gpt-4o as chat models. There is sometimes a workaround: ask Agent0ai to always use a fresh terminal and create a keep-alive wrapper when using...
Meanwhile, while waiting for this to be fixed, you could try asking your agent0ai instance to follow this: "Here’s a battle-tested playbook to keep sockets open and stable in automated tests, from...
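The playbook itself is truncated above, but as a rough illustration of the keep-alive wrapper idea from the previous comment, here is a minimal bash sketch (script name and restart interval are my own placeholders): it simply reruns a given command whenever its session exits, so a dropped terminal or socket is re-established automatically.

```sh
#!/usr/bin/env bash
# keepalive.sh -- hypothetical wrapper: reruns the given command whenever it
# exits, so a dropped terminal/socket session is re-established automatically.
CMD=("$@")                 # command to keep alive, passed as arguments
while true; do
  "${CMD[@]}"              # run the command in the foreground
  echo "process exited with $?; restarting in 5s" >&2
  sleep 5                  # brief backoff before restarting
done
```

Usage would be something like `./keepalive.sh python3 my_socket_test.py`.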
Diagnostics included from /a0/tmp/chats/RexEolrU/messages/117.txt:

```
((venv) ) root@786f2f1961e7:/#
((venv) ) root@786f2f1961e7:/# ===== Finding and shutting down Playwright headless_shell processes =====
((venv) ) root@786f2f1961e7:/#
> > > > > >...
```
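For anyone reproducing that cleanup step by hand, a short sketch of finding and terminating leftover Playwright `headless_shell` processes (the process name is taken from the log banner above):

```sh
# Find leftover Playwright headless_shell processes and stop them.
pgrep -af headless_shell                  # list matching PIDs with full command lines
pkill -f headless_shell                   # SIGTERM everything matching
sleep 2
pgrep -f headless_shell >/dev/null && pkill -9 -f headless_shell   # force-kill survivors
```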
I encountered this when using GPT-5-mini as the chat model and GPT-5-nano as the utility model. I was informing agent0ai of the change, and it made a decision that it needs to do...
I know that with GPT-5 you cannot set the temperature; it will complain if it is anything other than 1.
Btw, are there good templates for different models to use in "Chat model additional parameters" ... as a valid starting point?
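Not an official template, but as a starting point for GPT-5-family models this is roughly what I would try, assuming the field passes extra kwargs straight through to the OpenAI API as key=value lines (check your Agent Zero version for the exact format the field expects):

```
temperature=1
reasoning_effort=low
max_completion_tokens=4096
```

For gpt-4o-style models you would drop `reasoning_effort` (only the reasoning models accept it) and could then set `temperature` to a usual value such as 0.7.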
What LLM chat models and utility models do you use? Have you tried to reproduce this with others? Sometimes a small local LLM just can't cope with everything.