StefanDimitrov95
For me this happens after running a long series of requests; with larger models the issue occurs earlier. Ollama 0.5.1, Qwen 2.5 Coder models and Llama 3.1.

```
Dec 09 11:47:29...
```
```shell
Dec 10 07:58:44 ollama[4417]: time=2024-12-10T07:58:44.537Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=3 cache=1634 prompt=900 used=9 remaining=891
Dec 10 07:58:45 ollama[4417]: [GIN] 2024/12/10 - 07:58:45 | 200 | 13.969241081s | 127.0.0.1...
```
It's in the docs here: https://help.getzep.com/graphiti/graphiti/adding-episodes#loading-episodes-in-bulk Do you have a roadmap for such features? It would be great to ingest in bulk, given that the regular add_episode method gets progressively slower on...