Olicorne
Waaaa it works :D On web as well as on Android. Many thanks @sywhb !!
For anyone interested, there is a bionic reading add-on for Logseq, and there is a font called Sans Forgetica that might be of interest to people in this issue.
Hi, thanks for taking the time. It seems fine. If it helps, I did something akin to that on my own a few days ago: ``` import litellm from joblib...
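Something along these lines (a minimal sketch, not the exact code; the cache directory and the choice to cache only the message text are mine):

```
import litellm
from joblib import Memory

# Arbitrary local cache directory; joblib hashes the arguments to build the key.
memory = Memory("./litellm_cache", verbose=0)

@memory.cache
def cached_completion(model, messages):
    # Cache only the generated text: it is small and trivially picklable.
    response = litellm.completion(model=model, messages=messages)
    return response.choices[0].message.content

answer = cached_completion(
    "gpt-3.5-turbo",
    [{"role": "user", "content": "Hello!"}],
)
# A second call with identical arguments is served from the on-disk cache.
```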
Joblib is several things. It contains a wrapper around queue and threading for multiprocessing/multithreading, but it also has Memory, which enables easy caching of functions, methods, etc. in a local...
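To illustrate both facets (a quick sketch; the paths and numbers are arbitrary):

```
import time
from joblib import Memory, Parallel, delayed

# Facet 1: run a plain function in parallel across workers.
def square(x):
    return x * x

results = Parallel(n_jobs=4)(delayed(square)(i) for i in range(10))

# Facet 2: transparent on-disk caching of expensive calls.
memory = Memory("./joblib_cache", verbose=0)

@memory.cache
def slow_square(x):
    time.sleep(2)  # stands in for an expensive computation
    return x * x

slow_square(3)  # computed, result written under ./joblib_cache
slow_square(3)  # served from the cache, no 2 s wait
```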
For many things. I use langchain and scratch implementations for a variety of LLM stuff. I found litellm, which made it exceedingly easy to set up various APIs. It allowed me...
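For example, switching providers is just a matter of changing the model string (a sketch; the model names are illustrative):

```
from litellm import completion

messages = [{"role": "user", "content": "One-sentence summary of joblib?"}]

# One call shape for every provider; only the model string changes.
openai_resp = completion(model="gpt-3.5-turbo", messages=messages)
# anthropic: completion(model="claude-3-haiku-20240307", messages=messages)
# replicate: completion(model="replicate/<owner>/<model>", messages=messages)

print(openai_resp.choices[0].message.content)
```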
1. Because it would allow me to use only litellm, instead of having to replace only 90% of my LLM calls with litellm and keep openai/replicate elsewhere.
2. Because it...
Great to hear! Can you confirm the following:
1. The caching works in async and sync modes
2. The caching works for [batch completion](https://docs.litellm.ai/docs/completion/batching) and not just embeddings
3. This...
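A rough way to check point 1 (a sketch based on my reading of the litellm caching docs; that `Cache()` defaults to an in-process cache is an assumption):

```
import asyncio
import time

import litellm
from litellm import acompletion, completion
from litellm.caching import Cache

litellm.cache = Cache()  # assumption: default Cache() keeps results in process memory

messages = [{"role": "user", "content": "Say hi"}]

# Sync: the second identical call should come back near-instantly if cached.
t0 = time.time()
completion(model="gpt-3.5-turbo", messages=messages, caching=True)
completion(model="gpt-3.5-turbo", messages=messages, caching=True)
print(f"two sync calls took {time.time() - t0:.2f}s")

# Async: the same request should also hit the cache through acompletion.
async def check_async():
    t0 = time.time()
    await acompletion(model="gpt-3.5-turbo", messages=messages, caching=True)
    print(f"async call (cached) took {time.time() - t0:.2f}s")

asyncio.run(check_async())
```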
In my case, running `atuin sync` fixed the issue. I had noticed it before when using `history end` and `status`.

atuin 18.2.0, not self-hosted.
Interesting, thanks.
Btw, I implemented all of that in my own CLI project, which does RAG as well as summaries: https://github.com/thiswillbeyourgithub/DocToolsLLM/