Mark Ericksen
A possible option here would be to provide a default Elixir function that does nothing. Setting the function would override the default. :thinking:
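The "default function that does nothing" idea could be sketched like this in plain Elixir (module and field names here are illustrative, not from the library):

```elixir
defmodule Notifier do
  # A struct holding a single callback. If the caller does not supply
  # one, we fall back to a no-op function, so calling it is always safe.
  defstruct callback: nil

  def new(opts \\ []) do
    callback = Keyword.get(opts, :callback, fn _msg -> :ok end)
    %__MODULE__{callback: callback}
  end

  # Invokes whichever function is set: the caller's override or the no-op.
  def notify(%__MODULE__{callback: cb}, msg), do: cb.(msg)
end
```

Setting `:callback` when building the struct overrides the default, exactly the override behavior described above.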
Hi @mjrusso! Welcome! Yes, that is a nice feature. There are two main approaches to do that in this library and, of course, there are other options as well. We...
The idea with the message processors is that some simple models are only good at giving a single response. You can't reliably have a follow-up chat with them. Most importantly,...
Thanks @mjrusso! I look forward to seeing what you come up with!
Groq support was added in https://github.com/brainlid/langchain/pull/338
Hi @masegraye! Sorry, I didn't catch the service difference. Yes, I'd accept this as a contribution. I'll reopen it. However, there are a few changes I'd request. Don't include: -...
How to cancel depends on your application and how it is set up. For a LiveView example where the LLMChain is run in an async process, it can be as easy...
Two separate processes is exactly how it works. The demo project uses a Task-created process for running the chain, and the other process is the LiveView. Yes,...
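The two-process setup above can be sketched with LiveView's `start_async`/`cancel_async` (these are real Phoenix LiveView functions; the `LLMChain.run/1` call and event names are illustrative assumptions):

```elixir
defmodule MyAppWeb.ChatLive do
  use MyAppWeb, :live_view

  def handle_event("run", _params, socket) do
    chain = socket.assigns.chain

    # Runs the chain in a separate Task-supervised process;
    # the LiveView stays responsive while it streams.
    {:noreply, start_async(socket, :run_chain, fn -> LLMChain.run(chain) end)}
  end

  def handle_event("cancel", _params, socket) do
    # Shutting down the async task kills the process running the chain,
    # which terminates the in-flight LLM request with it.
    {:noreply, cancel_async(socket, :run_chain)}
  end

  def handle_async(:run_chain, {:ok, result}, socket) do
    {:noreply, assign(socket, :result, result)}
  end

  def handle_async(:run_chain, {:exit, _reason}, socket) do
    # Reached when the task was cancelled or crashed.
    {:noreply, socket}
  end
end
```

Because the chain lives in its own process, cancellation is just normal OTP process shutdown; nothing in the chain itself needs to be cancellation-aware.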
Hi @pauldemarco! It's a good question but it doesn't have a clean answer. Deltas include the token usage once the full message is completed and received. (see ChatOpenAI and ChatAnthropic)....
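The timing described above, where usage data only arrives once the complete message has been received rather than per-delta, can be observed with a callback handler (the `:on_llm_token_usage` key reflects the library's callback map, but treat exact names as assumptions to check against the docs):

```elixir
handler = %{
  on_llm_token_usage: fn _chain, usage ->
    # Fires after the provider sends its final usage data, i.e. once
    # the full message completes, not on each streamed delta.
    IO.inspect(usage, label: "token usage")
  end
}

# Hypothetical wiring: attach the handler when building the chain.
# chain = LLMChain.add_callback(chain, handler)
```

So if you need token counts mid-stream, there is nothing to read yet; the provider simply has not sent them.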
Hi @mathieuripert! Using processed_content as a place to store the additional information is an interesting approach! One of the things I'm working on currently is support for NativeTools. This means...