langchainjs
Feat/openai n
@davidfant @functorism this is roughly what it would take to have the same feature set as .batch (separate runs per input, with error handling). I think the maintenance overhead would be tough - would you all feel comfortable calling .generate() directly instead?
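For reference, a minimal sketch of what calling .generate() directly could look like - the model name and prompt here are placeholders, not from this PR:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

// `n` asks OpenAI for several completions per prompt; .generate() surfaces
// all of them, while .invoke() only returns the first.
const model = new ChatOpenAI({ modelName: "gpt-4", n: 3 });

const result = await model.generate([[new HumanMessage("Name a color.")]]);

// result.generations[0] holds the n candidates for the single input above.
for (const generation of result.generations[0]) {
  console.log(generation.text);
}
```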
The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| langchainjs-api-refs | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Apr 13, 2024 0:51am |
| langchainjs-docs | ✅ Ready (Inspect) | Visit Preview | | Apr 13, 2024 0:51am |
@jacoblee93 hmm, agree that this doesn't look clean on the ChatOpenAI side. Is there any other good way to accomplish this kind of batching without hacking the batch fn and without using generate? The problem with generate is that I want to use the same interface across my app when doing LLM calls with an arbitrary BaseChatModel (mostly Claude and GPT-4). I want to avoid special-casing OpenAI and generating in a special way.
Yeah, I gotcha - unfortunately nothing comes to mind. CC @baskaryan @eyurtsev :(
I think for now you could wrap it in a custom function? I hear you on wanting a unified interface for sure though.
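A minimal sketch of such a wrapper, assuming a hypothetical helper name generateEach and using only BaseChatModel.generate() plus Promise.allSettled to approximate .batch's per-input error handling:

```typescript
import { BaseChatModel } from "@langchain/core/language_models/chat_models";
import { BaseMessage } from "@langchain/core/messages";
import { LLMResult } from "@langchain/core/outputs";

// Hypothetical helper: one .generate() call per input, so each input gets
// its own run and its own error, similar to .batch with returnExceptions.
async function generateEach(
  model: BaseChatModel,
  inputs: BaseMessage[][]
): Promise<(LLMResult | Error)[]> {
  const settled = await Promise.allSettled(
    inputs.map((messages) => model.generate([messages]))
  );
  return settled.map((outcome) =>
    outcome.status === "fulfilled" ? outcome.value : (outcome.reason as Error)
  );
}
```

Because it depends only on BaseChatModel, the same call works for ChatOpenAI, ChatAnthropic, or any other chat model, which keeps the app-side interface unified.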