langchain
Allow parallelizing multiple prompts with OpenAIChat
Can you make any changes to the https://github.com/hwchase17/langchain/tree/master/langchain/chat_models class? Although I'm not sure if this is still needed.
Do you mean the problem is now fixed?
ya - can you give it a shot with https://langchain.readthedocs.io/en/latest/modules/chat/getting_started.html
@hwchase17
Am I right in thinking that something like this is still needed to properly solve #1643?
i.e. if we can now do batch messages with ChatOpenAI, then that should be used in the map-reduce logic in load_summarization_chain et al.,
in order to address the 5x-slower comment mentioned here: https://github.com/hwchase17/langchain/issues/1643#issuecomment-1471024758
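A minimal sketch of the batching the map step needs: fan out over all chunk prompts concurrently instead of calling the model one chunk at a time. The `fake_chat_call` coroutine below is a stand-in for an async chat-model call (it is not LangChain's API), used here so the pattern is self-contained:

```python
import asyncio

# Stand-in for an async chat-model call; in real use this would await
# the model with one chunk's prompt and return its summary.
async def fake_chat_call(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"summary of: {prompt}"

async def batch_map(prompts: list[str]) -> list[str]:
    # Fire all map-step prompts concurrently; gather preserves input order.
    return await asyncio.gather(*(fake_chat_call(p) for p in prompts))

results = asyncio.run(batch_map([f"chunk {i}" for i in range(4)]))
```

With sequential calls this would take roughly `n * latency`; with the concurrent fan-out it takes roughly one latency for the whole batch.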
Using the latest version, load_summarization_chain with parallelization for the ChatGPT model still does not seem to work. Is there any quick workaround?
I am also facing this issue. Is there any way to use async to batch calls with gpt-3.5-turbo and map-reduce?
> Using the latest version, load_summarization_chain with parallelization for the ChatGPT model still does not seem to work. Is there any quick workaround?
How about using async and arun for the summarization chain?
result = await summary_chain.arun(docs)
In that case two things happen: 1) the map step is parallelized, but without a concurrency limit, so it hits rate limits on large documents; 2) the collapse/reduce steps are not parallelized, which makes the chain slower than it should be.
Is there anything new on this topic? All I can find is:
https://www.youtube.com/watch?v=4RKlNFLEZfk&t=140s
Async is enough. Good point.
@joezhoujinjing Could you please resolve the merge conflict? After that, ping me and I'll push @hwchase17 for further review. Thanks!
Sure! Let me do it tonight.
Hey @joezhoujinjing ! Closing this due to inactivity, and you're welcome to reopen if you end up resolving those merge conflicts!