How to act when the output isn't finished
Sometimes, when we ask an LLM to write a document or a piece of code for a specified problem, the output can be too long to fit in a single response. In a UI like ChatGPT, we can handle this with a prompt like:
...
If you have given all the content, please add the word 'finished' at the end of the response.
If not, I will say 'continue', and you should then give me the remaining content.
This lets us collect the full content by checking whether we should ask the LLM to keep going. Does anyone know how to achieve this with LangChain?
I'm not sure whether LangChain supports this. If it doesn't, and someone is willing to give me some guidance on how to do this in LangChain, I'll be happy to create a PR to add it.
I have looked through past issues and the official docs, but I still don't know how to do it.
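For reference, here is a minimal sketch of that pattern using the current langchain-openai chat model interface. It is an illustration under assumptions, not an official LangChain feature: the model name, max_tokens value, prompt wording, sentinel word, and iteration cap are all arbitrary choices.

```python
# Minimal sketch of an "ask to continue until finished" loop, assuming
# langchain-openai is installed and OPENAI_API_KEY is set.
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o", max_tokens=512)

messages = [
    HumanMessage(
        content="Write a detailed design document for a URL shortener. "
        "If you have given all the content, add the word 'finished' at "
        "the end of the response. If not, I will say 'continue'."
    )
]

parts = []
for _ in range(10):  # safety cap on the number of continuations
    ai_message = llm.invoke(messages)
    parts.append(ai_message.content)
    if ai_message.content.rstrip().endswith("finished"):
        break  # the model signalled that it is done
    # Feed the partial answer back into the history so the model knows
    # where it left off, then ask it to continue.
    messages.append(ai_message)
    messages.append(HumanMessage(content="continue"))

full_output = "\n".join(parts)
```

The sentinel word is brittle, since the model can forget the instruction; checking the finish_reason reported by the API (see the sketch further down the thread) is a more reliable truncation signal.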
It works in a similar way: `output = chatgpt_chain.predict(human_input="continue")`, with a template like `prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)`.
I just have no idea how to verify whether that is necessary. I mean, I have not found a way to verify that the output was truncated.
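For OpenAI-backed chat models, one way to verify truncation is the finish_reason field in the response metadata: it is "length" when the completion hit the token limit mid-answer and "stop" when the model finished naturally. A sketch, assuming a recent langchain-openai where AIMessage.response_metadata carries the raw OpenAI fields:

```python
# Sketch: detect truncation via finish_reason instead of guessing from
# the text. "length" means the completion was cut off at max_tokens.
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o", max_tokens=256)
ai_message = llm.invoke(
    [HumanMessage(content="Explain TCP congestion control in depth.")]
)

finish_reason = ai_message.response_metadata.get("finish_reason")
if finish_reason == "length":
    print("Truncated at the token limit; send 'continue' and keep going.")
elif finish_reason == "stop":
    print("The model finished its answer.")
```

This removes the guesswork of a sentinel word: rather than trusting the model to append 'finished', you read what the API itself reports.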
@wenfengwang Did you find a solution in the end? Or is there a way to get the full content when the user inputs the word "continue"?
Hi, @wenfengwang! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you were seeking guidance on handling long outputs in LangChain. Users suggested calling chatgpt_chain.predict(human_input="continue") with a prompt template that takes history and human_input variables, and it seems you got that working to request the rest of the content. However, there is still some uncertainty about how to verify whether the output was truncated, and another user has asked whether you found a solution or a way to get the full content when the user inputs the word "continue".
Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your contribution to LangChain!
Has anyone found a good solution for this issue? Dealing with something similar atm.
@baskaryan Could you please help @betoDavila86 with this issue? They are looking for a solution to the problem of handling long outputs in LangChain. Thank you!
Hi team! Any new insight with regard to the truncated-output issue?
We have tried the "continue" workaround with very inconsistent results, and none of them were correct with respect to the required output.
Can anyone shed some light on this?
Thanks!
Hi, @wenfengwang,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you are seeking guidance on handling long outputs from language models in LangChain, specifically a mechanism like the UI prompt trick for indicating when the output is finished. Users have suggested using chatgpt_chain.predict(human_input="continue") with a template that takes input variables, but there is still uncertainty about verifying whether the output was truncated. Another user, betoDavila86, has also expressed interest in a solution to a similar issue.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!
I am facing a similar issue. Can someone tell me what I should do to handle output so long that it exceeds the max output tokens of GPT-4o?