Allow returning the final prompts that are sent to LLMs
Problem
Prompts are formatted and merged (e.g. adding `---context---\n`) before being sent to LLMs. The final prompts sent to LLMs can vary depending on the type of chain used and can be quite different from the original inputs to the chains. However, the user cannot see these differences directly, which makes debugging more difficult.
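For illustration, a minimal sketch (using the 0.0.x-era `PromptTemplate` API) of how the string actually sent to the LLM differs from the raw chain inputs:

```python
from langchain.prompts import PromptTemplate

# The user supplies only the raw inputs to the chain...
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template="---context---\n{context}\n\nQuestion: {question}",
)

# ...but the string actually sent to the LLM is the formatted/merged
# result, which the user never sees directly.
final_prompt = prompt.format(
    context="Paris is the capital of France.",
    question="What is the capital of France?",
)
print(final_prompt)
```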
Changes
- Allow returning the final prompts as part of `LLMResult`.
- Final prompts here are collected before being sent to specific API vendors (e.g. OpenAI). Therefore the addition is to `LLMResult`, not `ChatResult`.
- Final prompts are returned with the `_call()` method of `LLMChain` in `/chains/llm.py`.
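A minimal sketch of what the addition might look like; the field name `final_prompts` and the subclass are assumptions for illustration, since the PR body does not show the exact schema:

```python
from typing import List, Optional

from langchain.schema import Generation, LLMResult

# Hypothetical shape of the proposed change: LLMResult gains a field
# holding the exact prompt strings sent to the vendor API. The name
# `final_prompts` is an assumption, not the merged API.
class LLMResultWithPrompts(LLMResult):
    final_prompts: Optional[List[str]] = None

result = LLMResultWithPrompts(
    generations=[[Generation(text="Paris")]],
    final_prompts=["---context---\n...\n\nQuestion: What is the capital of France?"],
)
print(result.final_prompts)
```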
Limitations
- This is not yet implemented for all usages of `LLMResult` (i.e. only implemented in /chat_models, not /llms).
- Due to the use of `predict()` instead of `_call()` in most of the chains, final prompts will not be returned there (see the sketch after this list).
- Async versions are not implemented.
- If you like the idea, I can fix the above limitations.
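A rough sketch of why `predict()` drops the extra output: in 0.0.x-era LangChain, `LLMChain.predict()` returns only the chain's single output key, so any additional keys returned by `_call()` (such as the final prompts) are discarded. Simplified for illustration; the real `LLMChain` also handles callbacks, memory, and validation:

```python
class LLMChainSketch:
    output_key = "text"

    def _call(self, inputs: dict) -> dict:
        # Hypothetical: _call() returns the final prompts alongside the
        # generated text, as this PR proposes.
        return {"text": "Paris", "final_prompts": ["---context---\n..."]}

    def predict(self, **kwargs) -> str:
        # predict() indexes into the output dict with the single output
        # key, so the "final_prompts" entry never reaches the caller.
        return self._call(kwargs)[self.output_key]

chain = LLMChainSketch()
print(chain.predict(question="..."))  # "Paris" only; final prompts are lost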
Who can review?
Tag maintainers/contributors who might be interested: @hwchase17 @agola11
Can I get some feedback so that I can decide whether to keep working on this feature or not? Thank you so much! @hwchase17 @agola11
I believe this is implemented pretty well by callbacks, or even easier in LangSmith!
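For reference, a minimal sketch of the callback approach the maintainer mentions: `on_llm_start` receives the exact prompt strings just before they are sent to the vendor, so a custom handler can surface them without changing `LLMResult`:

```python
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler

class PromptLogger(BaseCallbackHandler):
    """Print the final prompt strings just before the LLM call."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        for p in prompts:
            print("--- final prompt ---")
            print(p)

# Pass the handler when invoking a chain, e.g.:
#   chain.run(inputs, callbacks=[PromptLogger()])
```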
Closing because the PR wouldn't line up with the current directory structure of the library (would need to be in /libs/langchain/langchain instead of /langchain). Feel free to reopen against the current head if it's still relevant!