langchainjs
The hub.pull function with the includeModel option is not working as expected when used with StructuredPrompt
Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain.js documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain.js rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
get-quick-replies is a prompt defined in the LangChain Hub using StructuredPrompt. However, the following code does not work as expected:
const prompt = await hub.pull("get-quick-replies", { includeModel: true });
const result = await model.invoke({
question,
answer,
});
console.log(result.quickReplies)
// undefined
console.log(result.content)
// "reply 1\nreply 2\n..."
As an alternative, I’m using the following approach, which works as intended:
const prompt = await hub.pull("get-quick-replies");
const model = prompt.pipe(azureOpenaiMini);
const result = await model.invoke({
question,
answer,
});
console.log(result.quickReplies)
// ["reply 1", "reply 2", ...]
Error Message and Stack Trace (if applicable)
No response
Description
When using the get-quick-replies prompt from LangChain Hub with the includeModel option, the structured output does not work as expected. The quickReplies property is missing from the response, and only raw content is returned. However, when the includeModel option is not used and the model is piped manually, the prompt works correctly and returns the quickReplies property as expected.
System Info
node = v22.11.0
yarn = v1.22.22
"@langchain/community": "^0.3.27",
"@langchain/core": "^0.3.35",
"@langchain/google-genai": "^0.1.6",
"@langchain/openai": "^0.4.0",
"@langchain/pinecone": "^0.1.3",
"langchain": "^0.3.13",
"langsmith": "^0.3.3",
Hello, @KoreanThinker! 👋 I'm here to help you with any bugs, questions, or contributions you have in mind. Let's squash those bugs together!
The issue you're experiencing with the hub.pull function and the includeModel option in LangChain.js seems to be related to how the structured output is handled. When using includeModel, the quickReplies property is not being returned as expected, whereas manually piping the model works correctly.
The includeModel option is designed to automatically instantiate and attach a model instance to the prompt if the prompt has associated model metadata. This should streamline the process, but it seems there might be a discrepancy in how the structured output is processed when using this option [1].
One potential reason for this issue could be that the includeModel option does not fully integrate with the StructuredPrompt's schema, which is necessary for producing structured outputs like quickReplies. The manual piping approach allows you to explicitly ensure that the model and prompt are correctly configured to handle structured data [2].
As a workaround, continue using the manual piping method, which seems to work as intended. This approach gives you more control over the integration between the prompt and the model, ensuring that the structured output is correctly formatted.
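For reference, here is a minimal sketch of that manual-piping workaround, based on the working snippet above (the AzureChatOpenAI configuration and the question/answer values are illustrative placeholders, not the actual setup):

import * as hub from "langchain/hub";
import { AzureChatOpenAI } from "@langchain/openai";

// Illustrative model instance; deployment details come from environment variables.
const azureOpenaiMini = new AzureChatOpenAI({ temperature: 0 });

// Pull the StructuredPrompt WITHOUT includeModel, then pipe it into the model.
// Piping a StructuredPrompt applies its attached schema to the model, so the
// chain returns a parsed object instead of a raw message.
const prompt = await hub.pull("get-quick-replies");
const chain = prompt.pipe(azureOpenaiMini);

const question = "...";
const answer = "...";
const result = await chain.invoke({ question, answer });
console.log(result.quickReplies); // ["reply 1", "reply 2", ...]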
Thanks for reporting. Will look into this!
Any update on this?
I'm facing a similar issue with my prompt and its schema. In JavaScript, this code doesn't follow the schema at all. It uses the model, that's for sure, but completely ignores the schema:
const chain = await hub.pull<Runnable>("review:1bcc53df", {
  includeModel: true,
});
const res = await chain.invoke({
  answer: "M. Rajoy",
  question: "¿Quién es el presidente de España?",
  rules: JSON.stringify(
    [
      {
        id: 25,
        description: "Debe ser superior a 100 caracteres",
      },
      {
        id: 4,
        description: "Debe contener al menos 2 párrafos.",
      },
    ],
    null,
    2
  ),
});
console.log(res.content);
// Output not in JSON:
// La respuesta no cumple con las reglas establecidas. A continuación, se detallan las razones:\n\n1. **Regla 25**: La respuesta es muy breve y
But the following code in Python does follow the model AND the schema:
#!/usr/bin/env python3
import os
from langsmith import Client
client = Client(api_key=os.getenv('LANGSMITH_API_KEY'))
prompt = client.pull_prompt("review:b4458d27", include_model=True)
pato = prompt.invoke({
    "question": "¿Quién es el presidente de España?",
    "answer": "M. Rajoy",
    "rules": [
        {
            "id": 25,
            "description": "Debe ser superior a 100 caracteres",
        },
        {
            "id": 4,
            "description": "Debe contener al menos 2 párrafos.",
        },
    ],
})
print(pato)
# Outputs in JSON:
# {'rules': [{'id': '25', 'result': False, 'observations': 'La respuesta tiene menos de 100 caracteres.'}, {'id': '4', 'result': False, 'observations': 'La respuesta no contiene al menos 2 párrafos.'}]}
In LangSmith, the JavaScript and Python runs look different.
Is anybody facing a similar issue?
Ah yeah this should be fixed in latest langchain. Can you bump and try it out?
Same issue as reported here:
https://github.com/langchain-ai/langsmith-sdk/issues/1902
@jacoblee93 I'm on the latest version of LangChain (0.3.32) and having the same issues described above. To confirm: I'm still unable to pull a prompt that uses structured output from the hub and use it as intended.
Hey @rdBenG, sorry to hear that, could you share a small MRE with a public prompt? The person on the other issue confirmed it was fixed, so I'm wondering if it's something else.
@jacoblee93 Sure, I can do an MRE. We're pulling via the hub as above, but we're using Gemini. The prompt has the structured output attached to it, and it's not binding to the response, so it's just returning raw JSON.
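Roughly this shape, with a placeholder prompt name standing in for our actual Hub prompt (which has a Gemini model and an output schema attached):

import * as hub from "langchain/hub/node";

// Placeholder name; the real prompt is a StructuredPrompt with a Gemini model
// and a schema saved in the Hub.
const chain = await hub.pull("my-public-structured-prompt", { includeModel: true });

const res = await chain.invoke({ question: "..." });

// Expected: an object following the attached schema.
// Actual: a message whose content is a JSON string, i.e. the schema is never
// bound/parsed on the response.
console.log(res);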
@jacoblee93 This is broken for me as well; the following code snippets produce different outputs:
Python code:
from langsmith import Client
client = Client()
runnable = client.pull_prompt(MY_PROMPT, include_model=True)
print(runnable)
TypeScript code:
import { RunnableSequence } from '@langchain/core/runnables';
import { AzureChatOpenAI } from '@langchain/openai';
import * as hub from 'langchain/hub/node';

const chain = await hub.pull<RunnableSequence>(MY_PROMPT, {
  apiKey: process.env.LANGSMITH_API_KEY,
  includeModel: true,
});
console.log(chain);
Python output:
first=StructuredPrompt(...
middle=[RunnableBinding(bound=AzureChatOpenAI(...
last=JsonOutputParser()
TypeScript output:
first: StructuredPrompt { ...
middle: []
last: AzureChatOpenAI { ...
package.json:
"langchain": "^0.3.32",
"langsmith": "^0.3.67",
@jacoblee93 FWIW, what I'm seeing is the following (StructuredPrompt using ChatAnthropic; Javascript library):
When includeModel: true
The RunnableSequence that is returned from the hub is missing the AnthropicToolsOutputParser, and therefore the Anthropic extract tool is not invoked when the request is sent with .invoke().
When includeModel: false
The StructuredPrompt that is returned, when piped into a model instantiated in the code, creates the proper RunnableSequence that uses AnthropicToolsOutputParser (and therefore the Anthropic extract tool).
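For concreteness, a rough sketch of the two shapes described above (the prompt name and the ChatAnthropic configuration are placeholders for my setup):

import * as hub from "langchain/hub/node";
import { ChatAnthropic } from "@langchain/anthropic";

// includeModel: true -> the pulled RunnableSequence lacks the
// AnthropicToolsOutputParser, so .invoke() returns the raw model message.
const pulled = await hub.pull("my-structured-prompt", { includeModel: true });

// includeModel omitted -> piping the pulled StructuredPrompt into a locally
// instantiated model builds the full sequence, including the
// AnthropicToolsOutputParser, so .invoke() returns the parsed object.
const prompt = await hub.pull("my-structured-prompt");
const chain = prompt.pipe(new ChatAnthropic({ model: "claude-3-5-sonnet-latest" }));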
Hope this helps. Maybe obvious to others.
Sorry folks, will really try to make time for this in the next few days.
Think the above fixes it - or at least one way I was able to reproduce the issue!
Thanks all for reporting. Will ping when it's live.
Live in [email protected]!