genaiscript
Idea: have two models at the same time
I want to try to have 2 models at the same time, just an idea: for example, one small model that will do the tool calling and another model that will answer based on the tool's output.
The issue is well-defined, but it lacks some details that would make it easier for the developer to understand and implement the solution you're proposing.
Clarification Needed:
- Tool Specification: What are the tools that the first model (tool calling model) should call?
- Response Format: What kind of response format does the second model need to generate based on the tool's output?
- Communication Between Models: How should the two models communicate or exchange data? Is there a specific API or messaging protocol you're considering?
- Error Handling: How should errors from tools or communication failures be handled?
- Use Case Example: Provide an example use case or scenario that illustrates the workflow you envision.
Once these details are provided, the developer will have a clearer understanding of the requirements and can proceed more effectively with implementing your idea.
AI-generated content by issue-reviewer may be incorrect
You mean running the same chat history with different models based on the tool calling?
no, but this is great idea to try 👍
I mean having 2 llms working on the same conversation
If you use inline prompts (see runPrompt), you can start mixing any type of LLMs and any order as long as you can write it in JavaScript.
If you're interested in workflows, I'd investigate how to use xstate.js to build a mini engine.
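The "mini engine" idea above can be sketched without any dependency: a tiny state machine that alternates between a tool-calling step and an answering step. The state and event names below are illustrative, not a real xstate API; with xstate you would express the same transitions with `createMachine()`.

```javascript
// A dependency-free sketch of the workflow: states and the events that
// move between them. xstate would formalize exactly this shape.
const workflow = {
  initial: "toolCalling",
  states: {
    toolCalling: { on: { TOOLS_DONE: "answering" } },
    answering: { on: { ANSWERED: "done" } },
    done: { on: {} },
  },
};

// Advance the machine: given a state and an event, return the next state.
function transition(machine, state, event) {
  const next = machine.states[state]?.on[event];
  return next ?? state; // ignore events the current state does not handle
}

let state = workflow.initial;
state = transition(workflow, state, "TOOLS_DONE"); // -> "answering"
state = transition(workflow, state, "ANSWERED"); // -> "done"
console.log(state);
```

Each state would wrap one model call (tool calling on the small model, answering on the larger one), with the events fired when a call completes.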
I will try. The way I think about it, workflows are controlled agents, since you control every step they take, but maybe I can do that with just defChatParticipant.
Right... currently I offload the orchestration entirely to the LLM itself (the LLM will get better at this). If we would refactor the agent functions, we could potentially leverage them programmatically. Something to ponder about.
The thing is, I'm running very small models on my M1 and they are not smart at all, so guiding them is the key for me to get things working. But I think maybe I should use a bigger LLM instead.
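The split described in this thread — a small model that only has to pick a tool, and a bigger model that writes the answer from the tool output — can be sketched with plain functions. The "models" below are stubs standing in for `runPrompt` calls with different `model` options, and the tool name, city, and function names are made up for illustration.

```javascript
// Hypothetical tool registry; a real script would define these with defTool.
const tools = {
  get_weather: (city) => `Sunny, 22C in ${city}`,
};

// Stub for the small tool-calling model: it only has to emit a tool name
// and arguments, which even a weak local model can often manage.
function smallModelPickTool(userMessage) {
  if (/weather/i.test(userMessage)) {
    return { tool: "get_weather", args: ["Paris"] };
  }
  return null;
}

// Stub for the larger answering model: it turns raw tool output into prose.
function largeModelAnswer(userMessage, toolOutput) {
  return `You asked: "${userMessage}". Tool says: ${toolOutput}`;
}

// Orchestration: small model picks the tool, the tool runs, the larger
// model composes the final response.
function chat(userMessage) {
  const call = smallModelPickTool(userMessage);
  const toolOutput = call ? tools[call.tool](...call.args) : "(no tool used)";
  return largeModelAnswer(userMessage, toolOutput);
}

console.log(chat("What is the weather like?"));
```

The point of the split is that the weak model never has to write prose, only structured tool calls, which is an easier target to guide.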
@alyahmedaly:
Response:
It sounds like you're looking for a way to have two models working together on the same conversation, where one model (the tool calling model) calls tools and another model processes the results.
Here's a potential solution using GenAIScript:
- Define Models: Load two different models into your environment.
- Tool Calling Model: Use this model to call tools based on the input.
- Result Processing Model: Pass the tool outputs to the second model for further processing or decision-making.
Example Implementation
Below is a basic example using JavaScript:
```js
// GenAIScript globals such as runPrompt are available inside scripts;
// no require is needed.

// Model 1: a small model that decides which tools to call.
const callTools = async (input) => {
    const res = await runPrompt(
        (_) => {
            _.def("INPUT", input)
            _.$`Call the relevant tools based on the INPUT and return the results.`
        },
        { model: "small" }
    )
    return res.text
}

// Model 2: a (possibly larger) model that turns tool results into an answer.
const answer = async (results) => {
    const res = await runPrompt(
        (_) => {
            _.def("RESULTS", results)
            _.$`Process the RESULTS and generate a final response.`
        },
        { model: "large" }
    )
    return res.text
}

// Run the conversation: feed each user message through both models in turn.
const chatHistory = [
    { role: "user", content: "What is the weather like?" },
    // Add more messages as needed
]

for (const message of chatHistory) {
    const toolResults = await callTools(message.content)
    const finalResponse = await answer(toolResults)
    console.log(finalResponse)
}
```
Explanation
- Model Definitions: the first prompt (`callTools`) is responsible for calling tools based on the input; the second (`answer`) processes the results from those tools.
- Conversation Execution: iterate through the chat history, pass each user message to the tool-calling model, then pass its output to the second model to produce the final response.
This approach allows you to guide smaller models by using a larger model to make decisions and guide tool calls.
Next Steps:
- Model Performance: Evaluate the performance of both models. If the tool-calling model is not as smart as expected, consider using a larger model.
- Refinement: Refine your workflows and orchestrations based on feedback from testing.
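The "consider using a larger model" step above can be automated: try the small model first and fall back to a larger one when it errors or returns nothing. The model functions below are synchronous stubs for brevity; in a real script each would be an awaited `runPrompt` call with a different `model` option.

```javascript
// Try the small model; on failure or empty output, fall back to the large one.
function withFallback(smallFn, largeFn, input) {
  try {
    const out = smallFn(input);
    if (out) return out;
  } catch (err) {
    // small model errored; fall through to the larger model
  }
  return largeFn(input);
}

// Illustrative stubs: the small model gives up, the large one answers.
const small = () => null;
const large = (input) => `large-model answer to: ${input}`;

console.log(withFallback(small, large, "What is the weather like?"));
```

This also addresses the error-handling question raised earlier in the thread: tool or model failures become an ordinary fallback path instead of a crash.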
Would you like to proceed with this approach or have additional questions?
AI-generated content by issue-bot may be incorrect
> If you use inline prompts (see runPrompt), you can start mixing any type of LLMs and any order as long as you can write it in JavaScript.
> If you're interested in workflows, I'd investigate how to use xstate.js to build a mini engine.
Would be happy to assist with this if needed.
@davidkpiano can you give https://github.com/microsoft/genaiscript/issues/1250 a read
The user is suggesting a scenario where two models, referred to as the "tool calling model" and the "result processing model," work together on the same conversation. The tool calling model is responsible for calling various tools based on the input, while the result processing model processes the results from these tools and generates a final response.
The main goals of this scenario are:
- The user wants to understand what specific tools should be called by the tool calling model.
- Determine how the two models will communicate or exchange data.
- Handle potential errors that might occur during their interaction.
- Provide an example use case to better illustrate the workflow.
The user also mentioned a preference for using smaller models, such as those running on an M1 chip, which are not as sophisticated but can be guided by a larger model for more intelligent decision-making. The user referenced an issue on the GenAIScript GitHub repository that might provide additional information or context for this scenario.
AI-generated content by issue-bot may be incorrect