multiple models
Is your feature request related to a problem? Please describe.
I'd like to be able to use different models in my prompt script, e.g. using ChatGPT for generating the reasoning and then GPT-3 to fill in the blanks.
I think you can do this using partials.
Imagine a template main that looks like this:
main = guidance("""
{{#system}}
You are a helpful AI.
{{/system}}
{{#user}}
Provide a strategy for solving the following problem. Do not actually solve the problem, just outline the strategy
{{problem}}
{{/user}}
{{#assistant}}
{{gen 'strategies'}}
{{/assistant}}
{{>executor}}
""")
Also imagine a template executor that looks like this:
executor = guidance("""
{{#user}}
Now given the strategy above, solve the problem.
{{/user}}
{{#assistant}}
{{gen 'solution'}}
{{/assistant}}
""")
You should be able to handle this with multiple LLMs as follows:
reasoning_llm = guidance.llms.OpenAI("gpt-4")
executor_llm = guidance.llms.OpenAI("text-davinci-003")
output = main(
    problem="How can I organize a conference?",
    llm=reasoning_llm,
    executor=executor(llm=executor_llm),
)
I haven't done this personally, but I have worked with the partials system a little bit, and this looks like it falls within the design intent.
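If it does work, the captured variables should be readable from the executed program in the usual way (an untested sketch, assuming the standard guidance variable access):

# Untested: an executed guidance program exposes the variables
# captured by {{gen}}, so both steps' outputs should be available.
print(output["strategies"])  # generated by reasoning_llm (gpt-4)
print(output["solution"])    # generated by executor_llm (text-davinci-003)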
I'm having issues with my OpenAI keys, but I'm curious if anyone has tested this out?
I support the original request, and I'd argue that the solution suggested here, using partials, is not good enough because of its poor readability.
A proper syntax would be:
import guidance
reasoning_llm = guidance.llms.OpenAI("gpt-4")
executor_llm = guidance.llms.OpenAI("text-davinci-003")
program = guidance("""
{{#system}}
You are a helpful AI.
{{/system}}
{{#user}}
Provide a strategy for solving the following problem. Do not actually solve the problem, just outline the strategy
{{problem}}
{{/user}}
{{#assistant}}
{{gen 'strategies' model=reasoning_llm}}
{{/assistant}}
{{#user}}
Now given the strategy above, solve the problem.
{{/user}}
{{#assistant}}
{{gen 'solution' model=executor_llm}}
{{/assistant}}
""")
Yes, this looks way cleaner and easier to read. @marcotcr @slundberg thoughts on this?
@marcotcr @slundberg please check this out
This should be very easy to do in the new release, where each LM can be its own object :)
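Roughly along these lines (a minimal sketch against the new models API, assuming guidance >= 0.1; the model names and exact prompt wiring here are illustrative):

from guidance import models, gen, system, user, assistant

reasoning_lm = models.OpenAI("gpt-4")
executor_lm = models.OpenAI("gpt-3.5-turbo")  # stand-in for the executor model

# Step 1: generate the strategy with the reasoning model.
with system():
    lm = reasoning_lm + "You are a helpful AI."
with user():
    lm += ("Provide a strategy for solving the following problem. "
           "Do not actually solve the problem, just outline the strategy.\n"
           "How can I organize a conference?")
with assistant():
    lm += gen("strategies")

# Step 2: hand the captured strategy to a separate executor model object.
with user():
    lm2 = executor_lm + ("Here is a strategy for a problem:\n"
                         + lm["strategies"]
                         + "\nNow, given the strategy above, solve the problem.")
with assistant():
    lm2 += gen("solution")

print(lm2["solution"])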