Support accessing GPT through proxy interfaces
Is your feature request related to a problem? Please describe. For some reason we cannot use the OpenAI API directly and must access the model through a proxy API. Guidance doesn't seem to support this at the moment.
Describe the solution you'd like Allow users to configure a custom API base URL or proxy for accessing the LLM, instead of only specifying the model by name.
This project is wonderful! Will there be plans to support this feature in the future?
Maybe you can try the endpoint param. I'm not sure whether it fully works; I found it while reading the source code:
```python
import guidance

guidance.llm = guidance.llms.OpenAI("text-davinci-003", endpoint="xxxx")
```
You should be able to change `api_base` in the openai SDK to route the models through your proxy. Set it before making any calls:

```python
import openai

# Point the openai SDK at your proxy instead of api.openai.com
openai.api_base = 'YOUR_ENDPOINT_HERE'

import guidance

guidance.llm = guidance.llms.OpenAI("gpt-4")
```
If you set `rest_call=True` and `endpoint` when initializing `guidance.llms.OpenAI`, it will make a plain REST call to the endpoint (which can be a proxy, as long as it accepts the same arguments as the OpenAI API).
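For example, a minimal sketch of that configuration (the proxy URL here is a placeholder — substitute the URL your proxy actually exposes):

```python
import guidance

# rest_call=True makes guidance send raw REST requests to `endpoint`
# instead of going through the openai SDK client. The URL below is a
# hypothetical placeholder; your proxy must accept the same request
# and response format as the corresponding OpenAI API route.
guidance.llm = guidance.llms.OpenAI(
    "text-davinci-003",
    endpoint="https://your-proxy.example.com/v1/completions",  # placeholder
    rest_call=True,
)
```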
Please reopen if this doesn't solve the issue
I want to know how to get the endpoint.