gpt-migrate
adding max_tokens to cli
Adding max_tokens to the CLI, since using gpt-3.5 crashes when the requested context length is 10k.
@ctr26 This will be solved with #2 - we'll have a mapping of model -> context window (max tokens) and we'll break down files and prompts accordingly.
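For reference, such a mapping might look like the sketch below. This is only an illustration of the idea, not the implementation planned in #2; the dict name and helper are hypothetical, though the window sizes match the published limits for these models.

```python
# Hypothetical model -> context window mapping (name and helper are illustrative).
MODEL_CONTEXT_WINDOWS = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
}

def max_prompt_tokens(model: str, completion_tokens: int) -> int:
    """Tokens left for the prompt after reserving room for the completion."""
    return MODEL_CONTEXT_WINDOWS[model] - completion_tokens
```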
I don't have access to gpt-4-32k. How can I use gpt-4? I get this error even with these changes:
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, you requested 10601 tokens (601 in the messages, 10000 in the completion). Please reduce the length of the messages or completion.
```python
    max_tokens: int = typer.Option(8192),
):
    ai = AI(
        model=model,
        temperature=temperature,
        max_tokens=int(max_tokens),
    )
```
@gianpaj The output also contributes to the number of tokens. If your model has a max context window of 8k, you're probably better off making max_tokens 4k or so.
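Instead of guessing, you can count the prompt's tokens with tiktoken and size the completion budget from what remains. A rough sketch under the assumption of an 8k gpt-4 window; the function name is made up, and this ignores the small per-message overhead of the chat format:

```python
import tiktoken

def safe_max_tokens(prompt: str, model: str = "gpt-4", context_window: int = 8192) -> int:
    """Cap the completion budget so prompt + completion fits in the context window."""
    enc = tiktoken.encoding_for_model(model)
    prompt_tokens = len(enc.encode(prompt))
    return context_window - prompt_tokens
```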
Would it also be possible to add an option to change the OpenAI base URL? That way this could work with the Microsoft Azure OpenAI endpoint or with proxies.
@Ran-Mewo Yes definitely, I'll try to get to this later - feel free to also submit a PR for this if you'd like
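In the meantime, the pre-1.0 openai Python SDK (the version this thread's `openai.error.InvalidRequestError` comes from) already lets you point at a different endpoint, so a manual workaround might look like the following. The resource name, key, deployment name, and API version below are all placeholders:

```python
import openai

# Point the SDK at an Azure OpenAI endpoint (all values are placeholders).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-AZURE-KEY"

# With api_type="azure", the Azure deployment name is passed as `engine`.
response = openai.ChatCompletion.create(
    engine="YOUR-DEPLOYMENT-NAME",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=16,
)
```

A plain proxy would only need `openai.api_base` changed, leaving `api_type` at its default.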