kurutah
> So far GPT-3.5 is picked based on our benchmarks, where 3.5 is on par for reliability with GPT-4 for most inline chat and slash command test scenarios. We'll keep evaluating...
Any news?
The OpenAI devs say: "We recommend choosing gpt-4o-mini where you would have previously used gpt-3.5-turbo as this model is more capable and cheaper." I wonder if the GitHub Copilot team needs further...
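For anyone curious what that swap looks like on the API side, it's just the model name in the request. A minimal sketch with the OpenAI Python SDK (prompt contents here are placeholders, not anything Copilot actually sends):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Moving from gpt-3.5-turbo to gpt-4o-mini is a one-line change for the caller.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # previously: "gpt-3.5-turbo"
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Explain what a Python list comprehension is."},
    ],
)
print(response.choices[0].message.content)
```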
> It does use GPT-3.5 for some things though when it really shouldn't.

Where? And regarding GPT-4 Turbo, many people think it's worse for coding than GPT-4, so I don't know...