Provider request: Synthetic.new
Description
I'd like to request support for Synthetic.new, a platform that runs open source LLMs, including GLM-4.6, Kimi K2 and MiniMax-M2.
Here's the link to the website: https://synthetic.new/
And here's the link to the API docs: https://dev.synthetic.new/docs/api/overview
I'll add the relevant provider entries here (I had help from crush itself to make it):
#100 #99 add the relevant entries and a provider; that said, mine are definitely different, as I lowered the max token amount. Perhaps I should increase it, since what I have is more suited to chat. Then again, if you fill the entire window the model will just error out, so I'm not sure there's much point in setting max tokens to the context limit.
When I prepared those with the help of crush, it set all default_max_tokens parameters to 4096. I manually set them to the model's own context limit so as not to be constrained. Actual parameters can be taken from other providers hosting the same models.
Also, the "hf:" prefix for model ids is a requirement with Synthetic; requests won't work without it. Finally, I'll update the JSON files I've uploaded here soon, as a few things have changed since.
Edit: The only two exceptions to this are GLM-4.6 and Minimax-M2. Those two are limited to 64k output tokens by synthetic itself.
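For anyone skimming, here is a minimal sketch of what such a provider entry could look like. This is illustrative only: the `api_endpoint` URL and the context window figure are assumptions, not taken from Synthetic's docs; the `hf:` model-id prefix, the `openai-compat` type, and the 64k output cap for GLM-4.6 are the details confirmed in this thread. Check the actual uploaded files for real values.

```json
{
  "name": "Synthetic",
  "id": "synthetic",
  "type": "openai-compat",
  "api_endpoint": "https://api.synthetic.new/v1",
  "models": [
    {
      "id": "hf:zai-org/GLM-4.6",
      "name": "GLM-4.6",
      "context_window": 200000,
      "default_max_tokens": 64000
    }
  ]
}
```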
@Diogenesoftoronto Just letting you know that I have updated the provider files above.
Noted. Saw you working on it in the synthetic discord.
Apparently I had forgotten to fix the OpenAI endpoint in this file (the type was mistakenly written as "openai" instead of "openai-compat").
Fixed now.
I used the openai-compat file that you provided and set the issue ready for review. Thanks for the work! @TheSingular
You too!
It's been merged, though I'd note that Kimi K2 Thinking is mislabeled with a duplicate Kimi K2 Instruct name 👀
https://github.com/charmbracelet/catwalk/pull/103
One other thing: Synthetic has a /models endpoint which lists their always-on models (as well as on-demand models the current account may have used recently). Can Catwalk leverage this?
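If Catwalk did consume that endpoint, the mapping could be as simple as the sketch below. Pure illustration: the response shape with a top-level `data` list is assumed from the OpenAI-compatible convention (not confirmed from Synthetic's docs), and `models_to_entries` is a hypothetical helper, not anything in Catwalk today.

```python
# Hypothetical sketch: turn a /models response (OpenAI-compatible shape
# assumed, with a top-level "data" list of {"id": ...} objects) into
# Catwalk-style model entries. Field names and defaults are illustrative.

def models_to_entries(models_response: dict, default_max_tokens: int = 64000) -> list[dict]:
    entries = []
    for model in models_response.get("data", []):
        model_id = model["id"]
        entries.append({
            "id": model_id,                        # keeps the required "hf:" prefix
            "name": model_id.removeprefix("hf:"),  # display name without the prefix
            "default_max_tokens": default_max_tokens,
        })
    return entries

# Example with a fabricated response:
sample = {"data": [{"id": "hf:zai-org/GLM-4.6"}, {"id": "hf:moonshotai/Kimi-K2-Instruct"}]}
for entry in models_to_entries(sample):
    print(entry["id"], "->", entry["name"])
```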
FYI: The /models endpoint is a recent addition that did not exist at the time of the original request.
True. Maybe this would fit better in a new request then, and leave this one for general provider JSON tracking instead.