LLM Provider config wizard should list supported models from the server for ease of setup
Motivation
When configuring OpenAI-compatible providers that serve various open source models, the model names the server expects can be hard to track down.
I ended up finding the right model name by running `curl "$URL/v1/models" | jq`.
Describe the solution you'd like
As a QoL improvement, when configuring the LLM Provider, after the host and token have been provided and Goose asks which model to use, it could proactively fetch the models the API serves and suggest them. This could work in both the GUI and the CLI, but since I was using the CLI, the model prompt step of the config wizard could try to fetch the list and present it as a single-select prompt.
At least OpenAI-compatible APIs support this via `GET /v1/models`, and I'm assuming most API formats in the wild have some notion of "list available models". Even the official OpenAI model options are confusing (there are a lot of them), so a prepopulated list would help remove friction from initial onboarding.
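For illustration, here's a minimal sketch of what the CLI side could look like, assuming an OpenAI-compatible endpoint and using the reqwest, serde, and dialoguer crates; this is just one way to do it, not Goose's actual internals:

```rust
// Sketch only: fetch the model list from an OpenAI-compatible endpoint and
// offer a single-select prompt in the config wizard.
// Assumes reqwest (blocking + json features), serde (derive), and dialoguer.
use serde::Deserialize;

// Shape of the OpenAI-compatible GET /v1/models response:
// {"object": "list", "data": [{"id": "..."}, ...]}
#[derive(Deserialize)]
struct ModelList {
    data: Vec<ModelEntry>,
}

#[derive(Deserialize)]
struct ModelEntry {
    id: String,
}

fn pick_model(base_url: &str, api_key: &str) -> Result<String, Box<dyn std::error::Error>> {
    // Hit the server's model listing endpoint with the token the user just entered.
    let list: ModelList = reqwest::blocking::Client::new()
        .get(format!("{base_url}/v1/models"))
        .bearer_auth(api_key)
        .send()?
        .error_for_status()?
        .json()?;

    let ids: Vec<&str> = list.data.iter().map(|m| m.id.as_str()).collect();

    // Single-select prompt over the fetched model names.
    let choice = dialoguer::Select::new()
        .with_prompt("Which model should Goose use?")
        .items(&ids)
        .default(0)
        .interact()?;

    Ok(ids[choice].to_string())
}
```

If the request fails (e.g. the server doesn't implement the endpoint), the wizard could simply fall back to the current free-text model prompt, so nothing breaks for providers without a model list.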