openai: use ollama in test
Ollama has some support for OpenAI API emulation, which could be a nice way to support more tests without requiring an API key or a public endpoint. This could extend coverage of the openai package, and the relevant versions of the tests could also run in the ollama llm package. What do you think?
e.g. some test configuration could kick in when OPENAI_API_KEY is not set but OLLAMA_TEST_MODEL is:
llm, err := openai.New(
    openai.WithBaseURL("http://localhost:11434/v1"), // Ollama's OpenAI-compatible endpoint
    openai.WithModel(ollamaModel),
    openai.WithToken("unused"), // Ollama ignores the token, but the client requires one
    // openai.WithHTTPClient(httputil.DebugHTTPClient),
)
if err != nil {
    t.Fatal(err)
}
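The gating described above could be sketched as a small helper that inspects the two environment variables and tells the test whether to hit OpenAI, hit a local Ollama, or skip. This is a minimal illustration only; `resolveTestEndpoint` and `testEndpoint` are hypothetical names, not part of the codebase.

```go
package main

import (
	"fmt"
	"os"
)

// testEndpoint describes where an openai-package test should point.
// Hypothetical type for illustration.
type testEndpoint struct {
	BaseURL string // empty means "use the package default (api.openai.com)"
	Model   string // empty means "use the package default model"
	Token   string
}

// resolveTestEndpoint prefers a real OpenAI key; otherwise it falls back to a
// local Ollama endpoint when OLLAMA_TEST_MODEL is set. The second return value
// reports whether the test should be skipped (neither variable configured).
func resolveTestEndpoint(openaiKey, ollamaModel string) (testEndpoint, bool) {
	switch {
	case openaiKey != "":
		// Real key present: use the default OpenAI endpoint and model.
		return testEndpoint{Token: openaiKey}, false
	case ollamaModel != "":
		return testEndpoint{
			BaseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible API
			Model:   ollamaModel,
			Token:   "unused", // Ollama ignores the token, but the client wants one
		}, false
	default:
		return testEndpoint{}, true
	}
}

func main() {
	ep, skip := resolveTestEndpoint(os.Getenv("OPENAI_API_KEY"), os.Getenv("OLLAMA_TEST_MODEL"))
	fmt.Println(ep, skip)
}
```

A test would then call something like `resolveTestEndpoint(...)` up front and `t.Skip(...)` when the second return value is true.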
For the tests, I think we could use testcontainers with Ollama, but I need to try it.
Yep, you can use testcontainers or a system-managed instance. If you are using small models, the performance hit of running via containers may not be a big deal.
@devalexandre there is an Ollama module for Go: https://golang.testcontainers.org/modules/ollama/ 😉
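For reference, a rough sketch of what that module provides, assuming its documented `Run`/`ConnectionString` API. This requires a running Docker daemon, and the image tag and model name here are assumptions, so treat it as an untested illustration rather than a working recipe:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/testcontainers/testcontainers-go/modules/ollama"
)

func main() {
	ctx := context.Background()

	// Start an Ollama container (image tag is an assumption; pick a current one).
	ctr, err := ollama.Run(ctx, "ollama/ollama:0.3.13")
	if err != nil {
		log.Fatal(err)
	}
	defer ctr.Terminate(ctx)

	// Pull a small model inside the container before pointing tests at it.
	if _, _, err := ctr.Exec(ctx, []string{"ollama", "pull", "llama3.2:1b"}); err != nil {
		log.Fatal(err)
	}

	// ConnectionString returns the mapped host endpoint; appending "/v1"
	// gives the value to pass to openai.WithBaseURL.
	url, err := ctr.ConnectionString(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(url + "/v1")
}
```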
FYI, if you are running multiple tests, or ones where the completions aren't quick, it could be simpler to use a system-managed instance. Here's an example of that in practice, and it works: https://github.com/block/goose/blob/49a30b4d22329e10262128d08aab03964f910d9f/.github/workflows/ci.yaml#L52-L100
I'm going to submit a PR for this and will link it to this issue once ready.