rlippmann
No, I'm not using it in production. What I'd like to do is write an open source project that automates porting between languages. Since I will need to...
A more generic version of this would be to expose OPENAI_BASE_URL to allow local inferencing against other OpenAI-compatible endpoints (e.g. llama.cpp, LiteLLM, LocalAI, etc.). I'd particularly like...
Also, exposing OPENAI_BASE_URL and/or integrating with LiteLLM lets you use non-OpenAI inference engines without additional coding, since LiteLLM handles translation to their native APIs.
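To illustrate, here's a minimal sketch of the idea using the official `openai` Python client, assuming a local OpenAI-compatible server (the endpoint URL, API key, and model name below are placeholders, not anything from this project):

```python
import os
from openai import OpenAI

# The openai v1.x client already reads OPENAI_BASE_URL and OPENAI_API_KEY
# from the environment; they can also be passed explicitly. Pointing
# base_url at any OpenAI-compatible server (llama.cpp, LiteLLM, LocalAI,
# etc.) reroutes every request there with no other code changes.
client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "http://localhost:8080/v1"),
    # Local servers typically ignore the key, but the client requires one.
    api_key=os.environ.get("OPENAI_API_KEY", "sk-local-placeholder"),
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use whatever model the local server exposes
    messages=[{"role": "user", "content": "Hello from a local endpoint"}],
)
print(response.choices[0].message.content)
```

The same pattern works unchanged against a LiteLLM proxy, which is what makes the env-var approach attractive: the calling code stays pure OpenAI API, and the proxy does the per-vendor translation.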