Fabric
Local Inferencing Endpoint Only? Still references api.openai.com/v1/models
Discussed in https://github.com/danielmiessler/fabric/discussions/544
Originally posted by xJohnWhite on June 4, 2024
Summary
Using an OpenAI API-compatible inferencing endpoint inside my network, with the relevant environment variables set, but getting an authentication error from api.openai.com.
Details
I'm at an organization that doesn't allow the use of OpenAI. Fortunately, I have access to an internal inferencing endpoint that is OpenAI API-compatible.
Env check 1
I've got my internal API set up in my environment variables:
printenv | grep OPENAI
OPENAI_MODEL_NAME=llama27b
OPENAI_API_KEY=[myInternalKey]
OPENAI_API_BASE=https://[myEndpointDomain]/api/v1/
Env check 2
I've got the same values in ~/.config/fabric/.env as well.
grep OPENAI ~/.config/fabric/.env
OPENAI_API_BASE="https://[myEndpointDomain]/api/v1/"
OPENAI_MODEL_NAME="llama27b"
OPENAI_API_KEY="[myInternalKey]"
api.openai.com authentication error
However, when I try to do anything, I get an error against the api.openai.com URL, as if it's hard-coded somewhere:
echo "Summer is the best season" | fabric -p write_micro_essay
/home/jwhite/.local/pipx/venvs/fabric/lib/python3.10/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
Error: Client error '401 Unauthorized' for url 'https://api.openai.com/v1/models'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
Am I doing something massively wrong? Oh, and what's with the ffmpeg/avconv error?
Previously mentioned #189
I saw the previously merged #189, but I'm using a git clone from today and checked installer/client/cli/utils.py to confirm the merge is still there. There's a hard-coded check in def agents(self, userInput): that tests whether the model name is in self.model, but I'm hoping that path isn't invoked unless a specific agent creation pattern is used.
Another potential OpenAI hard-code in installer/client/cli/utils.py
In installer/client/cli/utils.py, lines 43-45, there's this:
if "OPENAI_API_KEY" in os.environ:
api_key = os.environ['OPENAI_API_KEY']
self.client = OpenAI(api_key=api_key)self.client = OpenAI(api_key=api_key)
I'm not familiar enough with the OpenAI invocation to know whether it picks up OPENAI_BASE_URL and OPENAI_API_MODEL from the environment. I've seen other people mention using non-localhost, non-Ollama servers, so I'm guessing I have a misconfiguration somewhere.
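For what it's worth, my (possibly wrong) reading of the 1.x openai package is that OpenAI() falls back to the OPENAI_BASE_URL environment variable, not OPENAI_API_BASE, and that it never reads a model name from the environment at all; so a bare OpenAI(api_key=api_key) would default to https://api.openai.com/v1. A hypothetical fix, not fabric's actual code, might look like this:

# Hypothetical sketch: forward the base URL from the environment so the
# client stops defaulting to https://api.openai.com/v1.
import os
from openai import OpenAI

def make_client() -> OpenAI:
    api_key = os.environ["OPENAI_API_KEY"]
    # OPENAI_API_BASE is what fabric's .env sets; the 1.x SDK itself only
    # falls back to OPENAI_BASE_URL, so pass the value through explicitly.
    base_url = os.environ.get("OPENAI_API_BASE") or os.environ.get("OPENAI_BASE_URL")
    return OpenAI(api_key=api_key, base_url=base_url)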
Request for guidance and wisdom!
Any thoughts on what might be going on?
This got no attention in the Q&A discussion, so I created an issue.
Have you tried doing it over plain HTTP rather than SSL? Try removing the certificate for testing purposes. That being said, I had better luck with LMStudio than with Ollama; it seems your setup is along the same lines.
Related: a new install with the appropriate keys produces this same 401 error.
Read #405