http://localhost:8080/v1/embeddings Error 500
When curl accesses http://localhost:8080/v1/embeddings, a 500 error is returned:

Error: Internal Server Error

Response body:

```json
{
  "error": {
    "code": 500,
    "message": "failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/string",
    "type": ""
  }
}
```

Please help me, thank you.
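For what it's worth, the backend path ending in `/string` suggests the request's `model` field may literally have been `"string"` (the placeholder value from the Swagger UI). A request along these lines is what the OpenAI-compatible endpoint expects; the model name below is an assumption and must match a model actually installed on your instance:

```
# Hypothetical request; replace the model name with one installed on your LocalAI instance.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nomic-embed-text-v1.5",
    "input": "Your text string goes here"
  }'
```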
Other questions:

How do I add nomic-embed-text-v1.5-GGUF as a LocalAI model? Can you provide the relevant configuration YAML file?
Thank you very much!
You should be able to install it from the gallery:

```
local-ai run nomic-embed-text-v1.5
```
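For a manually downloaded GGUF file, a model config along these lines should work. This is a sketch based on LocalAI's model YAML format; the filename, backend name, and file location are assumptions for your local setup:

```yaml
# models/nomic-embed-text-v1.5.yaml (place next to the .gguf file in your models directory)
name: nomic-embed-text-v1.5
backend: llama-cpp        # assumption: the llama.cpp backend; older releases may name it differently
embeddings: true          # required so the model is served on /v1/embeddings
parameters:
  model: nomic-embed-text-v1.5.Q8_0.gguf   # assumption: match the exact filename you downloaded
```

The `name` value is what you would then pass as `"model"` in the embeddings request.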
I have the exact same error. I'm using a locally downloaded embeddings model in GGUF format. It loads the model on the GPU, but when posting to the URL, I get the error message above.

Before this error, there are others:

```
guessDefaultsFromFile: family not recognized
guessDefaultsFromFile: not a gguf file filepath=/build/models/string
```

Edit: it seems I have a BERT model, and it can't load those.

Edit 2: even with nomic-embed as above, I still get an HTTP 500 error:

```
Server error error="rpc error: code = Unavailable desc = error reading from server EOF" ip=10.88.0.1 latency=2.653218 method=POST status=500 url=/v1/embeddings
```
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.