llama-stack
watsonx inference provider is missing an embedding model in run.yaml
System Info
🐛 Describe the bug
Currently, the watsonx inference provider's run.yaml has no embedding model configured. This PR adds support for the all-MiniLM-L6-v2 sentence-transformers embedding model.
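A minimal sketch of what the missing run.yaml entry could look like, modeled on the sentence-transformers embedding entries used in other providers' distribution templates. The exact `provider_id`, `provider_type`, and field names are assumptions and should be checked against the watsonx template in the repo:

```yaml
# Hypothetical addition to the watsonx distribution's run.yaml
providers:
  inference:
  - provider_id: sentence-transformers
    provider_type: inline::sentence-transformers
    config: {}
models:
- model_id: all-MiniLM-L6-v2
  provider_id: sentence-transformers
  model_type: embedding
  metadata:
    embedding_dimension: 384  # all-MiniLM-L6-v2 produces 384-dim vectors
```

With an entry like this registered, requests for `all-MiniLM-L6-v2` would resolve to the inline sentence-transformers provider instead of failing model lookup with a 400.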
Error logs
BadRequestError: Error code: 400 - {'detail': 'Invalid value: Model all-MiniLM-L6-v2 not found'}
Expected behavior
We should not get an error when using the model all-MiniLM-L6-v2; embedding requests against it should succeed.