
watsonx inference does not have an embedding model in run.yaml

Open · divyaruhil opened this issue 5 months ago · 1 comment

System Info

watsonx inference does not have an embedding model in run.yaml

🐛 Describe the bug

Currently, there is no embedding model support in the watsonx inference provider. Hence, this issue proposes adding support for the all-MiniLM-L6-v2 sentence-transformers embedding model.
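For reference, below is a minimal sketch of how the watsonx run.yaml could register all-MiniLM-L6-v2 via an inline sentence-transformers provider. The layout follows other llama-stack distribution templates; the provider IDs, environment variable names, and config keys shown here are assumptions, not the exact contents of the watsonx template.

```yaml
# Sketch of a watsonx run.yaml with an inline sentence-transformers provider
# added for embeddings. Provider IDs and env var names are assumptions based
# on other llama-stack distribution templates.
providers:
  inference:
    - provider_id: watsonx
      provider_type: remote::watsonx
      config:
        url: ${env.WATSONX_BASE_URL:https://us-south.ml.cloud.ibm.com}
        api_key: ${env.WATSONX_API_KEY:}
        project_id: ${env.WATSONX_PROJECT_ID:}
    # Inline provider that serves sentence-transformers models locally.
    - provider_id: sentence-transformers
      provider_type: inline::sentence-transformers
      config: {}

models:
  # Register all-MiniLM-L6-v2 as an embedding model so requests for it
  # resolve instead of failing with "Model all-MiniLM-L6-v2 not found".
  - model_id: all-MiniLM-L6-v2
    provider_id: sentence-transformers
    provider_model_id: all-MiniLM-L6-v2
    model_type: embedding
    metadata:
      embedding_dimension: 384
```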

Error logs

BadRequestError: Error code: 400 - {'detail': 'Invalid value: Model all-MiniLM-L6-v2 not found'}

Expected behavior

We should not get an error when using the model all-MiniLM-L6-v2.

divyaruhil · May 08 '25 11:05