Getting scores with LlamaIndex using a custom model such as Bedrock or Gemini
Describe the Feature
I would like to run evaluation using Bedrock or Gemini with LlamaIndex, not only LangChain (https://docs.ragas.io/en/stable/howtos/customizations/customize_models/).
I have already tried LlamaIndexLLMWrapper and LlamaIndexEmbeddingsWrapper, but they do not work: I either get an error or an endless loop with the following code:
from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.embeddings.bedrock import BedrockEmbedding
from ragas.llms import LlamaIndexLLMWrapper
from ragas.embeddings import LlamaIndexEmbeddingsWrapper
from ragas.metrics import ResponseRelevancy
bedrock_model = BedrockConverse(...)
bedrock_embeddings = BedrockEmbedding(...)
response_relevancy_scorer = ResponseRelevancy(
    llm=LlamaIndexLLMWrapper(bedrock_model),
    embeddings=LlamaIndexEmbeddingsWrapper(bedrock_embeddings),
)
out = await response_relevancy_scorer.single_turn_ascore(sample)
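For completeness, sample above is a plain Ragas SingleTurnSample; a minimal sketch of how I build it (the field values are placeholders, and asyncio.run is only needed outside a notebook):

import asyncio
from ragas.dataset_schema import SingleTurnSample

# Placeholder values; the real ones come from my RAG pipeline.
sample = SingleTurnSample(
    user_input="What is the capital of France?",
    response="The capital of France is Paris.",
    retrieved_contexts=["Paris is the capital and largest city of France."],
)

# Outside a notebook, drive the async scorer with asyncio.run
# (requires valid Bedrock credentials in the environment).
out = asyncio.run(response_relevancy_scorer.single_turn_ascore(sample))
print(out)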
Could you please consider adding support for this?
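The Gemini path I would expect to work is analogous; a sketch, assuming the llama-index-llms-gemini and llama-index-embeddings-gemini integration packages (constructor arguments elided as in the Bedrock snippet above):

from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding
from ragas.llms import LlamaIndexLLMWrapper
from ragas.embeddings import LlamaIndexEmbeddingsWrapper
from ragas.metrics import ResponseRelevancy

# Only the LlamaIndex model classes change; the Ragas wiring stays the same.
gemini_model = Gemini(...)
gemini_embeddings = GeminiEmbedding(...)

response_relevancy_scorer = ResponseRelevancy(
    llm=LlamaIndexLLMWrapper(gemini_model),
    embeddings=LlamaIndexEmbeddingsWrapper(gemini_embeddings),
)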
Why is the feature important for you?
I normally use LlamaIndex, not LangChain.
@jjmachan could you or a colleague of yours help me, please?
@shahules786 maybe you can help me?
Any update on this ticket?