[VertexAI] 'ChatVertexAI' object has no attribute 'set_run_config', and 'PromptValue' has no len()
Describe the bug
I am trying to use Ragas with VertexAI (the integration with OpenAI works fine). When calling the evaluate() method, I get:
```
ragas/metrics/base.py", line 116, in __init__
    self.llm.set_run_config(run_config)
AttributeError: 'ChatVertexAI' object has no attribute 'set_run_config'
```
When I comment that line out, I instead get:
```
langchain_core/language_models/chat_models.py", line 392, in generate
    batch_size=len(messages),
TypeError: object of type 'PromptValue' has no len()
```
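For what it's worth, the second traceback seems to come down to LangChain's `generate()` expecting a list of prompts (it calls `len()` on its argument), while a single `PromptValue` defines no `__len__`. The toy class below is a hypothetical stand-in, not the real `langchain_core` type, but it reproduces the same `TypeError`:

```python
class PromptValueLike:
    """Hypothetical stand-in for langchain_core's PromptValue: defines no __len__."""
    def __init__(self, text):
        self.text = text

def generate(messages):
    # mirrors the failing line in chat_models.py: batch_size=len(messages)
    batch_size = len(messages)
    return batch_size

try:
    generate(PromptValueLike("What is Ragas?"))
except TypeError as err:
    print(err)  # object of type 'PromptValueLike' has no len()

# passing a list of prompts, as LangChain expects, works fine
print(generate([PromptValueLike("What is Ragas?")]))  # 1
```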
Ragas version: 0.1.1
Python version: 3.11.6
LangChain version: 0.1.7
Code to Reproduce
"""
Switch to the VertexAI instead of GPT-3.5 Turbo. The function returns:
- the VertexAI LLM
- VertexAI Embeddings
- the corresponding metrics
"""
# create Langchain LLM and Embeddings
ragas_vertexai_llm = ChatVertexAI(credentials=creds)
vertextai_embeddings = VertexAIEmbeddings(credentials=creds)
# Swap out the LLM and Embeddings for their VertexAI counterparts
metrics = select_metrics(metrics)
for m in metrics:
# change LLM for metric
m.__setattr__("llm", ragas_vertexai_llm)
# check if this metric needs embeddings
if hasattr(m, "embeddings"):
# if so change with VertexAI Embeddings
m.__setattr__("embeddings", vertextai_embeddings)
return ragas_vertexai_llm, vertextai_embeddings, metrics
creds = authenticate_w_google()
# Create Langchain LLM and Embeddings for VertexAI
ragas_vertexai_llm, vertextai_embeddings, metrics = use_vertexai(metrics, creds)
result = evaluate(
df, metrics=metrics, llm=ragas_vertexai_llm, embeddings=vertextai_embeddings
)
Thanks in advance for your support!
hey thanks for raising the issue @sebastian-piedoux-dv, will fix this shortly
Thanks a lot @jjmachan ! Don't hesitate to reach out if you need any extra information.
Hi @jjmachan, do you already have an update, by any chance? Thanks!
@sebastian-piedoux-dv - Did you find any workaround?
@jjmachan: First of all, Ragas is really awesome! Second, have you by any chance had time to look into this issue? I'd really appreciate your support on this!
Hey, just wondering if there's any updates on this - this is amazing by the way, thank you!
I was following the official example from here: https://docs.ragas.io/en/latest/howtos/customisations/gcp-vertexai.html . It didn't work. To make it work with VertexAI, I had to wrap the ChatVertexAI instance in LangchainLLMWrapper. Then it worked. @jjmachan, please update the documentation :)
Here is the code that worked:
```python
import google.auth
from langchain.chat_models import ChatVertexAI
from langchain.embeddings import VertexAIEmbeddings
from ragas import evaluate
from ragas.llms.base import LangchainLLMWrapper

creds, project_id = google.auth.default()

ragas_vertexai_llm = ChatVertexAI(model_name="gemini-pro", credentials=creds)
wrapper = LangchainLLMWrapper(ragas_vertexai_llm)
vertexai_embeddings = VertexAIEmbeddings(
    model_name="textembedding-gecko@003", credentials=creds
)

for m in metrics:
    # change the LLM for the metric
    setattr(m, "llm", wrapper)
    # if the metric also needs embeddings, swap those in as well
    if hasattr(m, "embeddings"):
        setattr(m, "embeddings", vertexai_embeddings)

result = evaluate(
    amnesty_qa["eval"].select(range(1)),  # using 1 row as an example due to quota constraints
    metrics=metrics,
    llm=wrapper,
    embeddings=vertexai_embeddings,
)
```
Hope this helps :)
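In case it clarifies why the wrapper is needed: Ragas calls `set_run_config()` on whatever object a metric's `llm` attribute points to, a method that `LangchainLLMWrapper` provides but a bare `ChatVertexAI` does not. Here is a toy sketch of that delegation pattern; the class names are illustrative stand-ins, not the real implementations:

```python
class FakeChatModel:
    """Illustrative stand-in for a raw LangChain chat model such as ChatVertexAI."""
    def invoke(self, prompt):
        return f"answer to: {prompt}"

class FakeLLMWrapper:
    """Illustrative stand-in for ragas' LangchainLLMWrapper: it adds the
    ragas-side interface (set_run_config) and delegates calls to the model."""
    def __init__(self, langchain_llm):
        self.langchain_llm = langchain_llm
        self.run_config = None

    def set_run_config(self, run_config):
        self.run_config = run_config

    def generate(self, prompt):
        # delegate the actual completion to the wrapped LangChain model
        return self.langchain_llm.invoke(prompt)

raw = FakeChatModel()
print(hasattr(raw, "set_run_config"))  # False: this is the AttributeError

wrapped = FakeLLMWrapper(raw)
wrapped.set_run_config({"timeout": 60})  # now succeeds
print(wrapped.generate("What is Ragas?"))  # answer to: What is Ragas?
```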
@misha-chertushkin - Thanks, it worked.