
Kernel Crashed while generating the testset

Open anirbanpupi opened this issue 1 year ago • 6 comments

[ ] I have checked the documentation and related resources and couldn't resolve my bug.

Bug Description

I wanted to generate a test dataset to evaluate my RAG application, following the instructions in the official documentation. When I call `generator.generate_with_llamaindex_docs`, the ipykernel crashes for no apparent reason.

Ragas version: 0.1.3
Python version: 3.10.13
Ipykernel version: 6.29.3

Code to Reproduce

```python
from langchain.text_splitter import TokenTextSplitter

from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.testset.docstore import InMemoryDocumentStore
from ragas.testset.extractor import KeyphraseExtractor
from ragas.testset.generator import TestsetGenerator  # import missing from the original snippet

langchain_llm_model = LangchainLLMWrapper(load_llm())
langchain_embed_model = LangchainEmbeddingsWrapper(load_embed_model())

splitter = TokenTextSplitter(chunk_size=256, chunk_overlap=0)
keyphrase_extractor = KeyphraseExtractor(llm=langchain_llm_model)

docstore = InMemoryDocumentStore(
    splitter=splitter,
    embeddings=langchain_embed_model,
    extractor=keyphrase_extractor,
)

generator = TestsetGenerator(
    generator_llm=langchain_llm_model,
    critic_llm=langchain_llm_model,
    embeddings=langchain_embed_model,
    docstore=docstore,
)

testset = generator.generate_with_llamaindex_docs(
    documents=nodes,
    test_size=10,
)  # kernel crashes here
```

Error trace

No error trace is produced; the Jupyter kernel simply crashes.

Expected behavior

The function should generate a synthetic test dataset, as described in the documentation.
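When a kernel dies with no Python traceback, the crash typically happens in native code (e.g. inside quantized-model kernels) or the process is OOM-killed. As a general debugging sketch (not part of the original report), enabling the standard-library `faulthandler` at the top of the notebook makes a hard crash dump a traceback to stderr instead of failing silently:

```python
import faulthandler

# Dump the Python traceback to stderr if the interpreter crashes hard
# (segfault, abort) instead of the kernel dying silently. This is a
# general debugging aid, not a ragas-specific fix.
faulthandler.enable()

# ...then run the failing call as usual, e.g.:
# testset = generator.generate_with_llamaindex_docs(documents=nodes, test_size=10)

print(faulthandler.is_enabled())  # → True
```

In Jupyter, stderr output from a crashing kernel often lands in the terminal that launched the notebook server, so it is worth watching that terminal while reproducing the crash.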

Additional context

- Fetching nodes from the docstore using the llama_index docstore.
- LLM: langchain `HuggingFaceTextGenInference` (model_name = "TheBloke/Llama-2-13B-chat-GPTQ")
- Embeddings: langchain `HuggingFaceEmbeddings` (model_name = "BAAI/bge-large-en-v1.5")
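A 13B GPTQ model plus a large embedding model can exhaust RAM or VRAM, and an OOM kill looks exactly like a silent kernel crash. A rough pre-flight check is sketched below; the 8 GiB figure is an assumed ballpark for 4-bit 13B weights, not an exact requirement, and the `sysconf` calls are Linux-specific:

```python
import os

def has_enough_memory(required_gib: float, available_gib: float) -> bool:
    """Return True if available memory covers the (estimated) requirement."""
    return available_gib >= required_gib

# Available physical memory on Linux via sysconf (other platforms would
# need psutil or similar).
avail_gib = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_AVPHYS_PAGES") / 1024**3

# ~8 GiB is an assumed ballpark for the 4-bit 13B weights alone.
print(has_enough_memory(8.0, avail_gib))
```

If this prints `False`, the silent crash is likely the OOM killer; `dmesg` on the host would confirm it.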

anirbanpupi avatar Mar 05 '24 06:03 anirbanpupi

@shahules786 Please help me fix this bug.

Anirban20001962 avatar Mar 13 '24 02:03 Anirban20001962

Hey @Anirban20001962, this could be an issue with the async process. Have you tried passing is_async=False?

shahules786 avatar Mar 13 '24 02:03 shahules786

> Hey @Anirban20001962, this could be an issue with the async process. Have you tried passing is_async=False?

Yes, I have tried that as well, but the kernel still crashes.

Anirban20001962 avatar Mar 13 '24 06:03 Anirban20001962

same issue here

davidzimmerman19 avatar Mar 13 '24 22:03 davidzimmerman19

@shahules786 Were you able to fix the issue?

anirbanpupi avatar Mar 17 '24 15:03 anirbanpupi

Hi @Anirban20001962, we are working on it. There are several similar issues that need to be addressed in the PR. Thanks for your patience.

shahules786 avatar Mar 18 '24 04:03 shahules786