rags
Implement using llama.cpp as the LLM model
I am trying to implement this using an open-source LLM with llama.cpp, but I am getting this error:
"ValueError: Must pass in vector index for CondensePlusContextChatEngine."
I am new to LlamaIndex as well. Can anyone help me with what exactly I need to configure in order to run RAGs?
See our customization tutorial here (specifically the part about customizing LLMs): https://docs.llamaindex.ai/en/latest/getting_started/customization.html
Also see the LLM module guide: https://docs.llamaindex.ai/en/latest/module_guides/models/llms.html
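Following those docs, a minimal sketch of wiring `LlamaCPP` into LlamaIndex might look like the below. This is an assumption-laden example, not the `rags` app's actual configuration: the import paths match older (pre-1.0) LlamaIndex releases, `./data` and the model path are placeholders you must point at your own documents and GGUF weights, and the error in the question suggests the chat engine needs to be built from a vector index (e.g. via `index.as_chat_engine`) rather than constructed standalone.

```python
# Sketch: configuring a local llama.cpp model as the LLM for a RAG pipeline.
# Assumes llama-index (pre-1.0 API) and llama-cpp-python are installed, and
# that a GGUF model file exists at the path below -- both are placeholders.
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms import LlamaCPP

# Load the local model via llama.cpp.
llm = LlamaCPP(
    model_path="./models/llama-2-7b-chat.gguf",  # placeholder path
    temperature=0.1,
    context_window=3900,
)

# Use the local LLM (and a local embedding model) instead of OpenAI defaults.
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")

# Build the vector index the chat engine needs.
documents = SimpleDirectoryReader("./data").load_data()  # placeholder directory
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# Creating the chat engine from the index supplies the vector index that
# CondensePlusContextChatEngine complains about when it is missing.
chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
response = chat_engine.chat("What do these documents say?")
print(response)
```

The key point is the last step: `condense_plus_context` mode requires retrieval context, so the engine should be created from the index rather than instantiated directly without one.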
Try asking ChatGPT-4.
@adeelhasan19 did you successfully load a local LLM with llama.cpp?