LlamaIndex model as a tool for LangChain
System Info
I got an error when trying to load a custom LLM for LlamaIndex:
# setup prompts - specific to StableLM
from llama_index.prompts.prompts import SimpleInputPrompt
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
# This will wrap the default prompts that are internal to llama-index
query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>")
import torch
from llama_index.llms import HuggingFaceLLM

llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.7, "do_sample": False, "return_dict_in_generate": True},
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    device_map="auto",
    stopping_ids=[50278, 50279, 50277, 1, 0],
    tokenizer_kwargs={"max_length": 4096},
)
# load in HF embedding model from langchain
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding, ServiceContext, SimpleDirectoryReader, VectorStoreIndex

embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm, embed_model=embed_model)
documents = SimpleDirectoryReader('data\\abnamro').load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context, show_progress=True)
from langchain.agents import Tool
tools = [
    Tool(
        name="LlamaIndex",
        func=lambda q: str(index.as_query_engine(
            retriever_mode="embedding",
            verbose=True,
            service_context=service_context,
        ).query(q)),
        description="useful for when you want to answer questions about finance. The input to this tool should be a complete english sentence.",
        return_direct=True,
    ),
]
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.agents import AgentType, initialize_agent
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_executor = initialize_agent(
    tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory
)
agent_executor.run(input="What is inflation in the Czech Republic?")
and got the following error:
ValidationError Traceback (most recent call last)
Cell In[13], line 1
----> 1 agent_executor = initialize_agent(
2 tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,memory=memory
3 )
4 agent_executor.run(input="What is inflation in the Czech Republic?")
File F:\anaconda\lib\site-packages\langchain\agents\initialize.py:57, in initialize_agent(tools, llm, agent, callback_manager, agent_path, agent_kwargs, tags, **kwargs)
55 agent_cls = AGENT_TO_CLASS[agent]
56 agent_kwargs = agent_kwargs or {}
---> 57 agent_obj = agent_cls.from_llm_and_tools(
58 llm, tools, callback_manager=callback_manager, **agent_kwargs
59 )
60 elif agent_path is not None:
61 agent_obj = load_agent(
62 agent_path, llm=llm, tools=tools, callback_manager=callback_manager
63 )
File F:\anaconda\lib\site-packages\langchain\agents\conversational\base.py:115, in ConversationalAgent.from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, prefix, suffix, format_instructions, ai_prefix, human_prefix, input_variables, **kwargs)
105 cls._validate_tools(tools)
106 prompt = cls.create_prompt(
107 tools,
108 ai_prefix=ai_prefix,
(...)
113 input_variables=input_variables,
114 )
--> 115 llm_chain = LLMChain(
116 llm=llm,
117 prompt=prompt,
118 callback_manager=callback_manager,
119 )
120 tool_names = [tool.name for tool in tools]
121 _output_parser = output_parser or cls._get_default_output_parser(
122 ai_prefix=ai_prefix
123 )
File F:\anaconda\lib\site-packages\langchain\load\serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File F:\anaconda\lib\site-packages\pydantic\main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMChain
llm
value is not a valid dict (type=type_error.dict)
Who can help?
@hwchase17 @agola11
Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
Reproduction
Connect LlamaIndex with LangChain
Expected behavior
Load custom LLM
Answer generated by a 🤖
Answer
From your description, it appears that you're encountering a ValidationError when trying to initialize an agent in LangChain using a custom language model. This error is likely due to the initialize_agent function expecting an instance of a class that inherits from BaseLanguageModel, but instead receiving an instance of HuggingFaceLLM.
In your code, you've initialized llm as an instance of HuggingFaceLLM and then passed it to initialize_agent. However, HuggingFaceLLM does not inherit from BaseLanguageModel, which is likely causing the ValidationError.
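For context, one way to satisfy that requirement while keeping a custom model would be to wrap it in LangChain's custom-LLM base class. This is only a minimal sketch, assuming the llama_index model exposes a complete() method whose result has a .text attribute; the wrapper class itself is hypothetical, not an existing API:
from typing import Any, List, Optional
from langchain.llms.base import LLM

class LlamaIndexLLM(LLM):
    # hypothetical wrapper holding the llama_index HuggingFaceLLM instance
    llama_llm: Any

    @property
    def _llm_type(self) -> str:
        return "llama-index-wrapper"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # assumption: llama_index LLMs expose complete(), returning an
        # object whose .text holds the generated string
        return self.llama_llm.complete(prompt).text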
To address this issue, you should initialize llm as an instance of a class that does inherit from BaseLanguageModel. For instance, you could use the OpenAI class from LangChain:
from langchain.llms import OpenAI
llm = OpenAI()
Then, you can pass this llm instance to initialize_agent:
agent_executor = initialize_agent(
    tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory
)
This should hopefully resolve the ValidationError you're encountering. However, please note that this is just one potential solution based on the information provided. If you're still experiencing issues after trying this, it might be worth checking if you're using the latest version of LangChain, as updates often include bug fixes and improvements.
If you believe this is a bug that could impact other users, you're encouraged to make a pull request to help improve LangChain. Your contributions would be greatly appreciated by the community.
Sources
- docs/extras/modules/model_io/models/llms/how_to/human_input_llm.ipynb
- docs/extras/use_cases/agents/wikibase_agent.ipynb
- docs/extras/modules/model_io/models/llms/integrations/openlm.ipynb
- langchain/chains/llm.py
- docs/snippets/get_started/quickstart/chains_chat_models.mdx
- docs/snippets/modules/chains/foundational/llm_chain.mdx
- docs/snippets/modules/agents/how_to/custom_llm_chat_agent.mdx
- docs/extras/modules/model_io/models/llms/integrations/openai.ipynb
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
@vladimircape The bot's answer is correct about the error message. You cannot pass the LlamaIndex model to initialize_agent:
agent_executor = initialize_agent(
    # llm must be a LangChain LLM
    tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory
)
You have to use a LangChain-native LLM for the agent; the LlamaIndex model will then be used when the agent calls the tool.
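Concretely, the split looks like this (a minimal sketch; ChatOpenAI merely stands in for any LangChain-native model, and tools/memory are the objects defined earlier in the issue):
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType, initialize_agent

# the agent loop itself is driven by a LangChain-native LLM...
langchain_llm = ChatOpenAI(temperature=0)
agent_executor = initialize_agent(
    tools, langchain_llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory
)
# ...while the LlamaIndex model only runs inside the "LlamaIndex" tool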
When I used a non-custom LLM in LlamaIndex, it was working.
@vladimircape Can you share code?
Sorry, in my previous comment I wrote it incorrectly; here is an example of what I meant:
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from langchain.agents import Tool, initialize_agent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI

documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents, show_progress=True)
tools = [
    Tool(
        name="LlamaIndex",
        func=lambda q: str(index.as_query_engine(
            retriever_mode="embedding",
            verbose=True,
            # no custom service_context needed; the index above was built with defaults
        ).query(q)),
        description="useful for when you want to answer questions about finance. The input to this tool should be a complete english sentence.",
        return_direct=True,
    ),
]
# set Logging to DEBUG for more detailed outputs
memory = ConversationBufferMemory(memory_key="chat_history")
llm = ChatOpenAI(temperature=0)  # a LangChain-native LLM drives the agent
agent_executor = initialize_agent(
    tools, llm, agent="conversational-react-description", memory=memory
)
Yes, you are right, you can close the issue.