
Inbuilt LLM and Memory Configuration for Agents

Open cmaliwal opened this issue 7 months ago • 2 comments

Prerequisites

  • [X] I checked the documentation and made sure this feature does not already exist
  • [X] I checked the existing issues to make sure this feature has not already been requested

Feature

I would like to suggest adding inbuilt LLM (Large Language Model) and memory configuration options for agents. Agents currently have storage, but first-class LLM and memory configuration would significantly enhance their capabilities: the LLM provides reasoning power, and memory provides personalization across interactions.

This feature would allow agents to become smarter and more personalized for users by enabling them to remember and use past interactions. Here is an example of how this could be implemented:

For LLM configuration:

# Example with an Ollama-backed model (any LLM wrapper could be supported)
my_llm = Ollama(
    model="llama2",
    base_url="http://localhost:11434",
)

# Proposed: attach the LLM to the agent via a new `llm` parameter
alice = Agent(name="alice", seed="alice recovery phrase", llm=my_llm)

For memory configuration:

# Store a memory for a user (assumes `memory` is a configured memory store)
result = memory.add(
    "I am working on improving my tennis skills. Suggest some online courses.",
    user_id="alice",
    metadata={"category": "hobbies"},
)
print(result)

# Search memories
related_memories = memory.search(query="What are Alice's hobbies?", user_id="alice")
print(related_memories)
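To make the intent of the memory interface above concrete, here is a minimal sketch of an in-process store with `add`/`search` keyed by `user_id`. All names and signatures here are assumptions for illustration, not an existing uAgents API; a real implementation would likely use embeddings rather than keyword matching:

```python
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    text: str
    metadata: dict = field(default_factory=dict)


class SimpleMemory:
    """Hypothetical per-user memory store with add/search (sketch only)."""

    def __init__(self):
        self._records: dict[str, list[MemoryRecord]] = {}

    def add(self, text, user_id, metadata=None):
        record = MemoryRecord(text=text, metadata=metadata or {})
        self._records.setdefault(user_id, []).append(record)
        return record

    def search(self, query, user_id):
        # Naive keyword overlap; a production store would use vector search.
        q_words = {w.strip("?.,!").lower() for w in query.split()}
        return [
            r for r in self._records.get(user_id, [])
            if q_words & {w.strip("?.,!").lower() for w in r.text.split()}
        ]
```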

Additional Information (Optional)

Implementing this feature would make agents more personalized, enhancing the interaction experience and making them more useful across applications. Both the LLM and memory configurations should be optional, so users can opt in based on their specific needs and existing behavior is unchanged.
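To illustrate the "optional" point, here is a sketch of how the constructor could accept the new parameters with defaults that preserve current behavior. This is a stand-in class for illustration only, not the real uAgents `Agent`, and the `llm`/`memory` parameter names are assumptions:

```python
class Agent:
    """Sketch: optional llm/memory hooks; None keeps today's behavior."""

    def __init__(self, name, seed=None, llm=None, memory=None):
        self.name = name
        self.seed = seed
        self.llm = llm          # None -> no reasoning backend attached
        self.memory = memory    # None -> storage-only, as today

    def recall(self, query, user_id):
        # Only consult memory when one was configured.
        if self.memory is None:
            return []
        return self.memory.search(query=query, user_id=user_id)
```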

cmaliwal · Jul 23 '24 05:07