
Create an internal reasoning / search_memory tool

Open syntex01 opened this issue 1 year ago • 12 comments

I think the agent should speak to itself and reason about which topics would be helpful for it to know right now. We can then provide the closest match for each topic. In this internal chat, the bot could later also access various functions, like searching the web.
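
Roughly what I mean, as a sketch (the openai ChatCompletion call is real; memory.closest_match is just a placeholder for whatever retrieval we end up using):

import openai

def useful_topics(conversation: str) -> list[str]:
    # Ask the model, in a side channel the user never sees, which topics
    # would help it answer the current conversation.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "List the topics (one per line) that would "
                                          "be useful to recall before replying."},
            {"role": "user", "content": conversation},
        ],
    )
    return [t.strip() for t in resp.choices[0].message.content.splitlines() if t.strip()]

def recall(conversation: str, memory) -> list[str]:
    # For each topic the agent asked itself about, return the closest stored memory.
    # `memory.closest_match` is a stand-in for the actual vector/keyword search.
    return [memory.closest_match(topic) for topic in useful_topics(conversation)]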

syntex01 avatar Mar 23 '23 21:03 syntex01

I read the Microsoft research paper on "Sparks of AGI": https://arxiv.org/abs/2303.12712. It seems like GPT-4 is a lot better at using tools and at reasoning. GPT-3.5 is hit or miss: sometimes it uses the tool provided in in-context examples, sometimes it does not. One thing I found helps a lot is providing an example convo before the system message, since GPT-3.5 tends to ignore the system message a lot.


[
    {"role": "user", "content": "I want to book an appointment with Dr Bob"},
    {"role": "assistant", "content": "AVAILABLE_TIMES('Dr Bob') -> …"},
    {"role": "assistant", "content": "Here are the available times for Dr Bob ..."},

    {"role": "system", "content": "You are a booking assistant blah blah ..."} # Example convo before this
    
]
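
The whole list then just gets passed to the chat endpoint as-is, with the real user input appended at the end (a sketch, assuming the openai 0.x ChatCompletion API):

import openai

# `messages` is the list shown above: the worked example turns first,
# then the system message; the real user input gets appended last.
def reply(messages: list[dict], user_message: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages + [{"role": "user", "content": user_message}],
    )
    return resp.choices[0].message.content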

kyb3r avatar Mar 23 '23 22:03 kyb3r

I also noticed this with other projects. GPT-4 is very responsive and rarely makes mistakes when provided with clear instructions. You can even tell it to execute certain functions when it sees the need for it (by identifying the function name in the output).
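
Something as simple as scanning the reply for a known function name works for that (a sketch; FUNCTIONS is a hypothetical registry of the tools the model has been told about):

import re

# Hypothetical registry of callable tools the model has been told about.
FUNCTIONS = {"AVAILABLE_TIMES": lambda doctor: ["10:00", "14:30"]}

def maybe_execute(model_output: str):
    # Look for NAME(args) anywhere in the model's reply and run it if the name is known.
    match = re.search(r"([A-Z_]+)\((.*?)\)", model_output)
    if match and match.group(1) in FUNCTIONS:
        return FUNCTIONS[match.group(1)](match.group(2).strip("'\" "))
    return None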

syntex01 avatar Mar 23 '23 22:03 syntex01

Take a look at this: https://openai.com/blog/chatgpt-plugins

kyb3r avatar Mar 23 '23 22:03 kyb3r

That's really nice

syntex01 avatar Mar 23 '23 22:03 syntex01

This is actually insane. It looks like OpenAI made models fine-tuned to learn how to use tools via their plugin specification.

From the announcement: "For now, plugins are designed for calling backend APIs, but we are exploring plugins that can call client-side APIs as well."

Client-side APIs would benefit us; that would give us a simple way to make a tool for accessing memories.

kyb3r avatar Mar 23 '23 22:03 kyb3r

Yeah, the advancements GPT-4 brought do allow for many things that were harder with 3.5. I'm really happy to see the announcement about the plugins.

syntex01 avatar Mar 23 '23 22:03 syntex01

I have access to GPT-4 now and I have already tested out tools:

[screenshot: IMG_0242]

It works well; GPT-4 is smart :)

It should be easy to create a memory tool that takes in a query as an argument.
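
Sketch of the shape such a tool could take (the substring matching here is only a stand-in for a real embedding/vector search):

from dataclasses import dataclass

@dataclass
class Memory:
    text: str

# Placeholder store: a real implementation would use embeddings; this just
# does naive keyword matching so the sketch runs on its own.
MEMORIES = [Memory("We discussed the Hierarchical Memory Consolidation System (HMCS) in 2023.")]

def search_memory(query: str, top_k: int = 3) -> str:
    """Tool the model can call: return stored memories most relevant to `query`."""
    hits = [m for m in MEMORIES if any(w.lower() in m.text.lower() for w in query.split())]
    return "\n".join(m.text for m in hits[:top_k]) or "No relevant memories found."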

kyb3r avatar Mar 28 '23 02:03 kyb3r

Did you also get access to tools?

syntex01 avatar Mar 28 '23 06:03 syntex01

No, I just made my own implementation lol.

You define your own tools, and the ChatAgent class will let GPT-4 know about these tools and how to use them.
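
The rough idea (not the actual ChatAgent code, which isn't in this repo yet; just a plausible sketch of how the tool descriptions could end up in the model's prompt):

def tool_prompt(tools: dict) -> str:
    # Turn a {name: (signature, description)} registry into text the agent puts in
    # the system prompt, so the model knows what it may call and how.
    lines = ["You can call the following tools by writing FUNCTION CALL: name {args}:"]
    for name, (signature, description) in tools.items():
        lines.append(f"- {name}{signature}: {description}")
    return "\n".join(lines)

TOOLS = {
    "search_memory": ("(query: str)", "search past conversations for relevant memories"),
}
# tool_prompt(TOOLS) gets prepended to the system message by the agent.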

It works reliably with GPT-4. I finished implementing it in another (private) repo; I'll try to finish merging it into this one soon.

kyb3r avatar Mar 28 '23 07:03 kyb3r

Alrighty I got it to work!

You: Tell me about the HMCS system we discussed in the past 

FUNCTION CALL:  search_memory {'query': 'HMCS system'}

Agent: In the past, we discussed the Hierarchical Memory Consolidation System (HMCS), which was developed in 2023. HMCS is designed to enhance the memory capacity of large language models, particularly those that rely on natural language processing. This new memory system can improve the performance of AI models, making them more efficient and effective in processing natural language data. Note that HMCS is not related to Her Majesty's Canadian Ship, which is a prefix used for ships in the Canadian Navy.
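
The loop behind that transcript looks roughly like this (a simplified sketch, not the actual ChatAgent code; the FUNCTION CALL format and argument parsing are assumptions):

import ast
import re
import openai

FUNCTION_CALL = re.compile(r"FUNCTION CALL:\s*(\w+)\s*(\{.*\})")

def run_agent(messages: list[dict], tools: dict) -> str:
    # Ask the model for a reply; if it asks for a tool instead, run the tool,
    # append the result, and ask again until we get a plain answer.
    while True:
        resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        reply = resp.choices[0].message.content
        call = FUNCTION_CALL.search(reply)
        if not call:
            return reply
        name, args = call.group(1), ast.literal_eval(call.group(2))
        result = tools[name](**args)
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "system", "content": f"{name} returned: {result}"})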

kyb3r avatar Mar 28 '23 20:03 kyb3r

Ah, really nice :) Just so you know, I am currently skiing, so I can't implement my ideas right now, but I will be back in 6 days :) I will check the updates you make to the code, however.

syntex01 avatar Mar 28 '23 20:03 syntex01

Currently, this only works with GPT-4 (since it's smart enough to reason about when to use a tool and when not to).

kyb3r avatar Mar 29 '23 06:03 kyb3r