Memory-Powered Agentic AI is deeply flawed
-
Your _embed function is used to return "similar" episodes by picking the episode with the nearest score, but _embed is just hash(...) % 10000. Hashing a string destroys any semantic meaning it had: hash functions are designed so that similar inputs produce uncorrelated outputs, and taking the result modulo 10000 adds collisions on top of that, so nearby scores tell you nothing about related meaning. You could have used any number of techniques that actually capture relatedness, such as computing the squared (or cosine) distance between real embedding vectors, or asking an LLM to generate a list of tags and counting how many overlap. Either way, you never use similar_episodes in your plan function at all, so you haven't actually tested whether this retrieval returns anything relevant.
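For contrast, here is a minimal sketch (my own, not from your code) of an episodic store whose _embed preserves meaning. It assumes a sentence-transformers model is available; the class name, store method, and model choice are hypothetical stand-ins for whatever you actually have:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # any real embedding model works here

class EpisodicMemory:
    """Stores episodes as (text, vector) pairs and recalls them by cosine similarity."""

    def __init__(self):
        self._model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice
        self._episodes = []  # list of (text, np.ndarray) pairs

    def _embed(self, text: str) -> np.ndarray:
        # A real embedding maps similar meanings to nearby vectors,
        # unlike hash(text) % 10000, which scatters them essentially at random.
        return self._model.encode(text, normalize_embeddings=True)

    def store(self, text: str) -> None:
        self._episodes.append((text, self._embed(text)))

    def similar_episodes(self, query: str, k: int = 3) -> list[str]:
        # With normalized vectors, cosine similarity is just a dot product.
        q = self._embed(query)
        ranked = sorted(self._episodes, key=lambda ep: float(q @ ep[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

And whatever plan() produces would need to actually consume the result of similar_episodes; otherwise the retrieval is untested dead code, which is exactly the problem with the demo as posted.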
-
Your SemanticMemory is just a dictionary storing a counter. Any "semantic" ability it provides relies on the calling code to determine both the semantic meaning of the query and whether the result was a success. Your calling code does neither; it simply marks every recommendation as successful.
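At a minimum, the store needs a real outcome signal fed back from the caller rather than a hard-coded success. A rough sketch of what I mean, with hypothetical names that are not your API:

```python
from collections import defaultdict

class SemanticMemory:
    """Tracks how often each fact or recommendation actually worked, not just how often it was made."""

    def __init__(self):
        self._stats = defaultdict(lambda: {"tries": 0, "successes": 0})

    def record(self, key: str, succeeded: bool) -> None:
        self._stats[key]["tries"] += 1
        if succeeded:
            self._stats[key]["successes"] += 1

    def success_rate(self, key: str) -> float:
        s = self._stats[key]
        return s["successes"] / s["tries"] if s["tries"] else 0.0

# The caller has to supply a genuine outcome -- user feedback, a passing test,
# a follow-up check -- instead of always passing succeeded=True:
memory = SemanticMemory()
memory.record("recommend: restart the service", succeeded=False)
```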
-
I'll assume MemoryAgent is a stub that you expect the user to fill in with actual calls to an LLM to do the real work. Otherwise it's just a poor Alexa-style bot that must receive specific keywords to act on, with a highly structured call-and-response. Even granting that, in this oversimplified example you never show where the LLM calls would go or how revise_plan is supposed to work.
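If it were real, plan and revise_plan would each be a model call conditioned on retrieved memory, with the outcome written back afterward. A rough outline of what I'd expect to see; every name here is a hypothetical stand-in, and the llm argument is just any text-in/text-out wrapper around your model of choice:

```python
from typing import Callable

class MemoryAgent:
    """Sketch only: planning and revision are LLM calls grounded in retrieved memory,
    not keyword matching."""

    def __init__(self, llm: Callable[[str], str], episodic, semantic):
        self.llm = llm          # e.g. a thin wrapper around a chat-completion API
        self.episodic = episodic
        self.semantic = semantic

    def plan(self, goal: str) -> str:
        # Retrieval feeds the prompt instead of being ignored.
        context = "\n".join(self.episodic.similar_episodes(goal))
        return self.llm(
            f"Goal: {goal}\nRelevant past episodes:\n{context}\nProduce a step-by-step plan."
        )

    def revise_plan(self, plan: str, failure: str) -> str:
        # Revision is another model call that sees the failed plan and the error,
        # and the failure gets recorded so semantic memory reflects reality
        # (using the record() interface from the earlier sketch).
        self.semantic.record(plan, succeeded=False)
        return self.llm(f"This plan:\n{plan}\nfailed with: {failure}\nRevise it.")
```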
I'm not sure whether you asked an AI to write this and posted it without reading it, don't understand it yourself, or are trying to be deceptive. Either way, this demo in no way shows how to build a working semantic or episodic memory.