Learning new things trick
Hi great 01 team, the most amazing thing about 01 to me is its ability to learn new skills, which seems different from common LLMs. Since it's an open-source project, could you share some ideas for extending a model's memory beyond its context window? Thanks in advance.
There is a lot to think about here! In Open Interpreter, we have support for profiles: YAML and Python files that extend Open Interpreter. We love the work that MemGPT has done; something similar to that, using a profile file in OI, would be a great first step.
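To make that concrete, here's a minimal sketch of what a Python profile along those lines might look like. It is only an illustration: the memory file location is made up, and it assumes a profile can import the shared `interpreter` object and set `custom_instructions` to surface past notes on every run.

```python
# memory_profile.py — hypothetical OI profile sketch, not a confirmed API.
from pathlib import Path

from interpreter import interpreter  # assumed: profiles can configure this object

MEMORY_FILE = Path.home() / ".oi_memory.md"  # assumed location for persisted notes
MEMORY_FILE.touch(exist_ok=True)

# Surface previously saved notes to the model, MemGPT-style "core memory".
interpreter.custom_instructions = (
    f"You have a persistent memory file at {MEMORY_FILE}. Its current contents:\n\n"
    f"{MEMORY_FILE.read_text()}\n\n"
    "When you learn something worth keeping, append a short note to that file."
)
```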
I love the idea of having files that Open Interpreter can read from and write to. If the model decides something is worth remembering, it could choose to take notes on it, just like a person would.
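One loose sketch of that note-taking idea, assuming the model simply runs Python through OI's code execution (the helper names and file path here are made up for illustration):

```python
# notes.py — hypothetical helpers the model could run to keep its own notes.
from datetime import datetime
from pathlib import Path

NOTES = Path.home() / ".oi_notes.md"  # assumed location, not an OI convention

def remember(note: str) -> None:
    """Append a timestamped note so it survives beyond the context window."""
    with NOTES.open("a") as f:
        f.write(f"- {datetime.now():%Y-%m-%d %H:%M} {note}\n")

def recall() -> str:
    """Return everything remembered so far (empty string if no notes yet)."""
    return NOTES.read_text() if NOTES.exists() else ""
```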
I'm curious what kind of ideas you have for memory?
Thanks for your reply! I haven't done much; just seeing the model learn new things is very surprising. I just saw the Larimar paper, which seems pretty smart. They use an external memory matrix that the model retrieves memories from.
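Very roughly, that kind of external-memory read can be pictured as attention over a learned memory matrix. This is only a loose illustration of the general idea, not Larimar's actual formulation:

```python
# Loose sketch of reading from an external memory matrix: the model forms a
# query, attends over stored slots, and gets back a weighted mix of memories.
import numpy as np

d = 64                               # embedding size (arbitrary for the sketch)
memory = np.random.randn(128, d)     # 128 stored memory slots
query = np.random.randn(d)           # produced by the model for the current input

scores = memory @ query              # similarity of the query to each slot
weights = np.exp(scores - scores.max())
weights /= weights.sum()             # softmax over slots
readout = weights @ memory           # retrieved memory vector, shape (d,)
```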
Awesome paper. "a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory"
Wow!
Well, I'm going to close this issue for now. Feel free to open a new one if you need more help!