
LLM to LLM communication (LLM processor)

Open · nembal opened this issue 2 years ago · 0 comments

In your blog post about the evolution of Deep Neural Nets, you talked about powerful AIs that can handle complex tasks, making it less important to fine-tune local LLMs. However, you've more recently suggested the concept of LLMs as processors, which would let various models interact and communicate with each other directly.

I'm asking because I am exploring LLM-to-LLM communication. I believe that while AGI will power many services, we'll also have specialized AIs or LLMs embedded in individual services. This points to a decentralized computing model where not everything relies on one model; instead, models can interact with each other directly. For this kind of AI/LLM communication, traditional APIs might not be enough. A kind of LLM gateway that improves connections between these models (discovery, communication, etc.) could be key; a rough sketch of what I mean follows below.
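To make the gateway idea more concrete, here is a minimal sketch of what I have in mind: a capability-based registry that specialized models register with, plus simple discovery and routing between them. Everything here (`LLMGateway`, `ModelEndpoint`, the capability strings) is a hypothetical illustration of the concept, not an existing library or API.

```python
# Minimal sketch of an "LLM gateway": specialized models register their
# capabilities, and the gateway discovers and routes requests between them.
# All names are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ModelEndpoint:
    """A specialized LLM exposed to the gateway."""
    name: str
    capabilities: List[str]            # e.g. ["code", "summarize"]
    generate: Callable[[str], str]     # prompt -> completion (stand-in for a real model call)


class LLMGateway:
    """Registry plus routing: the 'discovery and communication' layer."""

    def __init__(self) -> None:
        self.registry: Dict[str, ModelEndpoint] = {}

    def register(self, endpoint: ModelEndpoint) -> None:
        self.registry[endpoint.name] = endpoint

    def discover(self, capability: str) -> List[ModelEndpoint]:
        return [m for m in self.registry.values() if capability in m.capabilities]

    def route(self, capability: str, prompt: str) -> str:
        candidates = self.discover(capability)
        if not candidates:
            raise LookupError(f"no registered model advertises capability {capability!r}")
        return candidates[0].generate(prompt)  # naive policy: first match wins


# Usage: two toy "models" talking through the gateway instead of a hard-coded API.
gateway = LLMGateway()
gateway.register(ModelEndpoint("coder", ["code"], lambda p: f"[coder] {p}"))
gateway.register(ModelEndpoint("summarizer", ["summarize"], lambda p: f"[summarizer] {p}"))
print(gateway.route("summarize", "Condense this issue into one sentence."))
```

In a real system the `generate` callables would wrap remote model endpoints, and routing could be smarter than "first match", but even this toy version shows why the gateway, rather than each model, would own discovery and dispatch.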

1. What are your perspectives on the current development of specialized LLMs and AI models?
2. Do you see potential in a gateway-style inter-AI/LLM infrastructure? If so, I would love to connect and share more.

Thank you, @karpathy, for any response!

nembal · Nov 25 '23 11:11