AI-Writer
Take advantage of increased LLM context window
RAG is dying as context windows keep growing. With 1M-token context windows, who needs RAG? Caching course material and running vector search over it seems like the way to go.
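A minimal, illustrative sketch of the cache-then-search idea: embed the course-material chunks once up front, then answer each query with a similarity search over the cached vectors. The `embed` function below is a hypothetical stand-in (a bag-of-words counter) for a real embedding model, and `CourseCache` is an assumed name, not an existing API.

```python
# Illustrative sketch only: cache course-material chunks once, then run a
# toy vector search over them. A real system would use an embedding model;
# here a bag-of-words vector stands in so the example is self-contained.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Hypothetical stand-in for an embedding model: word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class CourseCache:
    """Embed chunks once so each query avoids re-processing the corpus."""
    def __init__(self, chunks):
        self.chunks = chunks
        self.vectors = [embed(c) for c in chunks]  # computed once, reused per query

    def search(self, query: str, k: int = 2):
        qv = embed(query)
        scored = sorted(
            ((cosine(qv, v), c) for v, c in zip(self.vectors, self.chunks)),
            key=lambda p: p[0],
            reverse=True,
        )
        return [c for _, c in scored[:k]]

cache = CourseCache([
    "transformers use attention to weigh tokens",
    "gradient descent minimizes a loss function",
    "context windows limit how many tokens a model sees",
])
print(cache.search("how many tokens fit in the context window", k=1))
# → ['context windows limit how many tokens a model sees']
```

With a large enough context window, the `search` step could in principle be skipped entirely and the whole cached corpus passed to the model, which is the trade-off the note above is pointing at.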
We will need to implement concepts from Google's long-context documentation: https://ai.google.dev/gemini-api/docs/long-context