Andrey Buzin
This is a draft of an LLM-powered search app that remembers interactions with the user and references them when answering new questions.
This RFC proposes changes to the Gel server and the Python client that would provide explicit access to search and ML functionality.
The AI tab would benefit from having an embedding vector visualization. That is, whenever the user enables the AI extension and [sets up a vector index](https://docs.edgedb.com/ai#usage), we could plot the...
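One way to plot stored vectors would be to project them onto a 2D plane. A minimal sketch of that idea, assuming the embeddings have already been fetched from the index as a NumPy array (PCA via SVD, no plotting library):

```python
import numpy as np

def project_2d(embeddings: np.ndarray) -> np.ndarray:
    """Project high-dimensional embedding vectors onto their top-2 PCA plane."""
    centered = embeddings - embeddings.mean(axis=0)
    # The first two right-singular vectors of the centered matrix span
    # the plane of maximum variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Toy stand-in for real index data: 5 vectors of dimension 8.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(5, 8))
coords = project_2d(vectors)
print(coords.shape)  # (5, 2)
```

The resulting `(x, y)` pairs could then feed whatever scatter-plot widget the UI tab uses.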
At the moment, whenever the user clicks on the tab of an extension that is not enabled in the schema, they are greeted with this concise message: This real estate...
As part of working on queries that involve `ext::ai` [capabilities](https://docs.edgedb.com/ai#run-a-semantic-similarity-query), a user needs to pass text embeddings as query parameters. A text embedding is a numeric representation of text...
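A rough sketch of what passing an embedding as a query parameter looks like from the Python client, assuming the vector was produced by an external model; the `Knowledge` type and its shape are hypothetical, and the query follows the semantic-search example in the docs linked above:

```python
# Stand-in for a real model output (real embeddings have hundreds
# of dimensions).
embedding = [0.01, -0.42, 0.13]

# The embedding is passed as a typed query parameter rather than
# being spliced into the query text.
query = """
with vector := <array<float32>>$embedding
select ext::ai::search(Knowledge, vector) {
    object: { text },
    distance,
}
"""

# With a running instance this would be executed as:
# client = gel.create_client()
# results = client.query(query, embedding=embedding)
```

The point of the cast (`<array<float32>>$embedding`) is that the client can send the vector as a binary parameter instead of a giant literal.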
Similar to error hints, this will help keep both humans and LLMs from committing crimes in their schemas and queries. LSP warnings are 90% of the reason LLMs can...
Currently LLM agents run shell commands as **tool calls**, meaning they treat them the same way as API calls: 1. The LLM generates a tool call, which is a JSON...
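For illustration, here is roughly what that round trip looks like with an OpenAI-style function-call payload; the tool name `run_shell` and its argument schema are hypothetical:

```python
import json

# 1. The model emits a tool call: a JSON object naming a tool and
#    carrying its arguments as a JSON-encoded string.
tool_call = {
    "id": "call_123",
    "type": "function",
    "function": {
        "name": "run_shell",
        "arguments": json.dumps({"command": "ls -la"}),
    },
}

# 2. The agent runtime decodes the arguments and executes the command,
#    then feeds the output back to the model as a tool result.
args = json.loads(tool_call["function"]["arguments"])
print(args["command"])  # ls -la
```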
These are extremely useful for steering LLMs and humans out of dead ends and into correct Gel syntax. I'll be adding things to this list as I discover them. 1....
Those two APIs ([Ollama](https://github.com/ollama/ollama/blob/main/docs/api.md), [HF](https://huggingface.co/docs/text-generation-inference/en/index)) are not compatible with OpenAI or Anthropic. Technically you _could_ use them via a proxy like LiteLLM, but there's no reason not to support them...
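To make the incompatibility concrete, the request shapes themselves differ, so a shared client can't just swap base URLs. A simplified comparison based on the public docs (model names are placeholders, payloads trimmed to the essentials):

```python
# Ollama: POST /api/generate takes a flat prompt string.
ollama_request = {
    "model": "llama3",
    "prompt": "What is a vector index?",
    "stream": False,
}

# OpenAI: POST /v1/chat/completions takes a list of role-tagged messages.
openai_request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What is a vector index?"}],
}
```

A proxy like LiteLLM papers over exactly this translation, which is why native support mostly amounts to maintaining these payload mappings ourselves.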
Not having to worry about managing vector embeddings is nice, but we need to provide some tools for monitoring what's going on in there. Specifically: 1. Estimated cost 2. Rate...
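The "estimated cost" item above can be sketched as a simple calculation: tokens sent to the embedding API times the provider's per-token price. The rate below is a placeholder, not a real quote:

```python
PRICE_PER_1K_TOKENS = 0.0001  # hypothetical USD rate, not a real price

def estimated_cost(token_counts: list[int]) -> float:
    """Estimate total embedding spend for a batch of documents,
    given the token count of each document."""
    total_tokens = sum(token_counts)
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# Three documents totalling 2500 tokens.
print(estimated_cost([800, 1200, 500]))  # 0.00025
```

In practice the token counts would come from the same tokenizer the embedding model uses, so the estimate tracks what the provider actually bills.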