Philip Meier
> `GET /chats/{id}` makes a request for a single chat. `GET /chats` makes a request for all chats. If there are a lot of chats (with a lot of bulky...
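One common way to keep `GET /chats` cheap when there are many bulky chats is to return one slice per request. A minimal sketch of such a pagination helper — the parameter names and response shape here are hypothetical, not ragna's actual API:

```python
def paginate(items, page: int = 1, page_size: int = 20):
    """Hypothetical pagination helper: a GET /chats endpoint could
    return one slice of the chat list per request instead of every
    chat (with all of its messages and sources) at once."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be positive")
    start = (page - 1) * page_size
    return {
        "items": items[start : start + page_size],
        "page": page,
        "page_size": page_size,
        "total": len(items),
    }
```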
> Would it enable the embed-once-and-use-multiple-LLMs workflow or is that orthogonal? It is orthogonal. The point is to make the embedding model a "first class...
@nenb https://github.com/Quansight/ragna/issues/191#issuecomment-1807937253 > **document loading** - This includes both the data connectors and the data chunking strategies. I would recommend the chunking live here (rather than on the embedding) as...
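A minimal sketch of what keeping the chunking strategy alongside document loading (rather than on the embedding model) could look like. The `Chunker` protocol and `FixedSizeChunker` names are hypothetical, purely for illustration:

```python
from typing import Protocol


class Chunker(Protocol):
    """Hypothetical interface a document-loading component could own."""

    def chunk(self, text: str) -> list[str]: ...


class FixedSizeChunker:
    """Illustrative chunking strategy: fixed-size windows with overlap."""

    def __init__(self, size: int = 512, overlap: int = 64):
        if overlap >= size:
            raise ValueError("overlap must be smaller than size")
        self.size = size
        self.overlap = overlap

    def chunk(self, text: str) -> list[str]:
        step = self.size - self.overlap
        # slide a window of `size` characters forward by `step` each time
        return [
            text[i : i + self.size]
            for i in range(0, max(len(text) - self.overlap, 1), step)
        ]
```

Because the chunker lives next to the data connector, a source storage that does its own chunking (or none at all) simply would not take one.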
@peachkeel Although currently [marked experimental](https://lancedb.github.io/lancedb/fts/), `lancedb` already supports both modes through a `query_type` parameter on the [`.search()` method](https://lancedb.github.io/lancedb/python/python/#lancedb.table.Table.search). Meaning, if we were to go with your abstraction, this would be quite...
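To illustrate what a single `.search()` entry point with a `query_type` switch looks like in the abstract, here is a self-contained toy store that dispatches between a vector-similarity mode and a full-text mode. This only loosely mirrors the shape of lancedb's API; the class, the "embedding", and the scoring are all stand-ins, not lancedb code:

```python
from dataclasses import dataclass


@dataclass
class Hit:
    text: str
    score: float


class MiniStore:
    """Toy store with one .search() entry point and a query_type
    switch (hypothetical sketch, not the lancedb implementation)."""

    def __init__(self, docs: list[str]):
        self.docs = docs

    def _embed(self, text: str) -> dict[str, int]:
        # stand-in "embedding": a bag-of-characters count vector
        return {c: text.count(c) for c in set(text)}

    def _cosine(self, a: dict[str, int], b: dict[str, int]) -> float:
        dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
        na = sum(v * v for v in a.values()) ** 0.5
        nb = sum(v * v for v in b.values()) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def search(self, query: str, query_type: str = "vector") -> list[Hit]:
        if query_type == "vector":
            q = self._embed(query)
            hits = [Hit(d, self._cosine(q, self._embed(d))) for d in self.docs]
        elif query_type == "fts":
            terms = query.lower().split()
            hits = [Hit(d, sum(t in d.lower() for t in terms)) for d in self.docs]
        else:
            raise ValueError(f"unknown query_type: {query_type!r}")
        return sorted(hits, key=lambda h: h.score, reverse=True)
```

The point of the sketch: from the caller's perspective, switching retrieval modes is a single keyword argument, so an abstraction built on top of it stays thin.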
> it really, really needs to be done. Agreed. > Then, I will update the UI so that the advanced tab allows you to edit the embedding model (if available for...
> completely separating the `ChunkingModel` and `EmbeddingModel` from the `SourceStorage`, and adding them as ragna components. As suggested, the behavior of store and retrieve is then influenced by the type-hint...
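A minimal sketch of how the type hint on a source storage's `store()` method could drive that behavior: inspect the annotation to decide whether an embedding model needs to run first. All class and function names here are hypothetical illustrations, not ragna's actual component API:

```python
from typing import get_type_hints


class Chunk:
    """Stand-in for a raw text chunk."""


class Embedding:
    """Stand-in for an embedded chunk."""


class VectorStoreLike:
    # annotated to receive embeddings -> an EmbeddingModel must run first
    def store(self, data: list[Embedding]) -> None: ...


class KeywordStoreLike:
    # annotated to receive raw chunks -> no embedding step needed
    def store(self, data: list[Chunk]) -> None: ...


def needs_embedding(source_storage) -> bool:
    """Inspect the type hint of store() to decide whether to wire an
    embedding model in front of this source storage (sketch only)."""
    hints = get_type_hints(source_storage.store)
    return hints["data"] == list[Embedding]
```

The appeal of this design is that a source storage declares what it consumes purely through its signature, and the framework wires the pipeline accordingly.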
> Now, adding Ollama, we will be supporting local assistants To give some context here: this is not decided yet. We are exploring this as an option. > Does it make...
@pierrotsmnrd panel==1.4.0 (tracked in #379) also introduced new ways of styling the chat messages: https://blog.holoviz.org/posts/panel_release_1.4/#other-enhancements Can this be adopted here to simplify stuff?
The "Source info" button looks different from the copy button. This has nothing to do with `panel==1.4` as it is visible in https://github.com/Quansight/ragna/pull/312#pullrequestreview-1875418845 as well.
We need to tackle this as this is a major blocker for anyone who wants to use Ragna in production. There is currently no way to use the UI with...