Tom Dyson

Results: 30 comments by Tom Dyson

Hi @Aarthy153, thanks for your interest! You're very welcome to work on this - you don't need to be assigned. Just let people know that you're starting work on the...

@danihodovic it's awaiting review

> I don't think it's very widely used

I imagine this is true, but we don't have any data about it, AFAIK.

> Breaks backwards compatibility

Could / should we...

Hi @simonw, thanks for `llm`! `wagtail-ai` started with just support for OpenAI, then @tomusher wrote https://github.com/tomusher/every-ai to abstract other AI platform APIs, then we decided to adopt a more actively...

From https://langchain.readthedocs.io/en/latest/ecosystem/pinecone.html:

1. Add the client as a dependency: `pip install pinecone-client` (add to README and Dockerfile)
2. Add the import to `microllama.py`: `from langchain.vectorstores import Pinecone`

From the [Pinecone notebook example](https://langchain.readthedocs.io/en/latest/modules/indexes/vectorstore_examples/pinecone.html): ```python import...

Initial impressions are that Pinecone is noticeably slower than FAISS. I'd need to refactor `get_index` to support multiple index types. The check for an existing index (`pinecone.describe_index("index-name")`) alone takes ~0.7s.
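A minimal sketch of what that `get_index` refactor could look like — a registry keyed by backend name. `FaissIndex` and `PineconeIndex` here are illustrative stand-ins, not the real microllama classes:

```python
from typing import Callable, Dict


class FaissIndex:
    """Stand-in for a local FAISS-backed index."""

    def __init__(self, name: str):
        self.name = name
        self.backend = "faiss"


class PineconeIndex:
    """Stand-in for a hosted Pinecone index (slower to initialise)."""

    def __init__(self, name: str):
        self.name = name
        self.backend = "pinecone"


# Registry mapping a backend name to its index constructor.
INDEX_BACKENDS: Dict[str, Callable[[str], object]] = {
    "faiss": FaissIndex,
    "pinecone": PineconeIndex,
}


def get_index(name: str, backend: str = "faiss"):
    """Build an index for the given backend, defaulting to FAISS."""
    try:
        return INDEX_BACKENDS[backend](name)
    except KeyError:
        raise ValueError(f"Unknown index backend: {backend!r}")
```

Keeping FAISS as the default preserves the current fast path; the Pinecone branch (and its ~0.7s existence check) is only paid for when explicitly requested.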

Oh, great, thanks! I think the default should be unlimited, or a high number. I've just checked Dropbox Paper, which limits indents to the 7th level.

As a point of reference, you might be interested in my https://github.com/tomdyson/wagtail-prompt

Okay, here's a revised plan focusing on abstracting the chat model provider using `llm` while keeping Langchain for embeddings/indexing for now:

1. **Dependencies:**
   * Add `llm` to `pyproject.toml`.
   * Add...
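The provider abstraction in the plan above could be sketched like this. The `llm`-backed class mirrors the `llm` package's `get_model(...).prompt(...).text()` API; the wiring itself (class and function names) is hypothetical, not microllama's actual code:

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Anything that can turn a prompt into a completion."""

    def complete(self, prompt: str) -> str: ...


class LLMProvider:
    """Chat provider backed by simonw's `llm` package (needed at runtime)."""

    def __init__(self, model_id: str):
        self.model_id = model_id

    def complete(self, prompt: str) -> str:
        import llm  # deferred import: the abstraction stays usable without it

        return llm.get_model(self.model_id).prompt(prompt).text()


class EchoProvider:
    """Offline test double for exercising the abstraction."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def answer(question: str, provider: ChatProvider) -> str:
    """Route a question through whichever provider was configured."""
    return provider.complete(question)
```

Because the chat side only depends on the `ChatProvider` protocol, Langchain can keep handling embeddings/indexing while `llm` swaps in behind the chat calls.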