llama-stack
Added Pinecone Memory Adapter
Added Pinecone Memory Adapter implementation
Thanks for putting this out. Can you include tests? Also, we would like providers to be available through distributions. Are you able to run llama-stack-apps using Pinecone as the memory provider?
I will include tests, and I still have to work out the distributions part. Can you point me to the distribution PR for Weaviate so I can understand what changes to make and where? Also, for the tests: memory already has a test file, so should I make a separate one for Pinecone, or does that file need any changes? @raghotham
> Also for the tests, memory already has a test file in it, should I make one separately for pinecone, or does that file need any changes?
@cheesecake100201 see the instructions in the file. You should not need a separate test file; just update your own copy of the provider yaml to put in the details for Pinecone, as necessary.
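For context, a minimal sketch of what a Pinecone entry in that provider yaml might look like. The key names (`provider_id`, `provider_type`, `config`) and the `remote::pinecone` identifier are assumptions for illustration, not the repository's actual schema:

```yaml
# Hypothetical provider yaml fragment for a Pinecone memory provider.
# Field names are illustrative; consult the test file's instructions
# for the real schema.
memory:
  - provider_id: pinecone
    provider_type: remote::pinecone
    config:
      api_key: ${PINECONE_API_KEY}
      host: my-index-abc123.svc.us-east-1.pinecone.io
```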
I need to make some more changes in this PR. For example, when creating an index, Pinecone expects you to specify the index's dimension, which must match the dimension of the embeddings, which in turn depends on the embedding model being used. Likewise, in the query method inside PineconeIndex, the length of the query embedding must equal the index's dimension. I need to figure out how to make this dimension dynamic. Any ideas or help would be appreciated, since I have been racking my brain over this issue for a while. @ashwinb @raghotham
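One possible approach is to derive the dimension from the embedding model instead of hard-coding it. A minimal sketch, assuming a lookup table plus a fallback that measures a sample embedding; the model names and dimensions below are illustrative assumptions, not llama-stack's actual registry:

```python
# Sketch: resolve the Pinecone index dimension dynamically.
# EMBEDDING_DIMENSIONS is a hypothetical registry of known models.
EMBEDDING_DIMENSIONS = {
    "all-MiniLM-L6-v2": 384,
    "text-embedding-ada-002": 1536,
}


def infer_dimension(model_name: str, sample_embedding=None) -> int:
    """Return the dimension for a known model, or fall back to
    measuring an embedding actually produced by that model."""
    if model_name in EMBEDDING_DIMENSIONS:
        return EMBEDDING_DIMENSIONS[model_name]
    if sample_embedding is not None:
        return len(sample_embedding)
    raise ValueError(f"Unknown embedding dimension for model {model_name!r}")
```

The same value could then be passed both to index creation and used to validate query embeddings in `PineconeIndex`.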
@cheesecake100201, it might be better to allow the user to configure the index host directly:
```python
from llama_models.schema_utils import json_schema_type
from pydantic import BaseModel


@json_schema_type
class PineconeConfig(BaseModel):
    host: str
```
See this guide: https://docs.pinecone.io/guides/data/target-an-index
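A minimal sketch of how the configured host might be consumed. The `normalize_host` helper is hypothetical (it tolerates users pasting a full URL rather than a bare host); the `pc.Index(host=...)` call follows the guide linked above:

```python
# Sketch: normalize a user-supplied index host before handing it to
# the Pinecone client. Accepts either a bare host or a full URL.
from urllib.parse import urlparse


def normalize_host(host: str) -> str:
    """Strip any scheme/path so only the bare index host remains."""
    if "://" in host:
        return urlparse(host).netloc
    return host


# Usage (requires a real API key; shown for illustration only):
# from pinecone import Pinecone
# pc = Pinecone(api_key="...")
# index = pc.Index(host=normalize_host(config.host))
```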
@cheesecake100201 do you want to bring this PR back (from the almost-dead :))? Sorry for the delay here. Closing it for now, but feel free to re-open and we will get a review going quickly.