Gabriele Venturi

234 comments by Gabriele Venturi

We're probably gonna create our own simple cache system based on exact match. At the moment I couldn't find any potential use case for a similarity cache, since it could lead...
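For context, a minimal sketch of what an exact-match cache could look like. The class and method names here are hypothetical, not PandasAI's actual implementation:

```python
import hashlib


class ExactMatchCache:
    """Hypothetical sketch: cache LLM answers keyed by the exact prompt text."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt: str) -> str:
        # Hash the exact prompt so identical strings map to the same entry.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt: str):
        return self._store.get(self._key(prompt))

    def set(self, prompt: str, answer: str) -> None:
        self._store[self._key(prompt)] = answer


cache = ExactMatchCache()
cache.set("What is the average age?", "The average age is 32.")
assert cache.get("What is the average age?") == "The average age is 32."
# A similarity cache would instead try to match paraphrased prompts, which is
# where the risk of returning a wrong cached answer comes from.
```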

Hey @victor-hugo-dc, that's a very good question. We are already working on that. We're using NextJS for the frontend as it's pretty much the standard, but we are super open to...

@aiakubovich thanks a lot for reporting. We are working on the "official" Streamlit GUI; this is definitely a game changer. Thanks a lot for sharing!

Hi @wenger9, thanks for reporting. Most of the models are not offered through the HF Inference API, so unfortunately they don't work. There's an alternative, which is downloading the models...

Hi @wenger9, at the moment local models are not supported. However, you can use the wrapper around LangChain models and use a local model from LangChain. Check it out: https://pandas-ai.readthedocs.io/en/latest/LLMs/langchain-llms/
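A rough sketch of that route, assuming the wrapper is exposed as `LangchainLLM` and the early `PandasAI(llm).run(...)` entry point; the exact import paths and the local model used here are assumptions, so check the linked docs for your version:

```python
import pandas as pd
from langchain.llms import LlamaCpp  # any local LangChain LLM should work here
from pandasai import PandasAI
from pandasai.llm.langchain import LangchainLLM  # import path may differ across versions; see the linked docs

# Hypothetical local model path: point this at a model you have already downloaded.
local_llm = LlamaCpp(model_path="./models/llama-7b.ggml.bin")

# Wrap the LangChain model so PandasAI can use it like any other LLM.
llm = LangchainLLM(local_llm)
pandas_ai = PandasAI(llm)

df = pd.DataFrame({"country": ["Spain", "Italy"], "gdp": [1397, 2001]})
pandas_ai.run(df, prompt="Which country has the higher gdp?")
```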

Would be great to have this working. Are you thinking of a particular approach? I was thinking about creating a wrapper around https://huggingface.co/docs/hub/index so that we can use any...

@AColocho I don't know that library, but it seems to be related to VS Code, and we should stay agnostic. I think the standard solution would be to use HuggingFace transformers: https://github.com/huggingface/transformers....
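For reference, a minimal sketch of what the transformers route looks like. The model name is only an example (any text-generation model from the Hub could be swapped in), and the prompt is made up for illustration:

```python
from transformers import pipeline

# Download a text-generation model from the HuggingFace Hub and run it locally.
# "gpt2" is used purely as a small example; a code- or instruction-tuned model
# would be a better fit for PandasAI-style prompts.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Write Python pandas code that computes the mean of the 'age' column:",
    max_new_tokens=64,
    do_sample=False,
)
print(result[0]["generated_text"])
```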

@AColocho, love this step-by-step approach, go for it 😄!

@amjadraza seems super cool, haven't tried it though. It's just a matter of figuring out whether we want PandasAI itself to also handle the installation of the model or whether we prefer...

@evolu8 the real question is: how long would inference take? Do you think we can expect fast responses?