DanyWin
Hello there! Great work you're doing, and I'm excited to see more. I am thinking of providing a plugin, but I would like to consume it by querying the ChatGPT API...
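As a minimal sketch of what such a plugin could send, here is the request body for OpenAI's public chat completions endpoint. The endpoint URL and payload shape follow the documented API; the model name and the helper function are illustrative, and no network call is made here.

```python
import json

# Public endpoint for chat completions (authentication header omitted here).
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build the JSON body for a chat completion call; sending it
    (with an Authorization: Bearer header) is left to the plugin."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Summarize this document.")
payload = json.dumps(body)  # what would be POSTed to OPENAI_CHAT_URL
```

The plugin would then POST `payload` with the user's API key and read the `choices[0].message.content` field of the response.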
I realize that, depending on the model, the generated code can have different issues and needs to be cleaned differently. For instance, with the Hugging Face API, the generation continues after...
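One common cleanup for the "generation continues" case is truncating the completion at the first stop sequence. A minimal sketch, with the stop sequences themselves left as caller-supplied assumptions since they vary per model:

```python
def clean_generation(text: str, stop_sequences: list[str]) -> str:
    """Cut the completion at the earliest stop sequence, since some
    backends keep generating past the intended end of the answer."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()
```

Each model would get its own stop-sequence list (e.g. a prompt-template marker), which keeps the per-model differences in data rather than in code.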
BlindAI will provide managed AI APIs. For transparency, it would be good to register in the Client Python SDK information about each model we use behind the scenes, for instance...
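One shape this could take in the SDK is a small registry of model metadata. The fields and names below are illustrative assumptions, not the SDK's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelInfo:
    """Metadata the client SDK could expose for each backing model."""
    name: str      # e.g. "llama-2-7b-chat" (illustrative)
    provider: str  # who serves the model behind the scenes
    license: str   # license of the weights
    revision: str  # exact weights revision/hash, for auditability

# Registry the SDK would populate and users could introspect.
REGISTRY: dict[str, ModelInfo] = {}

def register_model(info: ModelInfo) -> None:
    REGISTRY[info.name] = info
```

Exposing the exact weights revision is the piece that makes the transparency claim checkable, since users can compare it against a published hash.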
#29 highlighted a recent issue where llama-index streaming no longer works. We have to fix it.
We have a monolithic installation process; for instance, CUDA is installed by default, which is too heavy for some use cases. We will soon provide a version where the Action...
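If the package uses a standard `pyproject.toml`, the usual way to make the CUDA stack opt-in is an optional-dependencies extra. A sketch, assuming setuptools-style packaging; the package name, extra name, and pins are hypothetical:

```toml
[project.optional-dependencies]
# Hypothetical split: the GPU stack is only pulled in on demand.
cuda = ["torch>=2.0"]
```

CPU-only users would then run a plain `pip install <package>`, and GPU users `pip install <package>[cuda]`.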
@shubhamofbce mentioned the relevance of removing elements such as SVG: https://discord.com/channels/1216089456134586388/1216359221793128589/1225301018032734248 > I am playing with the HTML itself. We don't need to create the index from the whole HTML. We can...
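A minimal, stdlib-only sketch of that idea: extract the visible text while skipping entire subtrees of non-content tags (SVG, scripts, styles) before building the index. The tag list is an assumption and would be tuned for the real pages:

```python
from html.parser import HTMLParser

# Tags whose whole subtree is dropped before indexing (assumed list).
DROP = {"svg", "script", "style"}

class TextExtractor(HTMLParser):
    """Collect visible text, skipping subtrees of non-content tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0  # > 0 while inside a dropped subtree
        self.chunks: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in DROP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in DROP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

This keeps the index small and avoids embedding markup noise; a production version might use an HTML library with a proper DOM, but the pruning idea is the same.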
@HiImMadness: sync with @mbrunel to integrate the work you did on improving the retriever and the new prompt template to increase performance.
Hi there, I am quite curious to see how recent LLMs such as Llama 2 perform in terms of embedding quality. I haven't found anything about it on MTEB....
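For context, benchmarking a decoder-only LLM on embedding tasks usually means pooling its per-token hidden states into one vector. Here is just the pooling step in pure Python, with toy vectors standing in for real hidden states; actually running Llama 2 would require the model itself:

```python
def mean_pool(hidden_states: list[list[float]],
              attention_mask: list[int]) -> list[float]:
    """Average token vectors, ignoring padded positions (mask == 0).
    This is a common way to turn per-token hidden states into a single
    sentence embedding for benchmarks like MTEB."""
    dim = len(hidden_states[0])
    total = [0.0] * dim
    count = 0
    for vec, keep in zip(hidden_states, attention_mask):
        if keep:
            count += 1
            for i, v in enumerate(vec):
                total[i] += v
    return [v / count for v in total]
```

Variants such as last-token pooling also exist; which one works best for a given LLM is exactly the kind of question an MTEB run would answer.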
We want to provide a login page where people can receive a token to query models hosted in BlindLlama enclaves. The different login options should be: - [ ] Magic link...
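Whatever the login option, the backend ends up issuing a signed bearer token. A stdlib-only sketch of that issuance/verification step, assuming an HMAC-signed token; the secret, claim names, and TTL are placeholders, and BlindLlama's actual auth scheme may differ:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # placeholder; load from secure config

def issue_token(email: str, ttl_s: int = 3600) -> str:
    """Sign a small claim set so the enclave gateway can verify the bearer."""
    claims = json.dumps({"sub": email, "exp": int(time.time()) + ttl_s}).encode()
    sig = hmac.new(SECRET, claims, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claims).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str):
    """Return the claims if the signature checks out and the token
    has not expired; otherwise return None."""
    claims_b64, _, sig_b64 = token.partition(".")
    claims = base64.urlsafe_b64decode(claims_b64)
    sig = base64.urlsafe_b64decode(sig_b64)
    expected = hmac.new(SECRET, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None
    data = json.loads(claims)
    return data if data["exp"] > time.time() else None
```

A magic link would simply email the user a URL embedding such a short-lived token; other login options (OAuth, etc.) would converge on the same issuance path.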
Add a button to the Chat UI to load PDFs on the client side, for future RAG integration.