
Integration of privacy-by-design inference with remote enclaves using BlindLlama for powerful models such as Llama 2 70b & Falcon 180b

Open lyie28 opened this issue 2 years ago • 1 comment

lyie28 avatar Sep 20 '23 13:09 lyie28

We will soon connect BlindLlama, our open-source confidential and verifiable AI API, to BlindChat.

This will let users keep a fully in-browser, private experience while offloading most of the work to a remote enclave. This option brings:

  • No heavy bandwidth requirement, unlike the local version, which pulls a ~700 MB model onto the device
  • No heavy compute requirement, unlike running inference locally
  • Better model quality, since we can serve large models like Llama 2 70b that would not run on most users' devices
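The trade-off above could be surfaced in the client as a simple backend choice. A minimal sketch, assuming hypothetical thresholds and function names (this is not BlindChat's actual code):

```typescript
// Hypothetical sketch: decide whether a device should run the model
// in-browser or offload to a remote enclave. The ~700 MB figure comes
// from the local model download mentioned above; the bandwidth and
// memory thresholds are illustrative assumptions.

const LOCAL_MODEL_SIZE_MB = 700; // approximate in-browser model download

function chooseBackend(bandwidthMbps: number, deviceMemoryGb: number): string {
  // Offload when the device cannot comfortably download and run the
  // local model; otherwise keep everything on-device.
  const canDownload = bandwidthMbps >= 10;   // assumed minimum for a 700 MB pull
  const canRun = deviceMemoryGb >= 8;        // assumed minimum to host the model
  return canDownload && canRun ? "local-browser" : "remote-enclave";
}
```

With this logic, a well-provisioned laptop would stay fully local, while a phone on a slow connection would transparently use the enclave-backed API.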

Privacy is still ensured by our end-to-end protected AI APIs.
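Client-side, "end-to-end protected" typically means the app verifies the enclave's attested code measurement against a known-good value before sending any prompt. A minimal sketch of that check, with entirely hypothetical names and values (not the actual BlindLlama client API):

```typescript
// Hypothetical sketch: refuse to send user data unless the remote
// enclave's attestation matches an audited, known-good measurement.
// EXPECTED_MEASUREMENT and sendPrompt are illustrative placeholders.

const EXPECTED_MEASUREMENT = "abc123"; // hypothetical audited code hash

function isEnclaveTrusted(attestedMeasurement: string): boolean {
  // Only trust an enclave running exactly the code that was audited.
  return attestedMeasurement === EXPECTED_MEASUREMENT;
}

function sendPrompt(attestedMeasurement: string, prompt: string): string {
  if (!isEnclaveTrusted(attestedMeasurement)) {
    throw new Error("Enclave attestation failed; refusing to send data");
  }
  return `sent:${prompt}`; // stands in for the real protected request
}
```

The key property is that the privacy check happens before any user data leaves the browser, so a tampered backend is rejected rather than trusted.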

If you want to learn more about the privacy guarantees of BlindLlama, you can look at our docs or whitepaper.

dhuynh95 avatar Sep 21 '23 14:09 dhuynh95