blind_chat
Integration of privacy-by-design inference with remote enclaves using BlindLlama for powerful models such as Llama 2 70B and Falcon 180B
We will soon connect BlindLlama, our open-source confidential and verifiable AI API, to BlindChat.
This will let users keep a fully in-browser, private experience while offloading most of the work to a remote enclave. This option implies:
- No heavy bandwidth requirement, unlike the local version, which pulls a ~700 MB model onto the device
- No heavy compute requirement, compared to running inference locally
- Better model performance, as we can serve large models such as Llama 2 70B that would not run on most users' devices
Privacy is still ensured by our end-to-end protected AI APIs.
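The flow behind that guarantee, verify the enclave's identity before sending anything sensitive, can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual BlindLlama client: the names `EXPECTED_MEASUREMENT`, `verify_attestation`, and `query`, as well as the attestation-report format, are all hypothetical.

```python
import hashlib

# Hypothetical expected code measurement of the audited enclave image
# (a placeholder value, not a real BlindLlama measurement).
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-enclave-image").hexdigest()

def verify_attestation(attestation_report: dict) -> bool:
    """Accept the server only if its attested measurement matches the one we trust."""
    return attestation_report.get("measurement") == EXPECTED_MEASUREMENT

def query(prompt: str, attestation_report: dict) -> str:
    """Send the prompt only after attestation succeeds; otherwise refuse."""
    if not verify_attestation(attestation_report):
        raise RuntimeError("Attestation failed: refusing to send the prompt")
    # In a real client this would be an HTTPS request terminated inside the enclave.
    return f"[enclave reply to: {prompt}]"

# Simulated report from a genuine enclave
report = {"measurement": EXPECTED_MEASUREMENT}
print(query("Hello", report))
```

The key design point is that the check happens client-side: if the remote service cannot prove it is running the audited enclave code, the prompt is never transmitted.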
If you want to learn more about the privacy guarantees of BlindLlama, see our docs or whitepaper.