
AI Chatbot with Streamlit, Langchain, and Mistral7b


---
title: Ai Chatbot
emoji: 📊
colorFrom: indigo
colorTo: gray
sdk: streamlit
sdk_version: 1.28.0
app_file: main.py
pinned: false
license: mit
---

Streamlit + Langchain + llama.cpp w/ Mistral

Run your own AI Chatbot locally on a GPU or even a CPU.

To make that possible, we use the Mistral 7b model.
However, you can use any quantized model that is supported by llama.cpp.

This AI chatbot lets you define its personality, and it will answer questions accordingly.
There is no chat memory in this iteration, so you won't be able to ask follow-up questions; the chatbot will essentially behave like a question/answer bot.
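The single-turn, personality-driven prompting described above can be sketched with plain string formatting. The template wording and function name below are illustrative assumptions, not the repo's actual prompt:

```python
def build_prompt(personality: str, question: str) -> str:
    """Combine the user-defined personality with a single question.

    Because there is no chat memory, every call builds a fresh prompt
    from scratch; nothing from previous turns is carried over.
    """
    return (
        f"You are a chatbot with the following personality: {personality}\n\n"
        f"Question: {question}\n"
        f"Answer:"
    )
```

In the app, a string like this would be sent to the local model through Langchain; the point here is only that the personality is re-injected into every single-turn prompt.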

TL;DR instructions

  1. Install llama-cpp-python
  2. Install langchain
  3. Install streamlit
  4. Run streamlit

Step by Step instructions

The setup assumes you already have Python installed and the venv module available.

  1. Download the code or clone the repository.
  2. Inside the root folder of the repository, initialize a Python virtual environment:
python -m venv venv
  3. Activate the Python virtual environment:
source venv/bin/activate
  4. Install the required packages (langchain, llama-cpp-python, and streamlit):
pip install -r requirements.txt
  5. Start Streamlit:
streamlit run main.py
  6. On first run, the quantized Mistral 7b model will be downloaded from Hugging Face and cached locally: mistral-7b-instruct-v0.1.Q4_0.gguf
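The download-and-cache step above can be sketched as follows. `hf_hub_download` is the `huggingface_hub` API that fetches a file once and serves it from the local cache on later calls; the repo id shown is an assumption, and the actual download source used by the app may differ:

```python
from pathlib import Path


def needs_download(model_path: str) -> bool:
    """Return True when the GGUF model file is not present locally yet."""
    return not Path(model_path).exists()


# The real fetch (commented out here to avoid the multi-gigabyte download):
# from huggingface_hub import hf_hub_download
# model_path = hf_hub_download(
#     repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",  # assumed repo id
#     filename="mistral-7b-instruct-v0.1.Q4_0.gguf",
# )
```

Because the cache is just a file on disk, subsequent runs skip the download entirely and load the model straight from the cached path.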

Screenshot

Screenshot from 2023-10-23 20-00-09