
# 🦙 Ollama Telegram Bot
Chat with your LLM using a Telegram bot!
Feel free to contribute!
## Features
Here are the features you get out of the box:
- [x] Fully dockerized bot
- [x] Response streaming without rate limits, using the SentenceBySentence method
- [x] Mention the bot [@] in a group chat to receive an answer
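The sentence-by-sentence streaming mentioned above can be sketched as follows. This `buffer_sentences` helper is hypothetical (not the bot's actual code): it accumulates streamed tokens and emits one chunk per completed sentence, so the Telegram message is edited once per sentence rather than once per token, which avoids hitting Telegram's rate limit on rapid message edits.

```python
import re

def buffer_sentences(token_stream):
    """Accumulate streamed tokens; yield one chunk per complete sentence.

    Hypothetical illustration of the SentenceBySentence idea: a sentence
    counts as complete once it ends with terminal punctuation.
    """
    buffer = ""
    for token in token_stream:
        buffer += token
        while True:
            match = re.search(r"[.!?](\s|$)", buffer)
            if not match:
                break
            end = match.end()
            yield buffer[:end].strip()   # flush the finished sentence
            buffer = buffer[end:]
    if buffer.strip():                   # flush any trailing partial sentence
        yield buffer.strip()

# Example: tokens as an LLM might stream them
tokens = ["Hel", "lo there. ", "How are", " you? I", " am fine"]
print(list(buffer_sentences(tokens)))
# → ['Hello there.', 'How are you?', 'I am fine']
```

In a real bot each yielded chunk would be appended to the Telegram message via an edit call, so the user sees the reply grow sentence by sentence.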
## Roadmap
- [x] Docker config & automated tags by StanleyOneG, ShrirajHegde
- [x] History and `/reset` by ShrirajHegde
- [ ] Add more API-related functions [System Prompt Editor, Ollama Version fetcher, etc.]
- [ ] Redis DB integration
- [ ] Update bot UI
## Prerequisites
- A Telegram bot token (see `TOKEN` in the Environment Configuration table below)
- A running Ollama instance (local, or via the Docker setup below)
## Installation (Non-Docker)
1. Install the latest Python

2. Clone the repository

   ```
   git clone https://github.com/ruecat/ollama-telegram
   ```

3. Install the requirements from requirements.txt

   ```
   pip install -r requirements.txt
   ```

4. Enter all values in .env.example

5. Rename .env.example -> .env

6. Launch the bot

   ```
   python3 run.py
   ```
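The installation steps above end with filling in and renaming `.env.example`. As an illustration of the plain KEY=VALUE format that file uses, here is a minimal, hypothetical loader — the real bot presumably uses a dedicated library, and `load_env`, `demo.env`, and the parsing rules below are assumptions made for the sketch:

```python
from pathlib import Path

def load_env(path=".env"):
    """Parse a .env file into a dict (minimal stand-in, not the bot's code).

    Handles KEY=VALUE lines; blank lines and '#' comments are skipped.
    """
    values = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

# Example usage with a throwaway file:
Path("demo.env").write_text("TOKEN=yourtoken\nINITMODEL=llama2\n# comment\n")
env = load_env("demo.env")
print(env["TOKEN"], env["INITMODEL"])  # → yourtoken llama2
```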
## Installation (Docker Image)
The official image is available on Docker Hub: ruecat/ollama-telegram

1. Download the .env.example file, rename it to .env and populate the variables.

2. Create docker-compose.yml (optionally: uncomment the GPU part of the file to enable your Nvidia GPU)

   ```yml
   version: '3.8'

   services:
     ollama-telegram:
       image: ruecat/ollama-telegram
       container_name: ollama-telegram
       restart: on-failure
       env_file:
         - ./.env

     ollama-server:
       image: ollama/ollama:latest
       container_name: ollama-server
       volumes:
         - ./ollama:/root/.ollama

       # Uncomment to enable NVIDIA GPU
       # Otherwise runs on CPU only:
       # deploy:
       #   resources:
       #     reservations:
       #       devices:
       #         - driver: nvidia
       #           count: all
       #           capabilities: [gpu]

       restart: always
       ports:
         - '11434:11434'
   ```

3. Start the containers

   ```
   docker compose up -d
   ```
## Installation (Build your own Docker image)
1. Clone the repository

   ```
   git clone https://github.com/ruecat/ollama-telegram
   ```

2. Enter all values in .env.example

3. Rename .env.example -> .env

4. Run ONE of the following docker compose commands to start:

   - To run Ollama in a Docker container (optionally: uncomment the GPU part of the docker-compose.yml file to enable your Nvidia GPU)

     ```
     docker compose up --build -d
     ```

   - To run Ollama from a locally installed instance (mainly for MacOS, since the Docker image doesn't support Apple GPU acceleration yet):

     ```
     docker compose up --build -d ollama-telegram
     ```
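When running only the bot container against a locally installed Ollama (the MacOS case above), the container must reach Ollama on the host rather than on `localhost` inside the container. A plausible `.env` fragment for that setup, using the values from the Environment Configuration table:

```
OLLAMA_BASE_URL=host.docker.internal
OLLAMA_PORT=11434
```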
## Environment Configuration

| Parameter | Description | Required? | Default Value | Example |
|---|---|---|---|---|
| TOKEN | Your Telegram bot token. [How to get token?] | Yes | yourtoken | MTA0M****.GY5L5F.****g*****5k |
| ADMIN_IDS | Telegram user IDs of admins. These can change the model and control the bot. | Yes | | 1234567890 or 1234567890,0987654321, etc. |
| USER_IDS | Telegram user IDs of regular users. These can only chat with the bot. | Yes | | 1234567890 or 1234567890,0987654321, etc. |
| INITMODEL | Default LLM | No | llama2 | mistral:latest, mistral:7b-instruct |
| OLLAMA_BASE_URL | Your Ollama API URL | No | localhost | host.docker.internal |
| OLLAMA_PORT | Your Ollama API port | No | 11434 | |
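Putting the table together, a complete `.env` might look like the following (all values are placeholders taken from the examples above, not real credentials):

```
TOKEN=MTA0M****.GY5L5F.****g*****5k
ADMIN_IDS=1234567890,0987654321
USER_IDS=1234567890
INITMODEL=llama2
OLLAMA_BASE_URL=localhost
OLLAMA_PORT=11434
```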