llmcord.py
A Discord LLM chat bot that supports any OpenAI-compatible API (OpenAI, Mistral, Groq, OpenRouter, ollama, oobabooga, Jan, LM Studio and more)
llmcord.py is ~200 lines of Python code that enables collaborative multi-turn LLM prompting in your Discord server. It uses message reply chains to build conversations. Just @ the bot to start a conversation and reply to continue.
You can reply to ANY of the bot's messages to continue ANY conversation from ANY point. Or @ the bot while replying to your friend's message to ask a question about it. There are no limits to this functionality.
Additionally:
- Back-to-back messages from the same user are automatically chained together. Just reply to the latest one and the bot will see all of them.
- You can seamlessly move any conversation into a thread. Just create a thread from any message and @ the bot inside to continue.
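The reply-chain mechanism above can be sketched roughly like this (Message and the field names are simplified stand-ins, not llmcord's actual code):

```python
# Rough sketch of how a reply chain becomes an OpenAI-style message list.
# "Message" is a simplified stand-in for a Discord message.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    author: str
    content: str
    reference: Optional["Message"] = None  # the message this one replies to

def build_conversation(msg: Message, bot_name: str, max_messages: int = 20) -> list[dict]:
    """Walk the reply chain backwards, then emit messages oldest-first."""
    chain = []
    node = msg
    while node is not None and len(chain) < max_messages:
        chain.append(node)
        node = node.reference
    return [
        {"role": "assistant" if m.author == bot_name else "user", "content": m.content}
        for m in reversed(chain)
    ]

root = Message("alice", "What is Python?")
reply = Message("llmcord", "Python is a programming language.", reference=root)
follow = Message("alice", "Who created it?", reference=reply)
conversation = build_conversation(follow, "llmcord")
```

Because you can start the walk from any message, replying to any point in any conversation naturally forks it from there.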
Supports remote models from OpenAI API, Mistral API, Anthropic API and more thanks to LiteLLM.
Or run a local model with ollama, oobabooga, Jan, LM Studio or any other OpenAI-compatible API server.
Other features include:
- Vision model support
- Customizable system prompt
- DM for private access (no @ required)
- User identity aware (OpenAI API only)
- Streamed responses
- Fully asynchronous
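The streamed-responses feature can be sketched as a send-then-edit loop (the helpers below are illustrative stand-ins, not llmcord's actual implementation):

```python
# Sketch: post a Discord message on the first streamed chunk, then edit it
# periodically as more chunks arrive, with one final edit at the end.
import asyncio

async def fake_stream():
    # Stand-in for an API's streamed completion chunks.
    for word in ["Hello", " there", ", world", "!"]:
        yield word

async def stream_reply(send, edit, stream, edit_every: int = 2) -> str:
    """Accumulate chunks; send once, then edit every `edit_every` chunks."""
    text = ""
    sent = False
    n = 0
    async for chunk in stream:
        text += chunk
        n += 1
        if not sent:
            await send(text)  # first chunk: post the initial message
            sent = True
        elif n % edit_every == 0:
            await edit(text)  # periodic edits keep API calls bounded
    await edit(text)  # final edit with the complete response
    return text

async def demo():
    events = []
    async def send(t): events.append(("send", t))
    async def edit(t): events.append(("edit", t))
    return await stream_reply(send, edit, fake_stream())

result = asyncio.run(demo())
```

Editing on an interval rather than on every chunk is a common way to stay under Discord's rate limits while still looking live.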
Instructions

Before you start, install Python and clone this git repo.

1. Install Python requirements:

   pip install -r requirements.txt

2. Create a copy of .env.example named .env and set it up (see below)

3. Run the bot:

   python llmcord.py
Setting | Instructions |
---|---|
DISCORD_BOT_TOKEN | Create a new Discord bot at discord.com/developers/applications and generate a token under the Bot tab. Also enable MESSAGE CONTENT INTENT. |
DISCORD_CLIENT_ID | Found under the OAuth2 tab of the Discord bot you just made. |
LLM | For LiteLLM-supported providers (OpenAI API, Mistral API, ollama, etc.), follow the LiteLLM instructions for its model name formatting. For local models (running on an OpenAI-compatible API server), set to local/openai/model. If using a vision model, set to local/openai/vision-model. Some setups will instead require local/openai/<MODEL_NAME> where <MODEL_NAME> is the exact name of the model you're using. |
LLM_MAX_TOKENS | The maximum number of tokens in the LLM's chat completion. (Default: 1024) |
LLM_TEMPERATURE | LLM sampling temperature. Higher values make the LLM's output more random. (Default: 1.0) |
LLM_TOP_P | LLM nucleus sampling value. Alternative to sampling temperature. Higher values make the LLM's output more diverse. (Default: 1.0) |
CUSTOM_SYSTEM_PROMPT | Write practically anything you want to customize the bot's behavior! |
CUSTOM_DISCORD_STATUS | Set a custom message that displays on the bot's Discord profile. Max 128 characters. |
ALLOWED_CHANNEL_IDS | Discord channel IDs where the bot can send messages, separated by commas. Leave blank to allow all channels. |
ALLOWED_ROLE_IDS | Discord role IDs that can use the bot, separated by commas. Leave blank to allow everyone. Specifying at least one role also disables DMs. |
MAX_IMAGES | The maximum number of image attachments allowed in a single message. Only applicable when using a vision model. (Default: 5) |
MAX_MESSAGES | The maximum number of messages allowed in a reply chain. (Default: 20) |
LOCAL_SERVER_URL | The URL of your local API server. Only applicable when LLM starts with local/. (Default: http://localhost:5000/v1) |
LOCAL_API_KEY | The API key to use with your local API server. Only applicable when LLM starts with local/. Usually safe to leave blank. |
OOBABOOGA_CHARACTER | Your oobabooga character that you want to use. Only applicable when using oobabooga. Leave blank to use CUSTOM_SYSTEM_PROMPT instead. |
OPENAI_API_KEY | Only required if you choose an OpenAI API model. Generate an OpenAI API key at platform.openai.com/account/api-keys. You must also add a payment method to your OpenAI account at platform.openai.com/account/billing/payment-methods. |
MISTRAL_API_KEY | Only required if you choose a Mistral API model. Generate a Mistral API key at console.mistral.ai/api-keys. You must also add a payment method to your Mistral account at console.mistral.ai/billing. |
OPENAI_API_KEY and MISTRAL_API_KEY are provided as examples. Add more as needed for other LiteLLM providers.
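As a worked example of the table above, a minimal .env for a local OpenAI-compatible server might look like this (all values are illustrative placeholders):

```
DISCORD_BOT_TOKEN=your-bot-token
DISCORD_CLIENT_ID=your-client-id
LLM=local/openai/model
LLM_MAX_TOKENS=1024
LLM_TEMPERATURE=1.0
CUSTOM_SYSTEM_PROMPT=You are a helpful Discord chat bot.
LOCAL_SERVER_URL=http://localhost:5000/v1
```

Settings left out fall back to their defaults from the table.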
Notes

- Only models from the OpenAI API are user identity aware (currently excluding gpt-4-vision-preview) because only the OpenAI API supports the message name property. Hopefully other providers support this in the future.
- PRs are welcome :)