mixtral-8x7b topic
Aurora
🐳 Aurora is a Chinese-language MoE model: further work on Mixtral-8x7B that activates the model's Chinese open-domain chat capability.
LlamaIndex-RAG-WSL-CUDA
Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
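A minimal sketch of the RAG pattern such examples follow, assuming the llama-index >= 0.10 package layout and a local `data/` folder of documents; wiring in the local models (llama.cpp, CUDA offload, WSL specifics) is repo-specific and omitted here:

```python
# Minimal LlamaIndex RAG sketch (assumes llama-index >= 0.10 import paths).
# Uses the default models unless Settings are overridden with a local LLM.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # read local files
index = VectorStoreIndex.from_documents(documents)     # embed and index them
query_engine = index.as_query_engine()                 # retrieval + generation
print(query_engine.query("Summarize the key points of these documents."))
```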
ChatGPT-Telegram-Bot
TeleChat: 🤖️ an AI chat Telegram bot with web search, powered by GPT-3.5/4/4 Turbo/4o, DALL·E 3, Groq, Gemini 1.5 Pro/Flash, and the official Claude 2.1/3/3.5 APIs, written in Python and deployable on Zeabur, fly.io, and Repli...
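For orientation, the core loop of such a bot is small. A hedged sketch using python-telegram-bot >= 20 and the openai >= 1.0 client, not TeleChat's actual code; the model name and environment variables are placeholders:

```python
# Hedged sketch of an LLM-backed Telegram bot; not TeleChat's implementation.
import os
from openai import OpenAI
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

client = OpenAI()  # reads OPENAI_API_KEY from the environment

async def chat(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Blocking call kept for brevity; a real bot would use an async client.
    reply = client.chat.completions.create(
        model="gpt-4o",  # any chat-completion model works here
        messages=[{"role": "user", "content": update.message.text}],
    )
    await update.message.reply_text(reply.choices[0].message.content)

app = ApplicationBuilder().token(os.environ["TELEGRAM_BOT_TOKEN"]).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, chat))
app.run_polling()
```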
gdGPT
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP.
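DeepSpeed's pipeline-parallel entry point looks roughly like this; a hedged sketch with toy layers standing in for transformer blocks, not gdGPT's actual training script:

```python
# Hedged sketch of DeepSpeed pipeline-mode training (toy layers; launch with
# the `deepspeed` launcher so distributed workers are spawned).
import deepspeed
import torch.nn as nn
from deepspeed.pipe import PipelineModule

deepspeed.init_distributed()  # required before building a PipelineModule

layers = [nn.Linear(1024, 1024) for _ in range(8)]  # stand-in for transformer blocks
model = PipelineModule(layers=layers, num_stages=2, loss_fn=nn.MSELoss())

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config={"train_micro_batch_size_per_gpu": 1,
            "gradient_accumulation_steps": 4,
            "optimizer": {"type": "Adam", "params": {"lr": 1e-5}}},
)
# engine.train_batch(data_iter) pulls micro-batches and runs the pipeline schedule.
```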
machinascript-for-robots
Build LLM-powered robots in your garage with MachinaScript For Robots!
tinychat
🔮 TinyChat is a lightweight desktop client for modern language models, designed to be easy to understand. Supports the OpenAI, Anthropic, Meta, Mistral, Google, and Cohere APIs.
Chinese-Mixtral-8x7B
Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B)
IALab-Suite
Tool for testing different large language models without writing code.
fltr
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
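fltr's real CLI flags aren't reproduced here; instead, a hypothetical sketch of the underlying idea, with a trivial keyword stub standing in for the Mistral 7B / Mixtral 8x7B call so the sketch runs without a model:

```python
# Hypothetical illustration of "grep for questions" (not fltr's real CLI):
# keep only the lines for which the model answers "yes" to the question.
import sys

def llm_says_yes(question: str, line: str) -> bool:
    # Placeholder for a Mistral 7B / Mixtral 8x7B completion call; this stub
    # just keyword-matches so the example is self-contained.
    return any(word.lower() in line.lower() for word in question.split())

question = sys.argv[1] if len(sys.argv) > 1 else "error"
for line in sys.stdin:
    if llm_says_yes(question, line):
        sys.stdout.write(line)
```

Usage mirrors grep: `cat server.log | python fltr_sketch.py "Which requests failed?"` keeps the lines the stub (or, in the real tool, the model) says match.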
fiddler
Fast Inference of MoE Models with CPU-GPU Orchestration
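One plausible reading of the orchestration idea: run each expert where its weights already live, since moving a small activation between CPU and GPU is far cheaper than moving gigabytes of expert weights. A toy PyTorch sketch of that dispatch rule, not fiddler's actual implementation:

```python
# Toy sketch of CPU-GPU expert orchestration for MoE inference: compute each
# expert on the device holding its weights, shipping only the activations.
import torch
import torch.nn as nn

def run_expert(expert: nn.Module, x: torch.Tensor) -> torch.Tensor:
    device = next(expert.parameters()).device
    if device == x.device:
        return expert(x)                        # weights and activations co-located
    return expert(x.to(device)).to(x.device)    # ship activations, not weights

# Example: one expert pinned to CPU, input on GPU when available.
dev = "cuda" if torch.cuda.is_available() else "cpu"
expert = nn.Linear(4096, 4096)        # stays on CPU
x = torch.randn(1, 4096, device=dev)
y = run_expert(expert, x)             # computed on CPU, result returned to `dev`
```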