flash-attention topic

Repositories tagged with the flash-attention topic:
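The topic's namesake, FlashAttention, computes exact attention without materializing the full score matrix: it tiles over key/value blocks and maintains an online softmax (a running row max, denominator, and output). A minimal NumPy sketch of that idea follows; it illustrates the algorithm only, not the fused CUDA kernel:

```python
import numpy as np

def naive_attention(q, k, v):
    # Reference: materialize the full n x m score matrix.
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v

def tiled_attention(q, k, v, block=4):
    # FlashAttention-style pass: visit K/V in blocks, keeping only a
    # running row max (m), running softmax denominator (l), and running
    # unnormalized output (o), so scores for all keys are never stored at once.
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    m = np.full(n, -np.inf)            # running row max
    l = np.zeros(n)                    # running softmax denominator
    o = np.zeros((n, v.shape[-1]))     # running (unnormalized) output
    for j in range(0, k.shape[0], block):
        kj, vj = k[j:j + block], v[j:j + block]
        s = q @ kj.T * scale                   # scores for this block only
        m_new = np.maximum(m, s.max(axis=-1))
        alpha = np.exp(m - m_new)              # rescale old accumulators
        p = np.exp(s - m_new[:, None])
        l = l * alpha + p.sum(axis=-1)
        o = o * alpha[:, None] + p @ vj
        m = m_new
    return o / l[:, None]
```

Because the rescaling factor `alpha` corrects earlier partial sums whenever a new block raises the row max, the tiled result matches the naive computation exactly (up to floating-point error), for any block size.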

InternLM (6.3k stars, 443 forks)
Official release of InternLM2.5 base and chat models, with 1M context support.

flashinfer (1.2k stars, 109 forks)
FlashInfer: Kernel Library for LLM Serving

Awesome-LLM-Inference (2.6k stars, 175 forks)
📖 A curated list of awesome LLM inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, Continuous Batching, FlashAttention, PagedAttention, etc.

Qwen (14.4k stars, 1.2k forks)
The official repo of Qwen (通义千问), the chat & pretrained large language models proposed by Alibaba Cloud.

Chinese-LLaMA-Alpaca-2 (7.1k stars, 578 forks)
Chinese LLaMA-2 & Alpaca-2 LLMs (phase two of the project), including 64K long-context models.

gdGPT (91 stars, 8 forks)
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP.

CUDA-Learn-Notes (1.2k stars, 133 forks)
🎉 Modern CUDA learning notes with PyTorch: fp32, fp16, bf16, fp8/int8, flash_attn, sgemm, sgemv, warp/block reduce, dot, elementwise, softmax, layernorm, rmsnorm.

FastCkpt (24 stars, 4 forks)
Python package for rematerialization-aware gradient checkpointing
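Gradient checkpointing (rematerialization) trades compute for memory: the forward pass stores only periodic activations, and backprop recomputes the rest from the nearest checkpoint. A toy NumPy sketch of the idea, assuming a simple linear-plus-ReLU layer chain; the function names and layout are illustrative, not FastCkpt's API:

```python
import numpy as np

def layer_forward(x, w):
    # One toy layer: linear map followed by ReLU.
    return np.maximum(x @ w, 0.0)

def layer_backward(x, w, grad_out):
    # Gradients of the toy layer w.r.t. its input and its weight.
    z = x @ w
    grad_z = grad_out * (z > 0.0)
    return grad_z @ w.T, x.T @ grad_z

def checkpointed_grads(x, weights, grad_out, every=2):
    # Forward: save only the input and every `every`-th activation.
    saved = {0: x}
    a = x
    for i, w in enumerate(weights):
        a = layer_forward(a, w)
        if (i + 1) % every == 0:
            saved[i + 1] = a
    # Backward: rematerialize the input to each layer by replaying
    # the forward pass from the nearest saved checkpoint.
    grads = [None] * len(weights)
    g = grad_out
    for i in reversed(range(len(weights))):
        start = (i // every) * every
        a = saved[start]
        for j in range(start, i):
            a = layer_forward(a, weights[j])
        g, grads[i] = layer_backward(a, weights[i], g)
    return grads
```

With `every=1` every activation is saved (plain backprop); larger values cut peak activation memory by roughly a factor of `every` at the cost of extra forward recomputation, and both settings yield identical gradients.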

FastCode (21 stars, 3 forks)
Utilities for efficient fine-tuning, inference and evaluation of code generation models

InternEvo (300 stars, 51 forks)
InternEvo is an open-source, lightweight training framework that aims to support model pre-training without extensive dependencies.