Enhancement: Improve ROCm performance on various quants (benchmarks included)
Prerequisites
- [x] I am running the latest code. Mention the version if possible as well.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
This started with benchmarks showing variability in model performance when running different quants through cuBLAS / MMQ on different hardware, so to make it clearer where improvements are needed: benchmarks!
Git revision b4735
Relevant subset of results from `./bin/test-backend-ops perf -o MUL_MAT`, with bar graphs (with and without MI100, since it is a lot faster than the others). MI100 results provided by @IMbackK.
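For anyone who wants to reproduce these numbers, here is a minimal sketch of building with the HIP backend and running the same benchmark. This assumes a recent CMake-based build where the flag is `GGML_HIP=ON` (the flag name has changed across revisions, so check the build docs for your checkout), and uses gfx906 (Vega20) as an example target:

```bash
# Configure with the ROCm/HIP backend. gfx906 covers Vega20
# (Radeon VII / MI50 / MI60); use gfx900 for Vega 10 or
# gfx908 for MI100.
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx906 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j

# Benchmark only the MUL_MAT op across the supported types,
# as used for the results above.
./build/bin/test-backend-ops perf -o MUL_MAT
```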
Anyone running Vega20 (Radeon VII, Radeon Pro Vega II Duo, MI50, or MI60) should probably use Q4_0 or Q4_1 quants if they can, as almost twice as much compute is available there. Avoid Q2 quants, as they are very slow.
On Vega 10, MMQ has reduced performance for K-quants, so avoid them; Q4_0 and Q4_1 get slightly better compute performance.
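As a usage note on the Q4_0 / Q4_1 advice above: re-quantizing an existing F16 GGUF is a one-liner with the bundled `llama-quantize` tool (the file names here are placeholders):

```bash
# Re-quantize an F16 GGUF to Q4_0 (file names are examples).
./build/bin/llama-quantize model-f16.gguf model-q4_0.gguf Q4_0
```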
MI100 sees 48-50 TFLOPS on most quants, but it should see higher performance on several of these. Currently only F16 is faster, and it is probably still underperforming: peak theoretical FP16 on MI100 is 8x its FP32 performance.
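As a sanity check on that 8x figure, using AMD's published MI100 peak numbers (FP32 vector ~23.1 TFLOPS, FP16 matrix ~184.6 TFLOPS; these are spec-sheet values, not measurements from this issue):

$$\frac{\text{FP16 matrix peak}}{\text{FP32 vector peak}} \approx \frac{184.6\ \text{TFLOPS}}{23.1\ \text{TFLOPS}} \approx 8$$

So the observed 48-50 TFLOPS on quantized MUL_MAT is roughly a quarter of the FP16 matrix peak, which is consistent with the claim that there is still headroom.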
Motivation
Many inexpensive, large-VRAM GPUs are leaving performance on the table.
Possible Implementation
No response