
Add option to build CUDA backend without Flash attention

Open · slaren opened this issue 4 days ago · 2 comments

          @slaren Honestly, I think Flash Attention should be an optional feature in ggml: it doesn't bring significant performance improvements, yet it has increased the binary size considerably, not to mention the compilation time, which still takes 20 minutes on an i5-12400 even though I only compile for my own GPU architecture. This is not related to this PR, but it would be good to take it into account.

Originally posted by @FSSRepo in https://github.com/ggml-org/llama.cpp/issues/11867#issuecomment-2665873267
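
A minimal sketch of what such a build option could look like on the source side, purely illustrative: the `GGML_CUDA_NO_FLASH_ATTN` macro and the `ggml_cuda_flash_attn_supported()` helper below are assumed names for this example, not existing ggml symbols. The idea is a compile-time guard so that, when the option is set, none of the flash-attention kernels are instantiated and the backend simply reports the feature as unavailable, letting the graph fall back to the regular softmax(QK^T)V attention path.

```cpp
// Hypothetical sketch of a compile-time switch for the CUDA flash-attention kernels.
// GGML_CUDA_NO_FLASH_ATTN and ggml_cuda_flash_attn_supported() are illustrative
// names chosen for this example, not part of the current ggml/llama.cpp API.

#include <cstdio>

#ifndef GGML_CUDA_NO_FLASH_ATTN
// Default build: the flash-attention kernels are compiled (and templated per
// head size / quant type), which is where most of the binary size and build
// time goes.
static bool ggml_cuda_flash_attn_supported() { return true; }
#else
// Opt-out build: no FA kernels are instantiated; the backend reports the
// feature as unsupported so callers use the regular attention path instead.
static bool ggml_cuda_flash_attn_supported() { return false; }
#endif

int main() {
    printf("flash attention compiled in: %s\n",
           ggml_cuda_flash_attn_supported() ? "yes" : "no");
    return 0;
}
```

On the build-system side this would presumably pair with a CMake `option()` that adds the define and excludes the flash-attention source files from the CUDA target, trading the kernels' runtime availability for smaller binaries and faster builds, which is the trade-off this issue is asking to make configurable.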

slaren · Feb 18 '25 17:02