
CMake file always assumes AVX2 support

diwu1989 opened this issue 2 years ago · 4 comments

When running CMake, the default configuration sets AVX2 to ON even when the current CPU does not support it. The AVX vs. AVX2 distinction is handled correctly in the plain Makefile.

For CMake, AVX2 has to be turned off via cmake -DLLAMA_AVX2=off . for the compiled binary to work on an AVX-only system.

Can we make the CMake file smarter about enabling or disabling AVX2 by looking at the current architecture?
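For illustration only (this is not what the repo's CMakeLists.txt does): one way to make the default smarter is to probe the build host with CMake's check_cxx_source_runs and derive the LLAMA_AVX2 default from the result. This sketch assumes a native (non-cross) build and a GCC/Clang toolchain; on other compilers the probe simply fails and the default falls back to OFF.

```cmake
# Hypothetical sketch: pick the LLAMA_AVX2 default by probing the build host.
# __builtin_cpu_supports is a GCC/Clang builtin, and a runtime probe is only
# meaningful when the build host is also the target machine.
include(CheckCXXSourceRuns)
check_cxx_source_runs("
    int main() { return __builtin_cpu_supports(\"avx2\") ? 0 : 1; }
" HOST_SUPPORTS_AVX2)

if(HOST_SUPPORTS_AVX2)
    set(LLAMA_AVX2_DEFAULT ON)
else()
    set(LLAMA_AVX2_DEFAULT OFF)
endif()
option(LLAMA_AVX2 "llama: enable AVX2" ${LLAMA_AVX2_DEFAULT})
```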

diwu1989 avatar May 24 '23 03:05 diwu1989

check this #809

howard0su avatar May 24 '23 09:05 howard0su

This issue is causing a problem downstream in llama-cpp-python, where we can't build the Python binding on machines that lack AVX2 but require cuBLAS support. Please read my workaround here: https://github.com/abetlen/llama-cpp-python/issues/272#issuecomment-1566224179 Best Regards,
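For context: llama-cpp-python forwards CMake flags through its build via the CMAKE_ARGS and FORCE_CMAKE environment variables. The exact steps are in the linked comment, but the general pattern looks something like this (illustrative, not a transcription of that workaround):

```sh
# Illustrative only: enable cuBLAS while turning off the AVX2 default
# when building the llama-cpp-python binding from source.
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DLLAMA_AVX2=off" FORCE_CMAKE=1 \
    pip install --no-cache-dir llama-cpp-python
```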

real-limitless avatar May 28 '23 18:05 real-limitless

As per my now-closed issue #1654 (closed by me because I figured out the workaround and wasn't sure whether a default configuration qualified as a "bug"), the CMake file assumes a number of other extensions as well: AVX, F16C, and FMA. It took me a while to figure out the flags to disable them, and then to add them one by one until the build finally worked.
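A sketch of the workaround described above, disabling every assumed extension in one configure step (the flag names match the LLAMA_* options exposed by the repo's CMakeLists.txt at the time):

```sh
# Disable all the instruction-set extensions CMake assumes by default,
# then build. Useful on CPUs without AVX/AVX2/F16C/FMA.
cmake -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_F16C=off -DLLAMA_FMA=off .
cmake --build . --config Release
```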

happysmash27 avatar May 31 '23 20:05 happysmash27

Confirmed: this basically blocks installing llama-cpp-python on a machine without AVX2 available.

JDunn3 avatar Jun 06 '23 17:06 JDunn3

Anyone have a straightforward way to get the combo of CUDA + no AVX2 to work? My head is spinning from trying to follow all these threads.
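An untested sketch of that combo, combining the flags mentioned elsewhere in this thread (LLAMA_CUBLAS was the CUDA flag at the time; adjust the instruction-set flags to whatever your CPU actually lacks):

```sh
# cuBLAS on, unsupported instruction-set extensions off.
cmake -B build -DLLAMA_CUBLAS=on -DLLAMA_AVX2=off -DLLAMA_FMA=off -DLLAMA_F16C=off
cmake --build build --config Release
```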

TFWol avatar Jul 31 '23 04:07 TFWol

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Apr 09 '24 01:04 github-actions[bot]