Failed install on Apple silicon
LocalAI version: Most recent as of this report
Environment, CPU architecture, OS, and Version:
Describe the bug Running the installer from the official documentation on macOS (ARM64) fails at this step:
cd llama.cpp && mkdir -p build && cd build && cmake .. -DLLAMA_METAL=OFF && cmake --build . --config Release
To Reproduce Follow these steps on an M2 Max MacBook Pro
Expected behavior A successful install
Logs
Full error:
-- The C compiler identification is AppleClang 15.0.0.15000040
-- The CXX compiler identification is AppleClang 15.0.0.15000040
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- CMAKE_SYSTEM_PROCESSOR: arm64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
CMake Error at examples/grpc-server/CMakeLists.txt:13 (find_package):
Could not find a package configuration file provided by "Protobuf" with any
of the following names:
ProtobufConfig.cmake
protobuf-config.cmake
Add the installation prefix of "Protobuf" to CMAKE_PREFIX_PATH or set
"Protobuf_DIR" to a directory containing one of the above files. If
"Protobuf" provides a separate development package or SDK, be sure it has
been installed.
-- Configuring incomplete, errors occurred!
make[1]: *** [grpc-server] Error 1
make: *** [backend/cpp/llama/grpc-server] Error 2
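The error above means CMake's `find_package(Protobuf)` could not locate a package configuration file. A small diagnostic sketch (the Homebrew prefixes are assumptions; adjust to your setup) to check whether `protoc` is on PATH and whether a CMake config for protobuf exists:

```shell
# Diagnostic sketch for the "Could not find Protobuf" CMake error above.
# Checks whether protoc is on PATH and whether a CMake package config exists
# under the usual Homebrew prefixes (paths are assumptions; adjust to your setup).
if command -v protoc >/dev/null 2>&1; then
  echo "protoc: $(command -v protoc)"
else
  echo "protoc not on PATH -- 'brew link protobuf' is the usual fix on macOS"
fi

for prefix in /opt/homebrew /usr/local; do
  cfg=$(find "$prefix/opt/protobuf" -name 'protobuf-config.cmake' 2>/dev/null | head -n 1)
  if [ -n "$cfg" ]; then
    echo "CMake config found: $cfg"
  fi
done
```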
Please check this comment https://github.com/mudler/LocalAI/issues/1197#issuecomment-1779573484
I was able to solve it by linking protoc again:
brew link protobuf
make clean
make BUILD_TYPE=metal build
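If relinking does not help, another sketch is to point CMake at Homebrew's protobuf explicitly before rebuilding (the `/opt/homebrew` path is an assumption for Apple silicon; on Intel Macs Homebrew installs under `/usr/local`):

```shell
# Alternative sketch: tell CMake where Homebrew keeps protobuf, then rebuild.
# /opt/homebrew is an assumed Apple-silicon prefix; verify with 'brew --prefix protobuf'.
export CMAKE_PREFIX_PATH="/opt/homebrew/opt/protobuf${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
echo "CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH"
# then rerun: make clean && make BUILD_TYPE=metal build
```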
Thanks mate. The original issue was solved by relinking protobuf, but then it failed at another point for me after following the steps in #1197
Still haven't managed a successful install though, as I described in detail here
Always ends up like this:
Did the build succeed? It's OK if we get warnings during the build process.
No.
I've never managed to get the build past this point. It always just seems to stop there and never progress.
Built with make and saw the same error OP saw, which I worked around by:
BUILD_GRPC_FOR_BACKEND_LLAMA=on make backend/cpp/llama/grpc-server
And then rerunning the build target to complete compilation on an M2 chip running Fedora Asahi Remix.
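The two steps above can be sketched as one guarded script (it assumes you are inside a LocalAI checkout; the target and variable names are the ones quoted in this comment):

```shell
# Workaround sketch: build llama's grpc-server with gRPC compiled from source,
# side-stepping the system Protobuf/gRPC packages CMake could not find, then
# rerun the main build target. Assumes a LocalAI source checkout.
if [ -d backend/cpp/llama ]; then
  BUILD_GRPC_FOR_BACKEND_LLAMA=on make backend/cpp/llama/grpc-server
  # Rerun the main build target to finish compilation.
  make build
else
  echo "run this from the LocalAI source directory"
fi
```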
@NinjAiBot my output looks just like yours, and it's working for me—I just followed the next steps in Example: Build on mac to download ggml-gpt4all-j.bin and ask it how it was. Try it! Thanks @renzo4web for the brew link protobuf step which fixed the build for me.
Just used LM-Studio instead. It was the easiest way to spin up a server to chat with a model, which is what I needed to do.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.