
Installation problems on macOS

Open jasonkaplan79 opened this issue 1 year ago • 7 comments

I am trying to install this on my Mac. Here are my constraints:

  • Hardware: MacBook Pro, Apple M2 Max running macOS Sonoma (CPU only, no discrete GPU)
  • I do not want to use Docker or any Virtual Machines

I followed just these steps from https://localai.io/basics/build/index.html#example-build-on-mac:

  1. brew install abseil cmake go grpc protobuf protoc-gen-go protoc-gen-go-grpc python wget
  2. pip install --user grpcio-tools (or pip3 install --user grpcio-tools)
  3. git clone https://github.com/mudler/LocalAI
  4. cd LocalAI
  5. make build
  6. make build failed, so I tried:
  • brew install libomp
  • brew install gcc
  • export CC=gcc
  • export CXX=g++
  7. Reran make build, and it still failed with this output:
[100%] Linking CXX executable ../../bin/grpc-server
ld: warning: ignoring duplicate libraries: '../../common/libcommon.a', '../../src/libllama.a'
[100%] Built target grpc-server
[ 66%] Built target ggml-base
[100%] Built target ggml-metal
cp llama.cpp/build/bin/grpc-server .
cp -rfv backend/cpp/llama-fallback/grpc-server backend-assets/grpc/llama-cpp-fallback
backend/cpp/llama-fallback/grpc-server -> backend-assets/grpc/llama-cpp-fallback
CGO_LDFLAGS=" -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders" C_INCLUDE_PATH=/Users/jason/Code/ai/LocalAI/sources/go-llama.cpp LIBRARY_PATH=/Users/jason/Code/ai/LocalAI/sources/go-llama.cpp \
	go build -ldflags "-s -w -X "github.com/mudler/LocalAI/internal.Version=v2.23.0-136-g9c9359fc" -X "github.com/mudler/LocalAI/internal.Commit=9c9359fc9662648800b5d76c9f123f8b32b1e54e"" -tags "" -o backend-assets/grpc/llama-ggml ./backend/go/llm/llama-ggml/
# github.com/go-skynet/go-llama.cpp
binding.cpp:613:5: warning: deleting pointer to incomplete type 'llama_model' may cause undefined behavior [-Wdelete-incomplete]
sources/go-llama.cpp/llama.cpp/llama.h:70:12: note: forward declaration of 'llama_model'
# github.com/mudler/LocalAI/backend/go/llm/llama-ggml
ld: warning: ignoring duplicate libraries: '-lc++'
CGO_LDFLAGS=" -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders  -lggml" C_INCLUDE_PATH="/Users/jason/Code/ai/LocalAI/sources/whisper.cpp/include:/Users/jason/Code/ai/LocalAI/sources/whisper.cpp/ggml/include" LIBRARY_PATH=/Users/jason/Code/ai/LocalAI/sources/whisper.cpp \
	go build -ldflags "-s -w -X "github.com/mudler/LocalAI/internal.Version=v2.23.0-136-g9c9359fc" -X "github.com/mudler/LocalAI/internal.Commit=9c9359fc9662648800b5d76c9f123f8b32b1e54e"" -tags "" -o backend-assets/grpc/whisper ./backend/go/transcribe/whisper
# github.com/mudler/LocalAI/backend/go/transcribe/whisper
/opt/homebrew/Cellar/go/1.23.3/libexec/pkg/tool/darwin_arm64/link: running gcc failed: exit status 1
/usr/bin/gcc -arch arm64 -Wl,-S -Wl,-x -o $WORK/b001/exe/a.out /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/go.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000000.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000001.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000002.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000003.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000004.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000005.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000006.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000007.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000008.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000009.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000010.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000011.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000012.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000013.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000014.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000015.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000016.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000017.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000018.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000019.o /var/folders/29/4v06_6lx71s9bqycx42mlkf80000gn/T/go-link-4002305325/000020.o -lresolv -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -lggml -lwhisper -lm -lstdc++ -fopenmp -framework Accelerate -framework Metal -framework Foundation -framework 
CoreGraphics -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -lggml -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -lggml -framework CoreFoundation -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -lggml -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -lggml -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -lggml -framework CoreFoundation -framework IOKit -framework CoreFoundation -framework Security
clang: error: unsupported option '-fopenmp'

make: *** [backend-assets/grpc/whisper] Error 1

jasonkaplan79 avatar Dec 01 '24 21:12 jasonkaplan79

I'm running into the same issue, "[backend-assets/grpc/whisper] Error 1". I'm also on a Mac (Mac mini M4). I don't know what to make of it.

nonlinear avatar Dec 06 '24 01:12 nonlinear

Same problem here. I solved it with:

brew install llvm
brew install lld
make build

https://stackoverflow.com/questions/43555410/enable-openmp-support-in-clang-in-mac-os-x-sierra-mojave

ring2003 avatar Dec 08 '24 06:12 ring2003

> Same problem here. I solved it with:
>
> brew install llvm
> brew install lld
> make build
>
> https://stackoverflow.com/questions/43555410/enable-openmp-support-in-clang-in-mac-os-x-sierra-mojave

Still errors; you also need to export CC and CXX after that:

export CC=/opt/homebrew/opt/llvm/bin/clang
export CXX=/opt/homebrew/opt/llvm/bin/clang++

https://stackoverflow.com/a/60043467
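After exporting, it may help to confirm the shell is actually picking up the Homebrew LLVM rather than /usr/bin/clang. A small sketch (paths assume Homebrew's default Apple Silicon prefix /opt/homebrew):

```shell
# Point the build at Homebrew's LLVM and confirm the exports took effect
export CC=/opt/homebrew/opt/llvm/bin/clang
export CXX=/opt/homebrew/opt/llvm/bin/clang++
echo "CC=$CC"
echo "CXX=$CXX"
# "$CC" --version   # should report Homebrew clang, not Apple clang
```

Note that exports only last for the current shell session, so they must be re-run (or added to your shell profile) before each make build in a new terminal.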

xrce avatar Dec 18 '24 05:12 xrce

Still fails, even after all these steps:

brew install llvm
brew install lld
export CC=/opt/homebrew/opt/llvm/bin/clang
export CXX=/opt/homebrew/opt/llvm/bin/clang++
make build

This is the error I get:

/opt/homebrew/include/ggml.h:1587:35: note: 'ggml_conv_2d' declared here
 1587 |     GGML_API struct ggml_tensor * ggml_conv_2d(
      |                                   ^
/Users/jk/Code/ai/LocalAI/backend/cpp/llama-avx/llama.cpp/examples/grpc-server/clip.cpp:1008:42: error: use of undeclared identifier 'ggml_conv_2d_dw'; did you mean 'ggml_conv_2d'?
 1008 |             struct ggml_tensor * peg_0 = ggml_conv_2d_dw(ctx0, model.mm_model_peg_0_w, mlp_2, 1, 1, 1, 1, 1, 1);
      |                                          ^~~~~~~~~~~~~~~
      |                                          ggml_conv_2d
/opt/homebrew/include/ggml.h:1587:35: note: 'ggml_conv_2d' declared here
 1587 |     GGML_API struct ggml_tensor * ggml_conv_2d(
      |                                   ^
5 errors generated.
make[6]: *** [examples/grpc-server/CMakeFiles/myclip.dir/clip.cpp.o] Error 1
make[5]: *** [examples/grpc-server/CMakeFiles/myclip.dir/all] Error 2
make[4]: *** [examples/grpc-server/CMakeFiles/grpc-server.dir/rule] Error 2
make[3]: *** [grpc-server] Error 2
make[2]: *** [grpc-server] Error 2
make[1]: *** [build-llama-cpp-grpc-server] Error 2
make: *** [backend-assets/grpc/llama-cpp-avx] Error 2

jasonkaplan79 avatar Dec 23 '24 21:12 jasonkaplan79

I was initially getting the same error; the following sequence of commands helped:

brew install llvm
export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
brew install lld
export CC=/opt/homebrew/opt/llvm/bin/clang
export CXX=/opt/homebrew/opt/llvm/bin/clang++
make build

My system is: Mac Mini M2 Sonoma 14.7.2

vadimj avatar Dec 24 '24 08:12 vadimj

> I was initially getting the same error; the following sequence of commands helped:
>
> brew install llvm
> export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
> export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
> brew install lld
> export CC=/opt/homebrew/opt/llvm/bin/clang
> export CXX=/opt/homebrew/opt/llvm/bin/clang++
> make build
>
> My system is: Mac Mini M2 Sonoma 14.7.2

I followed this and built successfully. But when I run the ./local-ai run --p2p command, I get this error:

11:58AM INF env file found, loading environment variables from file envFile=.env
11:58AM INF Setting logging to info
11:58AM INF P2P mode enabled
11:58AM INF No token provided, generating one
11:58AM INF Generated Token:
not implemented
11:58AM INF To use the token, you can run the following command in another node or terminal:
export TOKEN="not implemented"
local-ai worker p2p-llama-cpp-rpc
11:58AM FTL Error running the application error="not implemented"
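The "not implemented" output here is most likely a compiled-in stub: LocalAI gates its P2P code behind a Go build tag, so a binary built without that tag prints exactly this. A hedged sketch (the p2p tag name is taken from LocalAI's build documentation; verify it against your checkout). This is a build-command fragment, so it has to be run inside the LocalAI repo:

```shell
# Rebuild with the p2p build tag so the P2P code paths are compiled in
# instead of the "not implemented" stubs (assumption: the tag is "p2p")
make GO_TAGS=p2p build
./local-ai run --p2p
```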

dbs-huynguyen avatar Jan 22 '25 03:01 dbs-huynguyen

> I was initially getting the same error; the following sequence of commands helped:
>
> brew install llvm
> export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
> export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
> brew install lld
> export CC=/opt/homebrew/opt/llvm/bin/clang
> export CXX=/opt/homebrew/opt/llvm/bin/clang++
> make build
>
> My system is: Mac Mini M2 Sonoma 14.7.2

That really helped me! Thanks a lot 🎉!

Dbuggerx avatar Mar 08 '25 16:03 Dbuggerx

> I was initially getting the same error; the following sequence of commands helped:
>
> brew install llvm
> export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
> export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
> brew install lld
> export CC=/opt/homebrew/opt/llvm/bin/clang
> export CXX=/opt/homebrew/opt/llvm/bin/clang++
> make build
>
> My system is: Mac Mini M2 Sonoma 14.7.2

I can confirm this fixed the issue for me (MacBook Pro M1 Pro)

davmacario avatar May 05 '25 14:05 davmacario

> I was initially getting the same error; the following sequence of commands helped:
>
> brew install llvm
> export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
> export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
> brew install lld
> export CC=/opt/homebrew/opt/llvm/bin/clang
> export CXX=/opt/homebrew/opt/llvm/bin/clang++
> make build
>
> My system is: Mac Mini M2 Sonoma 14.7.2

In addition to this, I needed rice:

go install github.com/GeertJohan/go.rice/rice@latest 
export PATH="$(go env GOPATH)/bin:$PATH"

To build with:

make GRPC_BACKENDS=backend-assets/grpc/llama-cpp BUILD_TYPE=metal build
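Since go install drops binaries into $(go env GOPATH)/bin, a quick sketch to confirm rice actually landed on PATH before running make (this is standard Go tooling behavior, not anything LocalAI-specific):

```shell
# go install places binaries in $(go env GOPATH)/bin; make sure that
# directory is on PATH, then check whether the rice binary is reachable
export PATH="$(go env GOPATH 2>/dev/null)/bin:$PATH"
command -v rice >/dev/null 2>&1 && echo "rice found" || echo "rice not on PATH yet"
```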

karlshea avatar May 18 '25 05:05 karlshea

I used a combination of two of the above hints:

brew install llvm
export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
brew install lld
export CC=/opt/homebrew/opt/llvm/bin/clang
export CXX=/opt/homebrew/opt/llvm/bin/clang++
go install github.com/GeertJohan/go.rice/rice@latest 
export PATH="$(go env GOPATH)/bin:$PATH"
make build

for a MacBook Air M2 running Sequoia 15.5

P.S.: of course, you may also run make build GO_TAGS="tts stablediffusion" or similar to activate any additional backends you need

rozek avatar Jul 20 '25 06:07 rozek

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Oct 19 '25 02:10 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Oct 25 '25 02:10 github-actions[bot]