
compilation fails for "examples/grpc-server"

Open stefangweichinger opened this issue 8 months ago • 33 comments

LocalAI version:

45370c212bbc379f65f2c77560958acc24877fba

Environment, CPU architecture, OS, and Version:

Linux fedora 6.5.6-300.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Oct 6 19:57:21 UTC 2023 x86_64 GNU/Linux

Describe the bug

After the failures with CUDA and Docker in https://github.com/go-skynet/LocalAI/issues/1178, I tried to compile and run LocalAI directly on the host: make BUILD_TYPE=cublas build

To Reproduce

make BUILD_TYPE=cublas build

Expected behavior

Successful build, binaries running with CUDA support

Logs

make -C go-llama BUILD_TYPE=cublas libbinding.a
make[1]: Entering directory '/home/sgw/LocalAI/go-llama'
I llama.cpp build info: 
I UNAME_S:  Linux
I UNAME_P:  unknown
I UNAME_M:  x86_64
I CFLAGS:   -I./llama.cpp -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -pthread -march=native -mtune=native
I CXXFLAGS: -I./llama.cpp -I. -I./llama.cpp/common -I./common -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -pthread
I CGO_LDFLAGS:  
I LDFLAGS:  
I BUILD_TYPE:  cublas
I CMAKE_ARGS:  -DLLAMA_AVX512=OFF -DLLAMA_CUBLAS=ON
I EXTRA_TARGETS:  llama.cpp/ggml-cuda.o
I CC:       cc (GCC) 13.2.1 20230918 (Red Hat 13.2.1-3)
I CXX:      g++ (GCC) 13.2.1 20230918 (Red Hat 13.2.1-3)

make[1]: 'libbinding.a' is up to date.
make[1]: Leaving directory '/home/sgw/LocalAI/go-llama'
LLAMA_VERSION=24ba3d829e31a6eda3fa1723f692608c2fa3adda make -C backend/cpp/llama grpc-server
make[1]: Entering directory '/home/sgw/LocalAI/backend/cpp/llama'
cd llama.cpp && mkdir -p build && cd build && cmake .. -DLLAMA_AVX512=OFF -DLLAMA_CUBLAS=ON && cmake --build . --config Release
-- cuBLAS found
-- Using CUDA architectures: 52;61;70
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Error at examples/grpc-server/CMakeLists.txt:7 (find_package):
  Could not find a package configuration file provided by "absl" with any of
  the following names:

    abslConfig.cmake
    absl-config.cmake

  Add the installation prefix of "absl" to CMAKE_PREFIX_PATH or set
  "absl_DIR" to a directory containing one of the above files.  If "absl"
  provides a separate development package or SDK, be sure it has been
  installed.


-- Configuring incomplete, errors occurred!
make[1]: *** [Makefile:43: grpc-server] Error 1
make[1]: Leaving directory '/home/sgw/LocalAI/backend/cpp/llama'

Additional context

I also tried CMAKE_ARGS="-DLLAMA_AVX512=OFF" make BUILD_TYPE=cublas build because my CPU doesn't support AVX512.

Maybe interesting:

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/13/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,m2,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-libstdcxx-backtrace --with-libstdcxx-zoneinfo=/usr/share/zoneinfo --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-13.2.1-20230918/obj-x86_64-redhat-linux/isl-install --enable-offload-targets=nvptx-none --without-cuda-driver --enable-offload-defaulted --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux --with-build-config=bootstrap-lto --enable-link-serialization=1
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 13.2.1 20230918 (Red Hat 13.2.1-3) (GCC)

But the error message looks more like something is missing than like it's the wrong gcc.

EDIT: searched for absl, found and installed it via sudo dnf install python3-absl-py.noarch ... doesn't help.
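
(Side note for Fedora readers: python3-absl-py only provides the Python bindings. The abslConfig.cmake file the error asks for ships with the C++ Abseil development headers, which on Fedora should be the abseil-cpp-devel package; an unverified guess at the command, not something tested in this thread:)

sudo dnf install abseil-cpp-devel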

stefangweichinger avatar Oct 19 '23 10:10 stefangweichinger

Try installing libabsl-dev. On Ubuntu I ran apt install libabsl-dev and it helped.

WilixLead avatar Oct 20 '23 10:10 WilixLead

I also got this bug following the same https://localai.io/howtos/easy-setup-docker-gpu/ ... unfortunately I didn't find this 'apt install libabsl-dev' tip earlier and am in the process of a new pull/build; I will give it a try assuming the issue persists. Could the package be missing from the Quay build?

nbollman avatar Oct 22 '23 14:10 nbollman

No luck with the apt install libabsl-dev command; I think the GPU installation instructions are broken for the moment.

nbollman avatar Oct 22 '23 15:10 nbollman

Pretty sure the bug stems from the instructions' image: quay.io/go-skynet/local-ai:master-cublas-cuda12. Was something migrated between packages? I'm attempting to follow the how-to at https://localai.io/howtos/easy-setup-docker-gpu/ using the image quay.io/go-skynet/local-ai:latest instead (it's using cublas/cuda during its build anyway now); I'll let you know how it goes.

nbollman avatar Oct 22 '23 15:10 nbollman

No, that was not the issue... This rabbit hole has me in a death spiral, depressed even. My ineptitude screams at every Docker recompose and compile. Docker has always been a weak spot in my IT brain; I know I shouldn't have had to download these packages this many times over again... installing with cublas... I've got to be missing an important memo here... Before I checked the CUDA boxes, the Docker build seemed to be installing the CUDA libraries on its own... I'm going to try a no-frills install and check whether CUDA acceleration works, then work within that working base package to try to get the CUDA libraries upgraded.

nbollman avatar Oct 22 '23 17:10 nbollman

@nbollman It's not just you - I'm also having the same problem attempting to compile the image with documented defaults.

cam-s-hunt avatar Oct 22 '23 23:10 cam-s-hunt

Same here. FYI: I got the same error when trying to install/build locally, so it is not docker/docker-compose specific.

EDIT: on the local build I was able to solve the absl error message, but after that almost the same error followed, this time for protobuf. I haven't been able to solve the latter so far.

mounta11n avatar Oct 22 '23 23:10 mounta11n

@mounta11n how did you resolve absl with local build?

cam-s-hunt avatar Oct 23 '23 00:10 cam-s-hunt

@nbollman I was able to make some progress by:

  1. Set REBUILD=false in .env
  2. Use local-ai:latest (I'm assuming "latest" is vanilla localai, I still want to try out master-cublas-cuda12-ffmpeg)

cam-s-hunt avatar Oct 23 '23 02:10 cam-s-hunt

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful :smile_cat: and I can provide generally good tips, or point to places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me, I don't support replying (yet)!

:warning::warning::warning::warning::warning:

The issue seems to be related to the missing package configuration file for "absl" when trying to build the grpc-server. The error message suggests that the package configuration file "abslConfig.cmake" or "absl-config.cmake" is not found.

One possible solution is to install the "absl" package and its configuration files. You can try installing it using the package manager of your Linux distribution. For example, on Fedora, you can use the following command to install it:

sudo dnf install python3-absl-py.noarch

If the package is already installed, you can try reinstalling it to ensure that the configuration files are installed correctly.

If the package is not available in your package manager, you can try building and installing it from source. You can download the source code from the official GitHub repository: https://github.com/abseil/abseil-cpp.
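
A minimal sketch of that from-source route, assuming a standard CMake install into /usr/local (the options and paths below are illustrative, not taken from this thread):

git clone --depth 1 https://github.com/abseil/abseil-cpp.git
cd abseil-cpp && mkdir -p build && cd build
# build a C++17, PIC-enabled Abseil so dependent projects pick up the same standard
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_STANDARD=17 -DABSL_PROPAGATE_CXX_STD=ON -DCMAKE_POSITION_INDEPENDENT_CODE=ON
cmake --build . -j"$(nproc)"
# installs abslConfig.cmake under /usr/local/lib*/cmake/absl so CMake's find_package can locate it
sudo cmake --install .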

After installing the "absl" package, try running the build command again to see if the issue is resolved.

Sources:

  • https://github.com/go-skynet/LocalAI/tree/master/.github/ISSUE_TEMPLATE/bug_report.md
  • https://localai.io/basics/build/index.html
  • https://localai.io/basics/getting_started/index.html

localai-bot avatar Oct 24 '23 12:10 localai-bot

~~Issue persists: macOS Sonoma 14, M2~~

Fixed by installing these dependencies:

brew install grpc protobuf abseil

yackermann avatar Oct 27 '23 03:10 yackermann

Workaround:

Instead of trying to compile and run LocalAI directly on the host, I utilized a prebuilt docker image: quay.io/go-skynet/local-ai:v1.30.0-cublas-cuda12. Here's the docker run command I used:

docker run -d --name api \
    --gpus '"device=0"' \
    --restart=always \
    -p 8080:8080 \
    -e GALLERIES='[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]' \
    -e DEBUG=true \
    -e MODELS_PATH=/models \
    -e THREADS=8 \
    -e BUILD_TYPE=cublas \
    -e REBUILD=true \
    -e CMAKE_ARGS="-DLLAMA_CUBLAS=on -DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF" \
    -e FORCE_CMAKE=1 \
    -v $PWD/models:/models \
    -t quay.io/go-skynet/local-ai:v1.30.0-cublas-cuda12 \
    /usr/bin/local-ai

Here's a breakdown of the notable changes:

  • Utilized the --gpus '"device=0"' option.
  • Set BUILD_TYPE=cublas and REBUILD=true environment variables.
  • Explicitly defined -DLLAMA_CUBLAS=on within CMAKE_ARGS.
  • Used FORCE_CMAKE=1 to ensure CMake processes run.

By utilizing the above Docker run command, I was able to get LocalAI running successfully with CUDA support.

I hope this helps! Let me know if you face any other challenges.

Rift7 avatar Oct 29 '23 14:10 Rift7

Hi there, I had the same issue. I fixed it by compiling gRPC locally and installing it into a local directory called ${HOME}/installed_packages. Then I passed a set of variables to CMake to let it find gRPC, absl, protobuf, etc. Then I ran:

export INSTALLED_PACKAGES="${HOME}/installed_packages/linux_x86_64"  

export INSTALLED_LIB_CMAKE="${INSTALLED_PACKAGES}/lib/cmake"  
export ADDED_CMAKE_ARGS="-Dabsl_DIR=${INSTALLED_LIB_CMAKE}/absl"
export ADDED_CMAKE_ARGS="${ADDED_CMAKE_ARGS} -DProtobuf_DIR=${INSTALLED_LIB_CMAKE}/protobuf"
export ADDED_CMAKE_ARGS="${ADDED_CMAKE_ARGS} -Dutf8_range_DIR=${INSTALLED_LIB_CMAKE}/utf8_range"
export ADDED_CMAKE_ARGS="${ADDED_CMAKE_ARGS} -DgRPC_DIR=${INSTALLED_LIB_CMAKE}/grpc"
export ADDED_CMAKE_ARGS="${ADDED_CMAKE_ARGS} -DCMAKE_CXX_STANDARD_INCLUDE_DIRECTORIES=${INSTALLED_PACKAGES}/include"

export _PROTOBUF_PROTOC=${INSTALLED_PACKAGES}/bin/protoc
export _GRPC_CPP_PLUGIN_EXECUTABLE=${INSTALLED_PACKAGES}/bin/grpc_cpp_plugin
export PATH=${PATH}:${INSTALLED_PACKAGES}/bin

CMAKE_ARGS="${ADDED_CMAKE_ARGS}" make build
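
For completeness, the gRPC build/install step that populates ${HOME}/installed_packages is not shown above; a hedged sketch of how it might look (the version tag and CMake options are assumptions, not taken from the original comment):

git clone --recurse-submodules -b v1.58.0 --depth 1 --shallow-submodules https://github.com/grpc/grpc
cd grpc && mkdir -p cmake/build && cd cmake/build
# install gRPC into the local prefix referenced by the exports above
# (with the default bundled dependencies, absl/protobuf should land in the same prefix)
cmake ../.. -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF -DCMAKE_INSTALL_PREFIX="${HOME}/installed_packages/linux_x86_64"
make -j"$(nproc)" && make install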

If it would be useful, I can create a separate script with all the commands and open a PR containing it.

diego-minguzzi avatar Oct 29 '23 15:10 diego-minguzzi

I am also now stuck on the protobuf error... has anyone solved it (not on a Mac)?

manuelkamp avatar Oct 30 '23 08:10 manuelkamp

Hi there,

I submitted PR #1232 to solve it and am waiting for feedback. @mudler, could you please review it? It is my first PR in this project, thank you.

diego-minguzzi avatar Oct 31 '23 00:10 diego-minguzzi

I have seen that the build now fails. As soon as I get home, I will try to understand why.

diego-minguzzi avatar Nov 02 '23 15:11 diego-minguzzi

The issue was fixed. With the code in the master branch, it is possible to build gRPC locally by running the command: BUILD_GRPC_FOR_BACKEND_LLAMA=ON make build

I think it is possible to close the issue.

diego-minguzzi avatar Nov 06 '23 20:11 diego-minguzzi

The issue was fixed. With the code in the master branch, it is possible to build gRPC locally by running the command: BUILD_GRPC_FOR_BACKEND_LLAMA=ON make build

I think it is possible to close the issue.

For clarification for other noobs like me: that means adding BUILD_GRPC_FOR_BACKEND_LLAMA=ON in the .env and not in CMAKE_ARGS or elsewhere

thiswillbeyourgithub avatar Nov 07 '23 16:11 thiswillbeyourgithub

The issue was fixed. With the code in the master branch, it is possible to build gRPC locally by running the command: BUILD_GRPC_FOR_BACKEND_LLAMA=ON make build I think it is possible to close the issue.

For clarification for other noobs like me: that means adding BUILD_GRPC_FOR_BACKEND_LLAMA=ON in the .env and not in CMAKE_ARGS or elsewhere

CMAKE_ARGS="-DLLAMA_AVX512=ON" BUILD_GRPC_FOR_BACKEND_LLAMA=ON make BUILD_TYPE=clblas build

results in an error here on the current master branch (commit 562ac62f59b55c831207a908f07144473c2c4599 (HEAD -> master, origin/master, origin/HEAD)):

gmake[4]: *** [examples/grpc-server/CMakeFiles/hw_grpc_proto.dir/build.make:89: examples/grpc-server/CMakeFiles/hw_grpc_proto.dir/backend.grpc.pb.cc.o] Error 1
gmake[4]: Leaving directory '/home/m/git/LocalAI/backend/cpp/llama/llama.cpp/build'
gmake[3]: *** [CMakeFiles/Makefile2:2492: examples/grpc-server/CMakeFiles/hw_grpc_proto.dir/all] Error 2
gmake[3]: Leaving directory '/home/m/git/LocalAI/backend/cpp/llama/llama.cpp/build'
gmake[2]: *** [Makefile:146: all] Error 2
gmake[2]: Leaving directory '/home/m/git/LocalAI/backend/cpp/llama/llama.cpp/build'
make[1]: *** [Makefile:43: grpc-server] Error 2
make[1]: Leaving directory '/home/m/git/LocalAI/backend/cpp/llama'
make: *** [Makefile:428: backend/cpp/llama/grpc-server] Error 2

markuman avatar Nov 08 '23 06:11 markuman

This error is reproducible in the official container for 1.40.0-ffmpeg at least.

I attach a screenshot:

Screenshot_20231108-150617_Termux

EchedelleLR avatar Nov 08 '23 14:11 EchedelleLR

@EchedeyLR can you share your .env ?

thiswillbeyourgithub avatar Nov 08 '23 15:11 thiswillbeyourgithub

It is exactly the default one at https://github.com/mudler/LocalAI/blob/v1.40.0/.env, with GO_TAGS and REBUILD uncommented.

EchedelleLR avatar Nov 09 '23 11:11 EchedelleLR

BUILD_GRPC_FOR_BACKEND_LLAMA=ON

If this is needed, why was it not included in the example .env file in the release? I also don't see it mentioned anywhere in the documentation, only in the issues.

EchedelleLR avatar Nov 09 '23 11:11 EchedelleLR

AFAIK it's not a fix but a workaround while the team figures out why this started happening.

Personally, here's what I needed to add at the end of my .env:

CMAKE_ARGS="-DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF -DLLAMA_CUBLAS=on"
BUILD_GRPC_FOR_BACKEND_LLAMA=ON
FORCE_CMAKE=1

(I also set REBUILD=true and BUILD_TYPE=cublas)

thiswillbeyourgithub avatar Nov 09 '23 12:11 thiswillbeyourgithub

Would this work in the container image?

I am also worried about these hard-coded CMAKE_ARGS. OpenBLAS has not built for me in go-llama for several months now; a version just stopped working, and I thought it was related to the instruction set, since everything has worked so far on another computer.

I thought this was auto-detected rather than hard-coded; this just makes it difficult for people to run it on not-exactly-modern computers (and no, I am not talking about something from 2008 or so, I mean even 2014).

EchedelleLR avatar Nov 09 '23 12:11 EchedelleLR

CMAKE_ARGS="-DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF" BUILD_GRPC_FOR_BACKEND_LLAMA=ON make BUILD_TYPE=clblas build is working in v1.30.0, but not in v1.40.0 nor on current master (274ace289823a8bacb7b4987b5c961b62d5eee99).

markuman avatar Nov 09 '23 14:11 markuman

BUILD_GRPC_FOR_BACKEND_LLAMA=ON

If this is needed, why was not included in the example .env file in the release? I also don't see mention of it outside the issues in the clear documentation.

Is this needed in the docs? Please ping me if so

lunamidori5 avatar Nov 10 '23 18:11 lunamidori5

We still need to confirm that this is the issue. I have yet to try that solution; another user reported it as not working.

EchedelleLR avatar Nov 10 '23 19:11 EchedelleLR

My problem was slightly different in that it was giving me errors for absl::check, absl::log, etc.

It turns out Conda installs its own libraries and binaries, which also meant it had its own incompatible version of absl.

I completely uninstalled Anaconda3, then I cloned grpc with git clone --recurse-submodules -b v1.58.0 --depth 1 --shallow-submodules https://github.com/grpc/grpc which I got from the Dockerfile

I built and installed abseil-cpp, protobuf (inside grpc/third_party) and grpc from the folder above into /usr/local; after all the libraries were installed, I built LocalAI as usual.
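
A rough sketch of those three installs (prefix /usr/local; the exact CMake options used are not stated in the comment, so these are assumptions; the provider flags tell protobuf/gRPC to use the already-installed packages instead of their bundled copies):

cd grpc/third_party/abseil-cpp && cmake -B build -DCMAKE_BUILD_TYPE=Release -DABSL_PROPAGATE_CXX_STD=ON && cmake --build build -j && sudo cmake --install build
cd ../protobuf && cmake -B build -DCMAKE_BUILD_TYPE=Release -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_ABSL_PROVIDER=package && cmake --build build -j && sudo cmake --install build
cd ../.. && cmake -B build -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF -DgRPC_ABSL_PROVIDER=package -DgRPC_PROTOBUF_PROVIDER=package && cmake --build build -j && sudo cmake --install build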

Here are my build args for LocalAI: CMAKE_ARGS="-DLLAMA_CUBLAS=ON -DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF" make BUILD_TYPE=cublas GO_TAGS=stablediffusion,tts CUDACXX=/usr/local/cuda/bin/nvcc build

Building as I type this; it seems to be OK so far.

Ubuntu Server 22.04, Xeon X5650

SlackinJack avatar Nov 12 '23 09:11 SlackinJack

Just a little report: coming from #1386, using BUILD_GRPC_FOR_BACKEND_LLAMA=ON for a local compile did the trick. CMake did not find my installed libproto* packages from Debian.

A notable comment there from @B4ckslash:

AFAIK this is because not every distribution (Fedora, Debian and derivatives among them) builds Protobuf with CMake, which leads to the .cmake files missing for those distros, so CMake cannot find them when building llama.cpp.
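
A quick way to check whether a distro's protobuf actually ships those CMake package files (illustrative commands; the package names assume Debian/Ubuntu and Fedora respectively):

dpkg -L libprotobuf-dev | grep -i cmake    # Debian/Ubuntu
rpm -ql protobuf-devel | grep -i cmake     # Fedora

If nothing is printed, config-mode find_package cannot succeed with the distro package, which is exactly what BUILD_GRPC_FOR_BACKEND_LLAMA=ON works around by building gRPC and its dependencies locally.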

dionysius avatar Dec 11 '23 13:12 dionysius