
CMake Error at llama.cpp.cmake:320 (add_library)

Open · stevencoveta opened this issue 1 year ago · 1 comment

System Info

linux 22

Information

  • [ ] The official example notebooks/scripts
  • [ ] My own modified scripts

Related Components

  • [X] backend
  • [ ] bindings
  • [ ] python-bindings
  • [ ] chat-ui
  • [ ] models
  • [ ] circleci
  • [ ] docker
  • [ ] api

Reproduction

-- The CXX compiler identification is GNU 9.4.0
-- The C compiler identification is GNU 9.4.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Interprocedural optimization support detected
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Configuring ggml implementation target llama-mainline-default in /content/drive/MyDrive/Colab Notebooks/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring ggml implementation target llama-230511-default in /content/drive/MyDrive/Colab Notebooks/gpt4all/gpt4all-backend/llama.cpp-230511
-- x86 detected
-- Configuring ggml implementation target llama-230519-default in /content/drive/MyDrive/Colab Notebooks/gpt4all/gpt4all-backend/llama.cpp-230519
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-default
-- Configuring model implementation target llamamodel-230519-default
-- Configuring model implementation target llamamodel-230511-default
-- Configuring model implementation target gptj-default
-- Configuring model implementation target mpt-default
-- Configuring ggml implementation target llama-mainline-avxonly in /content/drive/MyDrive/Colab Notebooks/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring ggml implementation target llama-230511-avxonly in /content/drive/MyDrive/Colab Notebooks/gpt4all/gpt4all-backend/llama.cpp-230511
-- x86 detected
-- Configuring ggml implementation target llama-230519-avxonly in /content/drive/MyDrive/Colab Notebooks/gpt4all/gpt4all-backend/llama.cpp-230519
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-avxonly
-- Configuring model implementation target llamamodel-230519-avxonly
-- Configuring model implementation target llamamodel-230511-avxonly
-- Configuring model implementation target gptj-avxonly
-- Configuring model implementation target mpt-avxonly
-- Configuring done
CMake Error at llama.cpp.cmake:320 (add_library): Cannot find source file:

llama.cpp-mainline/ggml.c

Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
Call Stack (most recent call first): CMakeLists.txt:58 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): Cannot find source file:

llama.cpp-mainline/llama.cpp

Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
Call Stack (most recent call first): CMakeLists.txt:58 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): Cannot find source file:

llama.cpp-230511/ggml.c

Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
Call Stack (most recent call first): CMakeLists.txt:59 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): Cannot find source file:

llama.cpp-230511/llama.cpp

Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
Call Stack (most recent call first): CMakeLists.txt:59 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): Cannot find source file:

llama.cpp-230519/ggml.c

Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
Call Stack (most recent call first): CMakeLists.txt:60 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): Cannot find source file:

llama.cpp-230519/llama.cpp

Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
Call Stack (most recent call first): CMakeLists.txt:60 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): No SOURCES given to target: ggml-mainline-default
Call Stack (most recent call first): CMakeLists.txt:58 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): No SOURCES given to target: llama-mainline-default
Call Stack (most recent call first): CMakeLists.txt:58 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): No SOURCES given to target: ggml-230511-default
Call Stack (most recent call first): CMakeLists.txt:59 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): No SOURCES given to target: llama-230511-default
Call Stack (most recent call first): CMakeLists.txt:59 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): No SOURCES given to target: ggml-230519-default
Call Stack (most recent call first): CMakeLists.txt:60 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): No SOURCES given to target: llama-230519-default
Call Stack (most recent call first): CMakeLists.txt:60 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): No SOURCES given to target: ggml-mainline-avxonly
Call Stack (most recent call first): CMakeLists.txt:58 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): No SOURCES given to target: llama-mainline-avxonly
Call Stack (most recent call first): CMakeLists.txt:58 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): No SOURCES given to target: ggml-230511-avxonly
Call Stack (most recent call first): CMakeLists.txt:59 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): No SOURCES given to target: llama-230511-avxonly
Call Stack (most recent call first): CMakeLists.txt:59 (include_ggml)

CMake Error at llama.cpp.cmake:320 (add_library): No SOURCES given to target: ggml-230519-avxonly
Call Stack (most recent call first): CMakeLists.txt:60 (include_ggml)

CMake Error at llama.cpp.cmake:341 (add_library): No SOURCES given to target: llama-230519-avxonly
Call Stack (most recent call first): CMakeLists.txt:60 (include_ggml)

CMake Generate step failed. Build files cannot be regenerated correctly.
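These "Cannot find source file" and "No SOURCES given to target" errors all point at the llama.cpp submodule directories under gpt4all-backend being empty, so CMake has nothing to compile for the targets it declares. Whether the submodules were actually fetched can be checked with `git submodule status` (a diagnostic sketch; run it from your gpt4all checkout):

```shell
# Run from the root of the gpt4all checkout (path assumed).
# Uninitialized submodules are printed with a leading "-";
# a properly fetched submodule shows a plain commit hash.
git submodule status
```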

Expected behavior

The configure step should find the C/C++ source files and generate the build files successfully.

stevencoveta avatar Jun 01 '23 17:06 stevencoveta

You need to clone the submodules. Run `git submodule update --init`.
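A sketch of both ways to get the submodules in place (the repository URL is the official nomic-ai/gpt4all one; adjust if you build from a fork):

```shell
# Option 1: fetch submodules inside an existing checkout,
# including nested ones, then re-run the CMake configure step.
git submodule update --init --recursive

# Option 2: clone with submodules from the start.
git clone --recurse-submodules https://github.com/nomic-ai/gpt4all.git
```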

apage43 avatar Jun 01 '23 17:06 apage43

Please always feel free to open more issues if you have anything else.

niansa avatar Aug 15 '23 12:08 niansa