Compile bug: iOS Swift Xcode build error when upgrading to "llama : use cmake for swift build"
Git commit
$ git rev-parse HEAD
43ed389a3f102517e6f7d5620d8e451e88afbf27
Operating systems
Mac
GGML backends
Metal
Problem description & steps to reproduce
iOS Swift Xcode build error after upgrading to
- https://github.com/ggerganov/llama.cpp/pull/10525
Before the upgrade, the code compiled successfully. After the upgrade, it throws compilation errors like "Cannot find type 'xxx' in scope."
First Bad Commit
43ed389a3f102517e6f7d5620d8e451e88afbf27
Relevant log output
/ios/llama.cpp.swift/LibLlama.swift:8:39 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:37 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:56 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:76 Cannot find type 'llama_pos' in scope
/ios/llama.cpp.swift/LibLlama.swift:12:99 Cannot find type 'llama_seq_id' in scope
/ios/llama.cpp.swift/LibLlama.swift:27:48 Cannot find type 'llama_sampler' in scope
/ios/llama.cpp.swift/LibLlama.swift:28:24 Cannot find type 'llama_batch' in scope
/ios/llama.cpp.swift/LibLlama.swift:29:31 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:44:22 Cannot find 'llama_batch_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:46:23 Cannot find 'llama_sampler_chain_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:47:25 Cannot find 'llama_sampler_chain_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:48:9 Cannot find 'llama_sampler_chain_add' in scope
/ios/llama.cpp.swift/LibLlama.swift:48:48 Cannot find 'llama_sampler_init_temp' in scope
/ios/llama.cpp.swift/LibLlama.swift:49:9 Cannot find 'llama_sampler_chain_add' in scope
/ios/llama.cpp.swift/LibLlama.swift:49:48 Cannot find 'llama_sampler_init_dist' in scope
/ios/llama.cpp.swift/LibLlama.swift:53:9 Cannot find 'llama_sampler_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:54:9 Cannot find 'llama_batch_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:55:9 Cannot find 'llama_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:56:9 Cannot find 'llama_free_model' in scope
/ios/llama.cpp.swift/LibLlama.swift:57:9 Cannot find 'llama_backend_free' in scope
/ios/llama.cpp.swift/LibLlama.swift:61:9 Cannot find 'llama_backend_init' in scope
/ios/llama.cpp.swift/LibLlama.swift:62:28 Cannot find 'llama_model_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:68:21 Cannot find 'llama_load_model_from_file' in scope
/ios/llama.cpp.swift/LibLlama.swift:77:26 Cannot find 'llama_context_default_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:82:23 Cannot find 'llama_new_context_with_model' in scope
/ios/llama.cpp.swift/LibLlama.swift:100:22 Cannot find 'llama_model_desc' in scope
/ios/llama.cpp.swift/LibLlama.swift:121:21 Cannot find 'llama_n_ctx' in scope
/ios/llama.cpp.swift/LibLlama.swift:142:12 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:150:27 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:152:24 Cannot find 'llama_sampler_sample' in scope
/ios/llama.cpp.swift/LibLlama.swift:154:12 Cannot find 'llama_token_is_eog' in scope
/ios/llama.cpp.swift/LibLlama.swift:185:12 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:211:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:213:30 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:215:16 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:218:13 Cannot find 'llama_synchronize' in scope
/ios/llama.cpp.swift/LibLlama.swift:220:28 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:224:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:226:30 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:235:20 Cannot find 'llama_decode' in scope
/ios/llama.cpp.swift/LibLlama.swift:238:17 Cannot find 'llama_synchronize' in scope
/ios/llama.cpp.swift/LibLlama.swift:241:28 Cannot find 'ggml_time_us' in scope
/ios/llama.cpp.swift/LibLlama.swift:243:13 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:245:24 No exact matches in call to initializer
/ios/llama.cpp.swift/LibLlama.swift:246:24 No exact matches in call to initializer
/ios/llama.cpp.swift/LibLlama.swift:254:32 Cannot convert value of type 'Duration' to expected argument type 'Double'
/ios/llama.cpp.swift/LibLlama.swift:255:32 Cannot convert value of type 'Duration' to expected argument type 'Double'
/ios/llama.cpp.swift/LibLlama.swift:272:64 Cannot find 'llama_model_size' in scope
/ios/llama.cpp.swift/LibLlama.swift:273:62 Cannot find 'llama_model_n_params' in scope
/ios/llama.cpp.swift/LibLlama.swift:293:9 Cannot find 'llama_kv_cache_clear' in scope
/ios/llama.cpp.swift/LibLlama.swift:296:60 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:299:43 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:300:26 Cannot find 'llama_tokenize' in scope
/ios/llama.cpp.swift/LibLlama.swift:302:27 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:313:40 Cannot find type 'llama_token' in scope
/ios/llama.cpp.swift/LibLlama.swift:319:23 Cannot find 'llama_token_to_piece' in scope
/ios/llama.cpp.swift/LibLlama.swift:327:30 Cannot find 'llama_token_to_piece' in scope
/ios/llama.cpp.swift/LibLlama.swift:328:33 Generic parameter 'Element' could not be inferred
~/Library/Developer/Xcode/DerivedData/Runner-efnwjojzxwrmmpfdjskgbtmftvem/SourcePackages/checkouts/llama.cpp/Sources/llama/llama.h:3:10 'llama.h' file not found with <angled> include; use "quotes" instead
same issue
Can confirm, same issue
The way it works now is that you need to build llama.cpp with cmake, and then install it using cmake --install. This should allow swift to find the llama.cpp library. See the way the CI builds the swift example:
https://github.com/ggerganov/llama.cpp/blob/235f6e14bf0ed0211c51aeff14139038ae1000aa/.github/workflows/build.yml#L573-L592
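For reference, the steps in that workflow boil down to roughly the following (condensed from the linked file; run from a llama.cpp checkout):

mkdir build
cd build
cmake -G Xcode .. \
    -DGGML_METAL_USE_BF16=ON \
    -DGGML_METAL_EMBED_LIBRARY=ON \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_SERVER=OFF \
    -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
cmake --build . --config Release -j $(sysctl -n hw.logicalcpu)
# installs headers and libraries into the system paths (/usr/local by default)
sudo cmake --install . --config Release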
First of all, thank you for providing some clarification. I don't usually use cmake, so I am not familiar with the build process, but the project still exposes a Package.swift file which currently does not seem to work (even the example SwiftUI projects are broken).
Previously, when developing for iOS and macOS, we could point Xcode to the llama.cpp Swift package and it would "just work", which was pretty nice. If there are additional steps to be done now, can we have some additional documentation around the process?
@pgorzelany Doing what the CI workflows do (see slaren's comment) should work.
The CI workflows install the llama.cpp binaries into the default system paths, so your Swift project will find them automatically. However, you might not always want to do that. Instead, you can build different variants of the binaries (e.g. for iOS, tvOS, macOS, etc.) and install them into custom paths using CMAKE_INSTALL_PREFIX. After that, you can point your project to that install location by updating the Build Settings in Xcode. Here is how I configured the llama.swiftui example on my machine:
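In script form, that flow looks roughly like this (a sketch, not the exact configuration; the install prefix and the iOS flag are illustrative):

# build a variant for iOS and install it into a custom prefix
cmake -B build-ios -G Xcode \
    -DCMAKE_SYSTEM_NAME=iOS \
    -DCMAKE_INSTALL_PREFIX="$HOME/llama-install/ios"
cmake --build build-ios --config Release
cmake --install build-ios --config Release
# then point HEADER_SEARCH_PATHS at $HOME/llama-install/ios/include and
# LIBRARY_SEARCH_PATHS at $HOME/llama-install/ios/lib in Xcode's Build Settings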
The process is a bit more involved than before, but it is more flexible and much easier to maintain. It would be useful to have step-by-step instructions added to the example, but I don't have much experience working with Xcode (there is stuff like code signing, development teams, etc.), so I am hoping that people who are familiar will contribute and explain how to build a project correctly.
So atm, if you are looking for a point-and-click solution - there isn't one yet. You will need to understand how CMake works and start using it.
Thank you. Once I understand how to set it up properly, I will try to contribute some documentation around it. This project is used in multiple iOS and macOS apps, and it was very convenient to use via the Package.swift file; maybe there is a way to modify the Package.swift so it works again.
Hi @ggerganov, after following the steps in the CI (the CMake build and install) and updating the search paths to /usr/local/include and /usr/local/lib, I am still getting different "Undefined symbols" errors. Do you have any suggestions?
After running xcodebuild -scheme llama-Package -destination "generic/platform=macOS" and using the Swift package in the project, the app builds, but it fails at runtime with a "Library not loaded" error:

dyld[61825]: Library not loaded: @rpath/libggml.dylib

For the iOS build, I get an error like "Building for 'iOS-simulator', but linking in dylib (/usr/local/lib/libggml.dylib) built for 'macOS'" while building.
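A quick way to confirm the platform mismatch (a diagnostic sketch; the paths assume the default install location):

# show which architectures the installed dylib contains
lipo -info /usr/local/lib/libggml.dylib
# show which platform it was built for (look for LC_BUILD_VERSION / platform)
otool -l /usr/local/lib/libggml.dylib | grep -A4 LC_BUILD_VERSION
# a dylib built for macOS cannot be linked into an iOS or iOS-simulator target;
# a separate build per platform (e.g. with -DCMAKE_SYSTEM_NAME=iOS) is needed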
Here is the script I am using to update and build:
if [ ! -d "llama.cpp" ]; then
    git clone https://github.com/ggerganov/llama.cpp
    cd ./llama.cpp
else
    cd ./llama.cpp
    git pull
fi

rm -rf build
mkdir build
cd build

cmake -G Xcode .. \
    -DGGML_METAL_USE_BF16=ON \
    -DGGML_METAL_EMBED_LIBRARY=ON \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_SERVER=OFF \
    -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
cmake --build . --config Release -j $(sysctl -n hw.logicalcpu)
sudo cmake --install . --config Release

# build for swift package
cd ..
xcodebuild -scheme llama-Package \
    -destination "generic/platform=macOS" \
    -destination "generic/platform=iOS" \
    clean build
Then I use the package in the Xcode project. Please let me know if I missed anything.
me too
Was anyone able to solve this issue? I have an iOS project using llama.cpp and can't update to the latest version because of this. Any help appreciated!
The way it works now is that you need to build llama.cpp with cmake, and then install it using cmake --install. This should allow swift to find the llama.cpp library. See the way the CI builds the swift example: .github/workflows/build.yml, lines 573 to 592 at 235f6e1:

- name: Build llama.cpp with CMake
  id: cmake_build
  run: |
    sysctl -a
    mkdir build
    cd build
    cmake -G Xcode .. \
      -DGGML_METAL_USE_BF16=ON \
      -DGGML_METAL_EMBED_LIBRARY=ON \
      -DLLAMA_BUILD_EXAMPLES=OFF \
      -DLLAMA_BUILD_TESTS=OFF \
      -DLLAMA_BUILD_SERVER=OFF \
      -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
    cmake --build . --config Release -j $(sysctl -n hw.logicalcpu)
    sudo cmake --install . --config Release
- name: xcodebuild for swift package
  id: xcodebuild
  run: |
    xcodebuild -scheme llama-Package -destination "${{ matrix.destination }}"
Hi @slaren, thank you for the response. The issue is that we are not able to use it as a system library on iOS, since we cannot install llama on iOS. On macOS it may work, but we need to somehow embed llama into the iOS app bundle if we want to ship it. Could you advise some possible ways around it?
The issue is that we are not able to use it as a system library on iOS, since we cannot install llama on iOS. On macOS it may work, but we need to somehow embed llama into the iOS app bundle if we want to ship it.

Could you provide a reference that shipping libraries on iOS is not possible? It seems hard to imagine that there is such a limitation. There must be some workaround. I just don't have a lot of experience and cannot suggest anything atm, but I doubt there is no way to fix this.
Sorry for the drive-by, but my understanding is as follows; maybe it can provide some additional insights to avoid misunderstandings here:
The blocking issue with the build system changes is that, while iOS does support linking against dynamic libraries, it only supports that through the use of Frameworks. I.e., you are not allowed to ship a .dylib file as part of your app bundle on iOS, but you are allowed to ship essentially the same, also dynamically linked, code in a .framework bundle. On macOS, this limitation does not exist; it allows you to ship .dylib files as part of your app bundle.
So I think what needs to be done would be to add options to llama.cpp's CMake build system to have it generate .framework bundles instead of .dylib files. I have no experience with CMake, so I can't provide more details on how to accomplish this, but I hope this at least provides a starting point for investigation.
ChatGPT conversation with extra information (the information about frameworks sounds correct to me, but as always take this with a grain of salt, and I have no idea whether these CMake instructions would work): https://chatgpt.com/share/679ca58b-c498-8008-b9f7-a7527b5fd030
Thanks for the investigation. Maybe it's possible to build llama as a static library and embed that in the iOS app?
@MrMage To add to that, from an earlier discussion, CMake should support building frameworks through CMAKE_FRAMEWORK. So definitely look into this as well.
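As a rough, untested sketch of that idea (CMAKE_FRAMEWORK needs a reasonably recent CMake, and the other flags here are assumptions):

# ask CMake to produce .framework bundles instead of bare .dylib files
cmake -B build-ios -G Xcode \
    -DCMAKE_SYSTEM_NAME=iOS \
    -DBUILD_SHARED_LIBS=ON \
    -DCMAKE_FRAMEWORK=ON
cmake --build build-ios --config Release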
The issue is that we are not able to use it as a system library on iOS, since we cannot install llama on iOS. On macOS it may work, but we need to somehow embed llama into the iOS app bundle if we want to ship it.

Could you provide a reference that shipping libraries on iOS is not possible?
Here are Apple docs describing the use of a systemLibrary target in a Swift package.
The way I understand it, if you expose llama.cpp as a systemLibrary, it has to actually be installed on the system. This is possible on macOS, but you can't just install system libraries on iOS. So I am just saying the current Package.swift setup feels incorrect.
You can ship dynamic libraries as Frameworks, as discussed above, but I am not sure if there is a way to expose them as a Swift package.
One possible route forward would be to build llama using CMake for all Apple platforms separately and then bundle that up in an XCFramework and then wrap that into a Swift package as a binaryTarget.
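In outline, that route could look like this (an untested sketch; the library paths and names below are placeholders, and in practice the ggml static libraries would need to be bundled or merged as well, e.g. with libtool):

# static builds for each platform
cmake -B build-ios -G Xcode -DCMAKE_SYSTEM_NAME=iOS -DBUILD_SHARED_LIBS=OFF
cmake --build build-ios --config Release
cmake -B build-macos -G Xcode -DBUILD_SHARED_LIBS=OFF
cmake --build build-macos --config Release
# bundle the per-platform libraries into a single XCFramework
xcodebuild -create-xcframework \
    -library build-ios/src/Release-iphoneos/libllama.a -headers include \
    -library build-macos/src/Release/libllama.a -headers include \
    -output llama.xcframework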
To add to that, from an earlier discussion (https://github.com/ggerganov/llama.cpp/issues/11113#issuecomment-2574061157), CMake should support building frameworks through CMAKE_FRAMEWORK. So definitely look into this as well.
For what it's worth, I am only using llama.cpp for my macOS app, where dylibs are allowed, so I am personally not affected by this issue; I just figured I'd provide some insight. That being said, I remember trying the CMAKE_FRAMEWORK option when building llama.cpp, and it did not work for me (and I've seen others report the same here).
The way I understand it, if you expose llama.cpp as a systemLibrary, it has to actually be installed on the system. This is possible on macOS, but you can't just install system libraries on iOS. So I am just saying the current Package.swift setup feels incorrect.
Now, I can't expect my Mac app's users to install llama.cpp on their own, but I found a way around this: I still add the llama.cpp Swift package to my macOS app, but I also add references to the built dylibs to my app's target, telling Xcode to copy these dylibs into the app's Frameworks folder upon build, plus the build settings from above that ensure Xcode also looks for the same copies of these libraries when e.g. referencing header files.
That way, I can have SwiftPM believe that it's using the system libraries, but in reality ship those libraries as part of the app bundle.
I could imagine that a similar approach might work for iOS, but only if one can get llama.cpp to emit frameworks instead of dylibs. That being said, I could also imagine that building static versions of these libraries might work, though I think the integration with SwiftPM would be more complex then. Similarly, the approach of building an XCFramework would probably be ideal, if one can get llama.cpp's build process to emit one and Xcode can be convinced to use it without issues. I guess at that point, one would simply drop the reference to the llama.cpp "system library" package altogether.
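For illustration, the copy step amounts to something like this (a sketch; the app name and the set of libraries are placeholders):

mkdir -p MyApp.app/Contents/Frameworks
cp /usr/local/lib/libllama.dylib /usr/local/lib/libggml*.dylib MyApp.app/Contents/Frameworks/
# make the executable search next to itself for the bundled libraries
install_name_tool -add_rpath "@executable_path/../Frameworks" MyApp.app/Contents/MacOS/MyApp
# re-sign after modifying the bundle
codesign --force --sign - MyApp.app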
I tried to make the XCFramework approach work for several hours but wasn’t able to. In the end I forked the project, reverted the Package.swift changes and adjusted it for the latest project structure. It now builds for iOS as before so it’s a potential workaround.
Good to see more discussion on this topic!
I remember reading @philippzagar used the XCFramework approach a while ago. Their fork is quite far behind and I'm not familiar enough with the ecosystem to bring it up to date but maybe it'll help y'all.
@pgorzelany any chance you can share your workaround?
Hi @hnipps, here is my approach, which is similar to what @pgorzelany said: in my case I created a new SPM package and packaged the llama.cpp sources based on the old version of llama.cpp's Package.swift file (see: Old version of llama.cpp Package.swift).
I think you can skip the step of creating a new package and just put the Package.swift code that I share next into llama.cpp's own Package.swift file.
Here is the structure of my package:

- LlamaKit
  - Package.swift
  - Sources
    - llama (llama.cpp repo source code)
    - LlamaKit
Here is the code of Package.swift.
// swift-tools-version:6.0
// (tools version assumed; the original post omitted this header, and the
// .macOS(.v15)/.iOS(.v18) platforms below require a recent toolchain)
import PackageDescription

var sources = [
    "src/llama.cpp",
    "src/llama-adapter.cpp",
    "src/llama-arch.cpp",
    "src/llama-chat.cpp",
    "src/llama-batch.cpp",
    "src/llama-context.cpp",
    "src/llama-cparams.cpp",
    "src/llama-grammar.cpp",
    "src/llama-hparams.cpp",
    "src/llama-impl.cpp",
    "src/llama-kv-cache.cpp",
    "src/llama-mmap.cpp",
    "src/llama-model.cpp",
    "src/llama-model-loader.cpp",
    "src/llama-quant.cpp",
    "src/llama-sampling.cpp",
    "src/llama-vocab.cpp",
    "src/unicode.cpp",
    "src/unicode-data.cpp",
    "ggml/src/ggml.c",
    "ggml/src/ggml-alloc.c",
    "ggml/src/ggml-backend.cpp",
    "ggml/src/ggml-backend-reg.cpp",
    "ggml/src/ggml-quants.c",
    "ggml/src/ggml-threading.cpp",
    "ggml/src/gguf.cpp",
    "ggml/src/ggml-cpu/ggml-cpu.c",
    "ggml/src/ggml-cpu/ggml-cpu.cpp",
    "ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp",
    "ggml/src/ggml-cpu/ggml-cpu-hbm.cpp",
    "ggml/src/ggml-cpu/ggml-cpu-quants.c",
    "ggml/src/ggml-cpu/ggml-cpu-traits.cpp"
]

var resources: [Resource] = []
var linkerSettings: [LinkerSetting] = []
var cSettings: [CSetting] = [
    .unsafeFlags(["-Wno-shorten-64-to-32", "-O3", "-DNDEBUG"]),
    .unsafeFlags(["-fno-objc-arc"]),
    .headerSearchPath("ggml/src"),
    .headerSearchPath("ggml/src/ggml-cpu"),
    // NOTE: NEW_LAPACK requires iOS 16.4+.
    // We should consider adding this in the future when we drop support for iOS 14
    // (ref: https://developer.apple.com/documentation/accelerate/1513264-cblas_sgemm?language=objc)
    // .define("ACCELERATE_NEW_LAPACK"),
    // .define("ACCELERATE_LAPACK_ILP64")
    .define("GGML_USE_CPU"),
]

#if canImport(Darwin)
sources.append("ggml/src/ggml-common.h")
sources.append("ggml/src/ggml-metal/ggml-metal.m")
resources.append(.process("ggml/src/ggml-metal/ggml-metal.metal"))
linkerSettings.append(.linkedFramework("Accelerate"))
cSettings.append(
    contentsOf: [
        .define("GGML_USE_ACCELERATE"),
        .define("GGML_USE_METAL"),
    ]
)
#endif

let package = Package(
    name: "LlamaKit",
    platforms: [
        .macOS(.v15),
        .iOS(.v18)
    ],
    products: [
        .library(
            name: "LlamaKit",
            targets: ["LlamaKit"]),
    ],
    dependencies: [],
    targets: [
        .target(
            name: "LlamaKit",
            dependencies: ["llama"],
            path: "Sources/LlamaKit",
            swiftSettings: [
                .unsafeFlags(["-Onone"], .when(configuration: .release))
            ]
        ),
        .target(
            name: "llama",
            path: "Sources/llama",
            exclude: [
                "build",
                "cmake",
                "examples",
                "scripts",
                "models",
                "tests",
                "CMakeLists.txt",
                "Makefile",
                "ggml/src/ggml-metal/ggml-metal.metal"
            ],
            sources: sources,
            resources: resources,
            publicHeadersPath: "spm-headers",
            cSettings: cSettings,
            // Disable optimization because the model generates random responses in release mode.
            swiftSettings: [
                .unsafeFlags(["-Onone"], .when(configuration: .release))
            ],
            linkerSettings: linkerSettings
        ),
        .testTarget(
            name: "LlamaKitTests",
            dependencies: ["LlamaKit"]
        )
    ],
    cxxLanguageStandard: .cxx17
)
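To check that the package compiles, a plain host build from the LlamaKit directory should be enough:

cd LlamaKit
swift build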
Thank you @yhondri! Are you going to create a PR to submit your Package.swift?
Hi @Animaxx, I could do it, but I believe the team wouldn’t accept it due to the new way of building and delivering the library with CMake. Right now, I think the idea is to make the build process work with CMake.
@yhondri thanks for your solution, but I think it should also add .headerSearchPath("ggml/include"), otherwise it will show a "gguf.h file not found" error.
What's next, any solution?
Could someone take a look at the https://github.com/allenai/OLMoE.swift project, see how they have integrated llama.cpp into the iOS app and provide a short summary here?
Could someone take a look at the allenai/OLMoE.swift project, see how they have integrated llama.cpp into the iOS app and provide a short summary here?
They are using commit cc98896db858df7aa40d0e16a505883ef196a482 of llama.cpp (see their Xcode project file), which predates the CMake-based packaging changes.
To make the SwiftUI demo work again, this is how I patched Package.swift:
(This is without any of the extras @yhondri added in their solution; just a plain working compilation of llama.cpp for use in Swift projects, especially the contained example.)
// swift-tools-version:5.5
import PackageDescription

var sources = [
    "src/llama.cpp",
    "src/llama-adapter.cpp",
    "src/llama-arch.cpp",
    "src/llama-batch.cpp",
    "src/llama-chat.cpp",
    "src/llama-context.cpp",
    "src/llama-cparams.cpp",
    "src/llama-grammar.cpp",
    "src/llama-hparams.cpp",
    "src/llama-impl.cpp",
    "src/llama-kv-cache.cpp",
    "src/llama-mmap.cpp",
    "src/llama-model.cpp",
    "src/llama-model-loader.cpp",
    "src/llama-quant.cpp",
    "src/llama-sampling.cpp",
    "src/llama-vocab.cpp",
    "src/unicode.cpp",
    "src/unicode-data.cpp",
    "ggml/src/ggml.c",
    "ggml/src/ggml-alloc.c",
    "ggml/src/ggml-backend.cpp",
    "ggml/src/ggml-backend-reg.cpp",
    "ggml/src/ggml-quants.c",
    "ggml/src/ggml-threading.cpp",
    "ggml/src/gguf.cpp",
    "ggml/src/ggml-cpu/ggml-cpu.c",
    "ggml/src/ggml-cpu/ggml-cpu.cpp",
    "ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp",
    "ggml/src/ggml-cpu/ggml-cpu-hbm.cpp",
    "ggml/src/ggml-cpu/ggml-cpu-quants.c",
    "ggml/src/ggml-cpu/ggml-cpu-traits.cpp",
]

var resources: [Resource] = []
var linkerSettings: [LinkerSetting] = []
var cSettings: [CSetting] = [
    .unsafeFlags(["-Wno-shorten-64-to-32", "-O3", "-DNDEBUG"]),
    .unsafeFlags(["-fno-objc-arc"]),
    .headerSearchPath("ggml/include"),
    .headerSearchPath("ggml/src"),
    .headerSearchPath("ggml/src/ggml-cpu"),
    // NOTE: NEW_LAPACK requires iOS 16.4+.
    // We should consider adding this in the future when we drop support for iOS 14
    // (ref: https://developer.apple.com/documentation/accelerate/1513264-cblas_sgemm?language=objc)
    // .define("ACCELERATE_NEW_LAPACK"),
    // .define("ACCELERATE_LAPACK_ILP64")
    .define("GGML_USE_CPU"),
]

#if canImport(Darwin)
sources.append("ggml/src/ggml-common.h")
sources.append("ggml/src/ggml-metal/ggml-metal.m")
resources.append(.process("ggml/src/ggml-metal/ggml-metal.metal"))
linkerSettings.append(.linkedFramework("Accelerate"))
cSettings.append(
    contentsOf: [
        .define("GGML_USE_ACCELERATE"),
        .define("GGML_USE_METAL"),
    ]
)
#endif

#if os(Linux)
cSettings.append(.define("_GNU_SOURCE"))
#endif

let package = Package(
    name: "llama",
    platforms: [
        .macOS(.v12),
        .iOS(.v14),
        .watchOS(.v4),
        .tvOS(.v14)
    ],
    products: [
        .library(name: "llama", targets: ["llama"]),
    ],
    targets: [
        .target(
            name: "llama",
            path: ".",
            exclude: [
                "build",
                "ci",
                "cmake",
                "docs",
                "examples",
                "scripts",
                "models",
                "pocs",
                "prompts",
                "requirements",
                "Sources",
                "tests",
                "CMakeLists.txt",
                "Makefile",
            ],
            sources: sources,
            resources: resources,
            publicHeadersPath: "spm-headers",
            cSettings: cSettings,
            linkerSettings: linkerSettings
        )
    ],
    cxxLanguageStandard: .cxx17
)
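With that patch applied, a quick sanity check is to build the package the same way the CI used to (llama-Package is the auto-generated scheme name for the package):

xcodebuild -scheme llama-Package -destination "generic/platform=iOS" clean build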
I must say, it isn't a great move to break everyone's build and not provide an alternative solution for iOS/visionOS/etc.
Currently, the cmake build doesn't support cross-compilation to iOS. People relied on a working Package.swift. You also broke downstream projects like llama-cpp-swift, which you advertise at the same time.
I see two ways out of this mess:
- Reinstate a working Package.swift definition.
- Either a) provide a working cmake configuration to cross-compile to iOS, visionOS and maybe even tvOS, or b) provide a fully compiled XCFramework which bundles all Apple platforms in GitHub's release section, and point the Package.swift to that.