
Xcode projects no longer build with both whisper.cpp and llama.cpp

RoryMB opened this issue 1 year ago • 10 comments

As of these two commits: https://github.com/ggerganov/whisper.cpp/commit/3ffc83d90a958e3810f02e49de44abc3a85f9a35 https://github.com/ggerganov/llama.cpp/commit/df334a11251b81fd0b6a0e51e7146e0ba9e973f2

Xcode projects that depend on both whisper.cpp and llama.cpp fail to build with the following error:

duplicate symbol '_ggml_map_custom2_inplace_f32' in:
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/whisper.o
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/llama.o
duplicate symbol '_ggml_backend_buft_supports_backend' in:
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/whisper.o
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/llama.o
...etc

ld: 533 duplicate symbols
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Based on the comments in the accompanying pull requests I see that there is good reason for the commits, so I wonder if there is any alternative solution?

Thanks

RoryMB avatar Feb 22 '24 00:02 RoryMB

@1-ashraful-islam curious if you have an idea on the easiest path since you were doing this previously #1701

I was thinking you could fork llama.cpp and whisper.cpp and modify the Swift package dependencies and exclusions so that they both reference the same set of ggml sources, but is there an easier path of building whisper and llama frameworks independently? I haven't wrapped my head around the SPM / Xcode ecosystem yet.

to3d avatar Feb 25 '24 19:02 to3d

Sorry for the late reply, I don't know of a better way to resolve this issue. I banged my head against this problem before and got nowhere until I separated ggml as a dependency in both whisper and llama. I would suggest forking and reverting the mentioned commits, until someone figures out a better approach.

1-ashraful-islam avatar Mar 05 '24 23:03 1-ashraful-islam

I actually just found a better way! You can include the packages as frameworks (I got the idea from mlx-swift-examples).

Here's how you can do it:

  1. File > New > Target > Multiplatform > Framework
  2. Set the Product Name to llama for llama.cpp, and set the other settings appropriately
  3. In your "Targets", select llama (the icon should be yellow for a framework), then add the llama.cpp package dependency under General > Frameworks and Libraries
  4. Remove the llama.cpp package dependency from your original target (it's now included through the framework)

Repeat the steps for whisper: in step 2, set the Product Name to whisper for whisper.cpp; in step 3, add whisper.cpp as the package dependency.
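
If you'd rather stay in SPM than create Xcode framework targets, the same idea can roughly be expressed as a local wrapper package with two dynamic-library products, so each copy of ggml ends up inside its own image. I haven't verified this variant; the wrapper names (LlamaWrapper, WhisperWrapper) are made up, and I'm assuming the upstream products are still called llama and whisper:

    // swift-tools-version:5.9
    // Untested sketch: each dynamic-library product statically links one of the
    // upstream packages, so the app links two separate images instead of pulling
    // both copies of ggml into a single binary.
    // Each target needs at least one source file, e.g.
    // Sources/LlamaWrapper/Exports.swift containing `@_exported import llama`
    // (and the equivalent for whisper).
    import PackageDescription

    let package = Package(
        name: "GGMLWrappers",
        platforms: [.iOS(.v16), .macOS(.v13)],
        products: [
            .library(name: "LlamaWrapper", type: .dynamic, targets: ["LlamaWrapper"]),
            .library(name: "WhisperWrapper", type: .dynamic, targets: ["WhisperWrapper"]),
        ],
        dependencies: [
            .package(url: "https://github.com/ggerganov/llama.cpp", branch: "master"),
            .package(url: "https://github.com/ggerganov/whisper.cpp", branch: "master"),
        ],
        targets: [
            .target(name: "LlamaWrapper",
                    dependencies: [.product(name: "llama", package: "llama.cpp")]),
            .target(name: "WhisperWrapper",
                    dependencies: [.product(name: "whisper", package: "whisper.cpp")]),
        ]
    )

Either way, the key point is the same: the app should end up linking two separate images, each with its own copy of ggml, rather than merging both copies into one binary.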

@ggerganov do you think this info would be useful to include somewhere?

1-ashraful-islam avatar Mar 06 '24 00:03 1-ashraful-islam

Wow, thanks! This solved the duplicate symbol errors for me. Being new to the whole Apple/Swift landscape, I don't think I would have figured this solution out any time soon.

RoryMB avatar Mar 06 '24 02:03 RoryMB

Hm, great! I haven't tried it, but since it seems to work for @RoryMB then this might be the way to do it. We can add a link to your comment in all relevant examples in llama.cpp and whisper.cpp

ggerganov avatar Mar 06 '24 07:03 ggerganov

One thing to note here: it seems like GGMLMetalClass is selected from either whisper.framework or llama.framework randomly. This is what I get when I run the application target:

objc[40428]: Class GGMLMetalClass is implemented in both /Users/.../whisper.framework/Versions/A/whisper (0x1020083c0) and /Users/.../llama.framework/Versions/A/llama (0x101e143c0). One of the two will be used. Which one is undefined.

So far I haven't seen this cause an issue for either transcription or LLM use. If I run into any issues in the future, I'll add notes here.

1-ashraful-islam avatar Mar 06 '24 08:03 1-ashraful-islam

@1-ashraful-islam: Thank you for the instructions on importing whisper and llama into a project. While I was able to import them successfully, I encountered issues when trying to run both models simultaneously. Whisper operates as expected, but llama does not produce any response. Did you experience a similar issue? Any clue why that could happen? Thank you.

gavin1818 avatar Jun 05 '24 22:06 gavin1818

I have the same problem. A project compiled with CMake against the llama.cpp dependency alone works perfectly, as does a project with only the whisper.cpp dependency. But when compiled with both dependencies simultaneously, LLM functionality breaks at runtime. Different models cause different errors; for example, loading Meta-Llama-3.1-8B-Instruct-IQ4_XS.gguf fails with:

llama_model_load: error loading model: invalid model: tensor '' is duplicated

when running exactly the same code, just after adding whisper.cpp as a library. With another model I get the error:

terminate called after throwing an instance of 'std::out_of_range'

kikaitachi avatar Aug 31 '24 20:08 kikaitachi

I've tried @1-ashraful-islam's method, but get this error:

whisper.cpp/spm-headers/ggml.h:605:30 'ggml_tensor::grad' from module 'whisper' is not present in definition of 'struct ggml_tensor' in module 'llama'

That's because whisper's ggml and llama's ggml differ slightly.

Does anyone know how to fix it?

shuhongwu avatar Nov 20 '24 07:11 shuhongwu

Just excluding ggml in Package.swift in either of the dependencies will also solve the issue.
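
In practice that means keeping a small fork where the target no longer compiles the vendored ggml sources, so the only ggml that reaches the final link comes from the other package. A rough, untested sketch of what a whisper.cpp fork's Package.swift might look like (paths and settings are assumptions and need to be matched against the real upstream file at your commit):

    // swift-tools-version:5.9
    // Illustrative sketch only: a whisper.cpp fork's Package.swift with the
    // vendored ggml sources left out of the "whisper" target. File paths,
    // headers and flags below are assumptions, not the real upstream values.
    import PackageDescription

    let package = Package(
        name: "whisper",
        platforms: [.macOS(.v12), .iOS(.v14)],
        products: [
            .library(name: "whisper", targets: ["whisper"])
        ],
        targets: [
            .target(
                name: "whisper",
                path: ".",
                // Upstream also lists the vendored ggml files here; dropping
                // them is the whole fix, so llama.cpp's ggml is the only copy.
                sources: [
                    "src/whisper.cpp"
                    // "ggml/src/ggml.c",          // removed
                    // "ggml/src/ggml-alloc.c",    // removed
                    // "ggml/src/ggml-backend.cpp" // removed, etc.
                ],
                publicHeadersPath: "spm-headers",
                cxxSettings: [
                    .headerSearchPath("include")
                ]
            )
        ],
        cxxLanguageStandard: .cxx17
    )

Note that this only helps as long as the two vendored ggml copies haven't diverged too much; the header mismatch mentioned above is exactly what happens when they have.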

zhanghaohit avatar Jan 24 '25 07:01 zhanghaohit

Could someone give me a bit of advice/help to get both whisper and llama running in the same iOS app? I tried to follow the process above by @1-ashraful-islam, but I can't get it to build; I have too many errors and conflicts. Does anyone have a git project which shows this working?

@zhanghaohit I tried your suggestion too and removed the ggml from whisper, but then I get errors about conflicting functions; some of them are slightly different between the two packages.

Update: I managed to add both the whisper xcframework and the llama framework to the same project, and it builds and runs, but I get this warning:

objc[1710]: Class ggml_metal_heap_ptr is implemented in both /private/var/containers/Bundle/Application/14604492-DE42-4C8D-9AB4-EDB9E853D9AC/llamaWhisper.app/Frameworks/llama.framework/llama (0x1057c4360) and /private/var/containers/Bundle/Application/14604492-DE42-4C8D-9AB4-EDB9E853D9AC/llamaWhisper.app/Frameworks/whisper.framework/whisper (0x1051ac858). This may cause spurious casting failures and mysterious crashes. One of the duplicates must be removed or renamed.

Not sure how to fix it, as I can't find the class ggml_metal_heap_ptr anywhere in either framework.

Please help!

sahmed53 avatar Jun 09 '25 18:06 sahmed53

Not sure how to fix it, as I can't find the class ggml_metal_heap_ptr anywhere in either framework.

This is an issue that we need to look into. The problem is that both whisper.cpp and llama.cpp include ggml in their xcframeworks, and this is causing the clash. I'll create a new issue for this and link it to this one. Thanks!

danbev avatar Jun 10 '25 07:06 danbev