
Updated llama.cpp engine to version b2581

MatPere opened this pull request 1 year ago • 2 comments

Description

The llama engine code is now compatible with the b2581 release of the llama.cpp repository, upgrading from b1696.

  • Note: this is purely a compatibility update. The llama.cpp interface changed between b1696 and b2581, which had left this engine unable to run against newer releases. This pull request does not provide any new functionality by itself, but it opens up the opportunity to further enhance the llama engine with the newest llama.cpp tools.
  • A consequence of this change is that the llama engine now supports Qwen-based GGUF models, which were incompatible with version b1696.
  • The tests were done on a linux-x86_64 architecture. The changes were not tested on linux-aarch64, osx-x86_64, osx-aarch64, or win-x86_64.
  • IMPORTANT: djl/engines/llama/build.gradle downloads prebuilt binaries for the different architectures from "https://publish.djl.ai/llama/${llamacpp_version}/jnilib/${djl_version}". Since I did not upload the new binaries there, ./gradlew build WILL FAIL.
  • As a temporary workaround for testing, disable the download in build.gradle and use the ./gradlew compileJNI task to build your own binaries for your OS (see the sketch after this list).
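
For anyone trying this before the new binaries are published, here is a minimal sketch of that workaround. This is not the actual DJL build file: downloadJnilib is a hypothetical task name standing in for whatever task performs the download in engines/llama/build.gradle, and the guard reuses the -Pjni property that the test command below already passes.

// Hypothetical sketch for engines/llama/build.gradle, not the actual file:
// 'downloadJnilib' stands in for whatever task fetches the prebuilt
// binaries from publish.djl.ai.
tasks.named('downloadJnilib') {
    // Skip the remote download when building local JNI binaries (-Pjni)
    onlyIf { !project.hasProperty('jni') }
}
// Then build the native library for your own OS:
// ./gradlew compileJNI && ./gradlew build -Pjni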

MatPere avatar Apr 05 '24 14:04 MatPere

@MatPere

Would you please take a look at this test failure: https://github.com/deepjavalibrary/djl/actions/runs/8590262606/job/23537569545#step:5:201

It seems to fail on macOS when loading the model.

You can reproduce the error locally on your mac:

cd engines/llama
./gradlew compileJNI
./gradlew test -Dnightly=true -Pjni

frankfliu avatar Apr 07 '24 17:04 frankfliu

I don't think I'll be able to get my hands on an osx machine anytime soon, so I can only suggest leads for anyone willing to help. I did not manage to reproduce the error on my linux-x86_64 machine, so it is likely an OS-specific issue, and I have very little experience dealing with osx-related errors.

From my understanding, the error message comes from within llama.cpp's own code (llama.cpp/common/common.cpp, in the function llama_init_from_gpt_params). Normally such an error is caused either by the content of the model file or by the parameters given to the loader, but without access to the LLAMA_LOG_ERROR output itself I can't tell from that message alone. Because it works on the other OSes, my guess is that it has to do with the way the model is downloaded or the way the code is compiled.

Maybe (though I doubt it) this is an internal issue with llama.cpp itself being unable to handle that exact model on osx for some reason. In that case, manually downloading and building the llama.cpp repository as well as the model tinyllama-1.1b-1t-openorca.Q4_K_M.gguf, then launching ./main -m path/to/model.gguf, would presumably reproduce the same failure (and we would then have to open a new issue on their repository). Rough steps are sketched below.
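
As a sketch only: the model path is a placeholder, b2581 is the release tag this PR targets, and the -p/-n flags just keep the test run short.

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
git checkout b2581    # the release this PR upgrades to
make                  # builds the ./main example binary
./main -m path/to/tinyllama-1.1b-1t-openorca.Q4_K_M.gguf -p "Hello" -n 16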

MatPere avatar Apr 09 '24 09:04 MatPere