Feature flag metal: Fails to load model when n_gpu_layers > 0

Open · phudtran opened this issue 2 years ago · 8 comments

Can't utilize the GPU on a Mac with:

llama_cpp_rs = { git = "https://github.com/mdrokz/rust-llama.cpp", version = "0.3.0", features = [
    "metal",
] }

Code

use llama_cpp_rs::{
    options::{ModelOptions, PredictOptions},
    LLama,
};
fn main() {
    let model_options = ModelOptions {
        n_gpu_layers: 1,
        ..Default::default()
    };

    let llama = LLama::new("zephyr-7b-alpha.Q2_K.gguf".into(), &model_options);
    println!("llama: {:?}", llama);
    let predict_options = PredictOptions {
        tokens: 0,
        threads: 14,
        top_k: 90,
        top_p: 0.86,
        token_callback: Some(Box::new(|token| {
            println!("token1: {}", token);

            true
        })),
        ..Default::default()
    };

    llama
        .unwrap()
        .predict(
            "what are the national animals of india".into(),
            predict_options,
        )
        .unwrap();
}

Error

llama_new_context_with_model: kv self size  =   64.00 MB
llama_new_context_with_model: ggml_metal_init() failed
llama: Err("Failed to load model")
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Failed to load model"', src/main.rs:40:10

phudtran avatar Oct 26 '23 02:10 phudtran

Hmm, weird. I don't have a Mac available currently to test this; I will look into it. Thanks

mdrokz avatar Oct 26 '23 19:10 mdrokz

I have the same problem on my Apple M1.

zackshen avatar Nov 04 '23 05:11 zackshen

@phudtran I have found the root cause: you should put the ggml-metal.metal file next to your binary. I also found that build.rs disables debug log printing when building the metal feature, so re-enable it to print more logs and find the error.

build.rs

fn compile_metal(cx: &mut Build, cxx: &mut Build) {
    cx.flag("-DGGML_USE_METAL").flag("-DGGML_METAL_NDEBUG");
    cxx.flag("-DGGML_USE_METAL");

    println!("cargo:rustc-link-lib=framework=Metal");
    println!("cargo:rustc-link-lib=framework=Foundation");
    println!("cargo:rustc-link-lib=framework=MetalPerformanceShaders");
    println!("cargo:rustc-link-lib=framework=MetalKit");

    cx.include("./llama.cpp/ggml-metal.h")
        .file("./llama.cpp/ggml-metal.m");
}

With GGML_METAL_NDEBUG removed, to enable debug logging:

fn compile_metal(cx: &mut Build, cxx: &mut Build) {
    cx.flag("-DGGML_USE_METAL"); // <==============  enable print debug log.
    cxx.flag("-DGGML_USE_METAL");

    println!("cargo:rustc-link-lib=framework=Metal");
    println!("cargo:rustc-link-lib=framework=Foundation");
    println!("cargo:rustc-link-lib=framework=MetalPerformanceShaders");
    println!("cargo:rustc-link-lib=framework=MetalKit");

    cx.include("./llama.cpp/ggml-metal.h")
        .file("./llama.cpp/ggml-metal.m");
}

@mdrokz Should we add a flag to enable/disable the debug log?
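
To automate the first part (keeping the shader next to the binary), here is a minimal sketch; it is not part of this crate, and the source path is an assumption about where your llama.cpp checkout lives:

use std::{env, fs, path::PathBuf};

// Best-effort helper: ensure ggml-metal.metal sits next to the running
// binary, copying it from a vendored llama.cpp checkout when missing.
fn ensure_metal_shader() -> std::io::Result<()> {
    let exe_dir: PathBuf = env::current_exe()?
        .parent()
        .expect("executable has a parent directory")
        .to_path_buf();
    let target = exe_dir.join("ggml-metal.metal");
    if !target.exists() {
        // Hypothetical location of the shader inside the vendored sources.
        fs::copy("llama.cpp/ggml-metal.metal", &target)?;
    }
    Ok(())
}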

zackshen avatar Nov 15 '23 16:11 zackshen

@zackshen I've tried adding the ggml-metal.metal file next to the binary, but now I get the following message:

-[MTLComputePipelineDescriptorInternal setComputeFunction:withType:]:722: failed assertion 'computeFunction must not be nil.'

hugonijmek avatar Nov 16 '23 16:11 hugonijmek

I have never seen this error before; I just modified the example code in this repo to test GPU utilization. Can you show your code?

zackshen avatar Nov 17 '23 13:11 zackshen

@mdrokz Should we add a flag to enable/disable the debug log?

I will add an option for enabling/disabling the debug log.
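
As a sketch of what that option could look like, the define could be gated on a hypothetical metal-debug cargo feature (name assumed), since cargo exposes enabled features to build scripts as environment variables:

use cc::Build;

fn compile_metal(cx: &mut Build, cxx: &mut Build) {
    cx.flag("-DGGML_USE_METAL");
    cxx.flag("-DGGML_USE_METAL");

    // Cargo sets CARGO_FEATURE_<NAME> for every enabled feature, so the
    // NDEBUG define stays on unless the user opts into debug logging.
    if std::env::var("CARGO_FEATURE_METAL_DEBUG").is_err() {
        cx.flag("-DGGML_METAL_NDEBUG");
    }

    println!("cargo:rustc-link-lib=framework=Metal");
    println!("cargo:rustc-link-lib=framework=Foundation");
    println!("cargo:rustc-link-lib=framework=MetalPerformanceShaders");
    println!("cargo:rustc-link-lib=framework=MetalKit");

    cx.include("./llama.cpp/ggml-metal.h")
        .file("./llama.cpp/ggml-metal.m");
}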

mdrokz avatar Nov 20 '23 11:11 mdrokz

I encountered the same error. Placing ggml-metal.metal into the project directory leads to the same error @hugonijmek saw.

However, setting the GGML_METAL_PATH_RESOURCES environment variable to point to the llama.cpp sources solves the original issue: GGML_METAL_PATH_RESOURCES=/rust-llama.cpp/llama.cpp/ (see https://github.com/ggerganov/whisper.cpp/blob/master/ggml-metal.m#L261)
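
For anyone who would rather set the variable from Rust than from the shell, here is a minimal sketch (the path is a placeholder for wherever the llama.cpp sources live) that exports it before the model loads:

use std::env;

use llama_cpp_rs::{options::ModelOptions, LLama};

fn main() {
    // Must point at a directory containing ggml-metal.metal, and must be
    // set before any Metal initialization happens.
    env::set_var(
        "GGML_METAL_PATH_RESOURCES",
        "/path/to/rust-llama.cpp/llama.cpp/",
    );

    let model_options = ModelOptions {
        n_gpu_layers: 1,
        ..Default::default()
    };
    let llama = LLama::new("zephyr-7b-alpha.Q2_K.gguf".into(), &model_options);
    println!("llama: {:?}", llama);
}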

genbit avatar Dec 14 '23 02:12 genbit

If you want to include the shader in the build, so you don't have to keep the file alongside the binary or set the environment variable, you can use the solution from the rustformers/llm repository: https://github.com/rustformers/llm/commit/9d39ff8cc0a89bb22cc17bdc1dd2470f3421d788

To get it working, update the needle to the current string.

The file this puts in the output directory has a prefix before 'ggml-metal.o', so when checking the ggml_type in compile_llama, check for "metal" and, if it matches, search the directory for the file using ends_with("-ggml-metal.o"), then add it with cxx.object(metal_path).
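
A rough sketch of that search step, following the linked commit rather than this crate's current build.rs (names are assumptions):

use std::{env, fs, path::PathBuf};

use cc::Build;

// After the metal sources are compiled, find the prefixed
// "-ggml-metal.o" object in OUT_DIR and add it to the C++ build so the
// shader ends up embedded in the final binary.
fn link_metal_object(cxx: &mut Build) {
    let out_dir = PathBuf::from(env::var("OUT_DIR").expect("OUT_DIR is set by cargo"));
    for entry in fs::read_dir(&out_dir).expect("OUT_DIR is readable") {
        let path = entry.expect("valid directory entry").path();
        let is_metal_obj = path
            .file_name()
            .and_then(|n| n.to_str())
            .map_or(false, |n| n.ends_with("-ggml-metal.o"));
        if is_metal_obj {
            // cc::Build::object links a prebuilt object file into the archive.
            cxx.object(&path);
        }
    }
}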

tbogdala avatar Dec 20 '23 18:12 tbogdala