
[CANN] Add Ascend NPU backend

hipudding opened this issue 3 months ago • 14 comments

Ascend is a full-stack AI computing infrastructure for industry applications and services based on Huawei Ascend processors and software.

CANN (Compute Architecture of Neural Networks), developed by Huawei, is a heterogeneous computing architecture for AI.

This commit adds Ascend NPU as a new backend, which implements the following features (see the usage sketch after the list):

  1. Ascend NPU registration;
  2. Ascend NPU runtime (device memory, streams, events);
  3. part of the GGML_OPs, implemented through the aclnn library;
  4. a new test file named test-backend-runtime, for testing runtime functionality.
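
To make the scope concrete, here is a minimal sketch of how client code could drive such a backend through the generic ggml-backend API. This is not code from the PR; the CANN-specific names (the ggml-cann.h header and ggml_backend_cann_init) are assumptions modeled on how the existing CUDA backend exposes its entry points.

#include <stdio.h>
#include "ggml.h"
#include "ggml-backend.h"
#include "ggml-cann.h"   // assumed header name for the new backend

int main(void) {
    // 1. Registration/runtime: create a backend instance bound to NPU device 0.
    ggml_backend_t backend = ggml_backend_cann_init(0);   // assumed init function
    if (backend == NULL) {
        fprintf(stderr, "CANN backend not available\n");
        return 1;
    }
    printf("using backend: %s\n", ggml_backend_name(backend));

    // 2. Device memory: tensors for this backend would be allocated with
    //    ggml_backend_alloc_ctx_tensors(ctx, backend) and filled through
    //    ggml_backend_tensor_set(), just like with the CUDA backend.
    // 3. GGML_OPs: a graph built on those tensors is executed on the NPU
    //    with ggml_backend_graph_compute(backend, graph).

    ggml_backend_free(backend);
    return 0;
}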

@sa #6034

hipudding avatar Mar 13 '24 07:03 hipudding

For those struggling to find out what CANN is:

https://support.huaweicloud.com/intl/en-us/usermanual-cce/cce_10_0239.html

Great!

phymbert avatar Mar 25 '24 05:03 phymbert

Good news! @ggerganov @slaren @phymbert, the most basic functions of this new backend are ready for review now. As I described in the issue (https://github.com/ggerganov/llama.cpp/issues/6034), this backend implementation may be a lot of work, and I'd like to do it in steps.

Using the CUDA implementation as a reference, the basic functions of this backend are working now. I added some GGML_OPs (which are built into the CANN package) and they pass the tests (test-backend-ops).

More features will be submitted in independent PRs later, including:

  1. more GGML_OPs.
  2. quantization.
  3. split tensor.
  4. ...

Considering that an Ascend NPU is not so easy to obtain, here are my screenshots of compilation and testing (I have two NPUs at hand):

hipudding avatar Mar 28 '24 08:03 hipudding

I cannot comment on the CANN code, but the changes to the common files look good. However, I am not sure that there is any reason to merge a non-functional backend, especially considering that it is for hardware that does not seem to be publicly available. Currently, this backend does not seem to implement matrix multiplication.

Thank you very much for your review. Yes, this PR has not implemented all the features yet. Currently, only device access and some operators to verify these basic functionalities have been implemented. More operators are still under development; mat-mul is also in progress, but it depends on quantization, so it will be implemented after quantization.

Ascend NPU is publicly available hardware that can be purchased or used in virtual machines on Huawei Cloud. In China, Ascend NPU already has a considerable user base, especially among Chinese internet companies, many of which have already used Ascend NPU to build AI training or inference platforms. Due to high demand and limited production capacity, it may not be as convenient for individual developers to purchase an Ascend NPU. However, I am very willing to donate an Ascend NPU machine to the llama.cpp community for running CI and other validation work.

Currently, many popular AI projects support Ascend NPU as a hardware backend, such as PyTorch (through the PrivateUse1 dispatch key), DeepSpeed, OpenCV, stable-diffusion-webui, and diffusers, and many other projects are in development. We believe that llama.cpp is an excellent large language model inference engine, so we hope to prioritize its adaptation and attract more Ascend developers and users.

I agree not to merge this non-functional backend now, but to wait until all main features have been implemented.

Thanks.

hipudding avatar Mar 30 '24 14:03 hipudding

However, I am very willing to donate an Ascend NPU machine to the llama.cpp community for running CI and other validation work.

If there is a dedicated node with the necessary hardware, adding it to ggml-ci is a relatively simple task. It will run a collection of unit and integration tests on each commit and it will make integration much smoother.

I can either send configuration instructions, or if I can get SSH access I can login directly and set it up. Let me know

ggerganov avatar Mar 31 '24 08:03 ggerganov

However, I am very willing to donate an Ascend NPU machine to the llama.cpp community for running CI and other validation work.

If there is a dedicated node with the necessary hardware, adding it to ggml-ci is a relatively simple task. It will run a collection of unit and integration tests on each commit and it will make integration much smoother.

I can either send configuration instructions, or if I can get SSH access I can login directly and set it up. Let me know

Sure. I will.

hipudding avatar Mar 31 '24 08:03 hipudding

📈 llama.cpp server for bench-server-baseline on Standard_NC4as_T4_v3 for phi-2-q4_0: 217 iterations 🚀

Expand details for performance related PR only
  • Concurrent users: 8, duration: 10m
  • HTTP request : avg=22258.09ms p(95)=39155.71ms fails=, finish reason: stop=106 truncated=111
  • Prompt processing (pp): avg=276.49tk/s p(95)=815.25tk/s
  • Token generation (tg): avg=23.93tk/s p(95)=24.98tk/s
  • ggml-org/models/phi-2/ggml-model-q4_0.gguf parallel=8 ctx-size=16384 ngl=33 batch-size=2048 ubatch-size=256 pp=1024 pp+tg=2048 branch=npu_support commit=0274c5d05d66be5843d00ffe9b6673cb274ad923

Charts (llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 217 iterations): llamacpp:prompt_tokens_seconds, llamacpp:predicted_tokens_seconds, llamacpp:kv_cache_usage_ratio, llamacpp:requests_processing.

github-actions[bot] avatar Apr 10 '24 04:04 github-actions[bot]

I failed to run models with this branch, with CANN version 8.0.RC2.alpha001:

Log start
main: build = 2749 (f1bde5d)
main: built with cc (GCC) 7.3.0 for aarch64-linux-gnu
main: seed  = 1714027412
llama_model_loader: loaded meta data with 19 key-value pairs and 387 tensors from /data/Qwen1.5-7B-Chat-f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen1.5-7B-Chat
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 32
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 32
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 32
llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 1
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  14:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  15:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  16:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  18:                    tokenizer.chat_template str              = {% for message in messages %}{% if lo...
llama_model_loader: - type  f32:  161 tensors
llama_model_loader: - type  f16:  226 tensors
llm_load_vocab: special tokens definition check successful ( 293/151936 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 7.72 B
llm_load_print_meta: model size       = 14.38 GiB (16.00 BPW) 
llm_load_print_meta: general.name     = Qwen1.5-7B-Chat
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_tensors: ggml ctx size =    0.37 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =  1187.00 MiB
llm_load_tensors:      CANN0 buffer size = 13541.52 MiB
......................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CANN0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.58 MiB
llama_new_context_with_model:      CANN0 compute buffer size =   304.75 MiB
llama_new_context_with_model:        CPU compute buffer size =     9.01 MiB
llama_new_context_with_model: graph nodes  = 1126
llama_new_context_with_model: graph splits = 2
CANN error: EZ9903: 2024-04-25-14:43:36.365.943 OP tiling_funcs NULL
        Solution: In this scenario, collect the plog when the fault occurs and locate the fault based on the plog.
        TraceBack (most recent call last):
        InitTilingParseCtx failed
        Kernel Run failed. opType: 10, Add
        launch failed for Add, errno:361001.

  current device: 0, in function aclnn_ones at /home/abc/llama.cpp/ggml-cann/aclnn_ops.cpp:852
  aclnnInplaceAdds(workspaceAddr, workspaceSize, executor, ctx.stream())
GGML_ASSERT: /home/abc/llama.cpp/ggml-cann.cpp:24: !"CANN error"
[1]    4088322 abort (core dumped)  ASCEND_RT_VISIBLE_DEVICES=1 ./main -m /data/Qwen1.5-7B-Chat-f16.gguf -ngl 100

huyz-git avatar Apr 25 '24 06:04 huyz-git
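
For context on the failing aclnnInplaceAdds call in the trace above: aclnn operators follow a two-phase convention, first a GetWorkspaceSize call that sizes a scratch buffer and builds an executor, then the actual launch on a stream. The sketch below only illustrates that pattern; the header path and exact signatures are assumptions based on the public aclnn documentation, not code from this PR.

#include <acl/acl.h>
#include <aclnnop/aclnn_add.h>   // assumed header for aclnnInplaceAdds

// Illustrative helper: add a scalar to a tensor in place on a given stream.
static void add_scalar_inplace(aclTensor * self, const aclScalar * value,
                               const aclScalar * alpha, aclrtStream stream) {
    uint64_t workspace_size = 0;
    aclOpExecutor * executor = nullptr;

    // Phase 1: query the workspace size and build the executor for this call.
    aclnnInplaceAddsGetWorkspaceSize(self, value, alpha, &workspace_size, &executor);

    // Phase 2: allocate the workspace (if any) and launch the kernel on the stream.
    void * workspace = nullptr;
    if (workspace_size > 0) {
        aclrtMalloc(&workspace, workspace_size, ACL_MEM_MALLOC_HUGE_FIRST);
    }
    aclnnInplaceAdds(workspace, workspace_size, executor, stream);

    // Wait for completion before releasing the workspace.
    aclrtSynchronizeStream(stream);
    if (workspace != nullptr) {
        aclrtFree(workspace);
    }
}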

I failed to run models with this branch, with CANN version 8.0.RC2.alpha001 (full log above).

This bug is due to not initializing CANN before using it. The latest version has fixed this. But it still can't be used right now; not all ops are implemented.
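
For readers unfamiliar with CANN, here is a minimal sketch of the initialization order that fix refers to, using the standard AscendCL runtime calls (aclInit, aclrtSetDevice, aclrtCreateStream). It only illustrates the required ordering and is not the backend's actual code.

#include <acl/acl.h>
#include <cstdio>

int main() {
    // The ACL runtime must be initialized before any other CANN call.
    if (aclInit(nullptr) != ACL_SUCCESS) {
        fprintf(stderr, "aclInit failed\n");
        return 1;
    }
    aclrtSetDevice(0);              // bind this thread to NPU 0
    aclrtStream stream = nullptr;
    aclrtCreateStream(&stream);     // streams/events come from the runtime

    // ... only now is it safe to launch aclnn operators on `stream` ...

    aclrtDestroyStream(stream);
    aclrtResetDevice(0);
    aclFinalize();
    return 0;
}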

hipudding avatar Apr 29 '24 12:04 hipudding

@hipudding Great work.

I have a server with 8 × 910B; can I test this PR on the 910B?

jeejeelee avatar May 14 '24 06:05 jeejeelee

@hipudding Great work.

I have a server with 8 × 910B; can I test this PR on the 910B?

Yes, you can test operators on the 910B, but it can't run LLM inference yet.

mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=debug -DLLAMA_CANN=on && make -j

./bin/test-backend-ops test -b CANN0 -o {OP_NAME}

hipudding avatar May 14 '24 08:05 hipudding

@hipudding Great work. I have a server with 8 × 910B; can I test this PR on the 910B?

Yes, you can test operators on the 910B, but it can't run LLM inference yet.

mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=debug -DLLAMA_CANN=on && make -j

./bin/test-backend-ops test -b CANN0 -o {OP_NAME}

I got this:

./test-backend-ops test -b CANN1 -o ARGSORT
ggml_backend_register: registered backend CPU
ggml_backend_register: registered backend CANN0
ggml_backend_register: registered backend CANN1
ggml_backend_register: registered backend CANN2
ggml_backend_register: registered backend CANN3
ggml_backend_register: registered backend CANN4
ggml_backend_register: registered backend CANN5
ggml_backend_register: registered backend CANN6
ggml_backend_register: registered backend CANN7
Testing 9 backends

Backend 1/9 (CPU)
  Skipping
Backend 2/9 (CANN0)
  Skipping
Backend 3/9 (CANN1)
  Backend name: CANN1
  ARGSORT(type=f32,ne=[8,1,1,1],order=0): OK
  ARGSORT(type=f32,ne=[16,10,10,10],order=0): GGML_ASSERT: /home/abc/llama.cpp/ggml-cann.cpp:328: size == ggml_nbytes(tensor)
[1]    3372786 abort (core dumped)  ./test-backend-ops test -b CANN1 -o ARGSORT

huyz-git avatar May 14 '24 09:05 huyz-git

@hipudding Great work. I have a server with 8 × 910B; can I test this PR on the 910B?

Yes, you can test operators on the 910B, but it can't run LLM inference yet.

mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=debug -DLLAMA_CANN=on && make -j

./bin/test-backend-ops test -b CANN0 -o {OP_NAME}

Thank you for your reply. When will it be possible for me to test LLM inference? Could you please provide a date?

jeejeelee avatar May 14 '24 09:05 jeejeelee

@hipudding Great work. I have a server with 8 × 910B; can I test this PR on the 910B?

Yes, you can test operators on the 910B, but it can't run LLM inference yet.
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=debug -DLLAMA_CANN=on && make -j
./bin/test-backend-ops test -b CANN0 -o {OP_NAME}

I got this:

./test-backend-ops test -b CANN1 -o ARGSORT
ggml_backend_register: registered backend CPU
ggml_backend_register: registered backend CANN0
ggml_backend_register: registered backend CANN1
ggml_backend_register: registered backend CANN2
ggml_backend_register: registered backend CANN3
ggml_backend_register: registered backend CANN4
ggml_backend_register: registered backend CANN5
ggml_backend_register: registered backend CANN6
ggml_backend_register: registered backend CANN7
Testing 9 backends

Backend 1/9 (CPU)
  Skipping
Backend 2/9 (CANN0)
  Skipping
Backend 3/9 (CANN1)
  Backend name: CANN1
  ARGSORT(type=f32,ne=[8,1,1,1],order=0): OK
  ARGSORT(type=f32,ne=[16,10,10,10],order=0): GGML_ASSERT: /home/abc/llama.cpp/ggml-cann.cpp:328: size == ggml_nbytes(tensor)
[1]    3372786 abort (core dumped)  ./test-backend-ops test -b CANN1 -o ARGSORT

Yes, there are still many bugs, because this backend is under development and not yet stable. Not every commit passes the test cases, but they will once all basic operators are ready.

hipudding avatar May 14 '24 09:05 hipudding

@hipudding Great work. I have a server with 8 × 910B; can I test this PR on the 910B?

Yes, you can test operators on the 910B, but it can't run LLM inference yet.
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=debug -DLLAMA_CANN=on && make -j
./bin/test-backend-ops test -b CANN0 -o {OP_NAME}

Thank you for your reply. When will it be possible for me to test LLM inference? Could you please provide a date?

Maybe after June, maybe even later. That will not yet include all data types, performance optimizations, or multi-card inference.

hipudding avatar May 14 '24 09:05 hipudding