
Snapdragon X processor is unsupported

Open agyonov opened this issue 1 year ago • 6 comments

Describe the issue

When trying to run the basic sample from the Phi-3 Cookbook, https://github.com/microsoft/Phi-3CookBook/blob/main/md/07.Labs/Csharp/src/LabsPhi301/Program.cs, I get an error:

Error in cpuinfo: Unknown chip model name 'Snapdragon(R) X Elite - X1E78100 - Qualcomm(R) Oryon(TM) CPU'. Please add new Windows on Arm SoC/chip support to arm/windows/init.c!
unknown Qualcomm CPU part 0x1 ignored (repeated 12 times)

To reproduce

Just try to run the cookbook sample on a Snapdragon X computer

Urgency

No response

Platform

Windows

OS Version

Windows 11 Pro

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

0.4.0

ONNX Runtime API

C#

Architecture

ARM64

Execution Provider

Default CPU

Execution Provider Library Version

No response

agyonov avatar Sep 01 '24 09:09 agyonov

Actually, the same error is displayed when trying the 'Microsoft.ML.OnnxRuntimeGenAI.DirectML' package with the Phi-3-vision-128k-instruct-onnx-directml model, even though the Snapdragon X Elite (Adreno 741) chip supports DirectX 12 and the DirectML hardware requirements are fulfilled.

agyonov avatar Sep 01 '24 09:09 agyonov

It's just a warning. It seems to be coming from pytorch/cpuinfo, which ORT uses: https://github.com/pytorch/cpuinfo/blob/a5ff6df40ce528721cfc310c7ed43946d77404d5/src/arm/windows/init.c#L179

https://github.com/pytorch/cpuinfo/blob/a5ff6df40ce528721cfc310c7ed43946d77404d5/src/arm/windows/init.c#L22 needs to be updated.

jywu-msft avatar Sep 17 '24 21:09 jywu-msft

@jywu-msft You mentioned it is a warning, but I would have expected to see NPU usage spike in the Task Manager when generating text with the model. Is there a way you can recommend to verify whether the NPU is being used?

DavidLuong98 avatar Oct 04 '24 16:10 DavidLuong98

Hi @DavidLuong98,

As @jywu-msft commented, it is only a warning message, though an annoying one. The code/program still works, and through ONNX Runtime you can run inference on different models.

On your question: currently, the only way I was able to make ONNX Runtime use the Snapdragon NPU was by:

  1. Adding a reference to the Microsoft.ML.OnnxRuntime.QNN package
  2. Explicitly configuring the Qualcomm (QNN) execution provider:
// Create session options and attach the QNN execution provider
using var _sessionOptions = new SessionOptions();
Dictionary<string, string> config = new Dictionary<string, string> {
    { "backend_path", "QnnHtp.dll" },    // HTP backend targets the Hexagon NPU
    { "enable_htp_fp16_precision", "1" } // run FP32 graphs in FP16 on the NPU
};
_sessionOptions.AppendExecutionProvider("QNN", config);
using var _encoderSession = new InferenceSession(fullModelFilePath, _sessionOptions);

BUT, this was done not with Phi-3 models but with other models available from the Qualcomm AI Hub, and not with the Generate ONNX API (Microsoft.ML.OnnxRuntimeGenAI) but with the more "standard" one, Microsoft.ML.OnnxRuntime.

The DirectML ONNX package, Microsoft.ML.OnnxRuntime.DirectML, in my experience currently DOES NOT utilize the NPU but DOES utilize the GPU (Snapdragon(R) X Elite - X1E78100 - Qualcomm(R) Adreno(TM) GPU), despite the above-mentioned warnings.

This is clearly visible as spikes in processor activity on the Task Manager's performance view.

I have not yet tested whether the Generate API DirectML NuGet package, Microsoft.ML.OnnxRuntimeGenAI.DirectML, utilizes the GPU the way the "standard" API, Microsoft.ML.OnnxRuntime.DirectML, does.
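For reference, the DirectML path described above can be enabled with a minimal sketch like the following (assuming the Microsoft.ML.OnnxRuntime.DirectML package is referenced; `modelFilePath` is a placeholder):

```csharp
using Microsoft.ML.OnnxRuntime;

// Create session options and attach the DirectML execution provider.
// Device 0 is the default DirectX adapter (here, the Adreno GPU).
using var sessionOptions = new SessionOptions();
sessionOptions.AppendExecutionProvider_DML(0);

// Load the model; any nodes DirectML cannot handle fall back to the CPU EP.
using var session = new InferenceSession(modelFilePath, sessionOptions);
```

With this setup, GPU activity should be visible in Task Manager during inference, matching the behavior described above.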

agyonov avatar Oct 05 '24 14:10 agyonov

The phi3 model is not yet supported out of the box; it requires additional work to run reasonably well on an NPU. Stay tuned.

jywu-msft avatar Oct 05 '24 16:10 jywu-msft

@jywu-msft thanks for the reply. Just curious and wanting to understand more: are you saying there is more work needed on the Microsoft.ML.OnnxRuntime.DirectML side, or more work for the phi3 model to run on the NPU (implementing the NPU operators)? Just looking to see where the blockers are.

DavidLuong98 avatar Oct 16 '24 16:10 DavidLuong98

Hi

The warning message you’re encountering is due to the pytorch/cpuinfo library not yet recognizing the new Snapdragon X processor.

For now, you can continue using the Qualcomm Execution Provider by referencing the Microsoft.ML.OnnxRuntime.QNN package and configuring it appropriately. This setup has been effective for other models available from the Qualcomm AI Hub, although it may not be optimized for the Phi-3 models.

I hope this information is helpful, and I’ll keep an eye out for any updates regarding support for the Phi-3 models.
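One way to cross-check which execution provider actually runs the model is to raise ONNX Runtime's log verbosity: at session creation, ORT then reports which provider each graph node was assigned to, so you can confirm whether nodes went to QNNExecutionProvider or fell back to the CPU. A minimal sketch (using only standard session options; `modelFilePath` is a placeholder):

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;

using var sessionOptions = new SessionOptions();

// Verbose logging prints node-to-execution-provider assignments during
// session creation, e.g. nodes placed on QNNExecutionProvider vs. CPU.
sessionOptions.LogSeverityLevel = OrtLoggingLevel.ORT_LOGGING_LEVEL_VERBOSE;

sessionOptions.AppendExecutionProvider("QNN", new Dictionary<string, string> {
    { "backend_path", "QnnHtp.dll" } // HTP backend targets the Hexagon NPU
});

using var session = new InferenceSession(modelFilePath, sessionOptions);
```

If most nodes land on the QNN provider, NPU usage spikes in Task Manager should follow during inference.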

ashumish-QCOM avatar Oct 25 '24 15:10 ashumish-QCOM

Applying stale label due to no activity in 30 days
