
Running model fails with Unsupported configuration key: NPU_MAX_TILES

Open · PrintsCharming opened this issue 2 months ago • 11 comments

Foundry Local Version: 0.7.120+3b92ed4014
PC: Surface Laptop Studio 2

C:\Users\USERNAME>foundry model run qwen2.5-7b
Model qwen2.5-7b-instruct-openvino-npu:1 was found in the local cache.
🕒 Loading model...
[21:20:20 ERR] Failed loading model:qwen2.5-7b-instruct-openvino-npu:1
Exception: Failed: Loading model qwen2.5-7b-instruct-openvino-npu:1 from http://127.0.0.1:50198/openai/load/qwen2.5-7b-instruct-openvino-npu:1?ttl=600
Internal Server Error
Failed loading model qwen2.5-7b-instruct-openvino-npu:1
Exception from src\inference\src\cpp\core.cpp:112:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_npu\src\plugin\src\properties.cpp:587:
Unsupported configuration key: NPU_MAX_TILES

C:\Users\USERNAME>foundry model run qwen2.5-0.5b
Model qwen2.5-0.5b-instruct-openvino-npu:2 was found in the local cache.
🕓 Loading model...
[21:33:49 ERR] Failed loading model:qwen2.5-0.5b-instruct-openvino-npu:2
Exception: Failed: Loading model qwen2.5-0.5b-instruct-openvino-npu:2 from http://127.0.0.1:50198/openai/load/qwen2.5-0.5b-instruct-openvino-npu:2?ttl=600
Internal Server Error
Failed loading model qwen2.5-0.5b-instruct-openvino-npu:2
Exception from src\inference\src\cpp\core.cpp:112:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_npu\src\plugin\src\properties.cpp:587:
Unsupported configuration key: NPU_MAX_TILES

C:\Users\USERNAME>foundry model run phi-4-mini
Downloading phi-4-mini-instruct-openvino-npu:1...
[####################################] 100.00 % [Time remaining: about 0s] 29.3 MB/s
🕙 Loading model...
[21:37:02 ERR] Failed loading model:phi-4-mini-instruct-openvino-npu:1
Exception: Failed: Loading model phi-4-mini-instruct-openvino-npu:1 from http://127.0.0.1:50198/openai/load/phi-4-mini-instruct-openvino-npu:1?ttl=600
Internal Server Error
Failed loading model phi-4-mini-instruct-openvino-npu:1
Exception from src\inference\src\cpp\core.cpp:112:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_npu\src\plugin\src\properties.cpp:587:
Unsupported configuration key: NPU_MAX_TILES
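In case it helps triage, the failing load request can also be replayed directly against the local service URL shown in the error output. The sketch below is only illustrative: using GET as the HTTP method is an assumption on my part, and the port (50198 here) will differ between machines and sessions.

```python
# Hypothetical repro sketch: replay the model-load request that the CLI issues,
# using the URL visible in the error output above. Assumptions: GET is the
# right method, and the local service is listening on the same port as in the logs.
import urllib.error
import urllib.request

BASE = "http://127.0.0.1:50198"  # port taken from the error output; yours may differ
MODEL = "qwen2.5-7b-instruct-openvino-npu:1"

url = f"{BASE}/openai/load/{MODEL}?ttl=600"
try:
    with urllib.request.urlopen(url, timeout=300) as resp:
        print(resp.status, resp.read().decode())
except urllib.error.HTTPError as err:
    # On this machine the request fails with 500 Internal Server Error and the
    # NPU_MAX_TILES exception text in the response body.
    print(err.code, err.read().decode())
```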

PrintsCharming · Oct 04 '25 04:10

Hi @PrintsCharming, can you give some more info about the device you are using? For example, what is the CPU model (Intel Core Ultra version), as well as the version of NPU driver that you have installed? Thanks!

RyanMetcalfeInt8 · Oct 07 '25 22:10

It's a Surface Laptop Studio 2

Processor 13th Gen Intel(R) Core(TM) i7-13800H, 2900 Mhz, 14 Core(s), 20 Logical Processor(s)

Intel(R) Npu Driver 31.0.100.2016
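In case it helps others confirm their setup, the installed NPU driver version can also be read programmatically. This is just a sketch: it assumes the NPU device is listed in Win32_PnPSignedDriver with "NPU" in its device name, which may not hold on every machine.

```python
# Sketch: query PnP signed drivers via PowerShell/CIM and keep entries whose
# device name mentions "NPU". Assumption: the Intel NPU is reported that way.
import subprocess

ps_cmd = (
    "Get-CimInstance Win32_PnPSignedDriver | "
    "Where-Object { $_.DeviceName -like '*NPU*' } | "
    "Select-Object DeviceName, DriverVersion | Format-List"
)
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_cmd],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. DriverVersion : 31.0.100.2016 on this machine
```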

PrintsCharming · Oct 10 '25 23:10

+1 I have the same issue with the exact same specs and driver

bmehta001 · Oct 15 '25 03:10

I have the same issue.

⚠️ Note: the Intel driver linked from this page is a newer version that does not work on the Surface Laptop Studio 2.

The version installed on Surface Laptop Studio 2 is 31.0.100.2016. The newer incompatible version is 32.0.100.4297.

hansmbakker · Oct 16 '25 16:10

There are a couple of issues on the OpenVINO repo with the same error, though I'm not sure whether they are related:

  • https://github.com/openvinotoolkit/openvino/issues/32408
  • https://github.com/openvinotoolkit/openvino.genai/issues/2656 (closed without resolution)

These linked issues were encountered on newer Intel Core Ultra processors.

hansmbakker · Oct 16 '25 17:10

Hi Everyone - The issue here is that Foundry Local is not correctly determining that this platform does not meet the minimum spec for executing LLMs on the NPU. Hopefully this will improve in the future, but for now: if your system does not support an NPU driver version of 4239 or later, you should not attempt to run LLMs on the NPU.
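To spell out that rule (assuming "4239" refers to the last, build, component of the driver version strings quoted in this thread), the gate amounts to a simple numeric comparison. The snippet below is only an illustration of the described rule, not Foundry Local's actual implementation.

```python
# Illustrative sketch of the gating rule described above, not Foundry Local code:
# refuse NPU execution when the driver's build number (the last version
# component) is below 4239.
MIN_NPU_DRIVER_BUILD = 4239

def npu_driver_supported(version: str) -> bool:
    """version is a dotted driver version string such as '31.0.100.2016'."""
    build = int(version.split(".")[-1])
    return build >= MIN_NPU_DRIVER_BUILD

print(npu_driver_supported("31.0.100.2016"))  # False: Surface Laptop Studio 2 driver
print(npu_driver_supported("32.0.100.4404"))  # True: build number meets the threshold
```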

gblong1 · Oct 16 '25 19:10

@gblong1 will Intel update its driver for these NPUs?

It's very disappointing that the Intel NPU in, e.g., the Surface Laptop Studio 2 can seemingly only be used for very limited things right now, like some camera effects.

hansmbakker · Nov 02 '25 15:11

@hansmbakker - For Foundry Local usage, please see the requirements published here, which include the supported devices and system requirements: https://www.intel.com/content/www/us/en/developer/topic-technology/ai-pc/overview.html#windows-ai-foundry

gblong1 · Nov 05 '25 00:11

@gblong1 thank you, however my point was more that those Surface devices include an NPU called Movidius, and Intel seems to have dropped support for that NPU in OpenVINO - will there be a driver update that makes it accessible to Windows ML / AI Foundry again?

hansmbakker · Nov 05 '25 19:11

@hansmbakker - There are no planned updates for that NPU.

gblong1 · Nov 05 '25 20:11

I'm using an Intel CPU, the Ultra 7 155H, and I also can't see the NPU model. I've already updated the NPU driver to version 32.0.100.4404 (dated 2025/10/29). When will support be available, or what can I do to start using the NPU model?

antstars · Dec 04 '25 12:12