
foundry model run deepseek-r1-14b

jasonwtli opened this issue 6 months ago • 5 comments

Ran it for a few models including phi and deepseek-r1-7b, but it doesn't work for the deepseek-r1-14b model (running on new Copilot+ PCs with Snapdragon X Elite chips).

Have tried all the usual troubleshooting steps, e.g. restarting the service, rebooting, clearing the cache, and re-downloading. So I think it's specific to the 14B-parameter model.

```
🕚 Loading model... Exception: Request to local service failed. Uri:http://localhost:5273/openai/load/deepseek-r1-distill-qwen-14b-qnn-npu?ttl=600 An error occurred while sending the request. Please check service status with 'foundry service status'.
```
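For what it's worth, beyond running `foundry service status`, a quick sanity check is whether anything is listening at all on the port from the error URI (5273). A minimal sketch in Python, stdlib only; the host and port are taken from the error message above:

```python
import socket

def service_reachable(host: str = "localhost", port: int = 5273,
                      timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port.

    This only proves the Foundry Local service process is listening;
    it says nothing about whether a given model can be loaded.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(service_reachable())
```

If this prints `False`, the load failure is a service problem rather than a model-specific one; if `True`, the service is up and the failure happens during model load itself (as turned out to be the case here).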

jasonwtli avatar Jun 01 '25 06:06 jasonwtli

Thanks for raising an issue @jasonwtli - would you be able to upload the log files here:

```
foundry service diag --logs
```

This will create a zip file on your desktop to upload.

> [!NOTE]
> Log files may contain information like user names, IP addresses, file paths, etc. Be sure to remove those before sharing here.

samuel100 avatar Jun 01 '25 07:06 samuel100

Please see attached @samuel100 Thanks!

foundry20250601.log

jasonwtli avatar Jun 01 '25 21:06 jasonwtli

Thanks @jasonwtli - I think your NPU is running out of memory. Does your NPU have ~8GB? This would explain why the 7B model (~3.7GB) works but not the 14B model (~7.2GB).
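Those sizes line up with roughly 4-bit quantization. A back-of-the-envelope sketch (the ~4 bits/parameter figure is an assumption inferred from the sizes above, not a documented property of these QNN builds, and the estimate ignores activation and KV-cache memory):

```python
def approx_model_gb(n_params_billions: float, bits_per_param: float = 4.0) -> float:
    """Rough model footprint in GB: parameters * bits-per-parameter / 8.

    bits_per_param=4.0 is an assumption based on the observed file sizes;
    runtime memory will be somewhat higher (activations, KV cache, overhead).
    """
    bytes_total = n_params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

print(approx_model_gb(7))    # ≈3.5 GB, close to the observed ~3.7GB
print(approx_model_gb(14))   # ≈7.0 GB, close to the observed ~7.2GB
```

On a device where only ~7.8GB of unified memory is available to the NPU, a ~7.2GB model leaves essentially no headroom, so the 14B load failing while 7B succeeds is consistent with this arithmetic.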

samuel100 avatar Jun 02 '25 15:06 samuel100

Thanks @samuel100. Do you know why the new Copilot+ laptops with 16GB on Snapdragon X Elite chips allocate a standard ~7.8GB of unified memory to the NPU? I have both an Asus and a Lenovo, and it should work since these devices are optimized for such workloads. But it doesn't, and I gave up tweaking the settings on the Asus as it just maxed out the memory or throttled close to it.

However, I tested it on my Windows 11 Qualcomm Snapdragon SDK and it works, as it has the full 16GB of unified memory.

jasonwtli avatar Jun 03 '25 06:06 jasonwtli

@samuel100 I can also confirm this error on an Asus Vivobook ARM device.

Issue with Model Deepseek R1 14b

Image

Model Load Error

Image

Image

As you can see, you don't get any error indicating a memory problem. In the diag output:

```
C:\Users\DPEGA>foundry service diag
===================================================
[System] %USERPROFILE%: C:\Users\DPEGA
[System] Environment.ProcessPath: C:\Program Files\WindowsApps\Microsoft.FoundryLocal_0.3.9267.43123_arm64__8wekyb3d8bbwe\foundry.exe
[System] Environment.ProcessId: 18268
[System] Environment.Is64BitProcess: True
[System] Environment.Is64BitOperatingSystem: True
[System] Environment.IsPrivilegedProcess: False
[System] Environment.CurrentDirectory: C:\Users\DPEGA
[System] Environment.SystemDirectory: C:\WINDOWS\system32
[System] OperatingSystem.IsWindows: True
[System] OperatingSystem.IsLinux: False
[System] OperatingSystem.IsMacOS: False
[System] Path.GetFullPath(.): C:\Users\DPEGA
[System] IPGlobalProperties.HostName: qualcomm
[System] IPGlobalProperties.DomainName:
[System] TcpListeners.Length: 26
===================================================
[FL CLI] Folder: C:\Program Files\WindowsApps\Microsoft.FoundryLocal_0.3.9267.43123_arm64__8wekyb3d8bbwe
[FL CLI] Config:
{
  "defaultLogLevel": 2,
  "serviceSettings": {
    "host": "localhost",
    "port": 5273,
    "cacheDirectoryPath": "C:\\Users\\DPEGA\\.foundry\\cache\\models",
    "schema": "http",
    "pipeName": "inference_agent",
    "defaultSecondsForModelTTL": 600
  }
}
===================================================
```

leestott avatar Jun 03 '25 08:06 leestott