
[User] How to specify which CUDA device to use programmatically


Say I have four Nvidia cards and I want to run four models, one on each card, within a single program. The SDK doesn't seem to provide a parameter to specify which CUDA device to run a model on?
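(A common workaround, absent an SDK-level device parameter, is to run one process per card and restrict each process's visible devices with the standard `CUDA_VISIBLE_DEVICES` environment variable. The sketch below builds one launch command per GPU; the binary path `./main` and the `model-{i}.gguf` filenames are placeholders, not part of the llama.cpp API.)

```python
import os
import subprocess  # for the optional launch at the bottom


def build_jobs(n_gpus, binary="./main", model_template="model-{i}.gguf"):
    """Return (command, env) pairs, one per GPU.

    Each environment pins CUDA_VISIBLE_DEVICES to a single device index,
    so the process only sees that one card and loads its model there.
    """
    jobs = []
    for i in range(n_gpus):
        env = dict(os.environ)
        env["CUDA_VISIBLE_DEVICES"] = str(i)  # process i sees only GPU i
        cmd = [binary, "-m", model_template.format(i=i)]
        jobs.append((cmd, env))
    return jobs


if __name__ == "__main__":
    for cmd, env in build_jobs(4):
        print(env["CUDA_VISIBLE_DEVICES"], cmd)
        # subprocess.Popen(cmd, env=env)  # uncomment to actually launch
```

Inside each process, CUDA renumbers the visible device as device 0, so the model code needs no changes.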

— huichen, May 28 '23 07:05

Fixed by https://github.com/ggerganov/llama.cpp/pull/1607 .

— JohannesGaessler, May 28 '23 09:05

Looks awesome!

— huichen, May 29 '23 11:05

This issue was closed because it has been inactive for 14 days since being marked as stale.

— github-actions[bot], Apr 09 '24 01:04