Choose backend (GPU/CPU)

Open xd009642 opened this issue 6 years ago • 12 comments

So I couldn't find a way to do this in the documentation, but I was wondering: is there a way to force dlib to use the CPU if it's built with CUDA enabled? I ask because I'm making a multithreaded application with dlib, and only a maximum of 2 threads at a time can use CUDA (because of limitations of the GPU itself). So, within a given thread, I'd like dlib to use the GPU if fewer than 2 threads are currently using it, and otherwise fall back to the CPU.

Also, a slightly related question: how would dlib cope with a machine that has multiple GPUs installed, all for compute purposes?

xd009642 avatar Jul 31 '19 21:07 xd009642

There is no way to switch at runtime. Whichever mode you build it in is the mode in which it will run. To assign things to different GPUs you use the normal cudaSetDevice() function provided by the CUDA runtime.

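For illustration, here's a minimal sketch of that multi-GPU pattern, one host thread per device. cudaSetDevice() and cudaGetDeviceCount() are the real CUDA runtime calls; run_dlib_work_on() just stands in for whatever dlib pipeline you actually run.

#include <cuda_runtime.h>
#include <thread>
#include <vector>

void run_dlib_work_on(int gpu)
{
    cudaSetDevice(gpu);  // bind this host thread to the given GPU
    // ... build nets / run inference here; dlib's CUDA work in this thread now targets `gpu` ...
}

int main()
{
    int gpu_count = 0;
    cudaGetDeviceCount(&gpu_count);

    std::vector<std::thread> workers;
    for (int i = 0; i < gpu_count; ++i)
        workers.emplace_back(run_dlib_work_on, i);
    for (auto& t : workers)
        t.join();
}
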
davisking avatar Aug 01 '19 01:08 davisking

Would there be any interest in adding the ability to run something on the CPU if built with GPU support? Being able to handle this in a single binary would be preferable to building two different binaries linked against two different builds of dlib and coordinating between them.

cudaSetDevice() is super useful for the second question though, cheers! I just saw in the docs that it sets the device for the calling host thread 😊

xd009642 avatar Aug 01 '19 22:08 xd009642

Sure, such an option would be cool, so a PR that sets that up would be great. It would probably best be accomplished via an API similar to cudaSetDevice(): that is, calling some global function to select CPU or GPU for the current thread.

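Purely as a sketch of what that could look like from the caller's side (dlib::set_use_cuda() does not exist in dlib today; the name is illustrative only):

dlib::set_use_cuda(false);   // hypothetical: this thread runs dlib's DNN code on the CPU
// ... run CPU-bound dlib work in this thread ...

dlib::set_use_cuda(true);    // hypothetical: this thread switches back to the GPU path
// ... run GPU-bound dlib work in this thread ...
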
davisking avatar Aug 02 '19 01:08 davisking

Warning: this issue has been inactive for 35 days and will be automatically closed on 2019-09-16 if there is no further activity.

If you are waiting for a response but haven't received one, it's possible your question is somehow inappropriate: e.g. it is off topic, you didn't follow the issue submission instructions, or your question is easily answerable by reading the FAQ, dlib's official compilation instructions, dlib's API documentation, or a Google search.

dlib-issue-bot avatar Sep 06 '19 08:09 dlib-issue-bot

Notice: this issue has been closed because it has been inactive for 45 days. You may reopen this issue if it has been closed in error.

dlib-issue-bot avatar Sep 16 '19 08:09 dlib-issue-bot

Is there a chance to reopen this? Maybe @xd009642 made some progress? I am in a situation where I use dlib in GPU mode for inference, but depending on the training set I need to switch to the CPU because GPU memory runs out.

pliablepixels avatar Nov 16 '19 15:11 pliablepixels

Continuing on a closed thread, @davisking, unless you think this warrants a new issue.

I took a quick look at the code; it looks like the CPU code is compiled out when DLIB_USE_CUDA is defined, so it's not a simple matter of setting a runtime flag. Supporting a switch to the CPU when the GPU is enabled would be a non-trivial effort that would involve keeping both code paths compiled in. Is that right, or am I going down the wrong path?

pliablepixels avatar Nov 16 '19 16:11 pliablepixels

I didn't make any progress aside from looking at the code; unfortunately, real life interfered and I lost the need for this functionality at work. My approach, though, would have been to keep both code paths compiled in, as well as to provide an option to keep the old behaviour (just in case any users depend on the old behaviour for binary size, etc.).

I could potentially have another look at this over the Christmas holidays, but I realistically won't get a chance before December 20th, so you can always give it a shot if you wish.

xd009642 avatar Nov 16 '19 18:11 xd009642

I don't think it's a big deal to support this. The change is basically just to replace a bunch of:

#ifdef DLIB_USE_CUDA
do_this();
#else
do_that();
#endif

statements with something like:

if (dlib::use_cuda())
    do_this();
else
    do_that();

Where dlib::use_cuda() simply returns a thread-local bool. The state of the bool needs to be initialized based on the DLIB_USE_CUDA macro, and a few variable declarations that are conditionally created based on DLIB_USE_CUDA need to be updated. But other than that, it doesn't seem like there is anything else to do.

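A minimal sketch of that idea, assuming the names used above (this is not actual dlib code):

namespace dlib
{
    // Per-thread backend flag; its default comes from how dlib was built.
    inline bool& use_cuda_flag()
    {
#ifdef DLIB_USE_CUDA
        thread_local bool flag = true;   // CUDA build: default to the GPU path
#else
        thread_local bool flag = false;  // CPU-only build: GPU path unavailable
#endif
        return flag;
    }

    inline bool use_cuda() { return use_cuda_flag(); }

    inline void set_use_cuda(bool on)
    {
#ifdef DLIB_USE_CUDA
        use_cuda_flag() = on;
#else
        (void)on;  // CPU-only build: there is no GPU path to switch to
#endif
    }
}
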
I don't think there needs to be any option to keep the old behavior where one path is compiled out.

davisking avatar Nov 16 '19 23:11 davisking

I don't think there needs to be any option to keep the old behavior where one path is compiled out.

In my opinion, it would be really nice if we could still build CPU-only versions that have no CUDA dependencies whatsoever (during compilation, linking, or runtime execution).

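For reference, that choice is currently made at configure time via dlib's existing DLIB_USE_CUDA CMake option, roughly like this (exact flags depend on your setup):

mkdir build && cd build
cmake .. -DDLIB_USE_CUDA=0
cmake --build . --config Release
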
reunanen avatar Jan 04 '20 14:01 reunanen

Oh yeah, I didn't mean CUDA would become required. The options would be CPU-only, or CPU and GPU; no GPU-only option.

davisking avatar Jan 04 '20 14:01 davisking