
use too much CPU resource

matthew-z opened this issue 5 years ago • 11 comments

Hi, gpustat -i uses about 80% CPU on my machine. Is this expected, or is it a bug? In contrast, nvidia-smi -l 1 uses less than 10%.

OS: Ubuntu 18.04
NVIDIA driver: 410.48
CUDA: 10.0.130
CPU: AMD Threadripper 1900x
GPU: 2080 Ti + 1080


matthew-z avatar Nov 08 '18 14:11 matthew-z

What's your interval value?

Stonesjtu avatar Nov 09 '18 01:11 Stonesjtu

I didn't set it, but gpustat -i 1 reproduces the same result.

matthew-z avatar Nov 09 '18 05:11 matthew-z

Can you test whether watch -n 1 nvidia-smi and watch -n 1 gpustat use the same amount of CPU time?

Stonesjtu avatar Nov 09 '18 07:11 Stonesjtu

I tested, and with watch -n 1 they use the same amount of CPU time (about 0-20%).

matthew-z avatar Nov 09 '18 07:11 matthew-z

Hi, I just found that the CPU time problem of gpustat -i can be solved by running nvidia-smi daemon first.

matthew-z avatar Nov 09 '18 07:11 matthew-z

A difference is that in the watch mode (i.e. gpustat -i) handle resources are fetched at every time step, which is somewhat expensive. Therefore we could optimize by fetching the GPU handles only once at the beginning and reusing the (cached) handles afterwards. This is possible in the watch mode because the gpustat process won't terminate until interrupted.

wookayin avatar Nov 09 '18 21:11 wookayin
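The caching idea can be sketched independently of NVML. In this hypothetical sketch, `get_device_handle` stands in for `pynvml.nvmlDeviceGetHandleByIndex`, and the `time.sleep` simulates the costly driver lookup; this is an illustration of the pattern, not gpustat's actual code:

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def get_device_handle(index):
    """Stand-in for pynvml.nvmlDeviceGetHandleByIndex(index).

    In the watch loop this lookup would otherwise run every tick;
    memoizing it means only the first tick pays the cost.
    """
    time.sleep(0.01)  # simulate the expensive NVML call
    return "handle-{}".format(index)

# First call pays the lookup cost; later ticks reuse the cached handle.
start = time.perf_counter()
get_device_handle(0)
first = time.perf_counter() - start

start = time.perf_counter()
get_device_handle(0)
cached = time.perf_counter() - start

print(cached < first)
```

Any memoization mechanism works here; `functools.lru_cache` is just the idiomatic stdlib choice for a long-lived process that queries a fixed set of devices.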

Top four most expensive operations (columns are own/total CPU share and own/total time per function):

   %Own   %Total  OwnTime  TotalTime  Function
 36.50%  36.50%    4.59s     4.59s   nvmlDeviceGetHandleByIndex (pynvml.py:945)
 16.50%  16.50%    1.85s     1.85s   nvmlDeviceGetPowerUsage (pynvml.py:1289)
  9.50%   9.50%    1.18s     1.18s   nvmlDeviceGetUtilizationRates (pynvml.py:1379)
  7.50%   7.50%   0.805s    0.805s   nvmlDeviceGetComputeRunningProcesses (pynvml.py:1435)

wookayin avatar Nov 09 '18 21:11 wookayin

@wookayin Good point, I'm working on that.

Stonesjtu avatar Nov 10 '18 03:11 Stonesjtu

Working on this as #61.

In my case querying power usage is the most expensive part, so I made it optional whenever possible. Could anybody check whether it leads to lower CPU usage?

wookayin avatar Feb 24 '19 02:02 wookayin

> A difference is that in the watch mode (i.e. gpustat -i) handle resources are fetched at every time step, which is somewhat expensive. Therefore we could optimize in a way that GPU handles are fetched only once in the beginning, and use the (cached) resources. This would be possible in the watch mode as the gpustat process won't terminate until interrupted.

But I still don't understand the difference between watch -n 1 gpustat and gpustat -i 1. Both of them need to call print_gpustat() every tick, while watch requires the additional step of parsing command-line arguments again and again. So intuitively the former should take longer.

BTW, here is the source code of watch if needed: watch.c, where I found nothing useful :(

JalinWang avatar Mar 18 '21 14:03 JalinWang

Ter,

Most of the time is not actually spent on parsing CLI options, which takes only a few microseconds at most.

Stonesjtu avatar Mar 18 '21 14:03 Stonesjtu
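For reference, the per-tick overhead that watch does add is process startup rather than argument parsing: watch re-executes the whole command every tick, paying interpreter startup (and, for gpustat, NVML init) each time, whereas gpustat -i stays resident and only re-queries. A rough, environment-dependent sketch of that comparison (the `query` function is a hypothetical stand-in for one in-process tick):

```python
import subprocess
import sys
import time

# One "tick" under watch: spawn a fresh interpreter that does nothing.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
spawn_cost = time.perf_counter() - start

# One "tick" in a resident process: just call the query function again.
def query():
    return 42  # stand-in for re-reading GPU stats in-process

start = time.perf_counter()
query()
resident_cost = time.perf_counter() - start

# Interpreter startup typically dominates by several orders of magnitude.
print(spawn_cost > resident_cost)
```

So the intuition that watch "should take longer" per tick is right as far as startup overhead goes; the thread's observation that gpustat -i used more CPU points at what the resident process did on every tick (re-fetching handles), not at watch itself.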

Has this issue been resolved? I am observing this behavior in https://github.com/ray-project/ray/ when we run gpustat.new_query() repeatedly on GCE.


rkooo567 avatar May 31 '23 01:05 rkooo567

Lots of time is spent in nvmlInit, nvmlShutdown, and nvmlDeviceGetHandleByIndex.

rkooo567 avatar May 31 '23 01:05 rkooo567

In recent versions of pynvml, nvmlDeviceGetHandleByIndex doesn't seem to be a bottleneck according to profiling results (if it is still slow, please let me know), so I did not optimize away redundant calls to nvmlDeviceGetHandleByIndex. #166 makes nvmlInit() be called only once, so it should bring some performance benefit.

wookayin avatar Nov 24 '23 07:11 wookayin
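The init-once idea behind #166 can be sketched as follows. This is a hedged illustration rather than gpustat's actual implementation: `NVMLContext` and the call-counting `_CountingNVML` stub are hypothetical names, with the stub standing in for the real `pynvml` module so the sketch runs without a GPU:

```python
import threading

class NVMLContext:
    """Ensure nvml.nvmlInit() runs at most once per process,
    even when queries come from multiple threads."""

    def __init__(self, nvml):
        self._nvml = nvml
        self._lock = threading.Lock()
        self._initialized = False

    def ensure_init(self):
        with self._lock:
            if not self._initialized:
                self._nvml.nvmlInit()
                self._initialized = True

# Stub counting init calls, standing in for the real pynvml module.
class _CountingNVML:
    def __init__(self):
        self.init_calls = 0

    def nvmlInit(self):
        self.init_calls += 1

nvml = _CountingNVML()
ctx = NVMLContext(nvml)
for _ in range(5):       # repeated queries, e.g. gpustat.new_query()
    ctx.ensure_init()
print(nvml.init_calls)   # 1
```

Keeping one initialized context alive across calls is what removes the repeated nvmlInit/nvmlShutdown cost seen in the profile above; a matching one-time shutdown (e.g. via atexit) would complete the pattern.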