
Performance anomaly in solve()

Open anadon opened this issue 7 years ago • 11 comments

I was up until 2AM, so I'm only now getting to overnight results and haven't tested all functions.

It looks like solve() on one of my systems (I did get a different one working) only gets ~10X speedup vs 1 CPU core, and when more cores are involved there is no performance difference. Tested with POCL and the multithreaded solve(); system tools indicate the solve job is being completed on the GPU.

test script: https://gist.github.com/anadon/4ed315a8a64db7c455fbd580f6ecece0 clinfo: https://gist.github.com/anadon/c5dbfea49e1c9b6a4884330efd114810
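
The gist itself isn't reproduced here, but the comparison is along these lines (a minimal sketch, assuming a working OpenCL context; gpuMatrix stages data in host memory while vclMatrix keeps it resident on the device):

```r
library(gpuR)

set.seed(0)
n <- 1000
A <- matrix(rnorm(n * n), n, n)

system.time(solve(A))             # base R on the CPU
system.time(solve(gpuMatrix(A)))  # gpuR, host-resident matrix
system.time(solve(vclMatrix(A)))  # gpuR, device-resident matrix
```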

Any idea what might be going on?

anadon avatar Apr 25 '18 12:04 anadon

It looks like there is something wrong with the combination of the POCL runtime and gpuMatrix. Given that it is all CPU based, it should behave identically to vclMatrix, but it is stalling.

anadon avatar Apr 25 '18 14:04 anadon

Here are my final (cleaned up) results:

| Run | AMDGPU-PRO, GPU memory (s) | AMDGPU-PRO, main memory (s) | CPU (s) |
|-----|----------------------------|-----------------------------|---------|
| 1   | 0.341                      | 0.072                       | 0.052   |
| 2   | 0.048                      | 0.197                       | 0.090   |
| 3   | 0.258                      | 0.912                       | 0.357   |
| 4   | 2.387                      | 6.165                       | 1.819   |
| 5   | 27.039                     | 55.894                      | 12.041  |
| 6   | 198.641                    | 482.267                     | 90.615  |

All values are elapsed seconds from system.time().

So something is definitely wrong.

anadon avatar Apr 25 '18 15:04 anadon

It looks like solve() as implemented in src/solve.cpp should be using ViennaCL's viennacl::linalg::opencl::inplace_solve, and the deep copy of the A matrix in R/solve.R:33 can be skipped entirely. The implementations in viennacl::linalg and viennacl::linalg::opencl appear to vary significantly. This may be systematic.

Here are a number of the documents I'm looking at:

- http://viennacl.sourceforge.net/doc/manual-operations.html
- http://viennacl.sourceforge.net/doc/namespaceviennacl_1_1linalg.html
- http://viennacl.sourceforge.net/doc/namespaceviennacl_1_1linalg.html#a6e9b329b64ac782e6a5687ad2fc47a2a
- http://viennacl.sourceforge.net/doc/bicgstab_8hpp_source.html#l00496
- http://viennacl.sourceforge.net/doc/direct__solve_8hpp_source.html#l00492
- http://viennacl.sourceforge.net/doc/namespaceviennacl_1_1linalg_1_1opencl.html#ad6cf7d9d8b5ccaf3088350c4b9168dc5
- http://viennacl.sourceforge.net/doc/opencl_2direct__solve_8hpp_source.html#l00126
- http://viennacl.sourceforge.net/doc/opencl_2sparse__matrix__operations_8hpp_source.html#l00382
- http://viennacl.sourceforge.net/doc/namespaceviennacl_1_1linalg_1_1opencl.html#ac90a3dca0595b15ae04662bddb6cf57c

anadon avatar Apr 25 '18 17:04 anadon

cleaned up my test script for easier use: https://gist.github.com/74f6c584b501e617af0fc1d9a2707d98

anadon avatar Apr 25 '18 18:04 anadon

@cdeterman Can you tell me how you run testthat for this project? I've been having a little difficulty, and I have a few other projects splitting my attention.

anadon avatar May 08 '18 01:05 anadon

@anadon what problem are you running in to? The testthat framework is generally pretty intuitive. In this package I have test files for both CPU and GPU specific tests. That way I can get tests run on Travis CI. If you can explain a little more where your difficulty is I can assist.
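
One common pattern for that split (illustrative; not necessarily the exact mechanism used here) is to have the GPU-specific files skip themselves when no device is present, so CI machines without a GPU still pass:

```r
library(testthat)
library(gpuR)

test_that("solve() on vclMatrix matches base R", {
  skip_if(detectGPUs() == 0, "no GPU available")  # gate GPU-only tests
  A <- matrix(rnorm(100), 10, 10)
  expect_equal(as.matrix(solve(vclMatrix(A))), solve(A), tolerance = 1e-6)
})
```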

cdeterman avatar May 08 '18 16:05 cdeterman

Just from RStudio, trying to run anything. It should be intuitive, but I'm missing something. I've read a few things off of Google, but it just doesn't seem to be working.

anadon avatar May 08 '18 23:05 anadon

@anadon Well, you can always just use devtools::test() to execute all the tests. If you just want to run one you can use testthat::test_file(path='path/to/file').
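
For example (the file path here is just a placeholder):

```r
devtools::test()                                    # run the full suite
testthat::test_file("tests/testthat/test-solve.R")  # run a single file
```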

cdeterman avatar May 09 '18 13:05 cdeterman

Still on my list, just deprioritized as I have to get other work out.

anadon avatar May 25 '18 20:05 anadon

Dear Colleagues, what causes the gpux inversion code to be slower than the x inversion?

```r
set.seed(0)
x <- matrix(rnorm(10000, 0, 1), 100, 100)

# Base R: 1e4 inversions of a 100 x 100 matrix on the CPU
system.time(
  for (i in 1:1e4) {
    solve(x)
  })

library(gpuR)

set.seed(0)
x <- matrix(rnorm(10000, 0, 1), 100, 100)
gpux <- vclMatrix(x, 100, 100)

# gpuR: the same 1e4 inversions via vclMatrix on the GPU
system.time(
  for (i in 1:1e4) {
    solve(gpux)
  })
```
  • CPU: 10.746 sec
  • GPU: 65.432 sec

I suppose the slowness, if there is no problem in the code (more precisely, in the definition of the gpux matrix), is related to the size of the matrix. I also suppose that in each iteration of the loop there is a copy of the matrix from the local environment to the GPU.

Below is the information for my CPU and GPU, respectively:

CPU:

[pedro@pedro-avell ~]$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       39 bits physical, 48 bits virtual
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  2
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               60
Model name:          Intel(R) Core(TM) i7-4710MQ CPU @ 2.50GHz
Stepping:            3
CPU MHz:             1086.144
CPU max MHz:         3500.0000
CPU min MHz:         800.0000
BogoMIPS:            4990.29
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            6144K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts flush_l1d

GPU:

[pedro@pedro-avell deviceQuery]$ ./deviceQuery 
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 970M"
  CUDA Driver Version / Runtime Version          10.0 / 10.0
  CUDA Capability Major/Minor version number:    5.2
  Total amount of global memory:                 6084 MBytes (6379536384 bytes)
  (10) Multiprocessors, (128) CUDA Cores/MP:     1280 CUDA Cores
  GPU Max Clock rate:                            1038 MHz (1.04 GHz)
  Memory Clock rate:                             2505 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS

Best regards.

prdm0 avatar Mar 01 '19 17:03 prdm0

@prdm0 I have a few insights, but the author knows more. The big thing is that not all operations are directly supported by a single call into GPU operations. As in this issue, solve() actually breaks into 2 calls handled in R which control the offloaded GPU instructions. There is a lot of overhead in adapting to R, but there are also presumably some extraneous steps taken in the separate GPU function calls which would be skipped by a merged call. If you want to fundamentally fix this in this project, changes actually have to be made in ViennaCL.
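
You can see that per-call overhead directly by comparing many small solves against a single large one; a minimal sketch (assuming a working OpenCL context, same API as your example):

```r
library(gpuR)

set.seed(0)
n <- 2000
x <- matrix(rnorm(n * n), n, n)
gpux <- vclMatrix(x)  # data stays resident on the device

# One large solve amortizes the fixed dispatch cost that dominates
# the 1e4 small solves in your benchmark.
system.time(solve(x))     # CPU
system.time(solve(gpux))  # GPU
```

Timings are hardware-dependent; the point is the fixed cost per call, not the absolute numbers.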

If you're doing a lot of valuable computing where speed like this counts, I'd actually suggest switching from R to something like C++ or Julia. R may be statistician-friendly, but I still can't call it a good programming language for general or high-performance use cases.

anadon avatar Apr 09 '19 16:04 anadon