
OpenCL backend slower than CPU

mewo2 opened this issue 9 years ago • 13 comments

Running the tinyshakespeare dataset with the default settings, I get timings of around 0.3 s/iteration on the CPU, but around 2.6 s/iteration with the OpenCL backend. These timings are similar whether or not benchmarking is enabled. With char-rnn the timings are roughly reversed (around 3 s/iteration on CPU, 0.3 s/iteration on GPU).

Running on OS X 10.10, with a Radeon 5770.

mewo2 avatar Feb 18 '16 16:02 mewo2

I get similar slowness on other tests.

clausd avatar Feb 22 '16 11:02 clausd

OpenCL is slow for me too on a Titan X. I think the slowness is due to the OpenCL implementation of nn.LookupTable, but I'm not positive.
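For context, nn.LookupTable is essentially an embedding lookup: the forward pass is a row-gather from a weight matrix, with very little arithmetic per element, so a poorly tuned OpenCL kernel for it can easily dominate iteration time. A minimal Python sketch of the operation (illustrative only, not the Torch implementation; the function name is made up):

```python
def lookup_table(weights, indices):
    """Gather one row of `weights` per index; this is the whole forward pass
    of an embedding layer like nn.LookupTable."""
    return [weights[i] for i in indices]

# Tiny example: 4-word vocabulary, 3-dimensional embeddings.
weights = [
    [0.0, 0.1, 0.2],
    [1.0, 1.1, 1.2],
    [2.0, 2.1, 2.2],
    [3.0, 3.1, 3.2],
]
batch = [2, 0, 2]  # token ids for one timestep
print(lookup_table(weights, batch))
```

Because each output row is just a memory copy, the operation is memory-bound and irregular; with the small batches typical of char-level training there is little parallel work to amortize a slow kernel.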

jcjohnson avatar Feb 22 '16 16:02 jcjohnson

I have a similar issue with CUDA (I don't know if I should open another issue). Using the default configuration and tiny-shakespeare, I get ~0.05 s on CPU and ~0.08 s on GPU per iteration.

simopal6 avatar Mar 15 '16 10:03 simopal6

What type of GPU are you using?

jcjohnson avatar Mar 15 '16 20:03 jcjohnson

GeForce GTX TITAN X

simopal6 avatar Mar 16 '16 08:03 simopal6

Sorry, my mistake. I took a closer look at the command-line parameters: I had assumed that omitting "-gpu" would run on the CPU, but CPU mode is actually selected with "-gpu -1". With "-gpu -1" it takes about 40-50 ms per iteration.
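For anyone else tripped up by this, a usage sketch based on the flags that appear in this thread (paths assume the tiny-shakespeare preprocessing from the README):

```shell
# Train on CPU (omitting -gpu does NOT do this; the default is GPU 0):
th train.lua -input_h5 data/tiny_shakespeare.h5 \
             -input_json data/tiny_shakespeare.json \
             -gpu -1

# Train on the GPU with the OpenCL backend, printing per-pass timings:
th train.lua -input_h5 data/tiny_shakespeare.h5 \
             -input_json data/tiny_shakespeare.json \
             -gpu_backend opencl -speed_benchmark 1
```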

simopal6 avatar Mar 16 '16 08:03 simopal6

Seeing this also on an AMD Radeon R9 M370X 2048 MB. Much slower than CPU, perhaps 10x slower, as the OP suggests. Makes me wish I'd bought a machine with nvidia!

jtippett avatar Apr 11 '16 08:04 jtippett

Hello

Do I need to install OpenCL before installing cltorch and clnn?

vinhqdang avatar Apr 29 '16 12:04 vinhqdang

Based on the cltorch installation instructions, I don't think you need to install OpenCL explicitly:

https://github.com/hughperkins/cltorch#installation


jcjohnson avatar Apr 29 '16 15:04 jcjohnson

I am still getting this problem using Intel Iris Graphics 550 1536 MB on an MBP.

CPU training takes about 0.15-0.20 s per forward/backward pass, and the GPU with OpenCL about 1.4 s.

Output:

(py2) Charless-MBP:torch-rnn charles$ th train.lua -input_h5 data/tiny_shakespeare.h5 -input_json data/tiny_shakespeare.json -gpu_backend opencl -speed_benchmark 1
Using Apple, OpenCL platform: Apple
Using OpenCL device: Intel(R) Iris(TM) Graphics 550
Running with OpenCL on GPU 0
Forward / Backward pass took 4.0332989692688
Epoch 1.00 / 50, i = 1 / 17800, loss = 4.178679
Forward / Backward pass took 2.0255770683289
Epoch 1.01 / 50, i = 2 / 17800, loss = 4.086461
Forward / Backward pass took 1.5219600200653
Epoch 1.01 / 50, i = 3 / 17800, loss = 3.945212
Forward / Backward pass took 1.3577451705933
Epoch 1.01 / 50, i = 4 / 17800, loss = 3.758727
Forward / Backward pass took 1.2509491443634
Epoch 1.01 / 50, i = 5 / 17800, loss = 3.587259
Forward / Backward pass took 1.254331111908
Epoch 1.02 / 50, i = 6 / 17800, loss = 3.492134
Forward / Backward pass took 1.258672952652
Epoch 1.02 / 50, i = 7 / 17800, loss = 3.403253
Forward / Backward pass took 1.1694939136505
Epoch 1.02 / 50, i = 8 / 17800, loss = 3.414152

Any advice on fixing it?

beZXphUB avatar Apr 02 '17 04:04 beZXphUB

I am also finding that the CPU (-gpu -1; forward/backward pass takes ~0.15 s) is ~7x faster than -gpu_backend opencl (~1.05 s per pass) on my AMD Radeon R9 M395X.

Any ideas? This issue seems to have gone quiet over the last year.

timbitz avatar Mar 30 '18 18:03 timbitz

++ (running on Intel Iris Graphics 8100 1536 MB on High Sierra (10.13.4) on an early 2015 MBP).

maiamcc avatar Apr 08 '18 18:04 maiamcc

I think you're forgetting that the GPU clock speed is roughly 1/3 of the CPU clock speed, which is why a pass can take about three times longer.

GPUs are good when you have a large amount of work to do in parallel, because they generally have many more cores than a CPU, particularly if you take advantage of the vector data types.

elspru avatar Apr 20 '18 16:04 elspru