Kristoffer Carlsson

Results: 1630 comments by Kristoffer Carlsson

All failures seem to be

```
[21:17:44] -- Checking for module 'libavcodec>=57.0'
[21:17:44] -- Package 'libavcodec', required by 'virtual:world', not found
```

Arm:

```
sandbox:${WORKSPACE} # uname -m
armv7l
sandbox:${WORKSPACE}...
```

Julia nightly bug. Nothing to do here.

Yes, the tree hash issue is a nightly issue and should be fixed.

> Error appears to now be

Should be fixed via https://github.com/JuliaLang/julia/pull/46866

I don't really know why this would happen. The REPL should be orthogonal to the loading of code. Feel free to open an issue though and I can try to look...

There is https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnConvolutionBiasActivationForward to do the whole forward pass in one shot and then one can use https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnConvolutionBackwardBias and https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnConvolutionBackwardData for the backward pass?
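For reference, a rough C sketch of how the fused forward entry point from the linked docs could be wired up. The shapes, the ReLU activation, and the IMPLICIT_PRECOMP_GEMM algorithm choice are illustrative assumptions here, not something taken from the PR under discussion:

```
// Sketch only: fused conv + bias + activation via the cuDNN C API.
// Shapes, ReLU, and the algorithm choice are assumptions for illustration.
#include <cudnn.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define CHECK(call) do { cudnnStatus_t s = (call); \
    if (s != CUDNN_STATUS_SUCCESS) { printf("%s\n", cudnnGetErrorString(s)); return 1; } } while (0)

int main(void) {
    const int N = 1, C = 3, H = 32, W = 32;   // input, NCHW
    const int K = 16, R = 3, S = 3;           // filter, KCRS; pad 1 keeps H/W
    const float alpha1 = 1.0f, alpha2 = 0.0f; // y = act(conv(x,w) + bias), z term disabled

    cudnnHandle_t handle;
    CHECK(cudnnCreate(&handle));

    cudnnTensorDescriptor_t xDesc, yDesc, bDesc;
    cudnnFilterDescriptor_t wDesc;
    cudnnConvolutionDescriptor_t convDesc;
    cudnnActivationDescriptor_t actDesc;
    CHECK(cudnnCreateTensorDescriptor(&xDesc));
    CHECK(cudnnCreateTensorDescriptor(&yDesc));
    CHECK(cudnnCreateTensorDescriptor(&bDesc));
    CHECK(cudnnCreateFilterDescriptor(&wDesc));
    CHECK(cudnnCreateConvolutionDescriptor(&convDesc));
    CHECK(cudnnCreateActivationDescriptor(&actDesc));

    CHECK(cudnnSetTensor4dDescriptor(xDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, N, C, H, W));
    CHECK(cudnnSetTensor4dDescriptor(yDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, N, K, H, W));
    CHECK(cudnnSetTensor4dDescriptor(bDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, K, 1, 1));
    CHECK(cudnnSetFilter4dDescriptor(wDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW, K, C, R, S));
    CHECK(cudnnSetConvolution2dDescriptor(convDesc, 1, 1, 1, 1, 1, 1,
                                          CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT));
    // The fused kernel only supports identity or ReLU activations.
    CHECK(cudnnSetActivationDescriptor(actDesc, CUDNN_ACTIVATION_RELU, CUDNN_NOT_PROPAGATE_NAN, 0.0));

    // The fused call is documented for the IMPLICIT_PRECOMP_GEMM algorithm.
    cudnnConvolutionFwdAlgo_t algo = CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM;
    size_t wsSize = 0;
    CHECK(cudnnGetConvolutionForwardWorkspaceSize(handle, xDesc, wDesc, convDesc, yDesc, algo, &wsSize));

    float *x, *w, *b, *y; void *ws;
    cudaMalloc((void**)&x, sizeof(float) * N * C * H * W);
    cudaMalloc((void**)&w, sizeof(float) * K * C * R * S);
    cudaMalloc((void**)&b, sizeof(float) * K);
    cudaMalloc((void**)&y, sizeof(float) * N * K * H * W);
    cudaMalloc(&ws, wsSize);

    // y = ReLU(alpha1 * conv(x, w) + alpha2 * z + bias); y is passed as z with alpha2 = 0.
    CHECK(cudnnConvolutionBiasActivationForward(handle, &alpha1, xDesc, x, wDesc, w,
                                                convDesc, algo, ws, wsSize,
                                                &alpha2, yDesc, y, bDesc, b,
                                                actDesc, yDesc, y));
    printf("fused conv + bias + activation launched\n");

    cudaFree(x); cudaFree(w); cudaFree(b); cudaFree(y); cudaFree(ws);
    cudnnDestroy(handle);
    return 0;
}
```

The backward pass would then be split across the separate cudnnConvolutionBackwardBias and cudnnConvolutionBackwardData calls linked above.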

Heh, I didn't know there already was an implementation, so I did one myself (although worse than the one in the PR). Getting: ![capture2](https://user-images.githubusercontent.com/1282691/51412785-19d0a480-1b6d-11e9-9bf6-ffa0f7760727.PNG) so it seems even for CUDNN the bias...

Looking only at the forward pass we currently have:

```
GPU activities:   59.96%  184.91ms  2350  78.683us  29.184us  117.03us  ptxcall_anonymous23_3
                  31.96%  98.559ms  2350  41.940us  16.960us  67.488us  void cudnn::detail::implicit_convolve_sgemm(int, int, int, float...
```

I guess first thing would be to see if it replicates reliably locally.