Kristoffer Carlsson
All failures seem to be
```
[21:17:44] -- Checking for module 'libavcodec>=57.0'
[21:17:44] -- Package 'libavcodec', required by 'virtual:world', not found
```
Arm:
```
sandbox:${WORKSPACE} # uname -m
armv7l
sandbox:${WORKSPACE}...
```
Julia nightly bug. Nothing to do here.
Yes, the tree hash issue is a nightly issue and should be fixed.
> Error appears to now be

Should be fixed via https://github.com/JuliaLang/julia/pull/46866
I don't really know why this would happen. The REPL should be orthogonal to the loading of code. Feel free to open an issue though, and I can try to look...
Yay, OMR is innocent!
There is https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnConvolutionBiasActivationForward to do the whole forward pass in one shot and then one can use https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnConvolutionBackwardBias and https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnConvolutionBackwardData for the backward pass?
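For reference, the call sequence would look roughly like this (a minimal C sketch against the documented cuDNN API; descriptor, algorithm, and workspace setup plus error checking are omitted, and the two wrapper function names are my own, not from any existing implementation):
```c
#include <cudnn.h>

/* Fused forward: y = act(alpha1 * conv(x, w) + alpha2 * z + bias).
   With alpha2 = 0 the z input is ignored, so y can be passed for z. */
static void fused_conv_bias_act_forward(
    cudnnHandle_t handle,
    cudnnTensorDescriptor_t xDesc, const void *x,
    cudnnFilterDescriptor_t wDesc, const void *w,
    cudnnConvolutionDescriptor_t convDesc, cudnnConvolutionFwdAlgo_t algo,
    void *workspace, size_t workspaceSize,
    cudnnTensorDescriptor_t biasDesc, const void *bias,
    cudnnActivationDescriptor_t actDesc, /* must be RELU, or IDENTITY with
                                            the IMPLICIT_PRECOMP_GEMM algo */
    cudnnTensorDescriptor_t yDesc, void *y)
{
    const float alpha1 = 1.0f, alpha2 = 0.0f;
    cudnnConvolutionBiasActivationForward(handle,
        &alpha1, xDesc, x, wDesc, w, convDesc, algo,
        workspace, workspaceSize,
        &alpha2, yDesc, y,              /* z: reuse y, scaled by zero */
        biasDesc, bias, actDesc, yDesc, y);
}

/* Backward: db = reduction of dy over N,H,W per channel;
   dx = "transposed" convolution of dy with w. */
static void conv_bias_backward(
    cudnnHandle_t handle,
    cudnnTensorDescriptor_t dyDesc, const void *dy,
    cudnnTensorDescriptor_t dbDesc, void *db,
    cudnnFilterDescriptor_t wDesc, const void *w,
    cudnnConvolutionDescriptor_t convDesc, cudnnConvolutionBwdDataAlgo_t algo,
    void *workspace, size_t workspaceSize,
    cudnnTensorDescriptor_t dxDesc, void *dx)
{
    const float alpha = 1.0f, beta = 0.0f;
    cudnnConvolutionBackwardBias(handle, &alpha, dyDesc, dy,
                                 &beta, dbDesc, db);
    cudnnConvolutionBackwardData(handle, &alpha, wDesc, w, dyDesc, dy,
                                 convDesc, algo, workspace, workspaceSize,
                                 &beta, dxDesc, dx);
}
```
Note that cuDNN restricts the fused call to the ReLU activation (or identity with the IMPLICIT_PRECOMP_GEMM algorithm), which is one reason a separate bias + activation kernel is still needed in the general case.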
Heh, I didn't know there was already an implementation, so I did one myself (although worse than the one in the PR). Getting: so it seems even for CUDNN the bias...
Looking only at the forward pass we currently have:
```
GPU activities:  59.96%  184.91ms  2350  78.683us  29.184us  117.03us  ptxcall_anonymous23_3
                 31.96%  98.559ms  2350  41.940us  16.960us  67.488us  void cudnn::detail::implicit_convolve_sgemm(int, int, int, float...
```
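The 59.96% in `ptxcall_anonymous23_3` is presumably the elementwise broadcast for the bias add plus activation, which makes one extra full read/write pass over the conv output; that is how it can cost more than the convolution itself. Roughly this shape of kernel (an illustrative CUDA sketch for NCHW layout, not the actual generated ptxcall code; the name `add_bias_relu` is made up):
```c
// One extra pass over all of y just to add the per-channel bias and apply
// the activation; this is memory-bound, not compute-bound.
__global__ void add_bias_relu(float *y, const float *bias,
                              int n, int c, int hw)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n * c * hw) {
        int channel = (i / hw) % c;        // NCHW: channel index of element i
        float v = y[i] + bias[channel];    // per-channel bias add
        y[i] = v > 0.0f ? v : 0.0f;        // ReLU
    }
}
```
Fusing this into the convolution epilogue, as cudnnConvolutionBiasActivationForward does, removes that extra pass over memory.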
I guess the first thing would be to see if it replicates reliably locally.