bhack
/cc @gstoner
Are we moving away from the libdnn/convolution dogma? ;)
@gstoner as we have already discussed with @naibaf7, this kind of [HSAIL kernels](https://bitbucket.org/multicoreware/hccaffe) approach doesn't do much to close the gap with cuDNN.
It would be interesting to know if AMD, with @GPUOpen-ProfessionalCompute-Libraries, will support the new OpenVX neural network extension via https://github.com/GPUOpen-ProfessionalCompute-Libraries/amdovx-modules/
/cc @hughperkins I think he could be interested in the last comments of this thread.
Intel Beignet 2.0 is done: https://lists.freedesktop.org/archives/beignet/2017-January/008476.html
@naibaf7 Is there a possibility to have the Intel kernels upstreamed? I think MKL-DNN and MKL 2017 will cover only the CPU.
@naibaf7 What do you think of [this design](http://libocca.org/documentation/API/CPP)?
What do you want our responsibilities to be? Device, memory, and context? Kernel launch?
We have no problem handling device, memory, and context. If you think it would be useful to have these features here, OK. If not, we will implement this in...