Cédric Bovar
Hi,
* What are the errors?
* Are you using GPU?
Hi, I think this is the same request as #52. There are no unpooling or transposed convolution layers yet. It's definitely something that is missing in ConvNetSharp. I'd be happy...
I have just tried with cuDNN v6.0 (April 27, 2017) for CUDA 8.0 and the unit tests pass. I will update README.md because 6.1 doesn't seem to appear on nvidia...
There are some small differences between MnistDemo CPU and GPU:
- float vs double
- different batch size (20 vs 1024)
- GPU version doesn't use L2 regularization and Momentum...
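For comparison, the CPU demo configures its trainer roughly like this (a minimal sketch based on the MnistDemo source; the exact values and property names such as `L2Decay` and `Momentum` may differ between versions):

```csharp
using ConvNetSharp.Core;
using ConvNetSharp.Core.Layers.Double;
using ConvNetSharp.Core.Training.Double;

var net = new Net<double>();
net.AddLayer(new InputLayer(28, 28, 1));
net.AddLayer(new FullyConnLayer(10));
net.AddLayer(new SoftmaxLayer(10));

// CPU demo style: double precision, small batches, regularization + momentum.
var trainer = new SgdTrainer(net)
{
    LearningRate = 0.01,
    BatchSize = 20,   // the GPU demo uses much larger batches (1024)
    L2Decay = 0.001,  // not set in the GPU demo
    Momentum = 0.9    // not set in the GPU demo
};
```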
Hi, What is the size of your batch? The GPU should be better utilized when sending bigger batches.
Which sample did you run exactly?
I've run MnistDemo.GPU in Release mode and got this in the VS2017 profiler: [profiler screenshot] I'm not sure, however, that it reliably represents the GPU utilization.
Hi. There used to be a 'MergeLayer' to do that. It seems it somehow disappeared after some refactoring. I am currently on holidays with no access to a computer....
I've found those two classes from version 0.2.0: [MergeLayer](https://github.com/cbovar/ConvNetSharp/blob/v0.2.0/src/ConvNetSharp/Layers/MergeLayer.cs) and [VolumeWrapper](https://github.com/cbovar/ConvNetSharp/blob/v0.2.0/src/ConvNetSharp/VolumeWrapper.cs). They can give an idea of how to implement a `MergeLayer` in the current version.
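For illustration, here is a minimal standalone sketch of the idea behind such a layer: concatenate two activation volumes along the depth axis on the forward pass, and split the gradient back on the backward pass (plain arrays here instead of ConvNetSharp's `LayerBase`/`Volume` types; all names are hypothetical):

```csharp
// Hypothetical sketch of a depth-concatenation merge.
// Inputs are stored as [depth][width * height], same spatial size.
public static class MergeSketch
{
    // Forward: stack the channels of both inputs into one volume.
    public static double[][] Forward(double[][] a, double[][] b)
    {
        var merged = new double[a.Length + b.Length][];
        a.CopyTo(merged, 0);
        b.CopyTo(merged, a.Length);
        return merged;
    }

    // Backward: route the output gradient back to each input's channels.
    public static (double[][] GradA, double[][] GradB) Backward(
        double[][] outputGrad, int depthA)
    {
        return (outputGrad[..depthA], outputGrad[depthA..]);
    }
}
```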
Hi, It should be possible using the 'Flow' part of ConvNetSharp (by creating a computation graph). I will try to implement your example in ConvNetSharp soon and will post it...
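In the meantime, the general pattern looks roughly like this (a sketch based on the graph example in the repository README; names like `PlaceHolder`, `Session` and the operator overloads are assumptions that may vary between versions):

```csharp
using System;
using System.Collections.Generic;
using ConvNetSharp.Flow;
using ConvNetSharp.Volume;

// Build the graph symbolically: f(x) = x * x
var cns = new ConvNetSharp<float>();
var x = cns.PlaceHolder("x");
var fun = x * x;

using (var session = new Session<float>())
{
    // Feed a concrete value for the placeholder and evaluate.
    var input = BuilderInstance<float>.Volume.From(new[] { 3.0f }, new Shape(1));
    var result = session.Run(fun, new Dictionary<string, Volume<float>> { { "x", input } });
    Console.WriteLine(result); // f(3) = 9
}
```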