ConvNetSharp
Pure Flow Documentation
Thank you for this great library.
I'm trying to use the pure flow approach, but I have not been able to understand how to do it by looking at ExampleCpuSingle.... Could you please provide sample code showing how to build and train the net in the example image?
Thank you very much
Hi. I'm currently on holidays, with no access to a computer. I will reply when I get back.
Fine, thanks! Have a nice holiday! Looking forward to your reply.
By the way, the example image is a screenshot of the output of the MnistDemo.Flow.GPU demo.
In this demo, a network is built using layers from ConvNetSharp.Flow.Layers. Those layers build a computation graph. You can see the Flow approach in use in ConvNetSharp.Flow.Layers.ConvLayer, for example:
var cns = ConvNetSharp<T>.Instance;
using (ConvNetSharp<T>.Instance.Scope($"ConvLayer_{this.Id}"))
{
    var content = new T[this._filterCount].Populate(this.BiasPref);
    this._bias = cns.Variable(BuilderInstance<T>.Volume.SameAs(content, new Shape(1, 1, this._filterCount, 1)), "Bias");
    this._conv = cns.Conv(parent.Op, this._width, this._height, this._filterCount, this.Stride, this.Pad);
    this.Op = this._conv + this._bias;
}
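For the original question about building and training directly with the flow API, the pattern inside this layer extends to a whole net. Below is a minimal sketch following the ExampleCpuSingle pattern: PlaceHolder and Variable nodes are combined into a graph with the overloaded operators, then a Session differentiates and trains it. The shapes, the learning rate, and the loop bound are illustrative, and constructor signatures may differ between ConvNetSharp versions.

```csharp
var cns = ConvNetSharp<float>.Instance;

// Graph creation: prediction = x * W + b
var x = cns.PlaceHolder("x");
var y = cns.PlaceHolder("y");
var W = cns.Variable(1.0f, "W", true); // trainable
var b = cns.Variable(2.0f, "b", true); // trainable
var fun = x * W + b;
var cost = (fun - y) * (fun - y);      // squared error

var optimizer = new GradientDescentOptimizer<float>(cns, learningRate: 0.01f);
using (var session = new Session<float>())
{
    session.Differentiate(cost); // builds the backward graph once

    for (var i = 0; i < 100; i++)
    {
        var dico = new Dictionary<string, Volume<float>> { { "x", -2.0f }, { "y", 1.0f } };
        session.Run(cost, dico);      // forward pass
        session.Run(optimizer, dico); // gradient step on W and b
    }
}
```

Running the cost op evaluates the graph forward; running the optimizer op applies one gradient step to every trainable Variable.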
Thank you for your answer. So I misunderstood: it is a net obtained by stacking layers that are represented in a computation graph, right? What I would actually like to do is create a graph with more than one input layer, then after some operations merge the branches into a single final layer and train it by backpropagating the error through each branch of the graph. Can this be done? Should I use Ops? I can't really figure out how to merge volumes coming from different inputs and operations...
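Merging branches can be done the same way the ConvLayer snippet above combines this._conv + this._bias: any two Ops with compatible shapes can be combined with the overloaded operators, and a single Differentiate call backpropagates through every branch feeding the cost. A hedged sketch, assuming two scalar-shaped inputs and element-wise addition as the merge (everything beyond the operators shown in this thread is an assumption about the version in use):

```csharp
var cns = ConvNetSharp<float>.Instance;

// Two independent input branches
var x1 = cns.PlaceHolder("x1");
var x2 = cns.PlaceHolder("x2");
var w1 = cns.Variable(0.5f, "w1", true);
var w2 = cns.Variable(0.5f, "w2", true);
var branch1 = x1 * w1;
var branch2 = x2 * w2;

// Merge: element-wise sum of the two branches (shapes must match)
var merged = branch1 + branch2;

var y = cns.PlaceHolder("y");
var cost = (merged - y) * (merged - y);

var optimizer = new GradientDescentOptimizer<float>(cns, learningRate: 0.01f);
using (var session = new Session<float>())
{
    session.Differentiate(cost); // gradients flow back through BOTH branches

    var dico = new Dictionary<string, Volume<float>>
    {
        { "x1", 1.0f }, { "x2", 2.0f }, { "y", 3.0f }
    };
    session.Run(cost, dico);
    session.Run(optimizer, dico); // updates w1 and w2 in one step
}
```

Because the cost depends on both branches, differentiating it once is enough; there is no need to train each branch separately.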
It seems you have the same need as #68
Yes, I was considering it too... Thanks!