running the network backwards? real-valued output layer?
In nolearn.dbn, is it possible to run the trained network backwards, i.e. generatively? I'm thinking of, e.g., Hinton's lecture where he clamps the output layer of his trained network to a label ('2') and runs it backwards, producing many possible images (at the input layer) of the digit '2'.
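To illustrate what I mean, here's a rough numpy sketch of the kind of top-down pass I have in mind. This is not nolearn's API; the layer sizes, weight matrices, and biases are made-up placeholders standing in for a trained two-layer stack:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical weights for a stack: visible (784) -> hidden (500) -> top (10).
# In reality these would come from the trained DBN.
W1 = rng.normal(scale=0.1, size=(784, 500))
b_vis = np.zeros(784)
W2 = rng.normal(scale=0.1, size=(500, 10))
b_hid = np.zeros(500)

# Clamp the top layer to the one-hot code for the digit '2'
top = np.zeros(10)
top[2] = 1.0

# Run the network backwards: sample each lower layer from the one above
hid_prob = sigmoid(W2 @ top + b_hid)               # activation probabilities, shape (500,)
hid = (rng.random(500) < hid_prob).astype(float)   # stochastic hidden sample
vis_prob = sigmoid(W1 @ hid + b_vis)               # generated 'image', shape (784,)
# Resampling hid and repeating yields different plausible images of a '2'
```

Each repeat of the sampling step produces a different candidate image, which matches the "lots of possible images" behaviour from the lecture.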
Also, is it possible to use a real-valued output layer? I know real_valued_vis works for the input layer.
My ultimate goal is to learn a small, simple parameter space (say, 5 real-valued parameters) that I can use to interactively explore the original feature space by tweaking those parameters. The two ideas above would allow that.
I know these are naive questions; I'm not a neural networks person. Thanks for making nolearn, which is helping to make this field more accessible!