How to adjust MLP output layer settings?
I have a question about the MNIST sample code. https://github.com/RubixML/MNIST
In the MNIST example on GitHub, you say:

> The output layer adds an additional layer of neurons with a Softmax activation making this particular network architecture 4 layers deep.
```php
$estimator = new PersistentModel(
    new Pipeline([
        new ImageResizer(28, 28),
        new ImageVectorizer(true),
        new ZScaleStandardizer(),
    ], new MultiLayerPerceptron([
        new Dense(100),
        new Activation(new LeakyReLU()),
        new Dropout(0.2),
        new Dense(100),
        new Activation(new LeakyReLU()),
        new Dropout(0.2),
        new Dense(100),
        new Activation(new LeakyReLU()),
        new Dropout(0.2),
    ], 256, new Adam(0.0001))),
    new Filesystem('mnist.rbx', true)
);
```
However, in the sample code I can only see the settings for the three hidden layers; I can't find any settings for the output layer. Where are they?
Hey @neosaganeo, good observation: the output layer is not as configurable as a hidden layer. This is because the output layer's configuration largely depends on the type of problem (classification or regression) and, for classification, on the number of possible classes - it only has a few hyper-parameters that are independent of the problem type. Some deep learning libraries require you to define the output layer explicitly, but in Rubix we build the output layer for you based on your problem. That said, you can configure the amount of L2 regularization the output layer receives by adjusting the $alpha hyper-parameter on MLP. You can also swap out the Cost Function if you want to. See https://docs.rubixml.com/1.0/classifiers/multilayer-perceptron.html#parameters.
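For example, a minimal sketch of adjusting $alpha on the estimator (the argument position is assumed from the parameter list in the linked docs - verify it against your installed version):

```php
use Rubix\ML\Classifiers\MultiLayerPerceptron;
use Rubix\ML\NeuralNet\Layers\Dense;
use Rubix\ML\NeuralNet\Layers\Activation;
use Rubix\ML\NeuralNet\ActivationFunctions\LeakyReLU;
use Rubix\ML\NeuralNet\Optimizers\Adam;

// Same style of hidden layers as the MNIST example, but with the L2
// penalty applied to the output layer raised from the default by passing
// $alpha explicitly. Per the docs linked above, $alpha comes after the
// hidden layers, batch size, and optimizer in the constructor.
$estimator = new MultiLayerPerceptron([
    new Dense(100),
    new Activation(new LeakyReLU()),
], 256, new Adam(0.0001), 1e-3);
```

The Cost Function is a later constructor parameter; see the parameters table in the docs for its exact position and the available implementations.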
Was there something else that you needed to do with the output layer?
Thank you very much for your answer.