Expose ensmallen Callbacks.
The 2.10.0 release of ensmallen contains two major improvements:
- Templated `Optimize()`
- Callback support
For callbacks, this means that `Optimize()` can take any number of extra arguments that specify some special behavior to be taken when the optimization is run. So, for instance, calling `Optimize(FunctionType&, arma::mat&, PrintLoss())` means that the objective function's value will be printed during the optimization. Check out the ensmallen documentation for more examples.
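As a quick self-contained sketch of what this looks like in plain ensmallen (the `SquaredFunction` objective below is invented for this example; `ens::L_BFGS` and `ens::PrintLoss` are real ensmallen types):

```cpp
#include <ensmallen.hpp>

// A toy differentiable objective: f(x) = ||x||^2, minimized at x = 0.
class SquaredFunction
{
 public:
  double Evaluate(const arma::mat& x) { return arma::accu(x % x); }

  void Gradient(const arma::mat& x, arma::mat& gradient) { gradient = 2 * x; }
};

int main()
{
  SquaredFunction f;
  arma::mat coordinates(3, 1, arma::fill::ones);

  // Any trailing arguments to Optimize() are treated as callbacks;
  // PrintLoss() prints the objective value as the optimization proceeds.
  ens::L_BFGS optimizer;
  optimizer.Optimize(f, coordinates, ens::PrintLoss());
}
```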
Since mlpack uses ensmallen as its optimization framework, it would be nice to expose that new functionality to the existing methods in mlpack as well.
Here is an example:
```cpp
// Load the training set.
arma::mat dataset;
data::Load("thyroid_train.csv", dataset, true);

// Split the data from the training set.
arma::mat trainData = dataset.submat(0, 0, dataset.n_rows - 4,
    dataset.n_cols - 1);

// Split the labels from the training set (the last three rows are a
// one-hot encoding of the class).
arma::mat trainLabelsTemp = dataset.submat(dataset.n_rows - 3, 0,
    dataset.n_rows - 1, dataset.n_cols - 1);

// Convert the one-hot encoded labels to class indices.
arma::mat trainLabels = arma::zeros<arma::mat>(1, trainLabelsTemp.n_cols);
for (size_t i = 0; i < trainLabelsTemp.n_cols; ++i)
{
  trainLabels(i) = arma::as_scalar(arma::find(
      arma::max(trainLabelsTemp.col(i)) == trainLabelsTemp.col(i), 1)) + 1;
}

// Initialize the network.
FFN<> model;
model.Add<Linear<> >(trainData.n_rows, 8);
model.Add<SigmoidLayer<> >();
model.Add<Linear<> >(8, 3);
model.Add<LogSoftMax<> >();

// Train the model, printing the loss via the PrintLoss callback.
model.Train(trainData, trainLabels, PrintLoss());
```
The important line is:

```cpp
model.Train(trainData, trainLabels, PrintLoss());
```

which now accepts an additional callback argument that prints the loss of the model during training.
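Because the callbacks are a variadic argument list, more than one can be passed at once. A sketch, assuming ensmallen's built-in `ProgressBar` and `EarlyStopAtMinLoss` callbacks are also usable here:

```cpp
// Print the loss, show a progress bar, and stop early once the loss
// stops improving (all built-in ensmallen callbacks).
model.Train(trainData, trainLabels,
            PrintLoss(), ProgressBar(), EarlyStopAtMinLoss());
```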
To support the functionality above, we use a C++ feature called parameter packs; the main modification to the `FFN` class looks like this:
```cpp
// ffn.hpp
template<typename OptimizerType, typename... CallbackTypes>
double Train(arma::mat predictors,
             arma::mat responses,
             OptimizerType& optimizer,
             CallbackTypes&&... callbacks);
```

```cpp
// ffn_impl.hpp
template<typename OutputLayerType, typename InitializationRuleType,
         typename... CustomLayers>
template<typename OptimizerType, typename... CallbackTypes>
double FFN<OutputLayerType, InitializationRuleType, CustomLayers...>::Train(
    arma::mat predictors,
    arma::mat responses,
    OptimizerType& optimizer,
    CallbackTypes&&... callbacks)
{
  ResetData(std::move(predictors), std::move(responses));

  // Train the model.
  Timer::Start("ffn_optimization");
  const double out = optimizer.Optimize(
      *this, parameter, std::forward<CallbackTypes>(callbacks)...);
  Timer::Stop("ffn_optimization");

  Log::Info << "FFN::FFN(): final objective of trained model is " << out
      << "." << std::endl;

  return out;
}
```
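To make the forwarding mechanics concrete, here is a small standalone illustration of the same parameter-pack pattern (all names here are hypothetical, not mlpack code):

```cpp
#include <initializer_list>
#include <iostream>
#include <utility>

struct Optimizer
{
  template<typename... CallbackTypes>
  double Optimize(CallbackTypes&&... callbacks)
  {
    // Expand the pack: invoke each callback in turn.
    (void) std::initializer_list<int>{ (callbacks(), 0)... };
    return 0.0;
  }
};

template<typename OptimizerType, typename... CallbackTypes>
double Train(OptimizerType& optimizer, CallbackTypes&&... callbacks)
{
  // Perfect-forward the callbacks, exactly as FFN::Train() does above.
  return optimizer.Optimize(std::forward<CallbackTypes>(callbacks)...);
}

int main()
{
  Optimizer opt;
  Train(opt,
        []() { std::cout << "callback one" << std::endl; },
        []() { std::cout << "callback two" << std::endl; });
}
```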
I think we can distribute the work over multiple PRs, so we should go through the mlpack codebase, see which methods (e.g. FFN, RNN) need to be adapted, and open a PR with the modification for each class.
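As a sketch of what such an adaptation might look like for another method (the signature below is illustrative only, not a merged API), `LogisticRegression::Train()` could forward callbacks the same way:

```cpp
// logistic_regression.hpp (illustrative sketch)
template<typename OptimizerType, typename... CallbackTypes>
double Train(const arma::mat& predictors,
             const arma::Row<size_t>& responses,
             OptimizerType& optimizer,
             CallbackTypes&&... callbacks);
```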
Hi! Can I work on this? I would need guidance but I'll work towards solving this. 🙂
@ameyanator feel free, happy to clarify anything.
@ameyanator are you working on this issue?
@Saahitya no sorry, I have suddenly got busy with my college work and am unable to work on this presently.
Is anyone currently working on this?
Hello @vss96, @Saahitya worked on the FFN/RNN, but there are a couple of methods that still have to be modified, like logistic regression: https://github.com/mlpack/mlpack/tree/master/src/mlpack/methods/logistic_regression.
@zoq Thanks for the info! I would like to pick that piece up :)
Great, please feel free, let me know if I should clarify anything.
@zoq Are there any more methods I could implement the callback functionality for?
@jaibhageria linear svm comes to mind: https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/linear_svm/linear_svm.hpp#L208
Hi! I would like to start contributing with this issue; any guidance would be helpful.
Hello @knakul853, there are similar PRs open or merged that should be helpful:
- https://github.com/mlpack/mlpack/pull/2059
- https://github.com/mlpack/mlpack/pull/2057
If you like, you can take a look at the softmax regression method. Let me know if I should clarify anything.
@zoq thanks
@zoq Do you think we can close this one?
@shrit, I think #2046 is yet to be merged; we can close this issue after that is merged.