Add more metrics support in sklearn backend
So far only the mean absolute error is supported. We could increase the number of supported metrics in a future release.
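For reference, the metric currently supported comes straight from `sklearn.metrics`; a minimal standalone example:

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

# mean of |y_true - y_pred|; the only metric supported so far
print(mean_absolute_error(y_true, y_pred))  # 0.5
```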
- A first step could be to include other `sklearn.metrics`; that should be quite easy.
- A second step could be to test the serialization of custom objects.
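For the first step, a minimal sketch of how metrics could be looked up by name in `sklearn.metrics` (the `resolve_metric` helper is hypothetical, not part of the current code):

```python
import sklearn.metrics


def resolve_metric(name):
    # Hypothetical helper: look up a metric function by name,
    # e.g. 'accuracy_score' or 'mean_squared_error'.
    metric = getattr(sklearn.metrics, name, None)
    if not callable(metric):
        raise ValueError("Unknown sklearn metric: {}".format(name))
    return metric
```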
PR to come soon. Local modifications:
- [x] switch the default metric to the `score` attribute of the sklearn model
- [x] support metrics that are available in `sklearn.metrics`
- [x] change tests so that they test 0, 1 and 2 additional metrics
- [x] local validation
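To illustrate the first item, the default metric simply defers to the estimator's own `score` method; a toy example with a plain sklearn model (not python-alp internals):

```python
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit([[0], [1], [2]], [0, 1, 2])

# The default metric is whatever the estimator's score method computes:
# R^2 for regressors, mean accuracy for classifiers.
print(model.score([[3], [4]], [3, 4]))  # 1.0 on this perfectly linear toy data
```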
- The major modification in terms of behavior for the end user is that `model.full_res.metrics` now has a `score` field, which corresponds to the `score` attribute of the sklearn model (that is to say: the user should know what their model's `score` computes).
- The user can now specify a list of metrics in the fit, for instance: `expe.fit([data], [data_val], metrics=['accuracy_score', 'mean_squared_error'])`. These metrics are computed on top of the `score` and thus lead to a number of additional predictions with the model, depending on the number of elements in `data` and `data_val` (more specifically: the prediction is done regardless of whether there are additional metrics, and is reused for all the additional metrics). That leads to two optimization points:
  - it would be easy to add a check and skip the additional prediction if there are no additional metrics (see the sketch after this list);
  - however, getting the predictions back from the `score` function of the sklearn model is not possible. A possibility would be to hard-code the default metric for all supported sklearn models if we really want to avoid that unnecessary prediction.
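A sketch of the first optimization point, with a hypothetical `evaluate` helper (names are illustrative, not the actual python-alp API): predict once, reuse the predictions for every additional metric, and skip the prediction entirely when no additional metrics are requested:

```python
import sklearn.metrics


def evaluate(model, X, y, metrics=None):
    # The score field always comes from the model's own score method.
    results = {'score': model.score(X, y)}
    if not metrics:
        # The check discussed above: no additional metrics requested,
        # so the extra prediction is skipped entirely.
        return results
    # One prediction, reused for all the additional metrics.
    y_pred = model.predict(X)
    for name in metrics:
        results[name] = getattr(sklearn.metrics, name)(y, y_pred)
    return results
```

Note this does not recover the prediction made inside `score` itself; avoiding that last unnecessary prediction would require hard-coding the default metric per supported model, as mentioned above.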