
Add more metrics support in sklearn backend

Open · DrAnaximandre opened this issue · 1 comment

So far only the mean absolute error is supported. We could increase the number of supported metrics in a future release.

  • A first step could be to include other sklearn.metrics, which should be quite easy.
  • A second step could be to test the serialization of custom objects.
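Supporting the functions in sklearn.metrics could be as simple as resolving them by name. A minimal sketch, assuming sklearn is installed; the helper name resolve_metric is hypothetical and not part of python-alp:

```python
import sklearn.metrics


def resolve_metric(name):
    """Look up a metric function such as 'mean_squared_error' by its name."""
    return getattr(sklearn.metrics, name)


# Usage: resolve once, then call like any sklearn metric.
mse = resolve_metric('mean_squared_error')
print(mse([3.0, 2.0], [2.5, 2.0]))  # 0.125
```

An unknown name raises AttributeError, which could be caught to give the user a clearer error message.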

DrAnaximandre · Nov 09 '16 17:11

PR to come soon. Local modifications:

  • [x] switch the default metric to the score attribute of the sklearn model
  • [x] support metrics that are available in sklearn.metrics
  • [x] change tests so that they test 0, 1 and 2 additional metrics
  • [x] local validation

  • The major behavioral change for the end-user is that model.full_res.metrics now has a score field, tied to the score attribute of the sklearn model (that is to say: the user should know how their model's score is defined).
  • The user can now pass a list of metrics to fit, for instance: expe.fit([data], [data_val], metrics=['accuracy_score', 'mean_squared_error']). These are computed on top of the score, and thus lead to additional predictions with the model, their number depending on the number of elements in data and data_val (more precisely: the prediction is done regardless of whether there are additional metrics, and is reused for all of them). This leaves two optimization points:
    • it would be easy to add a check and skip the additional prediction when no additional metrics are requested.
    • however, retrieving the predictions from the score function of the sklearn model is not possible. If we really want to avoid that unnecessary prediction, a possibility would be to hard-code the default metric for every supported sklearn model.
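The predict-once behavior described above could be sketched as follows. This is a minimal illustration, not python-alp's actual code: ConstantModel is a hypothetical stand-in for a fitted sklearn estimator, and additional_metrics is a hypothetical helper:

```python
import sklearn.metrics


class ConstantModel:
    """Hypothetical stand-in for a fitted sklearn estimator."""

    def predict(self, X):
        return [0, 1, 1]


def additional_metrics(model, X, y_true, metric_names):
    # Predict once; the single prediction is reused for every
    # requested metric, so the cost does not grow with len(metric_names).
    y_pred = model.predict(X)
    return {name: getattr(sklearn.metrics, name)(y_true, y_pred)
            for name in metric_names}


res = additional_metrics(ConstantModel(), X=None, y_true=[0, 1, 0],
                         metric_names=['accuracy_score', 'mean_squared_error'])
# res['accuracy_score'] == 2/3, res['mean_squared_error'] == 1/3
```

Guarding the predict call behind `if metric_names:` would implement the first optimization point above.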

DrAnaximandre · Nov 30 '16 15:11