autoxgboost
Create eval_metric from measure
We want to derive the correct `eval_metric` for early stopping from the passed measure.
@Coorsaa what's the status here?
I haven't looked into it for a while now, but if you want to check the current status, you can find it in the `eval_metric` branch:
- it should work for regression already
- it might work for some classification measures, but here we face the problem that some measures require predicted probabilities while others require class labels, and which one we get depends on the objective function we pass in. Moreover, some measures require the positive and the negative class of the task. Hence the `generateXgbEvalFun()` function needs improved case handling.
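To illustrate the case handling needed, here is a hypothetical sketch (not the actual `generateXgbEvalFun()`) of wrapping a measure as an xgboost `feval`. With `objective = "binary:logistic"`, `preds` contains probabilities, so label-based measures like `mmce` need thresholding first, while probability-based measures must use `preds` as-is:

```r
library(xgboost)

# Hypothetical helper, names and threshold handling are assumptions:
makeXgbEvalFun = function(measure.id, threshold = 0.5) {
  function(preds, dtrain) {
    truth = xgboost::getinfo(dtrain, "label")
    value = switch(measure.id,
      # mmce needs class labels -> threshold the probabilities
      mmce = mean(as.numeric(preds > threshold) != truth),
      # auc needs the raw probabilities; also requires knowing
      # which class is the positive one -> extra case handling
      stop("unsupported measure"))
    list(metric = measure.id, value = value)
  }
}

# usage: xgb.train(params, dtrain, nrounds = 100,
#                  feval = makeXgbEvalFun("mmce"), maximize = FALSE)
```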
While working on it, I found another major problem:
- within mlr the measure functions, e.g. `measureMMCE`, are not named consistently. Most of them are named "measure" + the measure's id in capital letters.
However, this is not consistent everywhere, e.g. `measureKendallTau` for measure `kendalltau` or `measureAUNU` for measure `multiclass.aunu`. Hence
`match.fun(paste0("measure", toupper(measure$id)))` within `generateXgbEvalFun()` does not give the correct result in all cases. However, we cannot simply use `measure$fun`, since unlike the `measureXY(truth, response)` functions, the `measure$fun` functions have more arguments: `function(task, model, pred, feats, extra.args)`. IMO, this problem can only be solved by renaming all `measureXY` functions within mlr to the form "measure" + the measure's id in capital letters -> `measureKENDALLTAU`, ...
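Until such a renaming happens, one possible workaround is an explicit mapping for the irregular cases. A sketch (the mapping below only covers the two examples mentioned above and is certainly incomplete):

```r
# Assumed irregular-name table; extend as more exceptions are found.
irregular = c(kendalltau      = "measureKendallTau",
              multiclass.aunu = "measureAUNU")

getMeasureFun = function(id) {
  fn.name = if (id %in% names(irregular)) irregular[[id]]
            else paste0("measure", toupper(id))  # regular naming scheme
  match.fun(fn.name)
}

# getMeasureFun("mmce")       -> mlr::measureMMCE
# getMeasureFun("kendalltau") -> mlr::measureKendallTau
```

This keeps the simple `toupper()` lookup for the regular cases and avoids wrestling with the richer `measure$fun` signature.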