
Create eval_metric from measure

Open · ja-thomas opened this issue · 2 comments

We want to derive the correct `eval_metric` for early stopping from the passed measure.
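A minimal sketch of what such a mapping could look like (the function name `measureToEvalMetric` and the lookup table are assumptions for illustration, not autoxgboost code): measures with a built-in xgboost equivalent translate directly to an `eval_metric` string, everything else would need a custom `feval`.

```r
# Hypothetical sketch: map an mlr measure id to one of xgboost's built-in
# eval_metric strings where an equivalent exists. NA means no built-in
# equivalent, so a custom feval (cf. generateXgbEvalFun()) would be needed.
# Note: "rmse" is not identical to mse, but is monotone in it, which is
# enough for early stopping.
measureToEvalMetric <- function(measure.id) {
  builtin <- c(mse = "rmse", mae = "mae", logloss = "logloss",
               auc = "auc", mmce = "error")
  if (measure.id %in% names(builtin)) builtin[[measure.id]] else NA_character_
}

measureToEvalMetric("auc")   # "auc"
measureToEvalMetric("f1")    # NA -> needs a custom feval
```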

ja-thomas avatar Apr 04 '17 11:04 ja-thomas

@Coorsaa what's the status here?

ja-thomas avatar Nov 06 '17 09:11 ja-thomas

I haven't looked into it for a while now, but the current status is in the `eval_metric` branch:

  • It should already work for regression.
  • It might work for some classification measures, but we face the problem that some measures require probabilities while others require class labels, and what we get depends on the objective function we pass in. Moreover, some measures require the positive and the negative class of the task. Hence the `generateXgbEvalFun()` function needs improved case handling.
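The probability-vs-label branching could look roughly like this. This is a package-free sketch: a real xgboost `feval` takes `(preds, dtrain)` and reads labels via `getinfo(dtrain, "label")`, whereas labels are passed directly here; the 0.5 threshold and the function names are assumptions.

```r
# Sketch of the case handling generateXgbEvalFun() would need:
# with objective "binary:logistic" the predictions are probabilities and
# must be thresholded for label-based measures; otherwise they are passed
# through unchanged.
makeEvalFun <- function(measure.fun, objective, threshold = 0.5) {
  function(preds, labels) {
    response <- if (objective == "binary:logistic") {
      as.numeric(preds > threshold)  # probabilities -> hard class labels
    } else {
      preds
    }
    list(metric = "measure", value = measure.fun(labels, response))
  }
}

# Example with a stand-in mmce (mean misclassification error):
measureMMCE <- function(truth, response) mean(truth != response)
f <- makeEvalFun(measureMMCE, "binary:logistic")
f(c(0.9, 0.2, 0.7), c(1, 0, 0))$value  # 1/3: third prediction is wrong
```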

While working on it I found another major problem:

  • Within mlr the measure functions, e.g. `measureMMCE`, are not named consistently. Most are named "measure" + the measure's id in capital letters, but not all: e.g. `measureKendallTau` for measure `kendalltau`, or `measureAUNU` for measure `multiclass.aunu`. Hence `match.fun(paste0("measure", toupper(measure$id)))` within `generateXgbEvalFun()` does not return the correct function in all cases. We also cannot simply use `measure$fun`, since unlike the `measureXY(truth, response)` functions, the `measure$fun` functions take more arguments: `function(task, model, pred, feats, extra.args)`. IMO this problem can only be solved by renaming all `measureXY` functions within mlr to the form "measure" + the measure's id in capital letters -> `measureKENDALLTAU`, ...
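The lookup failure can be demonstrated in base R with stand-in definitions (the real functions live in mlr; only the naming scheme is reproduced here):

```r
# Stand-ins mimicking mlr's inconsistent naming, to show why the
# toupper()-based lookup in generateXgbEvalFun() breaks.
measureMMCE <- function(truth, response) mean(truth != response)
measureKendallTau <- function(truth, response) {
  cor(truth, response, method = "kendall")
}

lookupMeasureFun <- function(id) {
  # the naive scheme: "measure" + id in capital letters
  tryCatch(match.fun(paste0("measure", toupper(id))),
           error = function(e) NULL)
}

is.function(lookupMeasureFun("mmce"))    # TRUE: measureMMCE exists
is.null(lookupMeasureFun("kendalltau"))  # TRUE: measureKENDALLTAU does not
```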

Coorsaa avatar Nov 07 '17 10:11 Coorsaa