Best vs mean CV score
Thank you for this helpful notebook. Can you explain why you return the best CV score in the objective function? Isn't that an overly optimistic (upward-biased) measure of performance? I believe the results of the n-fold cross-validation should be averaged and returned as the score/loss. Am I missing something?
You are correct that the average metric from all the folds should be used as the measure of performance. If you look at the code in the objective function, I do return the average ROC AUC over the folds:
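# each entry of cv_results['auc-mean'] is already averaged across the folds;
# np.max is taken over boosting rounds, not over folds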
best_score = np.max(cv_results['auc-mean'])
loss = 1 - best_score
auc-mean is the AUC averaged over the cross-validation folds at each boosting round, so the max in the code is taken across boosting rounds (it selects the best number of iterations), not across folds. Every candidate it compares is already a fold-averaged score.
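For context, here is a minimal sketch of how such an objective can be written around LightGBM's cv API. The data names, num_boost_round, and seed below are placeholder assumptions for illustration, not the notebook's exact values:

import lightgbm as lgb
import numpy as np

def objective(params, train_features, train_labels, n_folds=5):
    # Build the LightGBM dataset (train_features/train_labels are placeholders)
    train_set = lgb.Dataset(train_features, label=train_labels)

    # Run n-fold cross-validation; cv_results['auc-mean'] holds the AUC
    # averaged over the folds at every boosting round
    # (note: LightGBM >= 4.0 keys this as 'valid auc-mean' instead)
    cv_results = lgb.cv(params, train_set, num_boost_round=1000,
                        nfold=n_folds, metrics='auc', seed=50)

    # Best fold-averaged AUC across boosting rounds: this picks the best
    # number of trees, not an optimistic single best fold
    best_score = np.max(cv_results['auc-mean'])

    # Hyperopt-style optimizers minimize, so convert the score to a loss
    return 1 - best_score

Under this reading, the max is doing model selection over the number of boosting iterations, while the cross-validation averaging the question asks about is already baked into each auc-mean entry.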