moabb
Handled errors during validation via error_score parameter
When using `error_score=np.nan` in sklearn, any error during transformation or model fitting results in an `np.nan` score instead of an exception.
For my personal use case I would also like `np.nan` values when there's an error in the validation itself. I have a dataset for which (due to early stopping) the AUC sometimes cannot be calculated during 5-fold validation. In current moabb, this error is raised and stops the benchmark.
So far I only implemented it for WithinSessionEvaluation as I don't use the other ones.
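The idea can be sketched as follows. This is a minimal illustration, not moabb's actual code: `safe_score` is a hypothetical helper that mimics sklearn's `error_score` semantics around the scoring step itself, so a fold where the AUC is undefined yields `np.nan` instead of aborting the benchmark.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def safe_score(y_true, y_proba, error_score=np.nan):
    """Hypothetical helper: compute the AUC, returning `error_score`
    instead of raising when the metric is undefined for a fold
    (e.g. only one class present in y_true)."""
    try:
        return roc_auc_score(y_true, y_proba)
    except ValueError:
        if error_score == "raise":
            raise  # reproduce sklearn's error_score="raise" behaviour
        return error_score
```

With `error_score=np.nan` (the default here), a degenerate fold simply contributes a NaN score, and the evaluation loop can carry on with the remaining folds.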
It is OK for me. You could rebase on the master branch so we can merge these changes.
For the sake of consistency, it could be good to update CrossSessionEvaluation and CrossSubjectEvaluation as well. Could you do that? I could help if needed.
I think there were some issues with CrossSubjectEvaluation, but I can check again and update this PR if possible.
My newest commit should contain the necessary changes, however, I have not yet tested this, as I currently do not use any of the other evaluations. I do not know when I will be able to check this on my ERP benchmark.
Did you have time to check your code? You could add some tests to verify that it is working correctly.