> Our framework should ignore features labeled as "ignore"

@PGijsbers, agreed, and the fix was surprisingly not trivial: https://github.com/openml/automlbenchmark/pull/224
@PGijsbers A typo on my side... I was running `openml/t/259948`!
> To which benchmark should that task belong? I don't see it in any of /s/218, /s/269, /s/270 and /s/271.

None, it was a typo! Based on the failing tasks...
@PGijsbers The temporary monkey patch looks like a good approach.
Hi @Innixma, technically, you can already run benchmarks against any branch, and even against a forked repo, but not directly from the command line. The recommended approach is to add a...
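To give a concrete idea, here is a minimal sketch of such a custom definition, assuming a user-level `frameworks.yaml` whose entries accept `extends`, `repo`, and `version` overrides; the file location, fork URL, and branch name are all placeholders:

```yaml
# hypothetical user-level frameworks.yaml
AutoGluon_dev:
  extends: AutoGluon    # inherit the stock AutoGluon definition
  repo: https://github.com/Innixma/autogluon    # fork to install from (placeholder)
  version: my-feature-branch    # branch, tag or commit to check out (placeholder)
```

The custom framework name can then be passed to `runbenchmark.py` like any built-in one.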
@Innixma When testing a different version of AG, you shouldn't need to make any change to the `automlbenchmark` application itself, except when it requires a change in the integration code...
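For instance, pinning a different released version should only require a `version` override in a custom definition; this is a sketch under the same assumptions as above, with a placeholder version number:

```yaml
AutoGluon_pinned:
  extends: AutoGluon
  version: "0.1.0"    # hypothetical AG release to test; no automlbenchmark change needed
```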
I'm not familiar with the caching possibilities in GitHub workflows, but this looks like a useful improvement.

> The FRAMEWORK/.installed file probably has to be generated.

Not sure, I would cache...
> what do you think of caching the input directory/openml cache to avoid any contact with the OpenML server

Good idea, we don't use many datasets for the workflows, so...
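A rough sketch of what both caches could look like with `actions/cache` in a GitHub workflow; every path and key below is an assumption about the local layout, not the project's actual setup:

```yaml
# excerpt from a hypothetical workflow file
- name: Cache framework installation
  uses: actions/cache@v3
  with:
    # assumed location of the framework's setup artifacts, including the .installed marker
    path: frameworks/AutoGluon
    key: ${{ runner.os }}-autogluon-${{ hashFiles('frameworks/AutoGluon/setup.sh') }}

- name: Cache OpenML input data
  uses: actions/cache@v3
  with:
    path: ~/.openml    # assumed OpenML/input cache directory
    key: openml-data-${{ hashFiles('resources/benchmarks/*.yaml') }}
```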
@Innixma I can't reproduce your error in a default environment: neither with an unmodified `resources/config.yaml`, nor with a custom config without an `instance_tags` entry. The only way for me to reproduce the error is...
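For reference, the second setup was a custom config along these lines, overriding a value or two but deliberately omitting `instance_tags`; everything except `instance_tags` is illustrative and not checked against `resources/config.yaml`:

```yaml
# custom config.yaml (illustrative)
aws:
  region: us-east-1    # placeholder override
  # intentionally no instance_tags entry
```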
The reason is purely technical: proper error handling is one of the most difficult things to get right the first time. In this case, the thread that monitors CPU and...