Shreya Rajpal
Skip artifact logging for MLFlowCallback with an initialization parameter
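A minimal sketch of what such an initialization parameter could look like. The class shape, the `log_artifacts` flag name, and the `on_train_end` hook are illustrative, not Ludwig's actual callback API:

```python
class MLFlowCallback:
    """Sketch of an experiment-tracking callback with optional artifact logging."""

    def __init__(self, log_artifacts: bool = True):
        # When False, the end-of-training hook skips the (potentially slow,
        # potentially large) artifact upload entirely.
        self.log_artifacts = log_artifacts
        self.logged = []

    def on_train_end(self, output_directory: str):
        if self.log_artifacts:
            # In a real callback this would call mlflow.log_artifacts(...);
            # here we just record what would have been uploaded.
            self.logged.append(output_directory)
```

Callers that only want metrics would construct the callback with `log_artifacts=False`.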
**Context** I’d seen a higher AUROC (> 0.8) on a dataset in the logs printed out by one specific trial as in screenshot 1, but in the overall summary for...
Currently, we do fairly aggressive windowing in order to support large datasets in hyperopt: for every trial, we set `window_size = dataset_size / 40`. However, based on...
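A minimal sketch of that fixed-fraction windowing, assuming the window is the dataset size divided by 40 (the helper name and the flooring behavior are illustrative):

```python
def compute_window_size(dataset_size: int, divisor: int = 40) -> int:
    # Fixed-fraction windowing: every trial sees dataset_size / divisor rows,
    # regardless of how large or small the dataset actually is. For small
    # datasets this can be far more aggressive than necessary.
    return max(1, dataset_size // divisor)
```

For example, a 4,000-row dataset would be windowed down to 100 rows per trial, whether or not it fits in memory.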
Add integrations for experiment management for hyperopt. Notes & Requirements: - Should work with a ray backend - The current tensorboard `tfevents` file only stores the tune logs, and not...
This PR adds optional end-to-end training tests that train a LudwigModel from scratch on small datasets. Currently added: - ATIS
Currently, the [dynamic resource allocation function](https://github.com/ludwig-ai/ludwig/blob/b134a9dfbb1bd01b0bddb0cd1bf1320464d0b10f/ludwig/hyperopt/sampling.py#L53) used in Ludwig evenly distributes all CPUs and GPUs among all trials. However, for schedulers like `async_hyperband` that require different resources per trial depending...
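A minimal sketch of the even split described above, to illustrate why it is a poor fit for schedulers that want per-trial resources (function name and return shape are illustrative, not the linked Ludwig function):

```python
def allocate_resources(num_cpus: int, num_gpus: int, num_trials: int) -> dict:
    # Even split: every trial gets an identical slice of the cluster.
    # Schedulers like async_hyperband instead want to vary resources per
    # trial (e.g. give surviving trials more as others are stopped).
    return {
        "cpu": num_cpus / num_trials,
        "gpu": num_gpus / num_trials,
    }
```

With 8 CPUs, 2 GPUs, and 4 trials, every trial is pinned to 2 CPUs and half a GPU for its entire lifetime.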
The backend of a `LudwigModel` class can be set either by passing in a fully initialized backend to `LudwigModel`, or by initializing a backend using the `backend` section of a...
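The resolution between the two paths can be sketched as follows. This is a standalone illustration, not Ludwig's actual code, and it assumes an explicitly passed backend takes precedence over the config's `backend` section (the precedence order shown is an assumption):

```python
def resolve_backend(config: dict, backend=None):
    # A fully initialized backend passed directly to the model wins;
    # otherwise fall back to the `backend` section of the config,
    # defaulting to a local backend when neither is given.
    if backend is not None:
        return backend
    return config.get("backend", {"type": "local"})
```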
For callbacks that perform experiment management, it would be extremely useful to save complete information about the config used for training. The `base_config` will likely miss many defaults that are...
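One way to give experiment-management callbacks the complete picture is to merge the user's `base_config` over the system defaults before logging. A minimal sketch, using an illustrative shallow merge (Ludwig's real default-rendering is more involved):

```python
def render_full_config(base_config: dict, defaults: dict) -> dict:
    # Defaults fill in every key the user config omitted, so the logged
    # config reflects the values training actually used rather than only
    # what the user wrote down.
    merged = dict(defaults)
    merged.update(base_config)
    return merged
```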
Running on a CPU-only cluster currently fails when the number of CPUs < the number of samples. This is because the trials reserve all cores, leaving no CPUs available...
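A minimal sketch of one possible fix: hold back at least one core for the coordinating process before dividing the rest among trials (the function name and the single reserved core are illustrative assumptions):

```python
def cpus_per_trial(total_cpus: int, num_trials: int,
                   reserved_for_driver: int = 1) -> int:
    # Leave a CPU free for the driver/coordinator so trials cannot reserve
    # every core on the cluster and starve scheduling.
    available = max(1, total_cpus - reserved_for_driver)
    # Each trial gets an equal share of what remains, never less than one.
    return max(1, available // num_trials)
```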