Haoyang Fang
@zhiqiangdon Shall we initialize `self._config` in `learner.__init__` to avoid using `self._hyperparameters`?
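A minimal sketch of what that could look like, assuming a simplified learner constructor; the argument names and the `_build_config` helper here are hypothetical illustrations, not the actual AutoGluon API:

```python
# Hypothetical sketch only; argument names and _build_config are assumptions.
class BaseLearner:
    def __init__(self, hyperparameters=None, presets=None, **kwargs):
        self._hyperparameters = hyperparameters
        # Build the config once at construction time so downstream code can
        # read self._config instead of falling back to self._hyperparameters.
        self._config = self._build_config(hyperparameters=hyperparameters, presets=presets)

    def _build_config(self, hyperparameters, presets):
        # Placeholder for whatever config-assembly logic currently runs later (e.g. in fit()).
        raise NotImplementedError
```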
TODO:
1. Add `num_to_keep` for HPO in our multimodal configs instead of hardcoding it (see the sketch after this list): https://github.com/autogluon/autogluon/blob/c51aa59cd4c32fd96420c79140fd832e7dd09fc7/multimodal/src/autogluon/multimodal/utils/hpo.py#L175
2. Add checkpoint selection (and cleaning) before HPO to reduce peak storage: https://github.com/autogluon/autogluon/blob/c51aa59cd4c32fd96420c79140fd832e7dd09fc7/multimodal/src/autogluon/multimodal/learners/base.py#L597
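A rough sketch for item 1, assuming a hypothetical hyperparameter key `optim.hpo.num_to_keep` and a plain-dict config; Ray's `CheckpointConfig` is real, but how the value would be wired through AutoGluon's configs is an assumption:

```python
# Hypothetical sketch; the key "optim.hpo.num_to_keep" and the default value are assumptions.
from ray.train import CheckpointConfig

DEFAULT_NUM_TO_KEEP = 3  # fallback used only if the user config does not set a value


def build_checkpoint_config(hyperparameters: dict) -> CheckpointConfig:
    # Read the retention count from the multimodal config instead of hardcoding it in hpo.py.
    num_to_keep = hyperparameters.get("optim.hpo.num_to_keep", DEFAULT_NUM_TO_KEEP)
    return CheckpointConfig(
        num_to_keep=num_to_keep,
        checkpoint_score_attribute="val_loss",  # assumed metric name
        checkpoint_score_order="min",
    )
```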
Add an installation instruction (downgrade torch to 2.1.0) in https://github.com/autogluon/autogluon/pull/4447.