keisuke umezawa
@erentknn Thank you for your contribution! As written in https://github.com/optuna/optuna/issues/3779#issuecomment-1181315711 , we do not need to implement a callback, but should add the parameters to the log. How about adding...
@c-bata Could you check it when you have time?
## Current conclusion

Implementing `another state` or `editable trials after completion` only to simplify the argument of `after_trial` would be too heavy. The current decision is that `We...
> This is because the pruners have logic to skip pruning. If we add the interval argument to the callbacks, users may need to take care of the relationship between...
@VladSkripniuk cc: @toshihikoyanase @hvy Sorry for the late update. I discussed it with the other developers, and we concluded as follows:

1. The intervals for reporting and pruning...
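To make the conclusion concrete, here is a minimal sketch of keeping the interval on the user side inside the objective function. The `report_interval` value and the dummy computation are illustrative assumptions, not Optuna arguments:

```python
import optuna


def objective(trial):
    report_interval = 5  # user-chosen interval, not an Optuna parameter

    x = trial.suggest_float("x", -10, 10)
    value = 0.0
    for step in range(100):
        value = (x - 2) ** 2 / (step + 1)  # dummy iterative computation

        # Report and check for pruning only every `report_interval` steps.
        if step % report_interval == 0:
            trial.report(value, step)
            if trial.should_prune():
                raise optuna.TrialPruned()
    return value


study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
```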
@hvy Thank you for the comment. I updated the top description.
@c-bata I clarified the situation more in the issue description.

> I checked the rest of the test cases (test_study.py, test_dataframe.py, and integration_tests/lightgbm_tuner_tests/test_optimize.py). I think that using DeterministicSampler in these tests...
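For reference, a rough sketch of what such a deterministic sampler could look like; the class below is an illustration that simply returns fixed values, not necessarily the exact helper used in the Optuna test suite:

```python
import optuna
from optuna.samplers import BaseSampler


class DeterministicSampler(BaseSampler):
    """Illustrative sampler that always returns pre-specified parameter values."""

    def __init__(self, params):
        self._params = params  # e.g. {"x": 1.0}

    def infer_relative_search_space(self, study, trial):
        return {}

    def sample_relative(self, study, trial, search_space):
        return {}

    def sample_independent(self, study, trial, param_name, param_distribution):
        return self._params[param_name]


# Usage in a test: every trial gets exactly the same parameter values.
sampler = DeterministicSampler({"x": 1.0})
study = optuna.create_study(sampler=sampler)
study.optimize(lambda t: t.suggest_float("x", -10, 10) ** 2, n_trials=3)
```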
@HideakiImamura I think this issue itself is out of the scope of the v3 roadmap. How about removing the v3 tag?
Optuna does not expect such a use case, but let me think about ad-hoc solutions.

### Option 1

First, run `train_model_A.py` and you can get a series of trials....
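Since the comment is truncated, this is only a rough sketch of one way to reuse the trials produced by the first script, assuming both scripts share the same search space and that the experimental `Study.add_trials` API is available; the storage URL and study names are placeholders:

```python
import optuna

# Load the trials produced by the first script (placeholder names and URL).
study_a = optuna.load_study(study_name="model_a", storage="sqlite:///example.db")

# Copy them into a second study so its sampler can start from that history.
study_b = optuna.create_study(study_name="model_b", storage="sqlite:///example.db")
study_b.add_trials(study_a.trials)

# Continue optimization in the second study as usual, e.g.:
# study_b.optimize(objective_b, n_trials=50)
```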
> So, how about creating the keras distributed example based on the tutorial, and then translate it using tensorflow APIs?

That's a better idea!