Hiroyuki Vincent Yamazaki
I created a naive implementation #1522 but noticed that it's not living up to what a user might expect (see https://github.com/optuna/optuna/pull/1522#issuecomment-658709554). Let me label this issue as contribution welcome since...
Labeled this contribution welcome. However, I believe that the interface (e.g., which parameters to take) and behavior (e.g., error handling) are open for discussion. If someone would be...
https://github.com/optuna/optuna/pull/2490 has now been merged. The only integration callback that should probably adopt the same change, after which we can close this issue, is `XGBoostPruningCallback`, due to the usual large...
Note that https://github.com/optuna/optuna/pull/3807 has been merged now.
Reopening as this is still an issue.
Hm, I see. Do you have code for reproduction? Conditional hyperparameters will be skipped (unless you explicitly specify which parameters to evaluate using the `params` parameter), but I would...
Thanks for the follow-up. I don't think that falling back to independent sampling should affect the hyperparameter importance results. However, as you mention, they might just be two different manifestations...
Does the following simulate your study?

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    y = trial.suggest_float("y", 0, 1)
    if trial.number == 1:
        raise RuntimeError  # x =...
```
Thanks for sharing your study. I now see that `low` differs between trials for `learning_rate` (2e-6 for the first trials, 2e-7 for the later ones). This is the...
> PS: Do you think the documentation should give a hint about this precondition? I could do a PR for this.

Yes, that'd be great! Going through the logic in...