scikit-optimize
Sequential model-based optimization with a `scipy.optimize` interface
`skopt.plots.plot_evaluations(results)` and `skopt.plots.plot_objective(results)` raise `IndexError: tuple index out of range` on the line `return str(dimension.categories[int(x)])` in `plots.py`. But if I edit `plots.py` like so: ``` return...
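One plausible guard for the failing line is sketched below. This is an illustration only, not the poster's truncated edit: `categories` stands in for `dimension.categories`, and the function falls back to the raw value when the rounded coordinate is outside the tuple.

```python
# Illustration of guarding the index lookup that raises the IndexError.
# `categories` is a stand-in for dimension.categories; not the actual fix.
categories = ("red", "green", "blue")

def category_label(x):
    """Return the category name for coordinate x, or str(x) if out of range."""
    i = int(x)
    if 0 <= i < len(categories):
        return str(categories[i])
    return str(x)

print(category_label(1))    # green
print(category_label(7.0))  # 7.0 (out of range, raw value returned)
```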
In some of my runs on smaller search spaces, I have noticed collisions between the points returned by subsequent `ask()` calls. Right now skopt prints a warning, but...
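The collision the issue describes can be checked for without skopt at all. The sketch below is a pure-Python illustration: the hard-coded `proposed` list stands in for points an optimizer might return on a small discrete space.

```python
# Illustration only: detecting duplicate candidate points across ask() calls.
# `proposed` is a stand-in for optimizer output; skopt is not imported.

def find_collisions(points):
    """Return the set of points that appear more than once."""
    seen, dupes = set(), set()
    for p in points:
        key = tuple(p)  # lists are unhashable, so normalise to tuples
        if key in seen:
            dupes.add(key)
        seen.add(key)
    return dupes

proposed = [[0, 1], [2, 3], [0, 1], [4, 5]]
print(find_collisions(proposed))  # {(0, 1)}
```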
@kernc Hi. I see there are many PRs that would greatly improve the library ready to merge to `master`. Any idea when this will happen and a new release will...
Hi, maybe I'm doing this wrong, but I was trying to implement a version of what's in the hyperparameter optimization example, but not in an interactive notebook. If I set...
Here is the thing: I use this library for a complex problem in which the parameters need to move in fixed increments (like 1, 5, 9, 15, 20) within range(0, 200), but I didn't find...
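One common workaround for stepped parameters, sketched below, is to have the optimizer search a plain integer index and decode it to the real value yourself. The bounds and step here are assumptions for illustration; in skopt terms the search dimension would be an `Integer(0, n_indices)`.

```python
# Sketch of searching over an index and decoding to a stepped value.
# LOW, HIGH, STEP are illustrative assumptions, not skopt API.
LOW, HIGH, STEP = 0, 200, 5

def decode(index):
    """Map the integer index the optimizer sees to the real parameter value."""
    return LOW + index * STEP

n_indices = (HIGH - LOW) // STEP  # the optimizer searches 0..n_indices

print([decode(i) for i in range(5)])  # [0, 5, 10, 15, 20]
```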
Using `skopt.__version__ == '0.9.0'` I am getting the error:
```
python3.9/site-packages/skopt/learning/gaussian_process/kernels.py in gradient_x(self, x, X_train)
    125 scaled_exp_dist[mask] /= dist[mask]
    126 scaled_exp_dist = np.expand_dims(scaled_exp_dist, axis=1)
--> 127 gradient[mask] = scaled_exp_dist[mask] *...
```
This is a remake of #1030 with (hopefully) a cleaner history. I had initially forked kernc's fork instead of the base repo. --------------------------------- As discussed in PR #988 and issue...
The [Sampling methods](https://scikit-optimize.github.io/dev/modules/sampler.html) user guide is currently completely empty. Some docs for it would be very useful! Note that there are examples at [Comparing initial sampling methods](https://scikit-optimize.github.io/dev/auto_examples/sampler/initial-sampling-method.html#sphx-glr-auto-examples-sampler-initial-sampling-method-py) and each sampler...
When restarting an optimisation from a previous optimisation result, the new optimiser goes through a number of randomly sampled points equal to the number of previously evaluated points before continuing...
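The behaviour the issue expects can be stated as simple accounting: the remaining random-exploration budget should shrink by the number of points already told to the optimiser, rather than resetting. The sketch below is pure Python with illustrative names, not the skopt implementation.

```python
# Illustrative accounting for resuming an optimisation: the random-sampling
# phase should be reduced by previously evaluated points, not restarted.

def remaining_random_points(n_initial_points, n_previous_evals):
    """How many random points are still owed after resuming."""
    return max(0, n_initial_points - n_previous_evals)

print(remaining_random_points(10, 7))   # 3 random points left
print(remaining_random_points(10, 12))  # 0: go straight to the model
```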