Steppable Optimizer
Summary
We introduce a new subclass of Optimizer called SteppableOptimizer. A steppable optimizer has a step() method that performs a single step of the optimization process, as opposed to running the whole minimization of the objective function at once. This gives the user more options when dealing with complicated functions to minimize. In the same spirit, we have also added ask() and tell() to this interface. More information can be found in the documentation.
The main goal of this PR is to implement the interface of the abstract class SteppableOptimizer and document it properly. To exercise it in the tests, we have implemented GradientDescent using this new framework. A usage sketch follows.
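A minimal sketch of how a user might drive the step-by-step loop described above. The step(), ask() and tell() names come from the summary; the start()/create_result() helpers and the exact signatures shown here are assumptions for illustration, not necessarily the final API of this PR.

```python
# Hypothetical usage sketch; start(), create_result() and the signatures
# below are assumptions for illustration only.
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def objective(x):
    # simple quadratic with minimum at (1, 1)
    return (x[0] - 1) ** 2 + (x[1] - 1) ** 2

optimizer = GradientDescent(maxiter=20)
optimizer.start(fun=objective, x0=np.array([0.0, 0.0]))

for _ in range(20):
    # one single optimization step; users wanting finer control could
    # instead drive an explicit ask()/tell() pair here
    optimizer.step()

result = optimizer.create_result()
print(result.x, result.fun)
```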
Details and comments
A CMA-ES class is also in the making, but I preferred to start this PR with GradientDescent because it is easier to implement; we can add the other class later on.
Note that this PR is replacing #8150. The reason is that I accidentally altered the history of the branch and thought it was cleaner to start a new branch and a new PR.
Thank you for opening a new pull request.
Before your PR can be merged it will first need to pass continuous integration tests and be reviewed. Sometimes the review process can be slow, so please be patient.
While you're waiting, please feel free to review other open PRs. While only a subset of people are authorized to approve pull requests for merging, everyone is encouraged to review open pull requests. Doing reviews helps reduce the burden on the core team and helps make the project's code better for everyone.
One or more of the following people are requested to review this:
- @Qiskit/terra-core
- @manoelmarques
- @woodsp-ibm
You should not include vim's .swp files.
Hello, I noticed that the documentation for the learning_rate argument of the GradientDescent constructor says:

```
learning_rate: A constant or generator yielding learning rates for the parameter updates. See the docstring for an example.
```
If I am not mistaken this is technically incorrect, right? It is not a generator yielding learning rates but rather a factory of iterators yielding learning rates.
I am not 100% sure about it, so if it is already correct, why so? For some context, here is some code from the examples in the documentation:
```python
def learning_rate():
    power = 0.6
    constant_coeff = 0.1

    def powerlaw():
        n = 0
        while True:
            yield constant_coeff * (n ** power)
            n += 1

    return powerlaw()

optimizer = GradientDescent(maxiter=100, learning_rate=learning_rate)
```
The real generator (or iterator) here is powerlaw() and not learning_rate, right?
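To illustrate the distinction, here is a quick sketch that reuses the learning_rate definition from the docstring example above:

```python
# learning_rate is a factory: calling it returns a fresh generator each time.
eta = learning_rate()                      # a generator object
print(next(eta))                           # 0.0 -> 0.1 * 0 ** 0.6
print(next(eta))                           # 0.1 -> 0.1 * 1 ** 0.6

# The factory itself is not an iterator; it has no __next__.
print(hasattr(learning_rate, "__next__"))  # False
print(hasattr(eta, "__next__"))            # True
```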
Also, the documentation for perturbation says:

```
perturbation: If no gradient is passed to :meth:`~.minimize` the gradient is
    approximated with a symmetric finite difference scheme with ``perturbation``
    perturbation in both directions (defaults to 1e-2 if required).
    Ignored when we have an explicit function for the gradient.
```
The finite difference method that is currently implemented uses a forward difference approximation. We should either change the method to make it symmetric or change the documentation. I might also be missing some detail. If you want to have a look at the method that is called when approximating gradients: https://github.com/Qiskit/qiskit-terra/blob/10822404936e98df01f2975ef43b8378006621ba/qiskit/algorithms/optimizers/optimizer.py#L190-L238
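For reference, a standalone sketch contrasting the two schemes; the helper names, eps, and the test function are illustrative only, not Qiskit API:

```python
# Standalone illustration only; these helpers are not Qiskit API.
import numpy as np

def forward_difference(f, x, eps=1e-2):
    # what the linked code currently does; truncation error is O(eps)
    return (f(x + eps) - f(x)) / eps

def central_difference(f, x, eps=1e-2):
    # the symmetric scheme the docstring describes; error is O(eps**2)
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.3
print(forward_difference(np.sin, x))  # approximates cos(0.3), O(eps) error
print(central_difference(np.sin, x))  # approximates cos(0.3), O(eps**2) error
print(np.cos(x))                      # exact derivative for comparison
```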
@MarcDrudis yes to both your comments above 🙂 Also, I think the learning rate can be an array, which doesn't seem to be documented (a possible usage is sketched below).
This also needs a release note 🙂
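If arrays are indeed accepted, usage would presumably look like the following. This is an assumption based on the comment above, since the documentation does not cover it:

```python
# Assumed usage only: array-valued learning_rate is undocumented, so both
# the acceptance of arrays and the one-rate-per-iteration semantics are
# guesses based on the comment above.
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

rates = np.linspace(0.1, 0.01, 100)  # presumably one rate per iteration
optimizer = GradientDescent(maxiter=100, learning_rate=rates)
```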
Pull Request Test Coverage Report for Build 2948760306
- 144 of 164 (87.8%) changed or added relevant lines in 5 files are covered.
- 8 unchanged lines in 3 files lost coverage.
- Overall coverage increased (+0.0007%) to 84.121%
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| qiskit/algorithms/optimizers/gradient_descent.py | 65 | 69 | 94.2% |
| qiskit/algorithms/optimizers/optimizer_utils/learning_rate.py | 22 | 30 | 73.33% |
| qiskit/algorithms/optimizers/steppable_optimizer.py | 52 | 60 | 86.67% |
| Total: | 144 | 164 | 87.8% |
| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| qiskit/extensions/quantum_initializer/squ.py | 2 | 79.78% |
| qiskit/pulse/library/waveform.py | 3 | 89.36% |
| qiskit/test/decorators.py | 3 | 66.13% |
| Total: | 8 | |
| Totals | |
|---|---|
| Change from base Build 2947517494: | 0.0007% |
| Covered Lines: | 56829 |
| Relevant Lines: | 67556 |