[Refac] Move transform into QNN, Remove TransformedModule
Closes https://github.com/pasqal-io/qadence/issues/382 Closes https://github.com/pasqal-io/qadence/issues/396
- [x] @smitchaudhary move https://github.com/pasqal-io/qadence-libs/pull/18/ into this MR
- [x] create a models/constructors.py containing the QNN constructor logic and the configs
- [x] add a classmethod to QNN which instantiates a QNN from configs only
- [x] remove "transform" arg from QNN constructor
- [ ] Add a descriptive docstring to the new QNN
- [ ] Modify the docs showcasing the new QNN capabilities
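For context, a rough sketch of what the config-only entry point from the checklist might look like on the user side (import paths, config names and argument names below are assumptions based on the checklist, not the final API):

```python
# Sketch only: import paths and argument names are assumed from the checklist above.
from qadence import Z
from qadence.models import QNN
from qadence.models.constructors import AnsatzConfig, FeatureMapConfig, ObservableConfig

fm_config = FeatureMapConfig(num_features=1)   # input encoding
ansatz_config = AnsatzConfig(depth=2)          # variational layers
obs_config = ObservableConfig(detuning=Z)      # measured observable

# the new classmethod: build circuit + observable internally and return a ready QNN
model = QNN.from_configs(
    register=2,
    obs_config=obs_config,
    fm_config=fm_config,
    ansatz_config=ansatz_config,
)
```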
Thanks. LGTM.
Treating both the input and output transforms on equal footing is much nicer.
Maybe this is extremely nit-picky, but on the usage side, would it make sense to clarify the docstring here in `feature_map`? Since there are now two ways of transforming the inputs, there is a risk of a double transform.
feature_range: the range that the provided input data comes from. If a range is provided,
it gets linearly transformed to the domain of the feature map used. See `target_range` below.
If None is provided, no transformation happens.
Or something similar?
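To make the risk concrete, this is essentially the linear map the docstring above describes, in plain Python (illustrative only, not qadence code):

```python
import math

def rescale(x: float, feature_range: tuple, target_range: tuple) -> float:
    """Linearly map x from feature_range onto target_range."""
    (a, b), (c, d) = feature_range, target_range
    return c + (x - a) * (d - c) / (b - a)

# data in [0, 1], feature map domain [0, 2*pi]: applying this twice (once in the
# feature map, once in the QNN) would be the "double transform" mentioned above
print(rescale(0.5, (0.0, 1.0), (0.0, 2 * math.pi)))  # pi
```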
@smitchaudhary I guess a question is if it's worth keeping both options. I guess the QNN option is only relevant if someone uses their own custom feature map, right?
> I guess the QNN option is only relevant if someone uses their own custom feature map, right?
@jpmoutinho I think so. Or if someone for some reason wants some fancier transformation.
@dominikandreasseitz I have changed the observable config and added the functionality to automatically scale the output. This happens in a manner similar to what happens with the feature maps: the user provides the range, and the scaling and shifting are calculated automatically (according to the natural range of the detuning).
In addition, there is the option to make these transformations trainable. In that case, the scaling and shifting values are initialized to appropriate values, but training takes over after that, of course.
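Roughly, the automatic part amounts to solving for a scale and shift that map the detuning's natural range onto the user-provided range; a plain-Python sketch (illustrative, assuming the natural ranges discussed below, [-1, 1] for Pauli detunings and [0, 1] for N):

```python
def output_scale_shift(natural_range: tuple, target_range: tuple) -> tuple:
    """Return (scale, shift) such that shift + scale * v maps natural_range onto target_range."""
    (a, b), (c, d) = natural_range, target_range
    scale = (d - c) / (b - a)
    shift = c - scale * a
    return scale, shift

# a Pauli-type detuning with natural range [-1, 1], output requested in [0, 10]
scale, shift = output_scale_shift((-1.0, 1.0), (0.0, 10.0))
assert (shift + scale * -1.0, shift + scale * 1.0) == (0.0, 10.0)
```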
I looked at the functions for the output transformation, a few thoughts:
- Are you considering only the detuning because otherwise it's harder to know the natural range of the observable?
- Either way, I think it's still hard if the observable is not diagonal, e.g. `detuning = X` or `Y`.
- The natural range of an observable is bounded by the minimum and maximum eigenvalues, so to make it fully general and work the same way the `feature_map` does, it would probably require computing the spectrum each time, but that would be very intensive for larger qubit numbers.
- Given that, one option would be to only allow getting the natural range for diagonal observables (but still allow multi-qubit terms). There is a private method `block._is_diag_pauli` that checks if a block is only composed of `I`, `Z` or `N`.
- Another option would be to not make it work like the `feature_map`, and the user can only pass a scaling / shifting constant, or set it to be trainable.
Not sure which option is best :)
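To illustrate the cost trade-off in the third bullet (a numpy sketch, not qadence code): the fully general route is an eigendecomposition of a 2^n x 2^n matrix, which is why restricting to diagonal Pauli observables (or dropping the automatic range altogether) is attractive.

```python
import numpy as np

def natural_range_from_spectrum(observable: np.ndarray) -> tuple:
    """General but expensive: natural range = [min eigenvalue, max eigenvalue]."""
    eigvals = np.linalg.eigvalsh(observable)  # 2^n x 2^n dense matrix, exponential in n
    return float(eigvals.min()), float(eigvals.max())

# single-qubit Z has spectrum {-1, +1}, so its natural range is [-1, 1]
Z = np.diag([1.0, -1.0])
print(natural_range_from_spectrum(Z))  # (-1.0, 1.0)
```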
@jpmoutinho
> Another option would be to not make it work like the feature_map and the user can only pass a scaling / shifting constant, or set it to be trainable.

This seems like the most straightforward option to me. I did consider it too, but went the way I did to keep the interface similar to the `feature_map`, which is not necessary of course.
> Are you considering only the detuning because otherwise it's harder to know the natural range of the observable?
Indeed that is the reason. I agree it is an unnecessary restriction on the kind of observables that one can have.
> The natural range of an observable is bounded by the minimum and maximum eigenvalues, so to make it fully general and work the same way the feature_map does it would probably require computing the spectrum each time, but that would be very intensive for larger qubit numbers.
Currently, the `hamiltonian_factory` only allows for 4 detunings, namely `X`, `Y`, `Z` and `N`. And I think we know the spectrum in all 4 cases, right? It is [-1, 1] for the 3 Pauli types and [0, 1] for `N`. Do I have this wrong? This is why there are different scaling and shifting parameters depending on which kind of detuning is used.
Though overall, I think this is imposing many restrictions which might not be necessary. We can also simply ask the user for the scaling and shifting parameters.
Another side effect of going this way is that the scaling is now basically applied as the detuning strength of the first qubit, and the rest keep the default strength of 1.0. This was necessary because if we were to apply the scaling to all the qubits, the range would be scaled by `scaling^n_qubits`. A larger range is not a problem except in the case where `scaling < 1`.
So even in the case of the user explicitly specifying the shift and the scaling values, do you suggest we keep it the same way as here and apply the scaling to the first qubit (pretty arbitrary), or do something else?
@smitchaudhary
Indeed you are correct: if we only keep single-qubit terms, then `X` or `Y` is still ok, the range is the same as `Z`.
I don't understand the need to scale only the first qubit though, I think it should be a factor multiplying the whole observable, but we can talk about that today :)
@jpmoutinho
> I don't understand the need to scale only the first qubit though, I think it should be a factor multiplying the whole observable, but we can talk about that today :)
Yeah, that seems right. It was just an artifact of the previous code, where this scaling was done through specifying `detuning_strength`. A multiplication factor is a lot more straightforward.
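For reference, the "multiplication factor" version amounts to something like this (a sketch; `hamiltonian_factory` and `Parameter` are used as I understand them, so treat the exact calls as assumptions):

```python
# Sketch: a single (optionally trainable) factor multiplying the whole observable,
# instead of tweaking the first qubit's detuning_strength.
from qadence import Parameter, Z, hamiltonian_factory

n_qubits = 4
base_obs = hamiltonian_factory(n_qubits, detuning=Z)

scale = Parameter("scale", trainable=True)  # or a fixed float
obs = scale * base_obs
```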
Hey @dominikandreasseitz @smitchaudhary what's the status on this?
@Roland-djee in progress, it's actually quite a few files to go through. I am creating tests atm.
Nice thanks for the heads-up.
@jpmoutinho I was thinking of setting certain defaults in the current config workflow, to keep it in line with the other pre-existing constructors.
- Default `basis_set` is Fourier?
- Default `reupload_scaling` is Constant?
- Default `feature_range` is None? (Then the `feature_map` does no transformation of the input.)
- Default `num_layers` in the ansatz is 1?
- `ansatz_type` is `hea`?
- `num_features` is 1?
Just listing some of the things that I think could be the most common use? Thoughts?
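Spelled out as code, those defaults would look roughly like this (config classes and field names are assumptions mirroring the list above):

```python
# Sketch of the proposed defaults; import path and field names assumed from the discussion above.
from qadence.models.constructors import AnsatzConfig, FeatureMapConfig

fm_config = FeatureMapConfig(
    num_features=1,               # one input feature
    basis_set="fourier",          # default encoding basis
    reupload_scaling="constant",  # constant re-upload scaling
    feature_range=None,           # no transformation of the input
)
ansatz_config = AnsatzConfig(
    num_layers=1,                 # single ansatz layer
    ansatz_type="hea",            # hardware-efficient ansatz
)
```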
Additionally, the features are currently named `phi_i` very lazily, because I added that after `QNN` started to require `inputs` for multi-feature models. Maybe this should have the same behavior as when calling the `QNN` and have no defaults?
@smitchaudhary thanks, and sorry for not commenting on this before.
Generally I agree with what you said, mainly to have it behave like the current defaults for the constructors.
I would suggest changing `num_layers` to `depth`, just to be consistent with the previous `hea` syntax.
I never liked the default feature map param being called `phi` 😅. I guess changing that to `x` would technically break backwards compatibility for some codes 🤔. We can do it later... But I think the best for the QNN inputs would be to have no defaults, right? If a user defines a multi-dimensional feature map they will have to provide different names anyway, so they might as well pass them into the QNN.
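On the inputs side, "no defaults" would simply mean the user repeats the feature names they used in the feature map when building the QNN, e.g. (a sketch, assuming the existing `inputs` argument of `QNN`):

```python
# Sketch: feature names chosen by the user, repeated explicitly in the QNN (no default names).
from qadence import QNN, QuantumCircuit, Z, chain, feature_map, hea

n_qubits = 2
fm = chain(
    feature_map(n_qubits, param="t"),
    feature_map(n_qubits, param="x"),
)
circuit = QuantumCircuit(n_qubits, chain(fm, hea(n_qubits, depth=1)))
model = QNN(circuit, observable=Z(0), inputs=["t", "x"])
```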
A quick overview of the changes made:
- The defaults are now set as discussed above.
- Various properties of the feature map can now be provided as a dictionary instead of a list, where previously the order of the elements determined which feature each value applied to. For example, earlier, the workflow would look like:
```python
fm_config = FeatureMapConfig(
    num_features=2,
    feature_range=[(0., 1.), (-1., 1.)],
)
```
The same would now look like:
```python
fm_config = FeatureMapConfig(
    num_features=2,
    inputs=('t', 'x'),
    feature_range={
        't': (0., 1.),
        'x': (-1., 1.),
    },
)
```
Since for more than one input feature the user is already required to give the `inputs` list anyway, it is much less error prone for the user to also specify the range (or any other property, such as basis set, reupload scaling, etc.) explicitly per feature.
Meanwhile, if the user wants to apply the same property to every feature (for example, basis set is Fourier), they can still give a single value and it will be applied to all of them, as sketched below.
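Concretely, mixing the two forms would look like this (same assumed config as in the examples above):

```python
# One value broadcast to every feature (basis_set), per-feature values via the dict form.
fm_config = FeatureMapConfig(
    num_features=2,
    inputs=('t', 'x'),
    basis_set="fourier",          # single value, applied to both features
    feature_range={
        't': (0., 1.),
        'x': (-1., 1.),
    },
)
```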
Thoughts?
Thanks @smitchaudhary, I like the dict option!
Hello, a quick small change made to the `ObservableConfig`: it no longer takes the number of qubits as an input, but instead uses the `register` variable passed to the function that creates the QNN. Why so?
- It is consistent with how the other configs are defined.
- As we are creating this QNN from the configs, the register is fixed already, no? Why provide it in two different ways?
- For the observable, `hamiltonian_factory` can take a `Register` object and can thus handle arbitrary configurations with arbitrary distances (and an interaction accordingly); restricting `ObservableConfig` to a single `n_qubits` takes away this possibility when creating analog circuits.
I also added a small example in the docstring.
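A sketch of the analog use case this enables (the argument names and the Register constructor below are assumptions, not the final API):

```python
# Sketch: the register geometry, not just a qubit count, fixes both circuit and observable.
from qadence import Register, Z
from qadence.models import QNN
from qadence.models.constructors import AnsatzConfig, FeatureMapConfig, ObservableConfig

register = Register.triangular_lattice(2, 2)  # arbitrary layout with arbitrary distances
obs_config = ObservableConfig(detuning=Z)     # note: no n_qubits argument anymore

model = QNN.from_configs(
    register=register,
    obs_config=obs_config,
    fm_config=FeatureMapConfig(),
    ansatz_config=AnsatzConfig(),
)
```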