[Refactoring] Rework TransformedModule
@Roland-djee @Doomsk @jpmoutinho
Right now, the TransformedModule expects the user to pass floats/tensors to its constructor, which are then held as buffers for input/output scaling and shifting. There are several issues caused by this approach:
- Since we don't use torch's native (.pt) serialization, we save the scaling/shifting values in a very hacky way, which makes it very hard to re-instantiate the same TransformedModule and get the same results
- the TransformedModule also uses an error-prone positional encoding of input scalings and shiftings, without requiring the user to pass a list of feature parameters like the QNN does. this can lead to scalings/shiftings being applied to the wrong feature parameters
Proposal 1:
Fully remove the buffer approach and instead require the user to pass the scaling and shifting values in the values dict:
class TransformedModule:
    ...

    def expectation(self, values):
        values = self.transform_input(values)
        return values['output_scaling'] * self.model.expectation(values) + values['output_shifting']
This is also in line with the qadence philosophy of the values dict + clear separation of circuit/observable and parameters
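For illustration, transform_input could read per-parameter scalings/shiftings from the values dict by name. A rough sketch only: the input_scaling/input_shifting key convention and the function itself are hypothetical, not existing qadence API:

import torch

def transform_input(values: dict) -> dict:
    # Per-feature scalings/shiftings travel inside the values dict, keyed by
    # parameter name, so nothing depends on positional order.
    values = dict(values)
    scalings = values.pop('input_scaling', {})    # e.g. {'x': torch.tensor(2.0)}
    shiftings = values.pop('input_shifting', {})  # e.g. {'x': torch.tensor(-1.0)}
    return {
        name: scalings.get(name, 1.0) * val + shiftings.get(name, 0.0)
        for name, val in values.items()
    }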
Proposal 2:
Fully remove TransformedModule and add utility methods to ml_tools like:
def transformed_expectation(model, values):
    values = transform_input(values)
    return values['output_scaling'] * model.expectation(values) + values['output_shifting']
The TransformedModule class is essentially 300 lines of code to accomplish the snippet above.
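For concreteness, calling the helper could look like this (rough sketch; model stands for any QNN, and the value-dict keys follow the snippet above):

import torch

values = {
    'x': torch.rand(10),                   # feature values, keyed by parameter name
    'output_scaling': torch.tensor(2.0),   # consumed by transformed_expectation
    'output_shifting': torch.tensor(-1.0),
}
out = transformed_expectation(model, values)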
let me know what you think
@inafergra @smitchaudhary @gvelikova maybe you can give your input on that too
The 2nd option definitely seems the cleanest.
+1 for the second option.
Not a huge fan of TransformedModule in the first place. Much better to nudge users towards using feature_range (and less importantly target_range) during QNN definition, and using the transform argument of QNN for output scaling/shifting.
So on that note, a helper function in ml_tools?
from typing import Callable
from torch import Tensor

def create_linear_transform(scaling: Tensor, shifting: Tensor) -> Callable[[Tensor], Tensor]:
    def transform(outputs: Tensor) -> Tensor:
        return scaling * outputs + shifting
    return transform
For the user to use like:
...
transform = create_linear_transform(scaling, shifting)
QNN(circ, observable, inputs, transform=transform)
This is maybe too many steps for a simple thing to do, but it can help to move away from a clunky TransformedModule and also helps expose the transform functionality for more complex transformations.
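As one example of such a more complex transformation, a non-linear factory in the same style (purely illustrative; create_sigmoid_transform is not proposed API, and circ/observable/inputs are as in the snippet above):

import torch
from typing import Callable

def create_sigmoid_transform(low: float, high: float) -> Callable[[torch.Tensor], torch.Tensor]:
    # Squash raw expectation values into the interval [low, high].
    def transform(outputs: torch.Tensor) -> torch.Tensor:
        return low + (high - low) * torch.sigmoid(outputs)
    return transform

# Same calling convention as the linear helper:
QNN(circ, observable, inputs, transform=create_sigmoid_transform(0.0, 1.0))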
cool idea! wdyt @Roland-djee @jpmoutinho ?
I also like Smit's idea! I was personally not aware of the fm_parameter_scaling function containing the feature_range and target_range that Smit mentions, and it seems a bit obscure to me having to set QNN output and input transforms in different ways. Ideally, I think that being able to set input and output transforms in a similar manner would make the process more user-friendly.
Idea: a way to achieve this could be having both output_transform and input_transform as parameters when initializing a QNN. The output_transform would be what is now just transform, and the input_transform could work similarly to _transform_x in TransformedModule now. We could have helper functions like Smit proposed for both input and output.
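A sketch of what the symmetric constructor could look like (the input_transform/output_transform argument names are just this proposal, not existing QNN API; create_linear_transform is the helper from above, and circ/observable/inputs as in the earlier snippet):

import torch

qnn = QNN(
    circ,
    observable,
    inputs,
    input_transform=create_linear_transform(torch.tensor(2.0), torch.tensor(0.0)),   # applied to feature values before encoding
    output_transform=create_linear_transform(torch.tensor(0.5), torch.tensor(1.0)),  # applied to expectation values
)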
yeah i was thinking the same. the transform arg of QNN is basically unused, tbh that's the best option to me!
+1 for Smit's approach. However, I'm not quite sure about adding the arguments (even the existing transform arg, tbh); to me it makes more sense that this is done by the user. But that also goes against the much beloved ml_tools training function in general...
What about starting to add hooks in general?
That is, rather than having to come up with lots of different callable arguments as feature requests pop up, we would add pre_forward- and post_forward-type hooks.
Perhaps this could simplify other things as well, as in general I think we do various things that can be considered "pre-forward" hooks (sympy evaluations, GPSR rules, etc.).
there are two main ways we could do the hooks: either they are instantiation arguments, or we have register methods (or both)
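For reference, torch.nn.Module already ships register_forward_pre_hook / register_forward_hook, so one option is to mirror that pattern on the values dict. A rough sketch covering both variants (all names here are hypothetical):

from typing import Callable

class HookedModel:
    # Sketch only: mirrors torch's forward-hook pattern on the values dict.
    def __init__(self, model, pre_forward_hooks=None, post_forward_hooks=None):
        self.model = model
        # variant 1: hooks passed as instantiation arguments
        self.pre_forward_hooks = list(pre_forward_hooks or [])
        self.post_forward_hooks = list(post_forward_hooks or [])

    # variant 2: register methods
    def register_pre_forward_hook(self, hook: Callable) -> None:
        self.pre_forward_hooks.append(hook)

    def register_post_forward_hook(self, hook: Callable) -> None:
        self.post_forward_hooks.append(hook)

    def forward(self, values: dict):
        for hook in self.pre_forward_hooks:
            values = hook(values)   # e.g. input scaling/shifting, sympy evaluation
        output = self.model(values)
        for hook in self.post_forward_hooks:
            output = hook(output)   # e.g. output scaling/shifting
        return output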
corresponding draft PR https://github.com/pasqal-io/qadence/pull/385