Gradient operator - allow anisotropic scaling
For a BlockOperator we can scale each operator in the Block separately using __rmul__: https://github.com/TomographicImaging/CIL/blob/64e0b1b6b871de0a148adca3845104020609f438/Wrappers/Python/cil/optimisation/operators/BlockOperator.py#L359-L380
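For example, something along these lines already works (a minimal sketch, assuming a simple 2D `ImageGeometry` and `IdentityOperator` blocks just for illustration):

```python
from cil.framework import ImageGeometry
from cil.optimisation.operators import BlockOperator, IdentityOperator

ig = ImageGeometry(voxel_num_x=4, voxel_num_y=4)

# a 2x1 BlockOperator built from two identity operators
K = BlockOperator(IdentityOperator(ig), IdentityOperator(ig))

# __rmul__ accepts a list/tuple/ndarray with one scalar per operator,
# so each block gets its own scaling
K_scaled = [2.0, 0.5] * K

y = K_scaled.direct(ig.allocate(1.0))
# first component of y is 2.0 everywhere, second is 0.5 everywhere
```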
The GradientOperator acts like a BlockOperator, in that it returns a BlockDataContainer, but crucially it is not one and so does not allow this scaling: if you attempt to scale it by a np.array you end up with a np.array of scaled operators rather than a single anisotropically scaled operator.
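Roughly what happens (a sketch of the behaviour described above, assuming a 2D `ImageGeometry`):

```python
import numpy as np
from cil.framework import ImageGeometry
from cil.optimisation.operators import GradientOperator

ig = ImageGeometry(voxel_num_x=4, voxel_num_y=4)
G = GradientOperator(ig)

# scalar scaling works as for any Operator
G2 = 2.0 * G

# array "scaling" is intercepted by numpy: the multiplication is broadcast
# element-wise, so you get an ndarray of scaled operators rather than a
# single anisotropically scaled gradient
weights = np.array([2.0, 0.5])
result = weights * G
print(type(result))  # numpy.ndarray, each entry a scaled GradientOperator
```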
This came up in a discussion with Martin Sæbye Carøe about anisotropic gradient regularisation, and the fact that the GradientOperator is much quicker than the finite difference operator but doesn't allow this anisotropic scaling.
Discussed in the dev meeting this morning. The question of whether it should scale like an Operator or like a BlockOperator could be avoided if we added a "weight" or "scaling" argument (similar to weightedleastsquares or leastsquares).
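As a strawman for what such a weight argument could do, something like the following (entirely hypothetical: `WeightedGradientOperator` and its `weights` argument are placeholders, not existing CIL API; `out` handling is omitted for brevity):

```python
from cil.framework import ImageGeometry, BlockDataContainer
from cil.optimisation.operators import GradientOperator, LinearOperator

class WeightedGradientOperator(LinearOperator):
    """Hypothetical sketch: per-direction weights applied to the gradient,
    i.e. direct(x)[i] = w[i] * (D_i x)."""
    def __init__(self, domain_geometry, weights):
        self.gradient = GradientOperator(domain_geometry)
        self.weights = list(weights)
        super().__init__(domain_geometry=domain_geometry,
                         range_geometry=self.gradient.range_geometry())

    def direct(self, x, out=None):
        # out handling omitted for brevity in this sketch
        y = self.gradient.direct(x)
        return BlockDataContainer(*[w * yi for w, yi in zip(self.weights, y.containers)])

    def adjoint(self, y, out=None):
        # scale each component, then apply the (unweighted) gradient adjoint
        y_w = BlockDataContainer(*[w * yi for w, yi in zip(self.weights, y.containers)])
        return self.gradient.adjoint(y_w)

ig = ImageGeometry(voxel_num_x=4, voxel_num_y=4)
Gw = WeightedGradientOperator(ig, weights=[1.0, 0.25])
y = Gw.direct(ig.allocate('random'))
```

The appeal over a BlockOperator of finite difference operators would be that the fast GradientOperator backend still does the work, with the weights only applied to the block output.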