pyqtorch recently introduced several higher-order operations like `Sequence`, `Add` and `Scale`, which can now be used via qadence.
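For illustration, a minimal sketch of what such compositions look like on the qadence side; the exact lowering of each block type onto the pyqtorch ops named above is an assumption here:

```
from qadence import RX, X, Y, add, chain

# composite blocks that can now map onto pyqtorch's higher-order ops
# (assumed mapping: chain -> Sequence, add -> Add, scalar multiple -> Scale)
sequential = chain(RX(0, "theta"), X(1))  # ordered composition of operations
summed = add(X(0), Y(0))                  # sum of operators, e.g. for generators
scaled = 2.0 * X(0)                       # scalar-multiplied block
```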
[Refactoring, DA, Backends, Hamevo] Optimize block_to_tensor calls for repeated execution of HamEvo
Right now, we call `block_to_tensor` in every forward pass to get the Hamiltonian, which is then exponentiated in native pyq. Let's find a way to not have to call `block_to_tensor`...
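One possible direction, sketched under the assumption that the generator's parameters do not change between calls (shapes may differ from qadence's batched conventions): materialize the dense matrix once and keep only the exponentiation in the per-call path.

```
import torch
from qadence import X, Z, add, kron, block_to_tensor

generator = add(Z(0), Z(1), kron(X(0), X(1)))

# call block_to_tensor once, outside the forward pass
hamiltonian = block_to_tensor(generator)

def hamevo(t: torch.Tensor) -> torch.Tensor:
    # per-call work is reduced to the matrix exponential
    return torch.linalg.matrix_exp(-1j * t * hamiltonian)
```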
Currently, `QuantumModel` inherits from PyTorch's `nn.Module`. This does not play well with the JAX backend, so it is currently not possible to use the quantum model interface...
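A rough sketch of one possible decoupling; the class name and signatures below are illustrative assumptions, not the actual qadence API. The idea is a framework-agnostic core that composes a backend instead of inheriting from `nn.Module`, so a torch module or a JAX pytree wrapper can be layered on top:

```
class GenericQuantumModel:
    """Hypothetical backend-agnostic model core (composition over inheritance)."""

    def __init__(self, backend, circuit, observable):
        self.backend = backend
        conv = backend.convert(circuit, observable)
        self.circuit, self.observable = conv.circuit, conv.observable
        self.embedding_fn, self.params = conv.embedding_fn, conv.params

    def expectation(self, values):
        # parameters stay in the backend's native type (torch tensor or jax array)
        return self.backend.expectation(
            self.circuit, self.observable, self.embedding_fn(self.params, values)
        )
```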
To support parametric observables for adjoint and GPSR: we cannot natively compute gradients w.r.t. observable parameters. Observable parameters are purely classical, and they should be treated separately using automatic differentiation...
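To make the split concrete, here is a minimal, self-contained torch sketch (not qadence code): the state is treated as the fixed output of the circuit (whose own parameters would get adjoint/GPSR gradients), while the observable parameter `phi` gets its gradient from plain autodiff:

```
import torch

psi = torch.tensor([1.0, 0.0], dtype=torch.cdouble)  # stand-in circuit output state
phi = torch.tensor(0.5, requires_grad=True)          # classical observable parameter

Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]], dtype=torch.cdouble)
H = phi * Z                                # parametric observable H(phi) = phi * Z

expval = torch.real(psi.conj() @ H @ psi)  # <psi| H(phi) |psi>
expval.backward()
print(phi.grad)                            # d<H>/dphi = <psi|Z|psi> = 1.0 here
```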
Ideas:
- @vincentelfving: https://github.com/pasqal-io/qadence/pull/218
- @Roland-djee: express control blocks as projector blocks
Issue: Right now, when we do:
```
quantum_backend = SomeBackend()
conv = quantum_backend.convert(circuit, obs)
conv_circ, conv_obs, embedding_fn, params = conv
```
we store all of the following in the initial...
Since we added the JAX backend, values can be of either `jax.numpy` or torch tensor type. The eigenvalues can be cast to the corresponding autodiff engine's type when requested.
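A minimal sketch of such a cast, with a hypothetical helper name and an assumed `engine` string convention:

```
import jax.numpy as jnp
import torch

def cast_eigenvalues(eigs, engine: str):
    # hypothetical helper: cast eigenvalues to the requested engine's native type
    if engine == "jax":
        return jnp.asarray(eigs)
    if engine == "torch":
        return torch.as_tensor(eigs)
    raise ValueError(f"unknown autodiff engine: {engine}")
```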
Description: Refactor [`product_state`](https://github.com/pasqal-io/qadence/blob/main/qadence/states.py#L187) such that it accepts a `backend: str` argument and returns a product state in the backend's _native_ representation. A native representation uses native types for the backend...
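A sketch of what the refactored function could look like; the backend names and the unbatched state convention are assumptions for illustration:

```
import jax.numpy as jnp
import torch

def product_state(bitstring: str, backend: str = "pyqtorch"):
    # build |bitstring> in the backend's native array type
    idx, dim = int(bitstring, 2), 2 ** len(bitstring)
    if backend == "pyqtorch":  # torch-based backend -> torch tensor
        state = torch.zeros(dim, dtype=torch.cdouble)
        state[idx] = 1.0
        return state
    if backend == "horqrux":   # JAX-based backend -> jax array
        return jnp.zeros(dim, dtype=jnp.complex64).at[idx].set(1.0)
    raise ValueError(f"unsupported backend: {backend}")
```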
Closes https://github.com/pasqal-io/qadence/issues/382
Closes https://github.com/pasqal-io/qadence/issues/396

- [x] @smitchaudhary move https://github.com/pasqal-io/qadence-libs/pull/18/ into this MR
- [x] create a `models/constructors.py` containing the QNN constructor logic and the configs
- [x] add a classmethod...