Atiyo Ghosh
Firstly, thank you. I'm really enjoying the SciML ecosystem. I'm noticing an unexpectedly high number of allocations when applying operators in place. Here's a minimal example: ``` d2x = CenteredDifference{1}(2,...
The DiffEq ecosystem already has some support for DDEs, so extending DiffEqJump to DSSAs seemed like a natural next step. Is there any interest in this?
At the moment, `train` accepts a function `write_tensorboard(writer, loss, metrics, iteration)`, where `metrics` is the value returned from a custom loss function. However, we might also want to log metrics unrelated to training, e.g....
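For context, a minimal sketch of what such a logging hook might look like. This assumes a writer exposing an `add_scalar(tag, value, step)` method (as in `torch.utils.tensorboard.SummaryWriter`); the function body and tag names are illustrative, not the library's actual implementation:

```python
def write_tensorboard(writer, loss, metrics, iteration):
    """Log the training loss plus any extra metrics for one iteration.

    Assumes `writer` has an add_scalar(tag, value, step) method and
    `metrics` is a dict of scalar values (both are assumptions here).
    """
    writer.add_scalar("loss", loss, iteration)
    for name, value in metrics.items():
        writer.add_scalar(f"metrics/{name}", value, iteration)
```

Logging non-training metrics would then amount to passing extra entries through `metrics`, or accepting a second, training-independent dict.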
TLDR: Currently when we ask for `observables = [Z(0), Z(1)]` in an expectation, we get a single combined expectation value, but I think it'd be more useful if we returned a list with one expectation value per observable...
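To illustrate the proposed return shape, here is a minimal numpy sketch (not horqrux code) that returns one expectation value per observable; `z_on` and `expectations` are hypothetical helper names:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def z_on(qubit, n_qubits):
    """Pauli-Z on `qubit` of an n-qubit register (qubit 0 leftmost in the kron order)."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, Z if q == qubit else I2)
    return op

def expectations(state, observables):
    """Return a list with one expectation value per observable, not their sum."""
    return [float(np.real(state.conj() @ (obs @ state))) for obs in observables]

# |01> on two qubits: <Z(0)> = +1, <Z(1)> = -1
state = np.zeros(4)
state[1] = 1.0
vals = expectations(state, [z_on(0, 2), z_on(1, 2)])  # [1.0, -1.0]
```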
Instead of calculating eigenvalues and eigenvectors of Hamiltonians acting on the entire state vector, we can instead calculate them on the matrix representation of the observable on a limited support...
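The point can be seen in a small numpy sketch: for an observable supported on one qubit of a 3-qubit register, diagonalising the 2x2 support matrix yields the same spectrum as diagonalising the full 8x8 operator (the eigenvalues just repeat with multiplicity):

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Observable Z acting on qubit 0 of a 3-qubit register, as a full 8x8 matrix.
full = np.kron(Z, np.kron(I2, I2))

# Diagonalising the 2x2 support matrix instead of the 8x8 full operator:
support_eigvals = np.linalg.eigvalsh(Z)       # shape (2,)
full_eigvals = np.linalg.eigvalsh(full)       # shape (8,), same values repeated

# Same set of eigenvalues, obtained from a 2x2 problem instead of a 2^n x 2^n one.
assert set(np.round(full_eigvals, 12)) == set(np.round(support_eigvals, 12))
```

The saving grows exponentially with the number of qubits outside the observable's support.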
The previous PR (https://github.com/pasqal-io/horqrux/pull/27) implements the parameter shift rule (PSR) for parameters defined in the `values` argument of expectations. However, it suffered from some limitations:
- It did not allow...
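For readers unfamiliar with PSR: for a gate generated by a Pauli operator, e.g. RX(θ) = exp(-iθX/2), the exact gradient of an expectation value follows from two shifted evaluations. A minimal numpy sketch (not the horqrux implementation):

```python
import numpy as np

def rx(theta):
    """RX(theta) = exp(-i * theta * X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

Z = np.diag([1.0, -1.0])

def expval(theta):
    """<Z> after RX(theta) applied to |0>; analytically equals cos(theta)."""
    state = rx(theta) @ np.array([1.0, 0.0])
    return float(np.real(state.conj() @ Z @ state))

def psr_grad(theta, shift=np.pi / 2):
    """Parameter shift rule: exact gradient from two shifted circuit evaluations."""
    return (expval(theta + shift) - expval(theta - shift)) / 2

# psr_grad(theta) matches the analytic derivative d/dtheta cos(theta) = -sin(theta)
```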
As it stands, the adjoint method is only used to calculate gradients via vector-Jacobian products for Parametric gates with string parameters. However, it can be:
- extended to...
The use of `checkify.check` can alter the return type of jitted functions, so it has been proposed to investigate chex (https://github.com/google-deepmind/chex) to replace the `checkify.check` calls in https://github.com/pasqal-io/horqrux/compare/feature/psr_on_all_gates. Also, it...
Same problem as https://github.com/pasqal-io/pyqtorch/issues/217, i.e. if a single parameter is used multiple times in a circuit, the resulting PSR gradient is incorrect. However, the fix in https://github.com/pasqal-io/pyqtorch/pull/219 can't be straightforwardly...
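The failure mode and the product-rule fix can be shown in a minimal numpy sketch (illustrative, not pyqtorch or horqrux code): shifting a reused parameter in every gate at once gives the wrong gradient, while shifting each occurrence separately and summing recovers the correct one.

```python
import numpy as np

def rx(theta):
    """RX(theta) = exp(-i * theta * X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

Z = np.diag([1.0, -1.0])

def expval(t1, t2):
    """<Z> after RX(t1) then RX(t2) on |0>; analytically equals cos(t1 + t2)."""
    state = rx(t2) @ rx(t1) @ np.array([1.0, 0.0])
    return float(np.real(state.conj() @ Z @ state))

def naive_psr(theta, shift=np.pi / 2):
    """Shift both occurrences of the parameter together: wrong when it is reused."""
    return (expval(theta + shift, theta + shift)
            - expval(theta - shift, theta - shift)) / 2

def correct_psr(theta, shift=np.pi / 2):
    """Product rule: shift each occurrence separately and sum the contributions."""
    g1 = (expval(theta + shift, theta) - expval(theta - shift, theta)) / 2
    g2 = (expval(theta, theta + shift) - expval(theta, theta - shift)) / 2
    return g1 + g2

# The true derivative of cos(2*theta) is -2*sin(2*theta); naive_psr returns 0,
# while correct_psr matches the analytic gradient.
```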