pyqtorch
[Refactoring, Hamevo] Optimize block_to_tensor calls for repeated execution of HamEvo
Original text from @dominikandreasseitz:
Right now, we call block_to_tensor in every forward pass to get the Hamiltonian, which is then exponentiated in native pyq. Let's find a way to avoid calling block_to_tensor on every pass and instead "fill in" parameter values in a smarter way.
Doing that for parametric Hamiltonians will be hard. For non-parametric Hamiltonians, however, the repeated calls are avoidable: those should be tensorized once and the resulting tensor cached for all subsequent evaluations.
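A minimal sketch of the caching idea for the non-parametric case. Note this is illustrative only: `block_to_tensor` is stubbed out here (in qadence it converts a block expression to a dense matrix), and `CachedHamEvo` is a hypothetical module name, not pyqtorch's actual `HamEvo` implementation.

```python
import torch

def block_to_tensor(block):
    # Stub standing in for qadence's block_to_tensor; here the "block"
    # is assumed to already be a Hermitian matrix.
    return block

class CachedHamEvo(torch.nn.Module):
    """Hypothetical HamEvo-like module that tensorizes a non-parametric
    Hamiltonian once and reuses the cached matrix on every forward pass."""

    def __init__(self, block, parametric: bool = False):
        super().__init__()
        self.block = block
        self.parametric = parametric
        self._cached_h = None  # filled lazily on first forward

    def hamiltonian(self) -> torch.Tensor:
        if self.parametric:
            # Parametric case: parameter values change between calls,
            # so we must re-tensorize each time (the hard case above).
            return block_to_tensor(self.block)
        if self._cached_h is None:
            self._cached_h = block_to_tensor(self.block)  # tensorize once
        return self._cached_h

    def forward(self, state: torch.Tensor, t: float) -> torch.Tensor:
        h = self.hamiltonian()
        # Time evolution U = exp(-i H t) applied to the state.
        u = torch.linalg.matrix_exp(-1j * t * h)
        return u @ state
```

With this structure, repeated `forward` calls on a non-parametric Hamiltonian pay the tensorization cost only once; only the matrix exponential is recomputed per call.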
Related to https://github.com/pasqal-io/qadence/issues/134