
[Refactoring, Hamevo] Optimize block_to_tensor calls for repeated execution of HamEvo

Open jpmoutinho opened this issue 9 months ago • 0 comments

Original text from @dominikandreasseitz:

Right now, we call block_to_tensor in every forward pass to get the Hamiltonian, which is then exponentiated in native pyq. Let's find a way to avoid calling block_to_tensor every time and instead "fill in" parameter values in a smarter way.

Doing that for parametric Hamiltonians will be hard. However, we should at least avoid the repeated calls for non-parametric Hamiltonians: those only need to be tensorized once, with the result cached and reused across evaluations. A rough sketch of this idea is given below.

Related to https://github.com/pasqal-io/qadence/issues/134
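A minimal sketch of the caching idea, purely for illustration. The wrapper class, the `is_parametric` flag, and the injected `block_to_tensor` callable are hypothetical and do not reflect the actual pyqtorch or qadence APIs; the conversion function is passed in to avoid assuming any particular signature.

```python
import torch


class CachedHamEvo(torch.nn.Module):
    """Hypothetical wrapper: tensorize a non-parametric Hamiltonian block once
    and reuse the cached tensor on every forward pass."""

    def __init__(self, block, block_to_tensor, is_parametric: bool):
        super().__init__()
        self.block = block
        # Conversion callable (e.g. something like qadence's block_to_tensor);
        # injected here so the sketch does not assume its exact signature.
        self.block_to_tensor = block_to_tensor
        self.is_parametric = is_parametric
        self._cached_h = None  # filled lazily on first use

    def hamiltonian(self, values=None):
        if self.is_parametric:
            # Parametric case: parameter values change between calls,
            # so the tensor must be rebuilt (the hard case noted above).
            return self.block_to_tensor(self.block, values)
        if self._cached_h is None:
            # Non-parametric case: tensorize once, cache for later evaluations.
            self._cached_h = self.block_to_tensor(self.block)
        return self._cached_h

    def forward(self, state, t):
        h = self.hamiltonian()
        # Exponentiate the (possibly cached) Hamiltonian and apply it to the state.
        u = torch.linalg.matrix_exp(-1j * t * h)
        return u @ state
```

The point of the sketch is only that the non-parametric branch pays the tensorization cost once; how the cache is invalidated (e.g. if the block is mutated) and how parametric Hamiltonians are handled more cleverly is left open by this issue.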

jpmoutinho · May 16 '24 12:05