TensorNetwork
Parallelism in contractors
Hi, I have a piece of code whose most computationally expensive part is the contraction of a whole network, usually composed of more than 15-20 tensors. I am using the "contractors.auto" function with the jax backend, but I notice that there is no parallelism: the code sees the GPU, yet its usage stays at 0%, and on CPU only a single core is used.
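For context, a rough sketch of how the contraction is set up (the ring structure and tensor shapes below are placeholders, not my actual network):

```python
import numpy as np
import tensornetwork as tn

# Select the jax backend globally for all new nodes.
tn.set_default_backend("jax")

# Hypothetical example network: a ring of 20 matrices contracted into a scalar.
num_tensors = 20
nodes = [tn.Node(np.random.rand(8, 8)) for _ in range(num_tensors)]
for i in range(num_tensors):
    # Connect each tensor's second index to the next tensor's first index.
    nodes[i][1] ^ nodes[(i + 1) % num_tensors][0]

# Contract the whole network with the automatic contractor.
result = tn.contractors.auto(nodes)
print(result.tensor)
```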
What are the best ways to speed up the computation?
Thank you very much, Giuseppe