
GradientGrassmann and LazySum performance issue

Open • lkdvos opened this issue 1 year ago • 4 comments

Having a look at the output of the tests, it seems like there is some performance issue with the combination of LazySum and GradientGrassmann. My best guess is that the gradient is actually not computed entirely correctly, but the algorithm still converges because Hlazy = [0.5*H - H + 5.553H] is not that good of a test case.

Test Summary:        | Pass  Total     Time
find_groundstate     |   40     40  1m16.4s
  Infinite 1         |    2      2     0.2s
  Infinite 2         |    2      2     0.8s
  Infinite 3         |    2      2     3.2s
  Infinite 4         |    2      2     7.1s
  Infinite 5         |    2      2     0.2s
  LazySum Infinite 1 |    3      3     2.2s
  LazySum Infinite 2 |    3      3     3.1s
  LazySum Infinite 3 |    3      3     2.9s
  LazySum Infinite 4 |    3      3     5.3s
  LazySum Infinite 5 |    3      3     0.9s
  Finite 1           |    2      2     1.3s
  Finite 2           |    2      2     2.6s
  Finite 3           |    2      2     7.2s
  LazySum Finite 1   |    3      3     2.9s
  LazySum Finite 2   |    3      3     3.0s
  LazySum Finite 3   |    3      3    33.4s
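
For reference, every term in that Hlazy test case is proportional to H, and the timings above (e.g. LazySum Finite 3 at 33.4s versus Finite 3 at 7.2s) are the main symptom. Below is a minimal sketch of a more discriminating check, using two genuinely different terms and comparing a LazySum against plain MPO addition; the model constructor, the LazySum and GradientGrassmann keyword arguments, and + on MPO Hamiltonians are assumptions about the MPSKit/MPSKitModels API and should be checked against the test suite:

```julia
using MPSKit, MPSKitModels, TensorKit

# Two non-proportional Hamiltonian terms (assumed MPSKitModels constructor and
# keywords; any two genuinely different MPO Hamiltonians on the same physical
# space would do).
H_a = transverse_field_ising(; g = 0.5)
H_b = transverse_field_ising(; g = 2.0)

H_sum  = H_a + H_b            # plain MPO addition
H_lazy = LazySum([H_a, H_b])  # lazy evaluation of the same sum

ψ₀ = InfiniteMPS([ℂ^2], [ℂ^16])

ψ_sum, _  = find_groundstate(ψ₀, H_sum,  GradientGrassmann(; maxiter = 200))
ψ_lazy, _ = find_groundstate(ψ₀, H_lazy, GradientGrassmann(; maxiter = 200))

# Both runs should reach the same energy in a comparable number of iterations;
# a large gap in iteration count (or a different energy) would point at the
# LazySum gradient rather than at GradientGrassmann itself.
```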

lkdvos, Feb 21 '24 08:02

Does this mean addition of MPOs is expected to give the wrong result at this point in time, or am I misinterpreting what LazySum does?

Gertian, Feb 21 '24 11:02

While it's possible that LazySum gives an incorrect gradient, I would've expected GradientGrassmann to then completely deadlock, as OptimKit's line search is quite picky about the correctness of the gradient.

LazySum is not used for adding two MPOs together (though comparing the two would be a great test for the gradient).
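
The gradient comparison would need the internal Grassmann machinery, but the objective itself is easy to compare directly at a fixed state, independent of the optimizer. A minimal sketch, assuming the H_a and H_b from the sketch above; expectation_value on a LazySum itself is not assumed here, so the lazy side is evaluated term by term:

```julia
using MPSKit, TensorKit

# Energy of the summed MPO versus the sum of per-term energies at one fixed state.
ψ = InfiniteMPS([ℂ^2], [ℂ^16])

E_mpo   = sum(expectation_value(ψ, H_a + H_b))                   # plain MPO addition
E_terms = sum(sum(expectation_value(ψ, H)) for H in (H_a, H_b))  # term by term, as a lazy sum would evaluate

E_mpo ≈ E_terms   # should hold; a mismatch would already rule out the gradient as the only problem
```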

maartenvd, Feb 21 '24 11:02

I think the only reason GradientGrassmann does not deadlock is that the sum in the test case is not actually a sum of different terms: it just reduces to factor * H. So even if the gradient is wrong, it is probably only wrong by a constant factor, which still gives a descent direction. That makes it converge, just very slowly, because it cannot reason correctly about the norm of the gradient. This is just my intuition though; I did not do any checks.
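
A generic-Julia toy (not MPSKit-specific) of that "wrong only by a constant factor" scenario: a rescaled gradient is still a descent direction, so plain gradient descent converges, but anything that reasons about the gradient norm (tolerances, line-search step estimates) is off by that factor. The factor 5.053 below just mirrors 0.5 - 1 + 5.553 from the test case.

```julia
using LinearAlgebra

# Minimise f(x) = ½‖x‖², once with the true gradient and once with a gradient
# that has the right direction but the wrong norm, mimicking a LazySum that
# collapses to factor * H.
true_grad(x) = x
scaled_grad(x; c = 5.053) = c .* x       # same direction, norm off by c

function descend(grad, x0; η = 0.1, tol = 1e-8, maxiter = 10_000)
    x = copy(x0)
    for i in 1:maxiter
        g = grad(x)
        norm(g) < tol && return i        # norm-based stopping criterion
        x .-= η .* g
    end
    return maxiter
end

x0 = randn(32)
n_true   = descend(true_grad, x0)
n_scaled = descend(scaled_grad, x0)
# Both runs converge, but the iteration counts differ and the ‖g‖-based
# tolerance fires at a different actual distance from the minimum.
```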

lkdvos, Feb 21 '24 13:02

@maartenvd, thanks for this information. This stressed me out for a second...

Gertian, Feb 21 '24 13:02