
Add contraction order optimizers

Open GiggleLiu opened this issue 3 years ago • 2 comments

I added OMEinsumContractionOrders as a tensor network contraction order optimizer backend. OMEinsumContractionOrders includes some state-of-the-art contraction order optimizers: https://github.com/TensorBFS/OMEinsumContractionOrders.jl

I removed the OMEinsum dependency from OMEinsumContractionOrders, so it is lightweight now. The load time of ITensors is not increased much:

julia> @time using ITensors
  1.667815 seconds (3.95 M allocations: 281.811 MiB, 1.64% gc time, 30.85% compilation time)

Example

julia> n = 6
6

julia> vindices = [Index(2) for i=1:n, j=1:n];

julia> hindices = [Index(2) for i=1:n, j=1:n];

julia> tensors = vec([randomITensor(hindices[i, j], hindices[mod1(i+1, n), j], vindices[i,j], vindices[i, mod1(j+1,n)]) for i=1:n, j=1:n]);

julia> opt = optimize_code(tensors, TreeSA(ntrials=1));

julia> ITensors.set_warn_order(20)
14

julia> timespacereadwrite_complexity(opt) 
(18.960771236698694, 12.0, 15.668026245437165)   # log2 time/space/read-write; space complexity 12 is optimal for a periodic-boundary square lattice

julia> r1 = foldl(*, tensors); # exact

julia> r2 = evaluate(opt);  # optimized

julia> r1 ≈ r2
true

julia> @time foldl(*, tensors);
  0.009529 seconds (16.71 k allocations: 7.554 MiB)

julia> @time evaluate(opt);
  0.001825 seconds (4.31 k allocations: 984.078 KiB)

Notice that here I have to suppress the warning message using

ITensors.set_warn_order(20)

I would suggest setting the default value to 30, because 2^30 is roughly the largest tensor size that can be computed on a GPU. Does that make sense?
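
For context, a quick back-of-the-envelope check (assuming dense Float64 storage, 8 bytes per element): an order-30 tensor of dimension-2 indices already takes about 8 GiB, which is on the order of a typical GPU's memory.

julia> 2^30 * sizeof(Float64)  # bytes in a dense 2^30-element Float64 tensor, ≈ 8 GiB
8589934592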

GiggleLiu avatar Jul 12 '22 05:07 GiggleLiu

Really cool, thanks @GiggleLiu! I have been interested in the development of OMEinsumContractionOrders.jl, it is great to have these modern contraction sequence optimization algorithms available in Julia. Nice to hear that you've made it independent of OMEinsum.jl so it can be shared across packages.

I think it would be best to put this in https://github.com/mtfishman/ITensorNetworks.jl for now. That is where I am developing a general ITensorNetwork type, and I was planning on exploring new contraction sequence optimization techniques beyond the one we have in ITensors.ContractionSequenceOptimization (which is basically the same as the one underlying TensorOperations.@tensoropt), but it would be nice to just make use of the tools you are developing in OMEinsumContractionOrders.jl.
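
For reference, a minimal sketch of how the built-in optimizer is switched on today (assuming the enable_contraction_sequence_optimization toggle and the contract method on a vector of ITensors; check the docs for your ITensors.jl version):

julia> ITensors.enable_contraction_sequence_optimization()

julia> contract(tensors);  # contracts using an optimized sequence instead of left-to-right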

We can still increase the default_warn_order here; I've gotten some complaints about it, and it is probably too low. 30 seems fine for the reason you give.

mtfishman avatar Jul 12 '22 15:07 mtfishman

Oops, I did not notice the ITensorNetworks.jl package you are developing. It is really nice to hear that you are also interested in contraction order optimization! I read through the package briefly; it seems to contain a lot of additional nice features.

OMEinsumContractionOrders is a lightweight, general-purpose contraction order optimizer based on the most general einsum notation, so it can be used everywhere: ITensors, TensorOperations, OMEinsum, et al. It also contains some additional features, like contraction complexity estimation and dumping/loading the contraction order to/from JSON files. We could also include the standard tensor network representation in OMEinsumContractionOrders in the future, although introducing $\delta$ tensors might produce worse contraction complexity.
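
To illustrate the package-agnostic interface, here is a minimal sketch (EinCode, uniformsize, optimize_code, and writejson/readjson are assumed from the OMEinsumContractionOrders.jl docs):

julia> using OMEinsumContractionOrders

julia> code = OMEinsumContractionOrders.EinCode([['i','j'], ['j','k'], ['k','l']], ['i','l']);  # matrix chain i-j, j-k, k-l

julia> sizes = uniformsize(code, 2);  # every index has dimension 2

julia> optcode = optimize_code(code, sizes, TreeSA(ntrials=1));

julia> timespacereadwrite_complexity(optcode, sizes);  # log2 complexity estimates

julia> writejson("order.json", optcode); optcode2 = readjson("order.json");  # round-trip via JSON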

I think the most convenient way to do this is:

  1. We keep developing better contraction order optimizers in OMEinsumContractionOrders, or extend its optimize_code interfaces in another package. We could also include @Jutho's algorithm there. I have read his code before; it is a very good algorithm for optimizing small-scale tensor networks. Also, it is well written and should be easy to migrate. The author of JunctionTrees.jl, @mroavi, is also considering implementing Tamaki's algorithm: https://drops.dagstuhl.de/opus/volltexte/2021/13781/.
  2. It is straightforward to include OMEinsumContractionOrders in ITensors as a dependency. The additional code mainly consists of converting a vector of ITensors to an einsum representation (the rawcode function) and then converting it back to an ITensorNetwork with the decorate function (a rough sketch of the first conversion is given after this list). It should be quite reliable, given its simplicity. The code in OMEinsumContractionOrders is well tested and is used heavily in GenericTensorNetworks.jl, which I have been developing for quite a while.
  3. You can keep developing more features in ITensorNetworks. As for ITensors.ContractionSequenceOptimization, we should retire it one day, or forward it to better algorithms.
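
For concreteness, here is a hypothetical sketch of what the ITensor-to-einsum conversion could look like. This is not the PR's actual rawcode; rawcode_sketch and its integer-labeling scheme are made up for illustration:

using ITensors, OMEinsumContractionOrders

function rawcode_sketch(tensors::Vector{ITensor})
    # assign an integer label to each distinct Index
    labels = Dict{Index,Int}()
    getlabel(i) = get!(labels, i, length(labels) + 1)
    ixs = [[getlabel(i) for i in inds(t)] for t in tensors]
    # indices appearing exactly once across all tensors are open, so they form the output
    counts = Dict{Int,Int}()
    for ix in ixs, l in ix
        counts[l] = get(counts, l, 0) + 1
    end
    iy = sort!([l for (l, c) in counts if c == 1])
    sizes = Dict(l => dim(i) for (i, l) in labels)
    return OMEinsumContractionOrders.EinCode(ixs, iy), sizes
end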

It is definitely not an easy decision; I am looking forward to discussing this more with you later in person!

GiggleLiu avatar Jul 12 '22 16:07 GiggleLiu

Sorry for the slow response about this, @GiggleLiu. I've made a repository here: https://github.com/mtfishman/ITensorContractionSequences.jl. Could you move the PR there?

mtfishman avatar Aug 17 '22 16:08 mtfishman

Sure, sounds good!

GiggleLiu avatar Aug 30 '22 09:08 GiggleLiu

@mtfishman Can you give me write permission to that repo? It will make it easier for me to create a PR, thanks!

GiggleLiu avatar Aug 30 '22 17:08 GiggleLiu

Since the contraction order optimization is not going to be a part of the ITensors main repo, and it is much more convenient to develop in a repo with write permission, I have put the contraction order code here instead: https://github.com/GiggleLiu/ITensorContractionOrders.jl. I am open to moving the code to the ITensor org for better maintenance.

Thank you for the very helpful discussion, @mtfishman!

GiggleLiu avatar Sep 02 '22 09:09 GiggleLiu