Unintuitive `ncon` result when scalar
Noted by @leburgel:
When contracting a network that results in a scalar, `ncon` currently does not insert a final call to `tensorscalar`. As a result, the following (unintuitive) behaviour occurs:
```julia
julia> using TensorOperations

julia> A = rand(2,2);

julia> ncon([A, A], [[1, 2], [1, 2]])
0-dimensional Array{Float64, 0}:
1.42602
```
It is probably easiest to just catch this case and wrap the result in `tensorscalar`; since `ncon` is inherently type-unstable anyway, this should not make a huge difference.
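For reference, below is a minimal sketch of what that could look like as a user-side wrapper (the helper name `ncon_scalar` is hypothetical and not part of TensorOperations):

```julia
using TensorOperations

# Hypothetical sketch of the proposed behaviour, not an existing API.
# In the ncon convention, negative labels denote open indices, so a
# network without any negative labels contracts to a scalar.
function ncon_scalar(tensors, network; kwargs...)
    result = ncon(tensors, network; kwargs...)
    has_open = any(l -> l < 0, Iterators.flatten(network))
    return has_open ? result : tensorscalar(result)
end
```

With this, `ncon_scalar([A, A], [[1, 2], [1, 2]])` returns a plain `Float64` rather than a 0-dimensional array.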
While it doesn't make a big difference, it is technically a breaking change. Other than that, I have no strong feelings: knowing that the return type is always an array (or, more generally, a tensor) is at least somewhat consistent, but since the compiler infers `Any` anyway, this makes no difference in practice.
My main argument would be that `@tensor` does in fact automatically insert `tensorscalar`, so this change would improve consistency in that regard:
```julia
julia> using TensorOperations

julia> A = rand(2,2);

julia> ncon([A, A], [[1, 2], [1, 2]])
0-dimensional Array{Float64, 0}:
1.60394

julia> @tensor A[1 2] * A[1 2]
1.60394
```
As a side note, we could in principle make this "type-stable" by promoting the `output` kwarg to an optional positional argument and supplying an `Index2Tuple`. While this would not really affect performance, we could then at least assert the output type. As a double side note, this would also be useful for making `TensorMap`s come out with the desired codomain and domain.
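As a rough illustration of that idea, here is a hypothetical wrapper assuming plain `Array` inputs (neither `ncon_typed` nor this positional signature exist in TensorOperations; `Index2Tuple` is imported explicitly in case it is not exported):

```julia
using TensorOperations
using TensorOperations: Index2Tuple

# Hypothetical sketch, not an existing method. With the open (negative)
# labels supplied as an Index2Tuple, the number of output dimensions
# N₁ + N₂ is known from the argument types, so the result type can be
# asserted even though ncon itself infers `Any`.
function ncon_typed(tensors, network, output::Index2Tuple{N₁,N₂};
                    kwargs...) where {N₁,N₂}
    T = promote_type(map(eltype, tensors)...)
    # ncon's existing `output` kwarg takes the desired order of the
    # open labels; here we flatten the two halves of the Index2Tuple.
    result = ncon(tensors, network;
                  output=(output[1]..., output[2]...), kwargs...)
    return result::Array{T,N₁ + N₂}
end
```

For a `TensorMap`, the two halves of the `Index2Tuple` would then directly specify the desired codomain and domain.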
All of this because writing `@tensor` expressions for W×H×D PEPOs is apparently hard :grin: