
Idea: MixedTensor

Open KnutAM opened this issue 2 years ago • 9 comments

In some cases, a tensor could have mixed bases, e.g. a 2×3 tensor. This PR is a first tryout that is currently not very performant, but I think that can be "easily" fixed by using the same code-generation strategies as for the existing Tensor type. The motivation came from looking at CellVectorValues in Ferrite: for the special interpolations that were recently discussed, it might be good to have, e.g., 3d shape functions for 2d coordinates or vice versa. @termi-official / https://github.com/Ferrite-FEM/Ferrite.jl/issues/569 / CellRTValues?

Some examples and ideas

julia> x = Vec{2}((0.5,0.5));
julia> N(x) = Vec{3}((1-x[1], x[2], x[1]*x[2]));
julia> ∂N∂x = gradient(N, x)
3×2 MixedTensor{2, (3, 2), Float64, 6}:
 -1.0  -0.0
  0.0   1.0
  0.5   0.5
julia> ∂N∂x' ⋅ ∂N∂x
2×2 Tensor{2, 2, Float64, 4}:
 1.25  0.25
 0.25  1.25

The point of the last example is to show that the result should always be eagerly converted back to a regular Tensor whenever possible, i.e. when allequal(dims) (all bases are the same).
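
As a minimal sketch of that eager fallback (the helper name _maybe_regular is hypothetical, not from the PR):

# Fall back to a regular Tensor when all bases coincide.
function _maybe_regular(dims::NTuple{order,Int}, data) where {order}
    if allequal(dims)                          # all bases equal ⇒ no mixing needed
        return Tensor{order, first(dims)}(data)
    else
        return MixedTensor{order, dims}(data)  # genuinely mixed bases
    end
end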

Parameterization

Currently, the type is parameterized as MixedTensor{order, dims, T, M}, where dims is checked during construction to be an NTuple{order,Int}. I'm not sure what the best option is here; SArray uses Tuple{dims...} (e.g. Tuple{2,3}). Perhaps a more general approach would be to define a basis type, e.g.

struct CartesianBase{dim} end  # a dim-dimensional Cartesian basis
struct TensorBasis{B<:NTuple{<:Any,CartesianBase}}
    bases::B  # one basis per tensor index
end

But this makes construction less convenient, e.g.

MixedTensor{2, TensorBasis((CartesianBase{2}(), CartesianBase{3}()))}((i,j)->i*j)

compared to the current

MixedTensor{2, (2,3)}((i,j)->i*j)
2×3 MixedTensor{2, (2, 3), Int64, 6}:
 1  2  3
 2  4  6

KnutAM avatar Dec 26 '22 12:12 KnutAM

Codecov Report

Patch coverage: 52.53% and project coverage change: -3.89% :warning:

Comparison is base (996043c) 96.78% compared to head (1557caa) 92.89%. Report is 3 commits behind head on master.

:exclamation: Current head 1557caa differs from pull request most recent head 5ad0648. Consider uploading reports for the commit 5ad0648 to get more accurate results


Additional details and impacted files
@@            Coverage Diff             @@
##           master     #188      +/-   ##
==========================================
- Coverage   96.78%   92.89%   -3.89%     
==========================================
  Files          17       18       +1     
  Lines        1338     1394      +56     
==========================================
  Hits         1295     1295              
- Misses         43       99      +56     
Files Changed                      Coverage Δ
src/Tensors.jl                     94.44% <ø> (ø)
src/mixed_tensors.jl               44.61% <44.61%> (ø)
src/automatic_differentiation.jl   97.83% <88.00%> (-1.21%) :arrow_down:
src/tensor_products.jl             100.00% <100.00%> (ø)

... and 3 files with indirect coverage changes


codecov-commenter avatar Dec 26 '22 15:12 codecov-commenter

This one would be really great to have! As a first note, we do not need this for H(div) and H(curl) spaces. To elaborate on where I would use such an embedding: I sometimes have problems with surface flows, surface reaction-diffusion systems, or reaction-diffusion systems in "bioelectrical wires", which are all embedded in 3D. However, unlike solid mechanics problems, it is possible to treat these as if they were actual 1D/2D elements (because what happens in the radial/normal direction can be neglected).

Another question here: this should already (or almost already) work for mixed derivatives, right? It turns out that for multiphysics problems (and sometimes also mechanics problems with internal variables) we need partial derivatives of the form $\partial_F \partial_s \phi$ and $\partial_s \partial_F \phi$, where $s$ is some field or internal variable, so this would be quite handy to have in Tensors.jl. If not, I will try to find some time for a follow-up PR to implement such derivatives. :)
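
For illustration, a hedged sketch of what such a mixed second derivative could look like once supported; the potential ϕ and all values below are made up:

using Tensors
ϕ(F, s) = norm(F)^2 * s[1]   # toy potential of a 3×3 tensor F and a 1d field s
F = one(Tensor{2,3})
s = Vec{1}((0.5,))
# The inner gradient is a Vec{1}; differentiating it w.r.t. F would naturally
# yield a mixed 3rd-order object with dims (1, 3, 3) on the proposed branch.
∂²ϕ = gradient(F -> gradient(s -> ϕ(F, s), s), F)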

Edit 1: To also give some explanation of the CellRTValues: if I understand the implementation correctly, this should be a so-called boundary element method, where you solve the problem only on a discretization of the boundary of your domain. Hence your elements naturally have codimension 1. I need more context from the original author on the exact formulation, because I have no experience with embedded H(curl) and H(div) problems.

termi-official avatar Dec 29 '22 13:12 termi-official

Nice to hear! Thanks for the comments!

The MixedTensor type allows taking derivatives between two tensors with different dimensions, so as long as your state variables are tensorial quantities, this should work. If you would like to take derivatives w.r.t. user-defined types, I'm not sure how to design the abstraction layer so that the computational efficiency of Tensors.jl is retained.
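
As a concrete illustration of that (a hypothetical sketch, not code from the branch):

using Tensors
# A 2nd-order, 3d-valued function of a 2nd-order, 2d argument. On the
# MixedTensor branch this derivative would be 4th order with dims (3, 3, 2, 2).
f(A) = Tensor{2,3}((i, j) -> A[1,1] * i + j)   # toy map from 2d to 3d space
∂f∂A = gradient(f, one(Tensor{2,2}))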

I have not included (in this PR) support for orders other than the current 1, 2, and 4. (I don't see any technical difficulties in implementing 3rd order as well; it will just require a bit of code to become as performant, and sufficient testing.)

At this stage, this is just an idea, so I wouldn't count on it being merged any time soon. One potential issue is increased compilation time due to the large number of possible type combinations.

KnutAM avatar Dec 29 '22 19:12 KnutAM

Is the precompilation increase really that significant? If this is a problem for users, wouldn't the actual solution be to build a sysimage?

termi-official avatar Dec 30 '22 11:12 termi-official

Is the precompilation increase really that significant? If this is a problem for users, wouldn't the actual solution be to build a sysimage?

Ah, right - I was worried about a "combinatorial type explosion", but as long as nothing is explicitly precompiled, I suppose this should only lead to performance degradation for type-unstable code, which code using Tensors should normally avoid anyway? (Of course there is also a slight degradation in compilation time, but that is likely negligible, as you say.)

@KristofferC or @fredrikekre: would you like such a feature in Tensors? If so, I'll think more about the appropriate interface/parameterization. Once the interface is set, this branch can be used for experimenting with Ferrite and similar. If it turns out to be useful, I'll try to find some hobby-programming time to make it performant enough to be worthy of Tensors.jl :)

KnutAM avatar Dec 30 '22 12:12 KnutAM

Random comment: if we don't want to create a new type MixedTensor, I think it could be possible to dispatch on the type of the type parameter order:

struct Tensor{order, dims, T, M, order_t} <: AbstractTensor{order, dims, T}
    data::NTuple{M, T}  # same backing storage as the current Tensor
end

const MixedTensor{order, dims, T, M} = Tensor{order, dims, T, M, Tuple{Int,Int}}

I'm not sure if there would be any major benefit with this, though.
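
That said, one small (hypothetical) illustration of what the extra parameter could buy, assuming the struct sketch above:

# MixedTensor is just a more specific Tensor under this alias, so methods can
# target the mixed case without introducing a new wrapper type:
ismixed(::Tensor) = false       # generic fallback for all tensors
ismixed(::MixedTensor) = true   # the more specific alias wins dispatch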

lijas avatar Sep 11 '23 06:09 lijas

Also, MixedTensors could be beneficial when working with shells (and beams), where we have two surface coordinates (ξ and η) in 3d space, and it is quite common to want to take the gradient $\mathrm{d}x/\mathrm{d}\Xi$, which would be a 3×2 tensor. Currently I just store them as two vectors, dx/dξ and dx/dη, instead (which also works fine).
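
A hedged sketch of that use case with the proposed type; the surface parameterization x below is a made-up example:

using Tensors
x(Ξ) = Vec{3}((Ξ[1], Ξ[2], Ξ[1] * Ξ[2]))   # toy map: 2 surface coordinates → 3d space
Ξ = Vec{2}((0.3, 0.7))
J = gradient(x, Ξ)   # on the MixedTensor branch: the 3×2 Jacobian dx/dΞ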

lijas avatar Sep 11 '23 06:09 lijas

Great that it can be useful for more cases!

After talking to @fredrikekre earlier, a possible way forward is to implement only the functionality required for specific cases and consider the mixed tensor an in-development feature. In Ferrite, that means we can have shape gradients etc. output mixed tensors instead of the StaticArrays that are currently returned.

possible to dispatch on the type of type-parameter order

Does that mean that the regular tensor would change to, e.g., Tensor{order,dim,T,M,Nothing} or something like that? (In that case, I guess one could also have Tensor{order,dim,T,M,:regular} and Tensor{order,dim,T,M,:mixed}, but I, too, am not sure if that has any advantage over having a separate type.) Please let me know if I am missing something.

KnutAM avatar Sep 11 '23 07:09 KnutAM

I agree, it makes sense to view mixed tensors (and 3rd-order tensors) as in-development, and not to implement all operations/features for them at once. For example, with shells I have wanted 3rd-order tensors basically just for storage (of higher-order gradients), not for doing multiplication, addition, or other operations with them.

lijas avatar Sep 11 '23 08:09 lijas