Gridap.jl
Possibility of doing all computations using `Float32` for performance
These are some notes from my discussions with @santiagobadia and @amartinhuertas regarding the option to use `Float32`
everywhere. Currently there are places where `Float64`
is hard-coded, especially in the quadratures.
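For context, `QuadGK.jl` already supports building quadrature rules in a chosen floating-point precision: its `gauss` function accepts an optional type argument, e.g. `gauss(Float32, n)` returns the n-point Gauss-Legendre nodes and weights on (-1, 1) as `Float32` vectors:

```julia
using QuadGK: gauss

# 4-point Gauss-Legendre rule on (-1, 1) in single precision
x, w = gauss(Float32, 4)

eltype(x) === Float32  # nodes are Float32
sum(w) ≈ 2             # weights integrate the constant 1 over (-1, 1)
```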
- For example, we can see in `/src/ReferenceFEs/TensorProductQuadratures.jl` that `T = Float64` is hard-coded in the `_tensor_product_legendre(degrees)` function. (The Gauss quadrature from `QuadGK.jl`, which is used here, can handle `Float32`.)
- Both
`CartesianDiscreteModel{D,T,F}` and `UnstructuredDiscreteModel{Dc,Dp,Tp,B}` carry a floating-point type parameter, `T` and `Tp` respectively, which can be propagated into the quadrature constructors. But we note that their supertype `DiscreteModel`
does not have this type parameter.
- Even if the model itself is in `Float64`, we can offer the option of a `Float32` quadrature, for cases where we are not using an FE basis but only using `Gridap` as a means to integrate some external functions.
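To illustrate how the hard-coded type could be lifted, here is a minimal sketch of a type-generic tensor-product Gauss-Legendre rule. The function name, signature, and node-count heuristic are illustrative only, not Gridap's actual implementation; the sketch assumes QuadGK's `gauss(T, n)` method shown above:

```julia
using QuadGK: gauss

# Hypothetical type-generic tensor-product Gauss-Legendre rule (sketch).
# T is the floating-point type of coordinates and weights.
function tensor_product_legendre(::Type{T}, degrees::NTuple{D,Int}) where {T<:AbstractFloat,D}
  # A rule with n points is exact for degree 2n-1, so n ≥ (d+1)/2
  npoints = map(d -> ceil(Int, (d + 1) / 2), degrees)
  # 1D rules on (-1,1), rescaled to the reference segment (0,1)
  quads = map(npoints) do n
    x, w = gauss(T, n)
    (x .+ 1) ./ 2, w ./ 2
  end
  # Tensor product of the 1D nodes and weights
  coords  = vec([ntuple(i -> quads[i][1][I[i]], D) for I in CartesianIndices(npoints)])
  weights = vec([prod(i -> quads[i][2][I[i]], 1:D) for I in CartesianIndices(npoints)])
  coords, weights
end

# Usage: a Float32 rule of degree (2, 2) on the unit square
pts, w = tensor_product_legendre(Float32, (2, 2))
eltype(w) === Float32  # true
sum(w) ≈ 1             # weights sum to the area of the unit square
```

Since the precision is dispatched on the type argument, a model's `T` (or `Tp`) parameter could be forwarded here directly, and a user-facing keyword could override it for the external-integration use case.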