Gridap.jl

Possibility of doing all computations using `Float32` for performance

Open kishore-nori opened this issue 2 years ago • 0 comments

These are some notes from my discussions with @santiagobadia and @amartinhuertas regarding the option of using `Float32` everywhere. Currently there are places where `Float64` is hard-coded, especially in the quadratures.

  • For example, in `/src/ReferenceFEs/TensorProductQuadratures.jl`, `T = Float64` is hard-coded into the `_tensor_product_legendre(degrees)` function. (The Gauss quadrature from QuadGK.jl, which is used here, can handle `Float32`.)
  • Both `CartesianDiscreteModel{D,T,F}` and `UnstructuredDiscreteModel{Dc,Dp,Tp,B}` carry a floating-point type parameter (`T` and `Tp`, respectively), which could be propagated into the quadrature constructors. Note, however, that their supertype `DiscreteModel` does not have this type parameter.
  • Even if the model itself is in `Float64`, we could offer the option of a `Float32` quadrature, for cases where we are not using an FE basis but only using Gridap as a means to integrate some external functions.
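To illustrate the QuadGK.jl point above, here is a small standalone sketch (not Gridap code; the variable names are mine) showing that both QuadGK's Gauss-Legendre rules and its adaptive integration already work natively in `Float32`, so nothing in the underlying quadrature machinery forces `Float64`:

```julia
using QuadGK

# Gauss-Legendre nodes and weights on [-1, 1] in single precision:
# QuadGK.gauss accepts the element type as its first argument.
x, w = gauss(Float32, 4)
@show eltype(x), eltype(w)   # both Float32

# Adaptive integration of an "external" function in single precision:
# Float32 endpoints make the whole computation Float32.
I, err = quadgk(sin, 0f0, Float32(pi))
@show typeof(I)              # Float32; I ≈ 2
```

A quadrature constructor could therefore take the element type as an extra parameter (defaulting to `Float64` for backwards compatibility) and simply forward it to `gauss`.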

kishore-nori · May 26 '22 01:05