`@eval` considered harmful
Trying to use OceanScalingTests.jl from @simone-silvestri as a benchmark for TTFTS (Time-To-First-Time-Step), I came across this delightful error:
```
ERROR: LoadError: Evaluation into the closed module `Grids` breaks incremental compilation because the side effects will not be permanent. This is likely due to some other module mutating `Grids` with `eval` during precompilation - don't do this.
Stacktrace:
[1] eval
@ ./boot.jl:428 [inlined]
[2] allocate_metrics(grid::Oceananigans.Grids.LatitudeLongitudeGrid{Float64, Oceananigans.Grids.Periodic, Oceananigans.Grids.Bounded, Oceananigans.Grids.Bounded, Nothing, Nothing, Float64, Float64, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Oceananigans.DistributedComputations.Distributed{Oceananigans.Architectures.GPU, false, Oceananigans.DistributedComputations.Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}})
@ Oceananigans.Grids ~/.julia/packages/Oceananigans/kBe5X/src/Grids/latitude_longitude_grid.jl:554
[3] with_precomputed_metrics(grid::Oceananigans.Grids.LatitudeLongitudeGrid{Float64, Oceananigans.Grids.Periodic, Oceananigans.Grids.Bounded, Oceananigans.Grids.Bounded, Nothing, Nothing, Float64, Float64, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Oceananigans.DistributedComputations.Distributed{Oceananigans.Architectures.GPU, false, Oceananigans.DistributedComputations.Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}})
@ Oceananigans.Grids ~/.julia/packages/Oceananigans/kBe5X/src/Grids/latitude_longitude_grid.jl:222
[4] Oceananigans.Grids.LatitudeLongitudeGrid(arch::Oceananigans.DistributedComputations.Distributed{Oceananigans.Architectures.GPU, false, Oceananigans.DistributedComputations.Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}, FT::DataType; precompute_metrics::Bool, size::Tuple{Int64, Int64, Int64}, latitude::Tuple{Int64, Int64}, longitude::Tuple{Int64, Int64}, z::Vector{Float64}, topology::Nothing, radius::Float64, halo::Tuple{Int64, Int64, Int64})
@ Oceananigans.DistributedComputations ~/.julia/packages/Oceananigans/kBe5X/src/DistributedComputations/distributed_grids.jl:160
[5] LatitudeLongitudeGrid
@ ~/.julia/packages/Oceananigans/kBe5X/src/DistributedComputations/distributed_grids.jl:106 [inlined]
[6] macro expansion
@ ./show.jl:1230 [inlined]
[7] load_balanced_grid(arch::Oceananigans.DistributedComputations.Distributed{Oceananigans.Architectures.GPU, false, Oceananigans.DistributedComputations.Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}, precision::Type, N::Tuple{Int64, Int64, Int64}, latitude::Tuple{Int64, Int64}, z_faces::Vector{Float64}, resolution::Int64, ::Val{false}, ::Val{:DoubleDrake}; Bottom::Type{Oceananigans.ImmersedBoundaries.GridFittedBottom})
@ OceanScalingTests ~/src/OceanScalingTests.jl/src/grid_load_balance.jl:21
[8] load_balanced_grid(arch::Oceananigans.DistributedComputations.Distributed{Oceananigans.Architectures.GPU, false, Oceananigans.DistributedComputations.Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}, precision::Type, N::Tuple{Int64, Int64, Int64}, latitude::Tuple{Int64, Int64}, z_faces::Vector{Float64}, resolution::Int64, ::Val{false}, ::Val{:DoubleDrake})
@ OceanScalingTests ~/src/OceanScalingTests.jl/src/grid_load_balance.jl:16
[9] scaling_test_simulation(resolution::Int64, ranks::Tuple{Int64, Int64, Int64}, Δt::Tuple{Int64, Float64}, stop_time::Float64; child_arch::Oceananigans.Architectures.GPU, experiment::Symbol, Depth::Int64, latitude::Tuple{Int64, Int64}, restart::String, z_faces_function::typeof(OceanScalingTests.exponential_z_faces), Nz::Int64, profile::Bool, with_fluxes::Bool, with_restoring::Bool, loadbalance::Bool, precision::Type, boundary_layer_parameterization::Oceananigans.TurbulenceClosures.RiBasedVerticalDiffusivity{Oceananigans.TurbulenceClosures.VerticallyImplicitTimeDiscretization, Float64, Oceananigans.TurbulenceClosures.HyperbolicTangentRiDependentTapering})
@ OceanScalingTests ~/src/OceanScalingTests.jl/src/near_global_simulation.jl:64
[10] macro expansion
@ ~/src/OceanScalingTests.jl/src/OceanScalingTests.jl:54 [inlined]
[11] macro expansion
@ ~/.julia/packages/PrecompileTools/L8A3n/src/workloads.jl:78 [inlined]
[12] macro expansion
@ ~/src/OceanScalingTests.jl/src/OceanScalingTests.jl:53 [inlined]
[13] macro expansion
@ ~/.julia/packages/PrecompileTools/L8A3n/src/workloads.jl:140 [inlined]
[14] top-level scope
@ ~/src/OceanScalingTests.jl/src/OceanScalingTests.jl:32
[15] include
@ ./Base.jl:556 [inlined]
[16] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing)
@ Base ./loading.jl:2664
[17] top-level scope
@ stdin:4
in expression starting at /home/vchuravy/src/OceanScalingTests.jl/src/OceanScalingTests.jl:1
in expression starting at stdin:
```
Caused by `@eval`. Note that `@eval` evaluates in the module where it appears, not in the module the user is calling the function from.
This means we are trying to modify `Oceananigans.Grids` after it has already been closed, which is incompatible with precompilation, since Julia is unable to track and restore this modification.
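To make the scoping rule concrete, here is a self-contained sketch (the module and function names are made up for illustration):

```julia
# `@eval` evaluates in the module where it appears in the source,
# not in the module of whoever calls the function.
module Library

export make_global

# This always evaluates in `Library`, regardless of the caller.
make_global(name, value) = @eval $name = $value

end # module

using .Library

make_global(:answer, 42)

Library.answer           # 42 — the new binding landed in Library
isdefined(Main, :answer) # false — nothing was defined in the caller's module
```

This is harmless at the REPL, but if `Library` were a precompiled package and `make_global` ran during a downstream package's precompilation, `Library` would already be closed and you would get exactly the error above.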
My intuition is that you probably just want a dictionary for these kinds of globals, maybe even stored within the model, instead of using global variables (a sketch of what I mean is below):
https://github.com/CliMA/Oceananigans.jl/blob/00f028bb37f13692e24921588aeb8a9150f6dd55/src/Grids/latitude_longitude_grid.jl#L554-L555
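Concretely, a minimal sketch — this is not the actual Oceananigans code, and the metric names and function names are hypothetical stand-ins:

```julia
# Anti-pattern: `@eval` defines module-level globals at runtime. When a
# downstream package reaches this during its own precompile workload, Julia
# refuses to mutate the already-closed defining module and errors out.
function allocate_metrics_eval(FT, dims)
    for name in (:Δxᶠᶜᵃ, :Δyᶜᶠᵃ)         # hypothetical metric names
        @eval $name = zeros($FT, $dims)   # mutates the enclosing module
    end
end

# Alternative: keep the metrics as ordinary data in a dictionary that can
# live next to (or inside) the grid/model, so no module is ever mutated.
function allocate_metrics_dict(FT, dims)
    metrics = Dict{Symbol, Array{FT}}()
    for name in (:Δxᶠᶜᵃ, :Δyᶜᶠᵃ)
        metrics[name] = zeros(FT, dims)
    end
    return metrics                        # e.g. store this in the grid struct
end

metrics = allocate_metrics_dict(Float64, (4, 4))
metrics[:Δxᶠᶜᵃ]                           # a plain Array{Float64, 2}
```

Since the metrics are then plain data attached to the object that uses them, precompilation can cache them like any other field.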
The use case is shown in https://github.com/simone-silvestri/OceanScalingTests.jl/pull/8, where one wants to use PrecompileTools to cache the compiled code for important functions.
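For reference, the pattern looks roughly like this — a minimal sketch where the workload functions are hypothetical stand-ins, not the OceanScalingTests API:

```julia
module MyScalingTests

using PrecompileTools

setup_grid(n) = collect(range(0.0, 1.0; length = n)) # stand-in for grid setup
time_step!(grid) = sum(abs2, grid)                   # stand-in for a time step

@setup_workload begin
    grid = setup_grid(16)      # setup runs at precompile time, but only the
                               # @compile_workload body is the caching target
    @compile_workload begin
        time_step!(grid)       # compiled code for this call graph is cached
                               # into the package image
    end
end

end # module
```

Any `@eval` into an already-closed upstream module that is reached from inside `@compile_workload` fails with exactly the error above.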
@simone-silvestri are you sure it's a good idea to use OceanScalingTests for this purpose? It might be better to put together something longer-term in ClimaOcean.
I just needed an example and I have some old data from last year. I am purely interested in the time it takes until the first time step completes.
The issues I have identified so far with the use of `eval` would occur in any other example as well.
OK, if it's just for a quick test rather than extended work, that makes sense.
Thank you @simone-silvestri
This enables https://github.com/JuliaGPU/GPUCompiler.jl/pull/557#issuecomment-2183674470, which is a great win for running massively parallel simulations.
epic