scream
Exascale global atmosphere model written in C++ as part of the E3SM project
https://github.com/E3SM-Project/scream/blob/3005a4b8ef245c06f0a769a87c3c85bf0c29e519/components/eamxx/src/physics/rrtmgp/CMakeLists.txt#L127 The issue is that NetCDF is already handled at the top level by CIME, but RRTMGP then tries to find it again (and not in an optimal way)....
Since the work to get scream master running on Frontier, I've been testing various cases and noticed that some of them are not always BFB (bit-for-bit reproducible). So far, I have not...
With scream master, I've been trying some cases on Frontier. About 10% of the cases (I can get a better estimate of the frequency) hang at what looks like the same location....
For frontier-scream-gpu, I have been trying some experiments and I think we should change the following: The -c flag to srun should be the number of HW threads grouped for...
It is currently run by _all_ ranks, each trying to write the same file. It is buggy at best. Also, we should plot the "global" layout of a var, not...
Currently, when restarting a run, the default behavior is to restart all output streams. The user can prevent this by adding
```
Restart:
  Perform Restart: false
```
to the output...
It just occurred to me that, in the case of multiple output streams, we may be re-computing the same diagnostic quantities multiple times. If the same diag is needed in two...
EAM has the ability to create an initial condition file at a specified frequency via the `inithist` namelist variable. It would be useful to implement similar functionality in SCREAM in order...
On two nodes of Frontier (16 GPUs total), the following configuration
```
COMPSET=F2010-SCREAMv1
RES=ne4pg2_ne4pg2
```
produces the error
```
Kokkos::TeamPolicy< HIP > the team size is too large. Team size...
```
The topmost layers in EAMxx are occupied by the sponge layer, and most/all of our analysis is in the troposphere, so maybe we could save substantial disk space...