global-workflow
GFSv16.3.? - GLDAS updates
Description
NCO/SPA Justin Cooke is opening several bugzillas related to updates needed for the GLDAS job in operations. This came out of recent issues in operations related to missing CPC gauge data.
Justin's comments:
1) We saw references in the code to an h2 emc space for this input data too:
export CPCGAUGE=${CPCGAUGE:-/lfs/h2/emc/global/noscrub/emc.global/dump}
Production jobs should not reference emc disk spaces. We'll be opening up a Bugzilla ticket for that.
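As a rough sketch of the fix (assuming the usual `${VAR:-default}` pattern used throughout the scripts), the in-script default would point at the ops com space suggested later in this issue, with dev runs overriding it from config.base.emc.dyn:

```shell
# Sketch only: default CPCGAUGE to the ops com path (suggested below in this
# issue) instead of the EMC noscrub space; a dev run overrides this by
# exporting CPCGAUGE from config.base.emc.dyn before the script runs.
export CPCGAUGE=${CPCGAUGE:-/lfs/h1/ops/prod/com/gfs/v16.3}
echo "CPCGAUGE=$CPCGAUGE"
```

A dev run would export `CPCGAUGE=/lfs/h2/emc/global/noscrub/emc.global/dump` in config.base.emc.dyn, so the ops default is never dereferenced outside production.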
2) If the CPC data is missing, the job will fail; the warning message needs to be changed. Currently it is:
if [ ! -s $cpc ]; then
echo "WARNING: GLDAS MISSING $cpc, WILL NOT RUN."
exit 3
fi
It needs to be:
if [ ! -s $cpc ]; then
echo "FATAL ERROR: GLDAS MISSING $cpc, WILL NOT RUN."
exit 3
fi
3) This job also runs at 06, 12, 18Z, but at those times it just reports this message:
0.319 + echo 'GLDAS only runs for 00 cycle; Skip GLDAS step for cycle 18'
GLDAS only runs for 00 cycle; Skip GLDAS step for cycle 18
Why does the job exist for those cycles?
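From the log line above, the script presumably contains a cycle gate along these lines (a sketch, not the actual source):

```shell
# Assumed cycle gate producing the log message quoted above; cyc is the
# two-digit cycle hour. The real script presumably exits 0 here, so the
# 06/12/18Z jobs run only to print this message and quit.
cyc=18
if [ "$cyc" != "00" ]; then
  echo "GLDAS only runs for 00 cycle; Skip GLDAS step for cycle $cyc"
fi
```

If that is the only thing the job does at 06/12/18Z, removing it from those cycles' suites (as proposed below) is cleaner than burning a job slot to print a skip message.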
Target version
v16.3.?? (TBD)
Expected workflow changes
Initial suggested changes:
- So that EMC developers retain the ability to run outside of ops and use the dump data we store in the global dump archive (that default EMC space), we should set
export CPCGAUGE=/lfs/h2/emc/global/noscrub/emc.global/dump
in the dev-only config.base (config.base.emc.dyn). I am fine with Jiarui's suggestion to change the default in the script to something like /lfs/h1/ops/prod/com/gfs/v16.3. We can pass our dump archive path via the override and our config.base setting.
- I agree with updating the error message. Let's get that changed to Justin's suggestion.
- For ecflow in ops we can just remove the GLDAS job from the 06/12/18 job families and adjust job dependencies for those cycles to not wait for that job (the analysis job triggers).
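As a rough sketch of that ecflow change (family and task names here are hypothetical, not the operational suite definitions), the 06/12/18Z families would simply omit the gldas task and the downstream trigger would no longer reference it:

```
# Hypothetical 18Z family after the change: no gldas task, and the
# analysis task's trigger does not wait on it.
family atmos
  task jgfs_atmos_analysis
    trigger ./jgfs_atmos_prep == complete   # gldas dependency dropped
endfamily
```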