xMIP
CMCC-ESM2 wmo needs unit conversion to kg/s
Just FYI for other users of wmo: the CMCC-ESM2 wmo data in Pangeo Cloud is in m3/s and needs to be multiplied by 1026 kg/m3 to match other models that are actually in kg/s.
import intake
from xmip.preprocessing import combined_preprocessing  # xmip was formerly cmip6_preprocessing

# Load the CMCC-ESM2 wmo field from the Pangeo CMIP6 cloud catalog
cat_url = "https://storage.googleapis.com/cmip6/pangeo-cmip6-noQC.json"
col = intake.open_esm_datastore(cat_url)
cat = col.search(table_id='Omon', experiment_id='historical',
                 variable_id='wmo', grid_label='gn',
                 member_id='r1i1p1f1', source_id='CMCC-ESM2')
cmcc_wmo = cat.to_dataset_dict(
    zarr_kwargs={'consolidated': True, 'decode_times': True, 'use_cftime': True},
    preprocess=combined_preprocessing,
    aggregate=False)

# Plot the surface level of the first time step
plot_wmo = cmcc_wmo['CMIP.CMCC.CMCC-ESM2.historical.r1i1p1f1.Omon.wmo.gn.gs://cmip6/CMIP6/CMIP/CMCC/CMCC-ESM2/historical/r1i1p1f1/Omon/wmo/gn/v20210127/.nan.20210127.good.none.none'].isel(lev=0, time=0)
plot_wmo['wmo'].plot.contourf()
# Same query for IPSL-CM6A-LR, whose wmo is in kg/s, for comparison
cat_url = "https://storage.googleapis.com/cmip6/pangeo-cmip6-noQC.json"
col = intake.open_esm_datastore(cat_url)
cat = col.search(table_id='Omon', experiment_id='historical',
                 variable_id='wmo', grid_label='gn',
                 member_id='r1i1p1f1', source_id='IPSL-CM6A-LR')
ipsl_wmo = cat.to_dataset_dict(
    zarr_kwargs={'consolidated': True, 'decode_times': True, 'use_cftime': True},
    preprocess=combined_preprocessing,
    aggregate=False)

plot_wmo = ipsl_wmo['CMIP.IPSL.IPSL-CM6A-LR.historical.r1i1p1f1.Omon.wmo.gn.gs://cmip6/CMIP6/CMIP/IPSL/IPSL-CM6A-LR/historical/r1i1p1f1/Omon/wmo/gn/v20180803/.nan.20180803.good.none.none'].isel(lev=0, time=0)
plot_wmo['wmo'].plot.contourf()
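In the meantime, the workaround is a simple rescaling. A minimal sketch, assuming the cmcc_wmo dictionary from the snippet above and the 1026 kg/m3 reference density mentioned at the top (the name cmcc_ds is only illustrative):

# Hypothetical workaround: rescale the CMCC-ESM2 wmo field from m3/s to kg/s
rho0 = 1026  # reference seawater density in kg/m3
cmcc_ds = cmcc_wmo['CMIP.CMCC.CMCC-ESM2.historical.r1i1p1f1.Omon.wmo.gn.gs://cmip6/CMIP6/CMIP/CMCC/CMCC-ESM2/historical/r1i1p1f1/Omon/wmo/gn/v20210127/.nan.20210127.good.none.none']
cmcc_ds['wmo'] = cmcc_ds['wmo'] * rho0  # attrs are dropped by the multiplication
cmcc_ds['wmo'].attrs['units'] = 'kg s-1'  # record the corrected units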
An errata ticket has been opened and the datasets are available at esgf-node2.cmcc.it. Eventually the datasets will be updated across nodes. :)
Thanks for raising this issue @jdldeauna. We have discussed this offline, but I will just repeat things here for openness: I think that fixing issues like this is in scope for cmip6_pp! I think this package should provide two things:
- A way to apply arbitrary 'fixes' based on the exact instance_id (which includes the version, and thus prevents a fix like this from being applied to an updated dataset!). I think that an implementation of this should be carried out as part of a refactor to using a more general database like daops as a 'backend' for cmip6_preprocessing; a rough sketch of what this could look like is shown after this list.
- It would be nice to have some way of checking individual datasets for existing ERRATA issues. This is something discussed in #149.
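To make the first point a bit more concrete, here is a rough sketch of what an instance_id-keyed fix registry could look like. None of this is existing cmip6_preprocessing/xmip API: the registry dict, the helper functions, and the exact instance_id string (built from the v20210127 version that appears above) are illustrative assumptions.

import xarray as xr

def fix_cmcc_wmo_units(ds: xr.Dataset) -> xr.Dataset:
    """Convert wmo from m3/s to kg/s using a 1026 kg/m3 reference density."""
    ds = ds.copy()
    ds['wmo'] = ds['wmo'] * 1026
    ds['wmo'].attrs['units'] = 'kg s-1'
    return ds

# Hypothetical registry: keyed on the *full* instance_id, including the version,
# so the fix is never applied to a later, corrected publication of the dataset.
DATASET_FIXES = {
    'CMIP6.CMIP.CMCC.CMCC-ESM2.historical.r1i1p1f1.Omon.wmo.gn.v20210127': fix_cmcc_wmo_units,
}

def apply_known_fixes(ds: xr.Dataset, instance_id: str) -> xr.Dataset:
    """Apply a registered fix if one exists for this exact dataset version."""
    fix = DATASET_FIXES.get(instance_id)
    return fix(ds) if fix is not None else ds

Keying on the full instance_id rather than on source_id/variable_id alone is what makes a fix like this stop applying automatically once a corrected version is published.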
+1 on finding a way to solve this within this package's scope; that would be super useful functionality, and I can think of some other examples.