cf-python
Support for generating a set of tracking-ids from a slicing operation into an aggregation.
Consider the following use case:
- A cf aggregation points to 365 daily files, each of which has a high-resolution 3D grid for a variable for 24 hours.
- A user does a `cf.within` (or any other sort of valid slice) into that aggregation to extract a mean value of a particular variable over a week.
- The calculation will touch seven files. We can think of those seven files as the necessary data to reproduce the calculation, so these are the digital artifacts we want to save for reproduction, identify in a workflow, and cite in a paper.

(This is obviously a trivial case; it gets more interesting if, say, these are calculations carried out across ensembles from multiple institutions.)
The feature request is that:

- the aggregation metadata includes the tracking-ids (if they are present) in such a way that the same `cf.within` (or other slice) can return a set of tracking-ids which can be added to a list of "provenance sources" ... (so potentially a series of cf calculations can generate a list of all the files needed for reproduction and/or citation);
- cf-python supports the use of the slicing operation so that it does do this (a rough sketch of the intended usage pattern follows this list).
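To make the request concrete, here is a rough sketch of the kind of usage pattern being asked for, under the trivial use case above. The file name, the dates and the `tracking_ids()` accessor are hypothetical illustrations only; `cf.read`, `subspace`, `cf.wi`, `cf.dt` and `collapse` are existing cf-python calls.

```python
import cf

# Hypothetical sketch of the requested workflow: 'aggregation.nc', the
# dates and the tracking_ids() accessor are illustrative only.
f = cf.read('aggregation.nc')[0]   # year-long aggregation of daily files

provenance_sources = set()

# Slice one week out of the aggregation and take its time mean.
week = f.subspace(T=cf.wi(cf.dt('2000-01-01'), cf.dt('2000-01-08')))
weekly_mean = week.collapse('T: mean')

# The request: the same slice should also be able to report the
# tracking-ids of the (seven) fragment files it actually touched.
provenance_sources.update(week.tracking_ids())  # hypothetical accessor
```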
Thanks @bnlawrence, great write-up relating to what we all discussed today.
No comments as yet, since I will need time to think this over and study some background, but just FYI for now, I'm ~~creating a cf-store label~~ assigning the 'CFA' label to mark these issues so we can pick them out more easily from the Issue Tracker, etc.
This is resolvable with https://github.com/NCAS-CMS/cfa-conventions/issues/41. With this change to the CFA conventions, cf-python could automatically create auxiliary coordinate constructs from any non-standardised aggregation metadata, making it available for slicing. With `cf.read` creating this new auxiliary coordinate construct, all of the cf-python machinery kicks in unchanged. E.g. if we were to read the file from the new CFA example 1b:
>>> f = cf.read('example_1b.nc')[0] # aggregated array has 12 months split over two files
>>> f.coord('long_name=tracking_id')
<CF AuxiliaryCoordinate: long_name=tracking_id(12, 1, 73, 144) >
# Each element of "f" has a tracking_id, but there are only two different values
>>> print(f.coord('long_name=tracking_id').array[:, 0, 0, 0])
['764489ad-7bee-4228' '764489ad-7bee-4228' '764489ad-7bee-4228'
 '764489ad-7bee-4228' '764489ad-7bee-4228' '764489ad-7bee-4228'
 'a4f8deb3-fae1-26b6' 'a4f8deb3-fae1-26b6' 'a4f8deb3-fae1-26b6'
 'a4f8deb3-fae1-26b6' 'a4f8deb3-fae1-26b6' 'a4f8deb3-fae1-26b6']
# Find unique tracking IDs
>>> print(f.coord('long_name=tracking_id').data.unique())
<CF Data(2): [764489ad-7bee-4228, a4f8deb3-fae1-26b6]>
# Find unique tracking IDs corresponding to a subspace:
>>> g = f.subspace(T=cf.wi(cf.dt('1959-12-01'), cf.dt('1960-03-01')))
>>> print(g.coord('long_name=tracking_id').data.unique())
<CF Data(1): [764489ad-7bee-4228]>
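As a follow-on, the unique tracking-ids of the subspace could be pulled out into a plain Python list for a provenance record. A minimal sketch continuing from `g` above (the `provenance_sources` name is just illustrative):

```python
# Collect the unique tracking ids of the subspace 'g' into an ordinary
# Python list that could be written to a provenance record.
ids = g.coord('long_name=tracking_id').data.unique()
provenance_sources = [str(i) for i in ids.array.flatten()]
print(provenance_sources)
# ['764489ad-7bee-4228']
```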
Memory- and storage-wise, this is cheap, because each fragment's tracking ID array will be a cf.FullArray instance, which just stores the scalar value common to that fragment. However, when we come to get the (unique) values, the array will be expanded in memory into the full shape of the subspace. This will be managed by dask, though, so it will always work, but it would not be as efficient as we might imagine.
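For a rough stand-alone illustration of that behaviour, here is a sketch using plain dask.array rather than the cf-python internals, with the shapes and tracking-ids borrowed from the example above:

```python
import dask.array as da

# Each fragment contributes a constant-valued lazy array of its own
# tracking id, analogous to a cf.FullArray holding a single scalar.
frag_a = da.full((6, 1, 73, 144), '764489ad-7bee-4228', dtype='U18')
frag_b = da.full((6, 1, 73, 144), 'a4f8deb3-fae1-26b6', dtype='U18')
tracking = da.concatenate([frag_a, frag_b], axis=0)  # shape (12, 1, 73, 144)

# Finding the unique values forces the constant arrays to be realised,
# but dask evaluates them chunk by chunk, so the operation always
# completes even though it is not as cheap as the lazy form suggests.
print(da.unique(tracking).compute())
# ['764489ad-7bee-4228' 'a4f8deb3-fae1-26b6']
```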
This is all implemented in #630.
Closing now that #630 is merged.