USECASE: Third Party Aggregator
In the list of TEA use cases, I see a gap: third-party aggregators may want to generate an offline cache or a backup of documents. For example, some systems collect caches of CVEs by using dedicated APIs to pull batches of records. Some people may devise additional methods for deep analysis that require custom data stores, or that would otherwise tax the remote system. Other groups may want to create backups to guard against data loss from disasters or from the source provider "pulling the plug."
While the API as it exists allows a sufficiently smart system to query for all the data, I do not think it allows for efficient bulk exports or for incremental updates in the form of "changed since" queries.
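To make the gap concrete, here is a minimal sketch of the kind of incremental pull an aggregator would need. This is purely illustrative: the endpoint path and the `modifiedSince`, `pageOffset`, and `pageSize` parameters are hypothetical and do not exist in the current API, which is exactly the point.

```typescript
// Hypothetical sketch only: TEA does not currently define a bulk-export or
// "changed since" endpoint. The base URL, the /products path, and the
// modifiedSince / pageOffset / pageSize parameters are assumptions.
const TEA_BASE = "https://tea.example.com/v1"; // hypothetical server

interface Page<T> {
  items: T[];
  totalResults: number;
}

// Pull every record modified after `since`, page by page, so an aggregator
// can refresh its offline cache without re-downloading the full data set.
async function fetchChangedSince<T>(path: string, since: Date): Promise<T[]> {
  const results: T[] = [];
  const pageSize = 100;
  for (let offset = 0; ; offset += pageSize) {
    const url = new URL(`${TEA_BASE}${path}`);
    url.searchParams.set("modifiedSince", since.toISOString()); // assumed parameter
    url.searchParams.set("pageOffset", String(offset));
    url.searchParams.set("pageSize", String(pageSize));

    const resp = await fetch(url);
    if (!resp.ok) throw new Error(`TEA query failed: ${resp.status}`);

    const page = (await resp.json()) as Page<T>;
    results.push(...page.items);
    if (page.items.length === 0 || offset + page.items.length >= page.totalResults) break;
  }
  return results;
}

// Example: refresh everything changed in the last 24 hours.
// fetchChangedSince("/products", new Date(Date.now() - 24 * 60 * 60 * 1000));
```

Without something along these lines, an aggregator's only option is to re-enumerate and re-fetch every record on each refresh.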
Note that, regardless of whether this functionality is implemented, it raises the spectre of false propagation of artefacts: one storehouse hosting different information about a product than another storehouse does.
That's a great point, but I don't believe we should try to tackle this problem or add it to the list of use cases. There are several other projects dealing with software attestation and verification (in particular, checking that specific artifacts can be trusted; notably in-toto, SCITT, and others). TEA can store those artifacts in its collections and thus facilitate verification, but I don't think we should attempt to propose our own mechanisms for attestation and verification.
Currently, TEA is focused on first-party serving of artifacts (either directly or via a contracted TEA service provider), where the main problem we're tackling is artifact management and discovery. I think we should stick to this scope for the time being.