Off-line use: Support usage with outdated mirrors
Some data centers are run completely firewalled off from (or physically cut off from) the public internet, relying on local mirrors of all necessary data. In such setups, the frequently expiring timestamps (which are quite appropriate for the public internet) are unsuitable, because refreshing the local mirrors that frequently would be too much of an operational burden.
So, it should be possible (but not the default) to configure a grace period for signature expiration, especially for timestamp.json but probably for the other files as well. This grace period should be per-repository, or perhaps per-registry, so that images created and signed within the off-line data center can still benefit from strict expiration enforcement.
The configuration mechanism for #395 would be a likely candidate location for also configuring this timestamp expiration grace period.
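To make the idea concrete, here is a rough sketch of how such a grace period might be applied when validating timestamp expiry; all of the names below are hypothetical and do not correspond to existing notary code.

```go
// Hypothetical sketch only; these names are not part of notary today.
package tufexpiry

import "time"

// ExpiryConfig would be loaded from the same per-repository configuration
// mechanism discussed for #395.
type ExpiryConfig struct {
	// TimestampGracePeriod extends the window during which an already
	// expired timestamp.json is still accepted; zero means strict
	// enforcement (the current behaviour and the default).
	TimestampGracePeriod time.Duration
}

// timestampStillAcceptable reports whether a timestamp whose "expires"
// field may already be in the past can still be used under the grace period.
func timestampStillAcceptable(expires time.Time, cfg ExpiryConfig, now time.Time) bool {
	return now.Before(expires.Add(cfg.TimestampGracePeriod))
}
```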
@diogomonica, @NathanMcCauley and I discussed this scenario a while back, and one idea was to have these "mirrors" re-sign all content loaded into them with their own keys. There would be significant work to make that possible, but what are your thoughts on that solution?
Potentially some hybrid might work well, where the mirror substitutes its own root and timestamp keys and the client has a way to specifically trust known targets keys. In that scenario, the mirror would be able to timestamp but not modify targets, because the client would have a pinned targets key. This feels like a significant amount of work, but I have a feeling it'll be relevant to the general delegations work too, where a user may want to trust only certain delegations within a TUF repo.
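If it helps the discussion, here is a purely illustrative sketch of the client-side piece of that hybrid model; the type and field names are invented for this comment and do not exist in notary.

```go
// Illustrative only; these types are not part of notary.
package trustpin

// HybridMirrorTrust describes the split trust of the hybrid model: the
// mirror is accepted as the signer of root.json and timestamp.json, while
// the targets role must still verify against pinned upstream keys.
type HybridMirrorTrust struct {
	// MirrorRootCert identifies the certificate the client accepts for the
	// mirror's re-signed root and timestamp metadata.
	MirrorRootCert string
	// PinnedTargetsKeyIDs maps a GUN to the upstream targets key IDs the
	// client insists on, so the mirror can re-timestamp but cannot alter
	// the tag -> digest mapping.
	PinnedTargetsKeyIDs map[string][]string
}
```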
I think it is very valuable to have end-to-end integrity and authorship guarantees, so I was thinking more along the lines of finding ways to deliver the existing signatures by original authors to the off-line data center and ultimately to clients within the data center.
Re-signing loses the end-to-end authorship guarantee (or, more precisely, introduces one more trusted party into the model). The hybrid model you propose would be a way around that, at the cost of a more confusing trust model (we can no longer just say “the repository’s certificate”; users have to know about the root and targets keys and the difference between them).
In either case, AFAICS the benefit of re-signing is mostly in the ability to have current, non-expired timestamp signatures. But that is fairly meaningless within the off-line data center: even if the signature were legitimately expired and the clients should reload targets.json to learn about a newer tag mapping and new image versions, that newer targets.json and the newer images are not available anyway. If the data center is pulling new images from the public internet once a week, then for all intents and purposes nothing about the images or their signatures can change in the middle of the week, so more frequent expiration is not useful.
So, it seems natural to me to give up on the frequent timestamp expiration if it buys us the ability to keep using the original end-to-end signatures.
(FWIW, re-signing in mirrors should AFAICS not be too much work: instead of doing it within a server process on the mirror, have a mirroring cron job re-sign everything as if a client were copying the images into a private repository and publishing signatures within that private repository. I.e. the client would load the upstream targets.json, create a changelist to generate an equivalent tag mapping, and then simply notary publish. This can work with a completely ordinary Notary server as it exists today, if we can configure the clients to use the mirror's URL and to bootstrap trust in the re-signed images. But again, I don’t find this kind of automatic mirroring particularly useful. (Re-signing as an indication of manual approval of an image for use does make sense, but that probably needs a different kind of automation.))
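To illustrate that such a cron job would not be much work, here is a minimal sketch that just wraps the existing CLI (notary list, notary addhash, notary publish). The GUN, server URLs and trust directories are placeholders, and the parsing assumes the current tabular notary list output, so treat it as a starting point rather than a finished tool.

```go
// Sketch of a mirroring cron job that re-signs upstream targets with the
// mirror's own keys by shelling out to the notary CLI.
package main

import (
	"bufio"
	"bytes"
	"log"
	"os/exec"
	"strings"
)

const (
	gun         = "example.com/app"                // placeholder GUN
	upstreamURL = "https://notary.docker.io"       // upstream notary server
	mirrorURL   = "https://notary.mirror.internal" // placeholder mirror server
	upstreamDir = "/var/lib/mirror/trust-upstream" // trust dir bootstrapped against upstream
	mirrorDir   = "/var/lib/mirror/trust-mirror"   // trust dir holding the mirror's own keys
)

func notary(args ...string) ([]byte, error) {
	cmd := exec.Command("notary", args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	return out.Bytes(), err
}

func main() {
	// 1. Read the upstream tag -> digest mapping (the upstream targets.json).
	out, err := notary("-s", upstreamURL, "-d", upstreamDir, "list", gun)
	if err != nil {
		log.Fatalf("listing upstream targets: %v\n%s", err, out)
	}

	// 2. Re-create an equivalent changelist against the mirror repository.
	scanner := bufio.NewScanner(bytes.NewReader(out))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// Expect rows of NAME, DIGEST, SIZE (BYTES), ROLE; skip headers/separators.
		if len(fields) < 4 || fields[0] == "NAME" || strings.HasPrefix(fields[0], "-") {
			continue
		}
		name, digest, size := fields[0], fields[1], fields[2]
		if msg, err := notary("-s", mirrorURL, "-d", mirrorDir,
			"addhash", gun, name, size, "--sha256", digest); err != nil {
			log.Fatalf("adding %s: %v\n%s", name, err, msg)
		}
	}

	// 3. Publish, signing with the mirror's own keys.
	if msg, err := notary("-s", mirrorURL, "-d", mirrorDir, "publish", gun); err != nil {
		log.Fatalf("publishing: %v\n%s", err, msg)
	}
}
```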
One of the things re-signing the root in particular might provide is tracking of who brought the data into the offline data center. Not sure if that's worth having?
Tracking who brought the data in might be useful, but that aspect is not unique to Notary (if this is something that needs to be tracked, the same problem needs to be solved for every image and text file brought in), and specifically for Docker images I’d expect that information to be already recorded in the configuration management tool (e.g. Puppet) used to deploy and run the mirrors. There is no such substitute for the end-to-end authorship-proving signatures.
(The root.json signature format does technically allow adding extra signatures to e.g. record sign-off, but the TUF code would just ignore them, so recording both authorship and approved import in the file is technically possible but not implemented. Alternatively, clients could in principle be implemented to require tags to be signed by two different signers, pulling signatures from two different Notary servers. Or, finally, TUF signatures can be extracted and used the way detached GPG or X.509 signatures are, so we can have multiple signatures for a single image that way.
All of this can in principle be done if anyone cares to implement it, but all of these approaches will need to somehow deal with off-line scenarios where the timestamp signatures expire faster than signatures can be mirrored, so we do need some kind of support for expiration grace periods.)
What's the current status of this?
@nfrush There's a scenario for this in the v2 project requirements.