
Should the TUF update workflow eventually succeed even if non-volatile storage writes always fail?

Open erickt opened this issue 3 years ago • 3 comments

In https://github.com/heartsucker/rust-tuf/pull/304, I'm extending rust-tuf to fail an update if writing to non-volatile storage fails. However, in https://github.com/heartsucker/rust-tuf/pull/304/files#r512292123, @wellsie1116 noticed an odd consequence of the update workflow: even though the workflow fails on each storage write, we still advance our in-memory TUF trust database on each update attempt. So:

  • update attempt 1:
    • we trust 1.root.json
    • 5.1, we fetch a new root metadata 2.root.json, and set it as our trusted root.
    • we try and fail to write 1.root.json to non-volatile storage, and so we fail the update
  • update attempt 2:
    • 5.1, we try to fetch 3.root.json, but it doesn't exist
    • 5.2, we fetch timestamp.json, and set it as our trusted timestamp.
    • we try and fail to write timestamp.json to non-volatile storage, and so we fail the update.
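The two attempts above can be sketched as a small simulation (all names here are hypothetical, not rust-tuf's API): the in-memory trust database keeps advancing across attempts even though every persistence call fails and every update is reported as failed.

```python
# Illustrative sketch of the surprising behavior described above.
# All names are hypothetical; this is not rust-tuf's actual API.

class UpdateError(Exception):
    pass

class Client:
    def __init__(self, remote_root_versions):
        self.trusted_root = 1             # we start out trusting 1.root.json
        self.trusted_timestamp = None
        self.remote = remote_root_versions  # root versions the server offers

    def persist(self, name):
        # In this scenario, non-volatile storage writes always fail.
        raise UpdateError(f"cannot write {name} to non-volatile storage")

    def update(self):
        # Step 5.1: try to fetch the next root version.
        nxt = self.trusted_root + 1
        if nxt in self.remote:
            self.trusted_root = nxt            # trust DB advances in memory...
            self.persist(f"{nxt}.root.json")   # ...then the update fails
        else:
            # Step 5.2: no newer root exists; fetch and trust timestamp.
            self.trusted_timestamp = "timestamp.json"
            self.persist("timestamp.json")

client = Client(remote_root_versions={2})

# Attempt 1: 2.root.json is fetched and trusted, then the update fails.
try:
    client.update()
except UpdateError:
    pass
print(client.trusted_root)  # 2

# Attempt 2: 3.root.json doesn't exist, so timestamp.json is fetched and
# trusted before persistence fails again.
try:
    client.update()
except UpdateError:
    pass
print(client.trusted_timestamp)  # timestamp.json
```

Even though both calls to `update()` raise, the client ends the second attempt trusting `2.root.json` and the new timestamp in memory, which is exactly the state drift described in the bullets above.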

And so on. I found this quite surprising, but I'm not sure whether it's an actual problem. It makes me wonder whether it's really worthwhile to fail a metadata update when we can't persist the metadata, rather than signaling to the user "we successfully updated the TUF metadata, but we couldn't persist it because of ...". What do you all think?

erickt avatar Oct 28 '20 17:10 erickt

I think it should fail, yes, because with no persisted memory of previously seen versions, one is technically more susceptible to rollback attacks. What do others think? @mnm678 @JustinCappos
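To make the rollback concern concrete, here is a hedged sketch (my own illustration, not code from rust-tuf or the spec) of the standard version check: a client that could not persist its last trusted timestamp version restarts with no memory, and an attacker can then replay older metadata that a stateful client would have rejected.

```python
# Sketch of why losing persisted state weakens rollback protection.
# accept_timestamp is a simplified stand-in for the spec's version check.

def accept_timestamp(trusted_version, offered_version):
    """Reject metadata older than the version we already trust."""
    if trusted_version is not None and offered_version < trusted_version:
        raise ValueError("rollback detected")
    return offered_version

# With persisted state (we remember trusting version 5), a replayed
# version 4 is caught.
try:
    accept_timestamp(5, 4)
    rolled_back = False
except ValueError:
    rolled_back = True
print(rolled_back)  # True: the rollback is detected

# Without persistence, the client restarts knowing nothing, and the
# stale version 4 is accepted without complaint.
print(accept_timestamp(None, 4))  # 4: the rollback goes unnoticed
```

The check itself is sound; the vulnerability comes purely from `trusted_version` being `None` after a restart, which is what failing to persist metadata produces.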

trishankatdatadog avatar Oct 29 '20 11:10 trishankatdatadog

Hmm, this is interesting. I imagine a scenario where this would happen is when you retrieve a (now very large) targets file but cannot persist it. I'm not fully sure what to think about this. Is this common?

In general, we do assume you can persist metadata. In particular, the snapshot, root, and timestamp are essential. Only the snapshot here will really vary in size as it changes with the number of targets files. The targets files can be discarded if space is needed but this may cause (avoidable) duplicate downloads.

Justin


JustinCappos avatar Nov 10 '20 06:11 JustinCappos

> In general, we do assume you can persist metadata. In particular, the snapshot, root, and timestamp are essential. Only the snapshot here will really vary in size as it changes with the number of targets files. The targets files can be discarded if space is needed but this may cause (avoidable) duplicate downloads.

In most cases, discarding existing targets files should leave enough space for a larger snapshot. However, it's possible (although unlikely) for the snapshot file to be larger than the targets metadata: for example, if a million targets are listed in snapshot, but the delegation is set up so that the top-level targets file lists a single target and delegates everything else. In that case, the client may need to abort the update when they don't have space for the large snapshot file. (Though such a large snapshot file would most likely be due to some kind of DoS attack, so the client would be able to update once the snapshot file is fixed.)
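A rough back-of-the-envelope version of that scenario (every number below is an assumption for illustration, not from the spec): since snapshot must list an entry for each delegated targets metadata file, a million delegations inflate snapshot far beyond a top-level targets file that lists a single target.

```python
# Assumed sizes for illustration only.
BYTES_PER_SNAPSHOT_ENTRY = 100    # rolename + version + length + hashes
num_delegated_roles = 1_000_000

snapshot_size = num_delegated_roles * BYTES_PER_SNAPSHOT_ENTRY
top_level_targets_size = 1_000    # one target plus delegation boilerplate

print(snapshot_size // (1024 * 1024), "MiB of snapshot metadata")  # ~95 MiB
print(snapshot_size > top_level_targets_size)  # True
```

So under these assumptions, discarding cached targets metadata frees almost nothing relative to the snapshot the client must store, which is why the abort-and-retry-later behavior seems like the only safe option here.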

mnm678 avatar Nov 10 '20 18:11 mnm678