extract the tarball while downloading
Pipe the data to an instance of `tar` and extract as we download.
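A minimal sketch of the streaming idea, assuming Deno (which tea is built on); the URL and destination are hypothetical:

```ts
// Stream the HTTP body straight into `tar` so extraction overlaps the
// download. Everything here is a sketch: URL and destination are made up.
const rsp = await fetch("https://dist.example.com/pkg.tar.gz");
if (!rsp.ok || !rsp.body) throw new Error(`download failed: ${rsp.status}`);

const tar = new Deno.Command("tar", {
  args: ["xzf", "-", "-C", "/tmp/pkg-staging"],
  stdin: "piped",
}).spawn();

await rsp.body.pipeTo(tar.stdin); // closes tar's stdin when the body ends
const { success } = await tar.status;
if (!success) throw new Error("tar exited non-zero");
```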
failure cases:
- download doesn't complete
- sig/checksum verification doesn't match
Those seem like risks that will be harder to mitigate than with download-then-extract. Thoughts?
I feel like pipelining of downloads (such that all download+extract steps occur in parallel) is probably "good enough", and in general most tarballs are not as big as e.g. LLVM's.
@jonchang do you mean running downloads of multiple packages in parallel but executing download/verify/unpack in series per package, or do you mean download+unpack in a stream, and verify afterwards for any given package?
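For the second reading, a rough sketch of stream-then-verify (hypothetical URL and checksum; the hash branch buffers in memory here for brevity, a streaming digest would avoid that):

```ts
// Tee the body: one branch feeds tar, the other is hashed, and we compare
// against the expected checksum once both finish.
const expected = "…hypothetical sha256 hex…";
const rsp = await fetch("https://dist.example.com/pkg.tar.gz");
if (!rsp.ok || !rsp.body) throw new Error(`download failed: ${rsp.status}`);
const [tarBranch, hashBranch] = rsp.body.tee();

const tar = new Deno.Command("tar", {
  args: ["xzf", "-", "-C", "/tmp/pkg-staging"],
  stdin: "piped",
}).spawn();

const [status, digest] = await Promise.all([
  tarBranch.pipeTo(tar.stdin).then(() => tar.status),
  new Response(hashBranch).arrayBuffer()
    .then((buf) => crypto.subtle.digest("SHA-256", buf)),
]);

const hex = [...new Uint8Array(digest)]
  .map((b) => b.toString(16).padStart(2, "0")).join("");
if (!status.success || hex !== expected) {
  await Deno.remove("/tmp/pkg-staging", { recursive: true }); // delete on failure
  throw new Error("verification failed");
}
```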
> download doesn't complete

Deleting the extraction is sufficient, I assume.

> sig/checksum verification doesn't match

Same…? Unless I'm missing something.
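Concretely, both failure cases could share one cleanup path: extract into a staging directory and only rename it into place on success. A hedged sketch, with a hypothetical helper standing in for the streaming step above:

```ts
// Both failure modes end the same way: the staging dir is deleted and
// nothing ever lands at the final path. The helper name is hypothetical;
// it would throw on a short download or a checksum mismatch.
declare function downloadExtractVerify(url: string, dir: string): Promise<void>;

async function install(url: string, dest: string): Promise<void> {
  const staging = await Deno.makeTempDir({ prefix: "tea-extract-" });
  try {
    await downloadExtractVerify(url, staging);
    await Deno.rename(staging, dest); // atomic if on the same filesystem
  } catch (err) {
    await Deno.remove(staging, { recursive: true });
    throw err;
  }
}
```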
My goal here is for us to be suppppper fast.
Parallel download/verify/extract is a next step, but I'm worried about the UI for that, so I plan to do it myself, since I wrote the current TUI.
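Ignoring the TUI question, a minimal sketch of that parallelism, where `installOne` is a hypothetical stand-in for the per-package download→verify→extract pipeline:

```ts
// Run whole per-package pipelines concurrently, capped at `limit` workers
// so we don't open unbounded connections. `installOne` is hypothetical.
declare function installOne(pkg: string): Promise<void>;

async function installAll(pkgs: string[], limit = 4): Promise<void> {
  const queue = [...pkgs];
  const workers = Array.from({ length: limit }, async () => {
    for (let pkg = queue.shift(); pkg; pkg = queue.shift()) {
      await installOne(pkg);
    }
  });
  await Promise.all(workers);
}
```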
I suppose I'm worried about the case where we're replacing existing/working bins, but that's mostly only on build, so deleting the target directory might be sufficient.
> replacing existing/working bins

tea can't do this currently. Build is certainly different, and I'd like us to work better here, but I don't currently have any great solutions (bar two tea prefixes, which is maybe the best bet, but a little tricky to engineer).
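To make the two-prefix idea concrete, the tricky bit is roughly an atomic symlink swap; a hedged sketch only, all paths and the install helper hypothetical:

```ts
// Install into the inactive prefix, then repoint `current` atomically, so
// running binaries in the old prefix are never overwritten in place.
declare function installInto(prefix: string): Promise<void>; // hypothetical

const A = "/opt/tea/prefix-a";
const B = "/opt/tea/prefix-b";
const current = "/opt/tea/current"; // PATH resolves through this symlink

const active = await Deno.readLink(current);
const inactive = active === A ? B : A;
await installInto(inactive);
await Deno.symlink(inactive, `${current}.new`);
await Deno.rename(`${current}.new`, current); // atomic swap on POSIX
```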
Also, we should eventually be able to resume downloads that fail; not too hard in this modern HTTP world. Currently we cannot, and in fact “trying again” fails and deletes the tarball.
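e.g. a sketch using an HTTP Range request (assumes the server honours Range; path handling simplified):

```ts
// Resume by asking for the bytes after what we already have; if the server
// ignores Range (plain 200), start over instead of deleting the partial file.
async function download(url: string, dest: string): Promise<void> {
  let offset = 0;
  try {
    offset = (await Deno.stat(dest)).size;
  } catch {
    // no partial file yet
  }
  const rsp = await fetch(url, {
    headers: offset ? { Range: `bytes=${offset}-` } : {},
  });
  if (!rsp.ok || !rsp.body) throw new Error(`download failed: ${rsp.status}`);
  if (rsp.status !== 206) offset = 0; // server sent the whole file

  const file = await Deno.open(dest, {
    write: true,
    create: true,
    truncate: offset === 0,
  });
  if (offset) await file.seek(offset, Deno.SeekMode.Start);
  await rsp.body.pipeTo(file.writable); // closes the file when done
}
```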
Targeting this for 0.25.