Splitting RequirementPreparer
Welcome to another edition of Pradyun dumps his thoughts in an issue to get inputs from actually smart humans!
Currently, `RequirementPreparer` has two "jobs": fetch+unpack a requirement and generate metadata.
It does this using:

- `unpack_url` from `pip._internal.download`
- `Distribution` objects from `pip._internal.distributions`
There is some additional functionality that it provides: mainly, it calls `req.archive`, which is used in `pip download`. I think there's benefit to splitting out all these operations.
Given that `InstallRequirement.archive` skips the generated `pip-egg-info` directory, I think it's reasonable to move the archive generation code so it runs before metadata is generated.
This would result in a behavior change that I'm okay with -- we'd create an archive before calling egg_info, when using pip download. I'm not sure if this affects setuptools_scm, but based on my rudimentary understanding, it shouldn't. And if we really care a lot, I think we'd be better off moving the logic for the archiving into pip download, so that we can maintain the separation between these stages. We should probably be doing that anyway since in the more-correct resolver model, we'd only want to archive whatever is in the final set. Anyway, I'm calling it "not required" right now so I won't be making that change.
With that change, all the fetching related logic would happen before metadata generation. That'll allow splitting RequirementPreparer into RequirementFetcher and MetadataGenerators. This in turn would make it so that we can also introduce abstraction-style objects between these stages if we want to. I'm open to exploring that based on how the refactor here goes.
My understanding is that we can get away with making the MetadataGenerators just functions. In the future, we could make them transform some kind of FetchedCandidate into a Distribution object. For now, they'll consume an InstallRequirement, do whatever we're doing today, and return the same object. This change also leaves me unsure what we'd want to do with Distribution objects (they have the build isolation code and call the metadata generation code in InstallRequirement) but, hey, one step at a time. I'll look into that once this "stage" is done.
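To make the shape of that split concrete, here's a minimal sketch of what the two stages could look like if metadata generation is just a function over `InstallRequirement`. The names `fetch_requirement`, `generate_metadata`, and `prepare` are placeholders invented for illustration, not actual pip code:

```python
# Placeholder sketch only -- function names and wiring are invented for
# illustration; the real refactor would live inside pip._internal.
from pip._internal.req.req_install import InstallRequirement


def fetch_requirement(req: InstallRequirement) -> InstallRequirement:
    """Download + unpack only (today's unpack_url side of RequirementPreparer)."""
    ...  # network / unpack logic, nothing metadata-related
    return req


def generate_metadata(req: InstallRequirement) -> InstallRequirement:
    """Generate metadata for an already-fetched requirement.

    For now this consumes and returns the same InstallRequirement; later it
    could turn some FetchedCandidate into a Distribution instead.
    """
    ...  # today's metadata-generation / Distribution logic
    return req


def prepare(req: InstallRequirement) -> InstallRequirement:
    # The stages run strictly in sequence, so pip download's archiving can
    # happen between them, before any metadata is generated.
    return generate_metadata(fetch_requirement(req))
```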
> And if we really care a lot, I think we'd be better off moving the logic for the archiving into pip download, so that we can maintain the separation between these stages.
I agree with this approach in general - currently a lot of what makes things complicated are pip download, pip wheel, and hash checking spread across at least download and RequirementPreparer. If we were able to do these operations at the end (by operating on a return value from Resolver.resolve for example) or using a callback/event mechanism then it would let us lift a lot of code from "generic" modules into the more specific command modules.
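Roughly, the shape I have in mind is something like the sketch below; `resolve_requirements`, `check_hash`, and `save_archive` are made-up stand-ins (not pip's actual APIs) just to show where the command-specific work would live:

```python
# Hypothetical shape of "do command-specific work on the resolver's result".
def run_download_command(requirements, dest_dir):
    resolved = resolve_requirements(requirements)    # generic: no download-only logic
    for req in resolved:
        check_hash(req)                              # hash checking stays near the command
        save_archive(req, dest_dir)                  # archiving stays inside `pip download`


def resolve_requirements(requirements):
    """Stand-in for Resolver.resolve(); returns the final requirement set."""
    return list(requirements)


def check_hash(req):
    """Stand-in for checking the fetched artifact against its expected hash."""


def save_archive(req, dest_dir):
    """Stand-in for writing the fetched artifact into dest_dir."""
```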
> In future, we could make them transform some kind of FetchedCandidate into a Distribution object.
We had talked before about having the objects advance themselves (as opposed to being transformed from the outside). I played a little with that; you can see, for example, `RemoteWheel.__next__() -> LocalWheel`, `UnpackedSources.__next__() -> Local*Project`, `RemoteSdist.__next__() -> LocalSdist`.
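For anyone skimming, a tiny sketch of that "advance themselves" shape, with made-up class names and a stubbed-out download step (the linked experiment differs in its details):

```python
# Made-up names; only the shape of the transitions matters here.
from __future__ import annotations
from dataclasses import dataclass


def fetch_to_disk(url: str) -> str:
    """Stand-in for an actual download; returns where the file would land."""
    return "/tmp/" + url.rsplit("/", 1)[-1]


@dataclass
class LocalWheel:
    path: str  # a wheel on disk, ready for metadata to be read from it


@dataclass
class RemoteWheel:
    url: str

    def __next__(self) -> LocalWheel:
        # The object knows how to produce its own next state, instead of an
        # outside component (e.g. RequirementPreparer) transforming it.
        return LocalWheel(fetch_to_disk(self.url))
```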
> We had talked before about having the objects advance themselves
Yep yep. I was mostly just hinting at how these operations would then be the same as the transitions we'd want in that model. The exact interface for how the transitions happen would likely stay the same as what we'd discussed.
> We had talked before about having the objects advance themselves (as opposed to being transformed from the outside).
I haven't seen the full design, but this seems a little too fancy to me. In general, I would go for decoupling things, use simple, straightforward patterns, and favor being explicit over cleverness or implicit behavior.
Looking at this again, there are three things here:
1. Rename `_get_prepared_distribution`, since it's not "preparing" (i.e. download+unpack + generate-metadata), it's only generating metadata. It and everything it calls should likely get renamed accordingly.
2. Factor out the renamed `_get_prepared_distribution` into the callers of `RequirementPreparer`, ideally into a separate public function or a separate class.
3. At this point, the `RequirementPreparer` would basically be a `RequirementFetcher`, so we can likely go through and do the renaming across the board for that.
I think step 2 is going to be the most involved, since that's what involves moving code around -- the rest seems to be renaming things.
I think the thing that makes step 2 complicated is the "partial fetch" logic for --use-feature=fast-deps. I am wondering whether we should drop fast-deps handling, given that:
- We have PEP 658 for directly serving METADATA files from index servers
- PyPI will be implementing PEP 658 in due time.
- `fast-deps` did not provide any significant speedup over the regular usage
If we get rid of that, we'd basically have a much simpler workflow, where the download and metadata generation steps are separate and not intertwined in any way.
We should probably get PEP 658 in first, but that might not be trivial either with the current structure. Not quite sure, I only did a very brief investigation a while ago and may very well have missed obvious approaches.
There's a PR implementing PEP 658, thanks to @cosmicexplorer -- https://github.com/pypa/pip/pull/11111
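For context, the client-side effect of PEP 658 is roughly the following illustration (the URL is hypothetical and error handling is omitted): the index serves the wheel's METADATA file at the same URL with `.metadata` appended, so dependencies can be read without downloading the wheel at all.

```python
import requests  # illustrative; pip itself goes through its vendored network stack

wheel_url = "https://files.example.org/some_pkg-1.0-py3-none-any.whl"  # hypothetical
metadata = requests.get(wheel_url + ".metadata").text  # PEP 658 metadata file

requires_dist = [
    line.split(":", 1)[1].strip()
    for line in metadata.splitlines()
    if line.startswith("Requires-Dist:")
]
print(requires_dist)
```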
> I am wondering whether we should drop fast-deps handling, given that:
In general, I support this, since I didn't spend the time a year+ ago to follow up on fast-deps and demonstrate the speedup I found in my first few tests with that technique on the v1 resolver (sorry!). I think, in general, that alternate execution modes should have some proven benefit in order to be retained.
Alternative: Extend fast-deps to sdists
However, I would like to complicate this and mention an alternative I am currently investigating, which is to instead extend the fast-deps handling to support sdists as per #8929.
EDIT: Upon reviewing this comment from my original proposal to do this https://github.com/pypa/pip/issues/8929#issuecomment-699909805, it's clear that this approach will not work for most sdists. Please consider this an off-topic remark; sorry for the noise.
As I described in https://github.com/pypi/warehouse/pull/9972#issuecomment-1237191485, the main reason we would really want to consider this approach is if PEP 658 support within PyPI continues to be blocked on review. However, another benefit of this approach (if a speedup can be reliably demonstrated) would be to speed up resolves against outdated PyPI instances, or even find_links repos (the latter of which is used within e.g. Twitter to serve Python code).
Additionally (and I just realized this part a few hours ago), I'm also thinking that if we enable fast-deps for sdists as well, we could also expose a mode which allows resolving dependencies without executing setup.py files, which would finally close #1884.
Summary
In short, for the main line of pip development, I give my support to removing the fast-deps handling if it helps unblock this workstream. However, especially after noticing:
- the extended difficulty that PEP 658 has undergone in getting merged to warehouse,
- the request for a PR to address #1884,
I'm thinking there may be a stronger case than before for instead extending fast-deps to cover sdists, to get both a perf improvement (maybe) + security improvement (I think) for resolves against any current python package repo, for users willing to enable an experimental flag. I understand that I would still need to demonstrate this technique works, but I would like to know whether that PR would be likely to be accepted if I created it.
I hid my previous comment since it wasn't feasible, but I wanted to restate that I think removing fast-deps handling until it can be shown to improve performance is probably a good idea if it simplifies any of this. I'll continue experimenting with that technique on my own, but that shouldn't block any of this work.
Based on https://github.com/pypa/pip/issues/11447 and https://github.com/pypa/pip/issues/8670, my guess is that it's still useful -- if a little poorly implemented.
I do think that it's worth exploring whether we should get rid of it once PEP 658 is implemented on PyPI though.
A huge % of wheels will have the entire .dist-info directory in the last 8-10 kb of the zip archive. I checked this once but don't have the measurement on hand.
> A huge % of wheels will have the entire `.dist-info` directory in the last 8-10 kb of the zip archive.
I might be misunderstanding you, but this exact realization indeed led directly to the current --use-feature=fast-deps implementation. It was surprising that that alone wasn't enough to produce a useful speedup without other changes such as parallel downloads, and I regret overselling the potential speedup of fast-deps when it was first proposed.
@cosmicexplorer the lazier lazy-wheel makes many fewer requests! Check it out! We avoid the HEAD request and inspect zipinfo to make a single read (translating to a single request) for the entire .dist-info. Almost 4s faster, but I have a fast connection and very low latency to fastly.
Less-lazy lazy wheel wastes a HEAD request, and ZipFile makes a couple of tiny reads for each file's inline header (not at the end of the file) if we don't prefetch.
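As a rough illustration of the trick being discussed (not pip's actual lazy_wheel code): fetch only the tail of the wheel with an HTTP Range request and read `.dist-info` out of it. This assumes the server honours range requests and that both the zip central directory and the `.dist-info` entries fit in the fetched tail; real code needs fallbacks when they don't.

```python
import io
import zipfile

import requests  # illustrative; pip itself uses its vendored network stack

wheel_url = "https://files.example.org/some_pkg-1.0-py3-none-any.whl"  # hypothetical
tail = 10 * 1024  # the "last 8-10 kb" mentioned above

resp = requests.get(wheel_url, headers={"Range": f"bytes=-{tail}"})
resp.raise_for_status()

# zipfile copes with the missing leading bytes as long as the central
# directory and every member we actually read are inside the buffer.
with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
    metadata_name = next(
        name for name in zf.namelist() if name.endswith(".dist-info/METADATA")
    )
    print(zf.read(metadata_name).decode())
```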
https://github.com/pypa/pip/issues/8670#issuecomment-1267230135
This work was part of https://github.com/conda-incubator/conda-package-streaming; we can do a great job on `.conda` (which is a ZIP) and `.tar.bz2` conda files by hanging up the connection once we've seen what we're interested in.
Folks, I'd like to suggest moving further discussion on lazy wheels to #8670 -- it's unrelated to the refactoring that is on-topic for this issue. :)