Add 'loose dependency checking' option
We build a Linux distribution and, due to bootstrap requirements, currently use a different tool, https://gitlab.com/rossburton/picobuild, to build Python modules. Now that lots of packages have moved to flit, the bootstrap is easier, so I'm looking at abandoning picobuild and moving to build.
Remember that this is a traditional distribution being built, so there's no build isolation and a single set of site-packages.
Some packages have troubling requirements, such as pytest-forked:
requires = ['setuptools ~= 41.4', 'setuptools_scm ~= 3.3', 'wheel ~= 0.33.6']
We have setuptools 65.5.1, so with current build this dependency check fails. Picobuild has a --loose-depends option which transforms ~= into >= for packages like this, for distribution builds where we can't control the dependency versions per package.
I could pass --skip-dependency-check, but that skips all checking, which is a rather heavy hammer. Would you consider adding something like --loose-depends for distro builds?
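For reference, the rewrite that --loose-depends performs is roughly the following. This is a minimal sketch using the packaging library; the loosen function name is mine, not picobuild's:

```python
# Sketch: relax compatible-release pins ("~=") into minimum bounds (">=").
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet

def loosen(req_string: str) -> str:
    req = Requirement(req_string)
    specs = [
        f">={spec.version}" if spec.operator == "~=" else str(spec)
        for spec in req.specifier
    ]
    req.specifier = SpecifierSet(",".join(specs))
    return str(req)

for r in ["setuptools ~= 41.4", "setuptools_scm ~= 3.3", "wheel ~= 0.33.6"]:
    print(loosen(r))
# -> setuptools>=41.4, setuptools_scm>=3.3, wheel>=0.33.6
```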
I think this is sufficiently rare that it makes more sense for you to patch the requires on your side (and possibly raise an issue upstream, as I seriously doubt they want to be using setuptools 41 exactly).
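For example, the distro-side patch could be as small as relaxing the pins in pytest-forked's build requirements, along the lines of:

requires = ['setuptools >= 41.4', 'setuptools_scm >= 3.3', 'wheel >= 0.33.6']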
Also, if you are controlling the build by disabling isolated builds, that's what --skip-dependency-check is for. In pip, dependency checking is even opt-in instead of opt-out for non-isolated builds, which I prefer, since most of the time the requirements here are specifically for isolated builds. Things like cmake and ninja usually can't be satisfied by non-isolated builds. Pinned versions are almost never correct, as you've seen. And so on. Authors usually assume they will get anything they want in isolation in requires=.
I very much like having some dependency checking when doing builds, as there are many situations where missing dependencies silently change behaviour (for example, I've seen a package that uses setuptools-scm-git-archive change its version number if that plugin isn't present), so disabling them entirely isn't always ideal.
Note that setuptools has been having a lot of compatibility issues lately. I would say that if something states it needs a specific setuptools version, then it probably really does. And setuptools_scm is probably pinned to be compatible with setuptools. I don't see a good reason why wheel would be pinned; it's typically super-stable.
Recent versions of wheel are breaking users who import it directly - the public API of wheel was never intended to be used, so anyone importing it is likely to be broken eventually. ninja's and cmake's builds are broken by the latest wheel, for example. Though you normally don't need to depend on wheel directly, and setuptools declares it via PEP 517 anyway.
Anyway, there were some problems with a few releases of setuptools, but most of the issues have been fixed; some packages might need an environment variable to choose between local and stdlib distutils (SETUPTOOLS_USE_DISTUTILS), but other than that, new setuptools tends to work. And old setuptools doesn't support newer versions of Python, so capping it is asking for breakage.
Anyway, you certainly can't use the dependency check in a lot of other cases (for example, if a dependency is not actually required, or if it can be fulfilled another way, such as by a command-line tool), and this is one of them. I'd recommend disabling the dependency check and manually making sure the build works. Personally, I don't think it's worth complicating the API just to have a custom way to ignore version pinning/capping. (And you can read what I think about that here anyway - if this causes push-back to remove the limits, all the better :) ).
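If it helps, a distro build could also skip the built-in check and run its own, looser one on top of build's Python API. A minimal sketch, assuming ProjectBuilder.check_dependencies (which reports unsatisfied build dependencies and leaves it to the caller to decide which mismatches are fatal):

```python
# Sketch: run the dependency check manually instead of via the CLI,
# so the caller can decide which unsatisfied requirements matter.
import build

builder = build.ProjectBuilder(".")  # path to the source tree
for chain in builder.check_dependencies("wheel"):
    # each entry is a tuple of requirement strings; the last one is unmet
    print("unsatisfied:", " -> ".join(chain))
```

You could then, for instance, treat a ~= mismatch as a warning rather than a hard failure.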
I just bumped into a similar but different case with filelock, where the author pins to the latest possible version (and newer) of setuptools because that is the only one they have tested. We live in an interesting world.