Bootstrapping problem (how to bootstrap all-source environments)
In this comment, @pfmoore suggests that we may need to re-think `--no-binary`, specifically around wheel but more generally around any or all build dependencies. In pypa/pip#7831, we've essentially concluded that `--no-binary :all:` is unsupported due to bootstrapping issues. As reported in pypa/wheel#344 and pypa/wheel#332, wheel is essentially blocked from adopting any form of PEP 518, or any other tool that requires pyproject.toml (because the presence of a pyproject.toml implies PEP 518), unless wheel can elect to break `--no-binary wheel` installs.
Furthermore, the `backend-path` solution is likely not viable. Even on its face, `backend-path` adds a path to the PYTHONPATH when building, but unless a source checkout of wheel has had `setup.py egg-info` run on it, it won't have the requisite metadata to advertise its distutils functionality.
@pfmoore has suggested that it's not the responsibility of the PyPA or its maintainers to solve the issue, but that the downstream packagers should propose a solution. To that end, should wheel restore support for pyproject.toml and break `--no-binary` workflows, thus breaking the world for downstream packagers and incentivizing them to come up with a solution (that seems inadvisable)? Should wheel (and other use cases) remain encumbered with this (undocumented) constraint in order to force legacy behavior when building wheel, thus removing the incentive to fix the issue (also sub-optimal)?
Perhaps solving the "bootstrapping from source" problem would be a good one for the PSF to sponsor.
> Perhaps solving the "bootstrapping from source" problem would be a good one for the PSF to sponsor.
@brainwane @ewdurbin @di @dstufft ^ for the Packaging-WG "Fundable Packaging Improvements" page.
I'm interested in adding this to the list of fundable packaging improvements, but I need help with wording. Try answering each of these questions in about 20-200 words:
- What is the current situation/context? Example: "Scientists need to install some Python packages via pip and others with conda."
- What ought to be fixed, made, or implemented?
- What problems would this solve, and what new capabilities would it cause?
Anyone who wants to add this to the list of fundable packaging improvements can now submit a pull request on https://github.com/psf/fundable-packaging-improvements .
This issue is more general than just wheel and affects a number of packages beyond setuptools and wheel.
In https://github.com/pypa/setuptools/issues/980, I learned of the issue in build tools. If build tools have dependencies, and those dependencies use the build tool, it's not possible to build the dependencies from source. For example,
- Setuptools depends on appdirs.
- Appdirs is built by setuptools.
An attempt to build/install Setuptools or Appdirs from source rather than wheels (as many platforms prefer to do) will fail.
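To make the failure mode concrete, here is a minimal sketch (mine, not from the thread) of how one might detect such cycles across a set of source checkouts. It walks both `[build-system] requires` and `[project] dependencies`, since the cycle above crosses between build-time and run-time edges:

```python
import re
import tomllib  # stdlib in Python 3.11+
from pathlib import Path

def declared_requirements(checkout: Path) -> list[str]:
    """Collect [build-system] requires plus [project] dependencies;
    a cycle can cross between build-time and run-time edges."""
    pyproject = checkout / "pyproject.toml"
    if not pyproject.is_file():
        return ["setuptools"]  # legacy setup.py project: implicit backend
    data = tomllib.loads(pyproject.read_text(encoding="utf-8"))
    return (data.get("build-system", {}).get("requires", [])
            + data.get("project", {}).get("dependencies", []))

def find_cycle(checkouts: dict[str, Path], start: str, chain=()):
    """Depth-first search; returns the first dependency cycle found, or None."""
    if start in chain:
        return [*chain, start]
    for req in declared_requirements(checkouts[start]):
        # Crude name extraction; a real tool would use packaging.requirements.
        name = re.split(r"[\s<>=!~;\[]", req, maxsplit=1)[0]
        if name in checkouts:
            if cycle := find_cycle(checkouts, name, (*chain, start)):
                return cycle
    return None

# Historical example from this thread:
# find_cycle({"setuptools": Path("checkouts/setuptools"),
#             "appdirs": Path("checkouts/appdirs")}, "appdirs")
# -> ["appdirs", "setuptools", "appdirs"]
```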
Setuptools has worked around this issue by vendoring all of its dependencies, although it has a stated goal to stop vendoring (https://github.com/pypa/setuptools/issues/2825).
However, in https://github.com/python/importlib_metadata/issues/392 today, I've learned of even more places where these circular dependencies can manifest. In this case:
- `setuptools_scm` depends on `importlib_metadata`.
- `importlib_metadata` relies on `setuptools_scm` to build.
So unless setuptools_scm is pulled pre-built, when it attempts to build from source, it also pulls importlib_metadata which requires setuptools_scm to build.
As I've probably stated before, I wish for libraries like setuptools_scm to be able to adopt dependencies at a whim (without users encountering errors).
These new cases lead me to believe the problem is not solvable by downstream packagers except by forcing all build tools (and their plugins) to vendor their dependencies.
I'd like to avoid each of these projects needing to come up with a bespoke hack to work around this issue, and to eliminate this footgun, which will otherwise reduce adoption due to these bootstrapping issues.
> What is the current situation/context? Example: "Scientists need to install some Python packages via pip and others with conda."
The situation arises in any context where one of a project's PEP 518 `build-system.requires` entries has a dependency whose own `build-system.requires` leads back to the original project. In the general case, this situation is not a problem, as the pip resolver can resolve dependencies and pull in their pre-built versions, but when users wish to build from source, the circular dependency prevents building.
> What ought to be fixed, made, or implemented?
First, it's not obvious what needs to be implemented. This project still needs work to explore the problem space and propose options to be considered by the PyPA. It's going to require coordination across many projects and may require a PEP.
A rough sketch of some options to consider:
- Implement a bootstrap handler that will identify and intercept these circular dependencies and provide an alternate way to make the functionality available without building (such as by adding the project's sources root and src/ directory to `sys.path`).
- Require the builders to keep blessed, vendored copies of dependencies of build tools.
- Explicitly disallow building these projects from source and require downstream packagers that wish to bootstrap a system from source to own the problem and devise their own workaround.
- Require build tools not to have circular dependencies. Publish this formal requirement.
> What problems would this solve, and what new capabilities would it cause?
It would solve the class of problems where errors occur when users attempt to build from source and circular dependencies exist among the build dependencies. It would mean that build tools like setuptools and setuptools_scm could naturally depend on other libraries. It would mean that other projects could rely on these build tools without getting reports of failed builds.
> Anyone who wants to add this to the list of fundable packaging improvements can now submit a pull request on https://github.com/psf/fundable-packaging-improvements .
Based on the criteria in the readme there, I don't believe this project is yet ready to be a fundable project. There is not yet consensus or an accepted PEP. On the contrary, the initial feedback has been negative, meaning there's probably a need to develop that consensus first. Maybe the additional use case described above will help nudge us in that direction.
In the meantime, I'm slightly inclined not to work around the issues for users, thus masking the root cause.
I think there's a good fundable project in here, but it's a research project. I think the problem is that we are getting feedback that people want to be able to "build everything from source", but that causes an issue because bootstrapping involves circular dependencies. This sounds pretty much identical to the issues of building a C compiler, so we could probably learn a lot from how people solved that problem. Also, presumably the people wanting to build Python packages from scratch stop at some point - do they need to build the Python interpreter itself from sources, for example?
I'd suggest proposing a research project to find out why people want to build with --no-binary :all:, and in particular what are the precise constraints they are working to. This could involve user surveys, requirements collection, etc. That's one of the reasons I think this would make a good fundable project - the skills needed to do a good job here are not common in the Python packaging community, so hiring specialists would be very cost effective.
The deliverables from such a project could be a requirement spec for "build everything from source" activities and in particular a listing of the constraints that need to be satisfied for any solution.
Once we have that documentation, follow-up projects to actually design and/or implement a solution will be much easier to specify. Or it may be that the negative reception that's happened so far can be crystallised into specific issues that we (the Python packaging community) aren't willing to support, which will give users much more actionable feedback at least.
In https://github.com/pypa/setuptools/pull/4457#issuecomment-2206452168, I describe what appears to be a distilled version of the implicit constraint:
- Build backends can only have (build or runtime) dependencies on projects that don't rely on that build backend.
I wonder if instead the constraint should be:
- Environments that rely exclusively on building from source (e.g. `--no-binary :all:`) must use a frontend that implements caching of built artifacts in order to break cycles in build dependencies.
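To illustrate what that constraint asks of a frontend, here is a rough sketch of the caching behavior (mine, not from the discussion; `read_build_requires` and `call_backend` are hypothetical helpers):

```python
from pathlib import Path

class CachingFrontend:
    """Sketch: satisfy build requirements from wheels built earlier in the
    same run, so that a self-hosting backend can break the cycle."""

    def __init__(self, cache_dir: Path):
        self.cache_dir = cache_dir
        self.in_progress: set[str] = set()  # projects currently mid-build

    def get_wheel(self, project: str) -> Path:
        cached = next(self.cache_dir.glob(f"{project}-*.whl"), None)
        if cached is not None:
            return cached  # reuse an artifact built earlier in this run
        if project in self.in_progress:
            # A cycle with no cached artifact available to break it.
            raise RuntimeError(f"unbreakable build cycle at {project!r}")
        self.in_progress.add(project)
        for dep in read_build_requires(project):  # hypothetical helper
            self.get_wheel(dep)  # build dependencies depth-first
        wheel = call_backend(project, self.cache_dir)  # hypothetical helper
        self.in_progress.discard(project)
        return wheel
```

The cycle only breaks if some project in it (e.g. a self-hosting backend) can be built with no uncached requirements.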
From that conversation:
> I think I've got that right. A build backend can have dependencies if all of the dependencies rely on a different build backend (hatchling, for now). Or a build backend can have no dependencies (flit_core). I wonder if we should document this constraint somewhere. Is there a place where we could document this constraint?

> I don't think it's an explicit constraint, rather it's an implied consequence of the defined behaviour when combined with a requirement to build everything from source. In particular, it's not a constraint just on build backends, it affects projects that are dependencies of build backends as well - if setuptools depends on wheel, and wheel is built using setuptools, then wheel has a problem if you want to build it under a constraint of never using pre-built binaries. Conversely, if you are willing to use a multi-stage process where you build a setuptools wheel "by hand" and then use it to build everything else, there's no issue.

> I'm not the best person to comment here, though, as I don't understand the whole "everything must be built from source" mindset - you have to start somewhere, and why can't that be "pre-build a setuptools wheel using some manual recipe, then use that to bootstrap everything else"? But from what I recall, there were some purists for whom that wasn't sufficient - and I never got a clear understanding of why.
I've had some conversations with these integrators (at Spack and Fedora and Debian and others). Their primary motivation is based largely on a philosophy of "build everything from source" because that's the source of truth. Even at large enterprises like Facebook or Google, everything is built from source. They want to build from source to maintain independence (from other systems and abstractions) and provenance (no question about what was built from source and what was not). They (and often their users) want control and assurance that what's running is built from source and could be built again (repeated) without external resources. Think of it this way - if you woke up in a world with nothing but source tarballs, could you rebuild the system (and how)?
I think some insight can be found in your framing of the issue ("if you are willing to use a multi-stage process" and "pre-build a setuptools wheel"). I can see why integrators would shy away from (a) more complex processes when simple ones can do and (b) special-casing select packages. They'd like to be able to apply a technique uniformly to any and all packages. If the process is multi-stage and has to manage interactions between packages, it's much more complex than one that can act on one package at a time. Such a multi-stage system could become impractical if enough of these interdependencies exist. It's also a risky proposition to be caught between two projects managed by independent parties.
Moreover, I don't even think having a pre-built setuptools wheel works to bootstrap the process if setuptools has dependencies. You need a special process to bootstrap the setuptools build, then another special process to bootstrap all of setuptools' dependencies (because setuptools still doesn't have its dependencies), then you likely need to rebuild setuptools without the special process. And of course, this problem isn't unique to setuptools. If any of hatchling's dependencies decided to adopt hatchling, or if hatchling wanted to adopt a dependency built with hatchling, it too would be broken.
> I wonder if instead the constraint should be:
> * Environments that rely exclusively on building from source (e.g. `--no-binary :all:`) _must_ use a frontend that implements caching of built artifacts in order to break cycles in build dependencies.
I just realized even this constraint isn't enough unless it also includes a weaker version of the other constraint:
- Build backends must be capable of being built without any transitive dependency back to the backend.
In other words, setuptools cannot declare a build dependency on any setuptools-built project without also vendoring it.
Quoting myself from above:
> Think of it this way - if you woke up in a world with nothing but source tarballs, could you rebuild the system (and how)?
I'm finding this line of thought very useful in reasoning about the problem.
If someone gave me a directory of sdists for setuptools and all of its dependencies and nothing vendored, could I use a Python interpreter to build a functional setuptools?
In general, the answer is no. If any of those dependencies have C or Rust extensions and need setuptools to build them, there's no way to build those.
Even without extension modules, there are problems. I imagine it could be possible, since Python source files are importable, to analyze the metadata of the sdists (`pyproject.toml:build-system`) and determine which sdists are needed (all of them) and then somehow stage them to be importable. However, there's no way in general to do that. Each project might use a src layout or a flat layout or maybe something far less scrutable. What's needed is a working, built, installed build system (setuptools with all its dependencies) to transform those sources into something importable.
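To be fair, the metadata-analysis half is mechanically straightforward; it's the staging step that has no general answer. A sketch (mine, assuming the standard PEP 517 sdist layout with a single top-level directory) of reading build requirements straight out of sdists:

```python
import tarfile
import tomllib
from pathlib import Path

def sdist_build_requires(sdist: Path) -> list[str]:
    """Read [build-system] requires out of an sdist without unpacking it."""
    with tarfile.open(sdist) as tf:
        for member in tf.getmembers():
            parts = member.name.split("/")
            # Standard sdists contain a single {name}-{version}/ directory.
            if len(parts) == 2 and parts[1] == "pyproject.toml":
                data = tomllib.load(tf.extractfile(member))
                return data.get("build-system", {}).get("requires", [])
    return []  # no pyproject.toml: legacy setuptools project

# e.g. {s.name: sdist_build_requires(s) for s in Path("sdists").glob("*.tar.gz")}
```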
Can you think of a way to address the situation?
In one approach, I imagine an integrator keeping a bootstrap build of each and every backend (and its dependencies) and falling back to that whenever a build dependency cycle is encountered, but given that the Python ecosystem allows for an unlimited number of backends, that could grow unwieldy... and would we want to put the onus on each and every integrator to implement their own such system? I argue no.
> If someone gave me a directory of sdists for setuptools and all of its dependencies and nothing vendored, could I use a Python interpreter to build a functional setuptools?
This is indeed a good way of thinking about it. However:
- The way you build a functional setuptools does not need to be the "standard" way of building a wheel (such as `pip wheel` or `build`). It can be, and I'm sure it would be easier for people in that situation if it were, but bootstrapping from nothing is an unusual situation, so non-standard solutions are (IMO) acceptable, if needed.
- This isn't really a general packaging question, and it doesn't need new rules or constraints to cover it. It's simply something that build backend developers (and indeed, any package developer) should consider as a question of what use cases they want to support.
One solution, of course, is for setuptools to use flit_core as its backend. We know flit_core can be built from scratch, and indeed that's a deliberate design choice, so it's not something that's likely to change in the future. And flit_core can build pure Python packages like setuptools. Obviously there's a certain irony in not using setuptools to build itself, but maybe practicality beats purity here...? 🙂
FWIW, I do like the idea of having a single solution here (i.e. a single build-backend that can be bootstrapped from nothing) being reused across backends. If it's flit-core, that's fine by me, as would be using something other than flit-core as a bootstrap-from-nothing backend and migrating flit to using that.
Subject of course, to other flit maintainers being onboard with the idea.
To re-emphasise, though, this wouldn’t be a requirement for build backends to use that build tool, it would simply be a community-supported solution for the (non-trivial) problem of build backends needing a means to bootstrap themselves.
If setuptools prefers to self-bootstrap, that’s fine, but they need to solve any “build from all source” issues themselves in that case.
I've been pondering using flit-core for setuptools. At least one integrator has already bristled at the idea, but it does seem worth exploring.
I don't think it solves the problem, however, simply for a build backend to adopt flit-core (or some other backend without dependencies), because a circular dependency still exists with any dependency of the backend that itself uses the backend (e.g. you can't build importlib_resources without setuptools being installed, and you can't install setuptools without importlib_resources; similarly, if a hatchling dependency were to adopt hatchling, or setuptools with dependencies, it too would no longer be compatible). If the problem were as simple as special-casing the build backends, that wouldn't be too onerous, but since the requirement extends to any dependency, which may be independently managed, it creates an almost unmanageable situation. At the very least, before a backend accepts a dependency, it'll have to establish an expectation about what build backend that dependency uses.
My other reluctance about adopting flit-core is that its philosophy runs against a basic objective of projects I manage, which is to avoid the footgun of double accounting (requiring versions and file manifests and other metadata to be manually replicated in static files). It's a compromise I'd be willing to make for a build backend but reluctant to impose for any dependency of a build backend.
So we're stuck unless we can get the integrators to accept a methodology of somehow bootstrapping build backends by providing pre-built artifacts of all dependencies of all supported backends (similar to how pip is able to bootstrap by using pre-built wheels).
The more I think about it, I think that should be the guidance. When bootstrapping a system, an integrator should either rely on pre-built artifacts from PyPI or supply their own pre-built artifacts for each backend (including its dependencies). These pre-built artifacts can be a part of the build system and can be discarded after bootstrapping. Such an approach, although somewhat heavy, should work universally and allow any build backend to simply declare dependencies without compromising the build-all-from-source model.
It feels like a choice between that and requiring all build backends to bundle all of their dependencies (and not declare them) to be safe.
Does that sound right? If it sounds right, I'll put together a design proposal and review it with the system integrators.
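For concreteness, a minimal sketch of that two-phase methodology (the `bootstrap-wheels/` directory name and the backend list are illustrative, not part of the proposal):

```python
import subprocess
import sys

def run(*args: str) -> None:
    subprocess.run([sys.executable, "-m", *args], check=True)

# Phase 1: seed the environment from the integrator's pre-built backend
# artifacts (no network, no building).
run("pip", "install", "--no-index", "--find-links", "bootstrap-wheels/",
    "setuptools", "hatchling", "flit-core")

# Phase 2: with working backends installed, rebuild everything (including
# the backends themselves) from source; the seed wheels can be discarded.
run("pip", "wheel", "--no-binary", ":all:", "--no-build-isolation",
    "--wheel-dir", "rebuilt/", "setuptools")
```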
I agree with Jason here; if the problem were only the "build time" dependencies when building setuptools itself, I think we have ways of dealing with that. Right now, setuptools can "bootstrap" itself in the sense that it can create its own wheel without dependencies. So we would not need flit-core.
The way I see it, the problem is not about the "build dependencies" (i.e. `[build-system] requires`) for the backend itself, but rather the "runtime dependencies" (i.e. `[project] dependencies`) of the backends. While flit-core could be beneficial for addressing the former, it doesn't help with the latter, which is where our real struggle lies.
(This is related to the dilemma discussed in https://github.com/pypa/setuptools/pull/4457#issuecomment-2205874399).
Jason's proposal, on the other hand, seems to be a good solution.
I've drafted this proposal. Please feel free to have a look - you should be able to make comments or suggest edits. If not, let me know and I'll move it to a Google Doc. I'll plan to circulate this proposal with key stakeholders for system integrators in a week or so.
Looking at Doug Hellman's https://github.com/python-wheel-build/fromager project, it seems consistent with @jaraco's proposal (the prepare-build step in fromager accepts a `--wheel-server-url`, so build dependencies can be downloaded in binary format from a trusted index rather than having to be built from source).
At this stage, I'd like to loop in the systems integrators, especially those involved in source-only builds. I know a few, such as @tgamblin, @mgorny, @doko42, @kloczek. If there are better contacts, or if you know of any other integrators who you think should be aware, please refer them here. If any of you are on discord, I'd also like to invite you to join the PyPA discord and in particular the #integrators channel, where we can discuss and coordinate this effort and efforts like it. But most importantly, please review the proposal and share your feedback on it (comment in the doc, comment here, or send a note in discord). I know it’s an ask for integrators to adopt a new methodology, but I believe it’s the only way to allow backends to have dependencies.
In this comment, Doug Hellman brought to my attention that PEP 517 actually forbids cyclic (transitive) dependencies (I'm ashamed I'd not previously linked this constraint to this issue). This requirement essentially boils down to "no build backend can have dependencies", a constraint that's been a problem ever since setuptools chose to adopt six. While there are certain conditions where a build backend can depend on other projects, it can only do so by constraining those projects (not to use the build backend). It also means that a backend with dependencies cannot be in a self-contained system (such as the coherent system; their build backend must always resolve outside of the system).
What are the thoughts on having a package finder that considers pyproject.toml files so that standard metadata like entry points can be used?
Then building wheels for a cyclic set of packages would first run a build with potentially partial metadata to make either final wheels or editable ones, and then the process would repeat, based on the first stage of artifacts, to generate a second stage of hopefully reproducible artifacts.
> What are the thoughts on having a package finder that considers pyproject.toml files so that standard metadata like entry points can be used?
> Then building wheels for a cyclic set of packages would first run a build with potentially partial metadata to make either final wheels or editable ones, and then the process would repeat, based on the first stage of artifacts, to generate a second stage of hopefully reproducible artifacts.
I was thinking of something like that - a system/tool to essentially bootstrap any given backend (and its dependencies) from source. I do think it would entail a lot more than honoring entry points. You'd need to honor:
- code arrangement - some projects use src layout, others flat layout, others something else. It's typically the responsibility of a viable build backend to know where and how to translate the source layout to something importable.
- non-core metadata - like entrypoints or RECORD, which may or may not be present in the pyproject.toml.
- core metadata - some projects have dynamic metadata; presumably this is already materialized in the sdist.
- bespoke logic - some projects still use setup.py and perform arbitrary customizations to the build tools and behaviors. A build backend could itself have arbitrary logic for handling the construction of the wheel.
All of these concerns led me to think it would be near-impossible to support arbitrary dependencies without adding severe constraints on what form they could take. If we wanted to support arbitrary dependencies (or at least arbitrary pure-Python dependencies), it would essentially require honoring (and re-implementing) the bulk of every backend in this bootstrapping tool. And much better than re-implementing would be to simply re-use, which leads me back to the idea of having pre-built versions of every build backend for the purposes of bootstrapping the build backends.
Another way to address the concerns could be to drastically limit what behaviors a dependency of a build backend could use. For example, all build backend dependencies must use flit-core, or they must use PEP 621 and a src layout. The problem is, I really don't want to impose such a constraint. I want Setuptools to be able to adopt wheel or ordered-set or the next useful dependency without having to go to the maintainer of that project (and the maintainers of its transitive dependencies) and ask them to re-organize their project, today and in perpetuity, and for any dependency they adopt.
I wish there were a way to generalize the bootstrap from pure source problem, but it feels intractable to me. In light of these concerns, do you still feel there's a viable path to a source-only bootstrap methodology?
there is no need to fully generalize - have a core set of tools that keep the base project layout simple enough that config on code layout and entry points can be considered, then just put them into an import pool and run the tools to make the real artifacts
afterwards it's literally a set of sibling folders and a new import meta hook that does well enough to run that set of packages
for that bootstrap I consider it acceptable to have incomplete metadata, as long as the tools run and produce consistent results for a second stage
> have a core set of tools that keep the base project layout simple enough
By "core set of tools", do you mean all the transitive dependencies of the build backend?
If I thought this approach was viable, I'd set out to implement it. Would you be willing to prototype it, or maybe walk through how it would work for a setuptools sdist (with its core dependencies) or a coherent.build sdist (dependencies)?
the starting point would be to have all checkouts
and then a meta path loader that takes metadata from pyproject.toml - it may be necessary to annotate it with layout metadata
this probably ought to be its own project - a Python project with an entrypoint file and its own metadata
the key would be to load entry points and package metadata from standards-compliant fields in pyproject.toml (and/or extensionlib)
that way all entry points and the basic metadata for those projects would be available in enough capacity to run each package's build backend once
once one has executed all build backends for all packages, binaries are available for a second stage with full metadata for validation
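As I understand the proposal, the import-pool stage might look something like this sketch (mine; the `checkouts/` paths are illustrative, and entry-point metadata would need separate handling, as noted above):

```python
import sys
from importlib.machinery import PathFinder
from pathlib import Path

class SourceTreeFinder:
    """Meta path finder that resolves imports from unbuilt source checkouts."""

    def __init__(self, checkouts):
        # Heuristic layout handling: prefer src/ when present, else the root.
        self.roots = [
            str(root / "src") if (root / "src").is_dir() else str(root)
            for root in map(Path, checkouts)
        ]

    def find_spec(self, fullname, path=None, target=None):
        # Delegate to the stdlib path-based finder over our source roots.
        return PathFinder.find_spec(fullname, path or self.roots)

# insert(0, ...) would instead make checkouts win over installed packages
sys.meta_path.append(SourceTreeFinder(["checkouts/setuptools", "checkouts/wheel"]))
```

With every package in the pool importable, each backend could then be invoked once to produce the first-stage artifacts.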
I'm trying to think of a corresponding situation in a different ecosystem. How do you build a C compiler if you don't already have a C compiler? Or does "everything must be built from source" not apply that far down the stack? And if that's the case, what are the rules on what gets an exemption (and can Python build backends not get a similar exemption, for much the same reason?)
To be completely clear, I don't think it's reasonable to expect that build backends can have completely free rein when it comes to dependencies, the way @jaraco is suggesting. Keeping the dependency tree simple, and possibly even vendoring, is IMO the correct solution here. It's what pip does, and I'll happily concede that it's a real PITA, but it's a necessity given how Python dependencies work. I don't think build backends are any different in this regard.
thing is - unlike C code - Python code executes just fine as long as it is made importable - so the bootstrap for running is far less of a problem
most other bootstrap systems need some type of staircase that ends up at a reproducible build of the compiler
> I'm trying to think of a corresponding situation in a different ecosystem. How do you build a C compiler if you don't already have a C compiler? Or does "everything must be built from source" not apply that far down the stack? And if that's the case, what are the rules on what gets an exemption (and can Python build backends not get a similar exemption, for much the same reason?)
At least at one time the process started with hand-written assembler that was used to compile a more complex program, eventually getting to the point of being able to build the compiler.
http://users.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf may be interesting reading.
> thing is - unlike C code - Python code executes just fine as long as it is made importable - so the bootstrap for running is far less of a problem
Yeah, my comment was (to an extent) directed at the assertions that this is a significant issue - if distributions can manage to handle C compilers, they should be able to handle Python packages somehow. It feels like people are taking opposed stances and aren't willing to compromise, or otherwise accept that a certain amount of give and take is needed.
Having said that, I think your proposal is viable, and (with a certain amount of ongoing maintenance, and some reasonable constraints on how build backends and their dependencies lay out their source code) could be made to work. At the simplest level, the process could be:
- Check out the backend and all its (transitive) dependencies. Getting the list of things to check out could be hard if you're adamant about not using any pre-built tools, but it's certainly possible to work out the list manually, and something like `pip install --dry-run setuptools | grep "Would install"` could get the list using only what's shipped with Python[^1]. You could get the list once, for each release of a backend, and manually audit it if you want. There's no need to calculate it for every build.
- Add a `.pth` file to `sys.path` containing the source directory for each of those projects. That's possible as long as they all work in "legacy" editable mode, available via `editable_mode=compat` in setuptools. You may have to use a heuristic like "use `src` if it's present, otherwise assume the root of the checkout is the source directory", and might (in theory) need an exception list for projects with weird layouts, but I'd hope that few projects do that.
That would give you an environment that could run the build backend, to build wheels for itself and its dependencies. At that point you're bootstrapped and everything else should be fine. It's likely to be messy and a bit manual, but I consider that a reasonable thing to expect. On the other hand, it only places demands on build backends and their dependencies that I consider reasonable (dependencies must be static, i.e., not depending on the target environment, and the project must be usable in "legacy editable mode", i.e., via a `.pth` file).
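A sketch of what that `.pth` step might look like, using the `src` heuristic described above (my illustration; `bootstrap.pth` and the checkout paths are hypothetical):

```python
import site
from pathlib import Path

def write_bootstrap_pth(checkouts, name="bootstrap.pth"):
    """Expose each checkout on sys.path via a .pth file in site-packages."""
    entries = [
        # Heuristic from above: use src/ if present, else the checkout root.
        str((root / "src") if (root / "src").is_dir() else root)
        for root in map(Path, checkouts)
    ]
    pth = Path(site.getsitepackages()[0]) / name
    pth.write_text("\n".join(entries) + "\n", encoding="utf-8")

write_bootstrap_pth(["checkouts/setuptools", "checkouts/wheel"])
```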
I hope no-one is suggesting that a build backend should depend on anything other than pure Python packages? That would be far more difficult to handle (and IMO goes way beyond reasonable expectations).
[^1]: You could also use the zipapp for a known-acceptable version of pip, or check out pip's source code and run it from its `src` directory, which I'm pretty sure would work just fine.
> I'm trying to think of a corresponding situation in a different ecosystem. How do you build a C compiler if you don't already have a C compiler? Or does "everything must be built from source" not apply that far down the stack?
For us in MacPorts, the natural place to draw the line is to start with Apple's developer tools installed. I assume other distros must have some minimal known good toolchain they start from, and you can see how it's desirable to keep that as minimal as possible.
> To be completely clear, I don't think it's reasonable to expect that build backends can have completely free rein when it comes to dependencies, the way @jaraco is suggesting. Keeping the dependency tree simple, and possibly even vendoring, is IMO the correct solution here. It's what pip does, and I'll happily concede that it's a real PITA, but it's a necessity given how Python dependencies work. I don't think build backends are any different in this regard.
Indeed, frontends have exactly the same problems. We currently resort to unpacking the sdists for `installer` and `build` and all their dependencies into a directory and adding that to `PYTHONPATH`, so we can get those modules installed.
IMO there should be a wheel installer and simple build front- and back-ends in the stdlib, but that doesn't seem to be a very popular opinion.
In the rootbeer repo, I've drafted an idea based on @RonnyPfannschmidt's proposal. Instead of a tool that collects the sources into an importable directory, I've used Git and submodules to assemble the build-time dependencies for Setuptools and its dependencies (basically setuptools' run-time and build-time dependencies and their transitive closure).
To use it, simply `git clone --recurse-submodules --shallow-submodules --depth 1 https://github.com/pypa/rootbeer` and then build any of the backend resources with `env PYTHONPATH=path/to/rootbeer pyproject-build --no-isolation --skip-dependency-check`.
It works by keeping clones of the submodules in special directories and then using symlinks to make flat- and src-layout packages available for import.
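A sketch of that symlinking step (my reading of the repo's description, not its actual code):

```python
from pathlib import Path

def expose(checkout: Path, pool: Path) -> None:
    """Symlink a checkout's importable packages into a flat import pool."""
    source_root = checkout / "src" if (checkout / "src").is_dir() else checkout
    for candidate in source_root.iterdir():
        # Link package directories only; single-module projects would need
        # extra care (and a deny-list: setup.py, tests, docs, ...).
        if candidate.is_dir() and (candidate / "__init__.py").is_file():
            link = pool / candidate.name
            if not link.exists():
                link.symlink_to(candidate.resolve())
```

The pool directory is then what goes on `PYTHONPATH` for the `pyproject-build --no-isolation` invocation.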
This repo currently supports setuptools and flit-core backends but could be extended to support all Python build backends.
Because it's a git repo, it can readily be trusted by pinning to a specific git hash (which also has pins to the submodules' hashes).
I believe this approach may generalize to any source-only integration bootstrapping process. It would still require integrators to identify build backends in order to know when to employ the bootstrapping. I'm also not yet sure whether this approach could support compiled artifacts; I'm assuming they're out of scope for now.