Modern Python packaging
Motivation
I have a Django web project which targets Linux servers, but some team members use macOS for development. We use uv as the Python package manager. In the dependency-resolution phase, uv tried to find systemd-python metadata but couldn't get what it wanted from the setup.py file. It had to build the systemd-python wheel to obtain the metadata, and that build failed on the macOS machines.
What this PR does
Adopt a new pyproject.toml file in place of setup.py. For building the C code, I adopt Meson, which has meson-python as a build backend, so that pip or any other modern Python package manager knows how to build the C-based extensions.
Because this new build process runs in an isolated environment and leaves no *.so files behind, the Makefile is also updated to work with the new build flow.
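As a rough sketch (the exact contents here are an assumption for illustration, not the actual diff), a minimal pyproject.toml driving a meson-python build looks something like this:

```toml
# Hypothetical minimal pyproject.toml for a meson-python build backend.
[build-system]
requires = ["meson-python"]
build-backend = "mesonpy"

[project]
name = "systemd-python"
version = "235"  # placeholder; the real project defines its own version
```

With this in place, `pip` (or uv) discovers the build backend itself and compiles the C extensions in an isolated environment, instead of executing setup.py.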
Converted the Makefile targets to Meson run targets. This required me to introduce uv and wrap the commands with uv run, to ensure the tools are installed (by uv) before running. But that then broke the docs build in GitHub Actions. I'm trying to fix it...
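For illustration, a Makefile docs rule rewritten as a Meson run target might look like the sketch below (the `docs` target name and the sphinx-build command are assumptions, not the project's actual rules):

```meson
# Hypothetical meson.build snippet: "make docs" converted to a run target.
sphinx = find_program('sphinx-build')
run_target(
  'docs',
  command: [sphinx, meson.project_source_root() / 'docs', meson.current_build_dir() / 'html'],
)
```

The target is then invoked as `meson compile -C _build docs`, which is where the "wrap with uv run" question below comes from.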
> It requires me to introduce uv, to wrap the commands with uv run
No longer wrapped, because it creates a circular reference: if we build the project with uv build, uv runs Meson indirectly under the hood (via the meson-python build backend). If we wrap a Meson target with uv, uv will then run under Meson (because the task is run as a meson compile -C _build task).
@behrmann I moved the Makefile jobs to Meson "run targets". It turns out to be inconvenient for developers, because:
- Meson checks for the availability of the binaries in the target command, like `rsync` and `gpg`, even when we just want to build the Python extensions:

  ```meson
  run_target(
    'sign',
    command: ['gpg', '--detach-sign', '-a', archive],
  )
  ```

- Meson always runs the "target" inside the `_build` folder, which is the same folder we use for building the Python extension. It means that the `meson` we use for running targets must be the same version as the one invoked by the meson-python build backend. This leads me to declare meson in both places, like this:

  ```toml
  [build-system]
  requires = ["meson-python", "ninja", "meson"]  # This is to be invoked by the mesonpy build backend
  build-backend = "mesonpy"

  [dependency-groups]
  build = [
      "meson>=1.8.2",  # This is for us to run other targets in meson.build
      "twine>=4.0.2",
  ]
  ```
Sorry for the slow turnaround, I've been busy with other stuff.
> Meson checks for the availability of the binaries in the target command, like `rsync`, `gpg`, even when we just want to build the Python extensions.
I've not run into this myself before, but looking at the systemd meson.build I think the workaround is wrapping the called binary in a call to env:
```meson
env = find_program('env')
# …
run_target(
  'ctags',
  command : [env, 'ctags', '--tag-relative=never', '-o', '@0@/tags'.format(meson.project_source_root())] + all_files,
)
```
> Meson always runs the "target" inside the `_build` folder, which is the same folder we use for building the Python extension. It means that the `meson` we use for running targets must be the same version as the one invoked by the meson-python build backend. This leads me to declare meson in both places.
I'm not quite sure I understand this, but I assume you are saying that you include meson in a build dependency group simply to signify that it is needed to run the run_targets? That sounds reasonable to me.
@behrmann That "wrapping with env" is a funny trick. If I hadn't hit this Meson behavior, I would never have understood why people have to do that. I think the migration is OK now, although the experience of running "build docs" or "test" with Meson will be surprising for Python devs coming from other projects.
Thanks for commenting, @keszybz. I'm willing to redo this. Do we need to split it into smaller PRs?
About etags: it was originally called etags in the Makefile. But when I tried to make the GitHub install.yml workflow work, I attempted to install etags on the Linux distros, and it turned out that the package name in all of them is ctags, not etags. So I renamed it to ctags to avoid confusion.
Superseded by #148 .