
misconception or incorrect usage? shared library not compiled as macOS arm64

Open AndrewAnnex opened this issue 1 year ago • 6 comments

Description

I produce an open source library, spiceypy, which is a Python ctypes-based wrapper for a pure C library. The C library is distributed as ANSI C source and header files with no CMake script, so I wrote my own "build" script in Python that downloads the source, applies patches, and runs gcc/Visual Studio to compile the shared libraries. I have been using cibuildwheel to build Python-version-independent (Linux, macOS, Windows) shared libraries, but I recently discovered via an issue report that the shared library built for Apple Silicon/arm64 macOS is actually being compiled as an x86_64 shared library. This was verified by running the file command, on an Intel MacBook Pro, against the shared library embedded in the wheel of the most recent release of my project.
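For anyone without a Mac handy to run file on, the Mach-O header of a dylib encodes the CPU type directly, so the check can be done portably. A minimal sketch (the constants come from the Mach-O headers; the function name and usage path are illustrative, not from the project):

```python
import struct

# Mach-O magic and CPU type constants (from <mach-o/loader.h> / <mach/machine.h>)
MH_MAGIC_64 = 0xFEEDFACF          # 64-bit Mach-O, little-endian on disk
CPU_TYPE_X86_64 = 0x01000007      # CPU_TYPE_X86 | CPU_ARCH_ABI64
CPU_TYPE_ARM64 = 0x0100000C       # CPU_TYPE_ARM | CPU_ARCH_ABI64

def macho_cpu_type(header: bytes) -> str:
    """Return a human-readable CPU type for a thin 64-bit Mach-O header.

    Note: universal2 (fat) binaries start with a different magic
    (0xCAFEBABE, big-endian) and hold one header per embedded arch.
    """
    magic, cputype = struct.unpack_from("<II", header, 0)
    if magic != MH_MAGIC_64:
        return "not a thin little-endian 64-bit Mach-O"
    return {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}.get(cputype, hex(cputype))

# Usage (path is hypothetical):
# macho_cpu_type(open("libcspice.dylib", "rb").read(8))
```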

After looking at the docs for cibuildwheel again, I am a little confused about what is going on, as the docs imply that one can build universal2 and arm64 wheels with cibuildwheel on GitHub Actions (see CI config below).

From the attached build log I can see that for the "arm64 build", my before_build step, which compiles the shared library, uses an x86_64 CPU architecture rather than arm64. In the same log I see CPython 3.9 macOS arm64 - Apple Silicon being detected by cibuildwheel. For my Linux aarch64 builds, aarch64 is correctly detected, and the build takes about an hour longer than the platform-native builds, as expected. I have also tested the aarch64 builds on a rpi4 before, so I know those are actually compiled correctly.

Did I assume incorrectly that arm64 wheels can be built on macOS runners on GitHub? The usage table in the README indicates that Linux ARM is supported, but there is no corresponding column for macOS. Otherwise, from the docs (https://cibuildwheel.readthedocs.io/en/stable/faq/#apple-silicon) it seems like I am doing everything correctly? Maybe I am pointing to the wrong gcc in my build script?

note: above I may say arm64; by that I really mean Apple Silicon/M1/M2, in case that is unclear

Build log

https://gist.github.com/AndrewAnnex/0e4fc730ef3c165891ed1d2c21a2f13e

CI config

https://github.com/AndrewAnnex/SpiceyPy/blob/main/.github/workflows/publish-to-test-and-live-pypi.yml

AndrewAnnex avatar Jul 27 '22 01:07 AndrewAnnex

If you are building using your own script, you'll have to ensure that you're building for the correct arch. The easiest way to do this in before-build is to look at the ARCHFLAGS environment variable. We set that here:

https://github.com/pypa/cibuildwheel/blob/main/cibuildwheel/macos.py#L213-L217
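In a Python-driven build script like the one described above, honoring ARCHFLAGS could look something like this. A sketch only: the helper name is invented, the fallback to an empty list (host arch) is an assumption for local builds, and the commented gcc invocation is illustrative:

```python
import os
import shlex

def compiler_arch_flags() -> list:
    """Pass cibuildwheel's ARCHFLAGS (e.g. "-arch arm64") through to the compiler.

    cibuildwheel sets ARCHFLAGS on macOS to request the target arch(es);
    when it is unset (e.g. a local build), we return no flags and let the
    compiler default to the host architecture.
    """
    archflags = os.environ.get("ARCHFLAGS", "")
    return shlex.split(archflags) if archflags else []

# Illustrative: append the flags to the command that builds the shared library,
# e.g. cmd = ["gcc", "-shared", "-o", "libfoo.dylib", "foo.c", *compiler_arch_flags()]
```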

joerick avatar Jul 27 '22 09:07 joerick

All GitHub Actions runners are x86_64 for macOS, so we cross-compile to universal2 and arm64.

joerick avatar Jul 27 '22 09:07 joerick

Might have changed as of yesterday; no idea whether getting it running is just a small change or not, however: https://github.blog/changelog/2022-08-09-github-actions-self-hosted-runners-now-support-apple-m1-hardware/

samuelstjean avatar Aug 10 '22 09:08 samuelstjean

From the blog:

All actions provided by GitHub are compatible with the runner except for a known issue with setup-python. The fix for that can be tracked here.

so it may need some additional time, but it's worth testing.

Czaki avatar Aug 10 '22 09:08 Czaki

This announcement is for self-hosted runners, meaning you can now run GitHub Actions on your own M1 hardware. But most people will be waiting for GitHub-hosted arm64 runners.

joerick avatar Aug 10 '22 10:08 joerick

@joerick thanks for the pointer to ARCHFLAGS. I ended up updating my build script to add -target arm64-apple-macos11 to my gcc call based on the ARCHFLAGS set, and I also updated the GitHub workflow for the arm build to set ARCHFLAGS explicitly (I had already split my build matrix on the cibuildwheel build target).
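The change described above might look roughly like this in a Python build helper. The arm64 triple is the one mentioned in the comment; everything else (the function name, the x86_64 triple, and the fallback behavior) is an assumption for illustration:

```python
import os

def macos_target_flags() -> list:
    """Derive clang/gcc arch flags from ARCHFLAGS, adding an explicit -target.

    Passing -target (in addition to -arch) tells the compiler driver the full
    triple to cross-compile for, even on an x86_64 runner.
    """
    flags = os.environ.get("ARCHFLAGS", "").split()
    if "arm64" in flags:
        # Triple from the workflow change described above
        return ["-arch", "arm64", "-target", "arm64-apple-macos11"]
    if "x86_64" in flags:
        # Hypothetical minimum deployment target for the Intel build
        return ["-arch", "x86_64", "-target", "x86_64-apple-macos10.9"]
    return []  # no ARCHFLAGS set: let the compiler use the host default
```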

That said, I am not sure whether the ultimate fix was setting ARCHFLAGS or explicitly setting -target. Since I was interested in a quick fix, I didn't look into which change was responsible. The compilation happens during the before-build step; maybe ARCHFLAGS is not being set by default during that stage?

So in a way my problem was solved and we could close the issue, but maybe some documentation improvements could be made. As I see it, needing to update my build script to perform the cross-compilation was outside my expectations (a misconception). Either ARCHFLAGS isn't being set automatically during the before-build step, or gcc just needed to be told the target explicitly, in which case that is simply something to note in the docs. I am happy to give it the old college try and update the docs in a PR if that is the ultimate need here.

As for the M1 runners, I saw that Bitrise might be an option, but I haven't fully explored it yet.

AndrewAnnex avatar Aug 10 '22 13:08 AndrewAnnex