DISCUSS: Raise MacOS minimum target from 10.9 to 10.13?
current todo list
- [x] make an announcement (https://github.com/conda-forge/conda-forge.github.io/pull/1993)
- [x] decide how we want to enforce the __osx constraint
  - ~going to use recipe_append.yaml generated by conda-forge-ci-setup (https://github.com/conda-forge/conda-forge-ci-setup-feedstock/issues/260)~ - the append doesn't work for recipes with outputs, so we'll have to use a run export on the compilers for osx.
- [ ] Do https://github.com/conda-forge/conda-forge.github.io/issues/2102
- [x] Decide if we will allow folks to set a min version less than the global minimum.
  - ~If yes, do nothing.~ the stdlib jinja mechanism will allow this
  - ~If no, then we need a minimigrator to adjust / remove any custom minimum pins below 10.13~
- [ ] clean up recipes with outdated pins in CBC (that don't explicitly want/need it)
- [ ] figure out if there are any finicky builds on osx that need adjusting? I recall some but maybe that is out of date by now.
- [ ] figure out remaining(?) places in the infrastructure that involve assumptions about the sysroot and fix them up
- [ ] patch old builds with 10.9 __osx constraint?
- [ ] bump `c_stdlib_version` to 10.13 for `osx-64` in global pinnings (see the sketch below the list)
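A rough sketch of what the pinning bump and the run-export enforcement could look like. The key names follow the current `c_stdlib_version` / `MACOSX_DEPLOYMENT_TARGET` machinery; the exact values and selectors here are illustrative assumptions, not final decisions:

```yaml
# hypothetical excerpt of conda-forge-pinning's conda_build_config.yaml after the bump
MACOSX_DEPLOYMENT_TARGET:    # [osx and x86_64]
  - "10.13"                  # [osx and x86_64]
c_stdlib_version:            # [osx and x86_64]
  - "10.13"                  # [osx and x86_64]
```

A feedstock that genuinely needs a newer minimum would then raise (not lower) the floor locally, e.g. by setting `c_stdlib_version` to `"10.15"` in its own conda_build_config.yaml and depending on `{{ stdlib("c") }}`. The "run export on the compilers" idea from the list above would mean the osx compiler (activation) package carries something along these lines, so every package built with it automatically picks up the `__osx` constraint:

```yaml
# hypothetical fragment of the osx compiler (activation) package's meta.yaml
build:
  run_exports:
    strong:
      - __osx >={{ MACOSX_DEPLOYMENT_TARGET }}
```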
issue text from start
When azure deprecated the macOS-10.15 images, it turned out macOS-11 still supports targets all the way back to our baseline 10.9, and so it was decided to separate the discussion of the image upgrade from the default MACOSX_DEPLOYMENT_TARGET.
However, our baseline target 10.9 has now been EOL for almost 6(!) years, so I guess that discussion should be had at some point. I don't think there's a really urgent need (what with shipping our own libcxx, plus the _LIBCPP_DISABLE_AVAILABILITY mechanism[^1]), but this topic recently came up in a numpy discussion, so I thought I'd open an issue.
Originally, I thought our hand would be forced once macOS-11 images are deprecated, but it turns out that even in the macOS-12 images, there's still an SDK with Xcode 13.1, which in turn still supports targets back to 10.9. Only once we're forced to use Xcode 14+ would we have to bump the minimum target to 10.13.
[^1]: though there are quite a few feedstocks that just bump the MACOSX_DEPLOYMENT_TARGET because it happens to unbreak CI - not least because the compiler errors point in this direction ("X is unavailable: introduced in macOS Y.Z") - but often without adding __osx >={{ MACOSX_DEPLOYMENT_TARGET|default("10.9") }} as a dependency; such packages might already be broken on old OSX anyway...
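To illustrate the footnote: a feedstock that bumps the deployment target itself could also record that requirement via the `__osx` virtual package, roughly like this (a sketch only; whether it belongs under `run` or `run_constrained` depends on the recipe):

```yaml
# hypothetical meta.yaml fragment for a feedstock that raises MACOSX_DEPLOYMENT_TARGET
requirements:
  run:
    - __osx >={{ MACOSX_DEPLOYMENT_TARGET|default("10.9") }}  # [osx]
```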
July 2023 update
Out of curiosity, I wanted to check when the last bump of the MACOSX_DEPLOYMENT_TARGET happened, and, as it turns out, it's been at 10.9 since the initial commit of https://github.com/conda-forge/conda-forge-pinning-feedstock 🤯
With a bit of digging (and luck), I found the bump from 10.7 to 10.9 though: https://github.com/conda-forge/toolchain-feedstock/commit/7a470c5ec71ad250bbfe6565016e793f3cc8f339 - 7 years ago. At the time, 10.9 was just before its EOL. If we applied the same standard today we should jump directly to 11.0.
Given that most users these days are on much newer versions, and want to use newer features (relevant for our packaging, like support for Metal or the new LAPACK implementation), I think it might make sense to stop dragging our feet so much on this. We're (slowly but steadily) moving with the times on linux and windows as well, so why should osx need to fall so far behind?
To substantiate this a bit more, I wanted to look at the usage numbers for different MacOS versions. The broadest measure is "everyone who uses a browser", but there are apparently no good usage numbers, because Apple keeps misreporting its OS version in HTTP headers, for some complicated reasons. I did find however that 92% are on 10.15+, which is the version every version after that pretends to be. Notably, all (distinguishable) versions that are EOL have at most 1-1.5% usage each (<7% cumulated) -- and again, this is all MacOS users, not just those of conda-forge.
FYI: We are not restricted by the xcode version and our builds are completely independent of the macos build image. See https://github.com/conda-forge/conda-forge-ci-setup-feedstock/blob/main/recipe/download_osx_sdk.sh
The phracker repo we get the SDKs from is no longer maintained.
our builds are completely independent of the macos build image
Yes, but we also use a hack to expose recent Metal stuff, e.g. in pytorch:
- https://github.com/conda-forge/pytorch-cpu-feedstock/blob/5df993a140f950439191ac1149045b7239eddf68/conda-forge.yml#L1-L5
- https://github.com/conda-forge/pytorch-cpu-feedstock/blob/5df993a140f950439191ac1149045b7239eddf68/recipe/conda_build_config.yaml#L1-L2
Maybe irrelevant to the main discussion here though...
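For context, the kind of configuration that hack involves looks roughly like the following - a sketch under the assumption that the feedstock requests a newer build image and SDK while keeping the old deployment target, not the literal contents of the linked files:

```yaml
# conda-forge.yml (sketch): request a newer Azure macOS image
azure:
  settings_osx:
    pool:
      vmImage: macos-12
```

```yaml
# conda_build_config.yaml (sketch): build against a newer SDK than the
# deployment target so that newer APIs (e.g. Metal) are visible at build time
MACOSX_SDK_VERSION:    # [osx and x86_64]
  - "11.0"             # [osx and x86_64]
```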
One aspect of this discussion that's flown under the radar for a long time is now starting to come to the fore. While we can patch around the C++ standard library by shipping our own up-to-date libcxx, we're dependent on the sdk for an up-to-date C standard library.
So far, many projects have not relied much on newer C functionality, but certain projects (crucially LLVM, which is our main compiler on osx) now start to rely on C11 features more and more. I've had a lot of problems building libcxx 15, and one of the big issues was the lack of C11 support on osx (and linux).
I've now solved this by reverting a commit that removed a lot of workarounds in the LLVM 15 timeframe, but this is not a long-term solution. Of course, we could raise the MACOSX_DEPLOYMENT_TARGET for libcxx, but given its central role in conda-forge, that's likely effectively equivalent to bumping the minimum version everywhere.
I haven't yet managed to find out (by googling or experimenting) which SDK added the required C11 functions. I can do that if people are interested.
libcxx 17 has recently dropped support for macOS<10.13 when building the shared libcxx library (I noted this in https://github.com/conda-forge/libcxx-feedstock/issues/110 when the RFC opened).
As far as I can tell, this is more fundamental than the availability workarounds we do (for building projects on top of libcxx), as it will really become impossible to build (shared) libcxx itself for targets <10.13.
Like for other fundamental toolchain dependencies (like upstream requiring vs2019 vs. vs2017 recently), I don't think it's feasible to try to keep patching back in support for older macos versions, as there will be zero coverage upstream and therefore things will bitrot fast.
Seeing that LLVM is our default toolchain on osx (and assuming I'm not overlooking something), I think we'll have to increase the minimum version to 10.13 at the latest when we switch our default compilers on osx to clang 17 (earliest that could happen would be late 2023; of course we could drag it out for quite a while longer by staying on clang 16 for another year or so afterwards).
PS. It's worth noting that libcxx had been waiting on chrome to bump their required minimum version to 10.13, which was apparently the last major "holdout" (from the POV of LLVM). So I think end of 2023 would actually be a very reasonable timeline to bump to 10.13; at that point 10.12 will have been EOL for 4 years.
@h-vetinari: It's worth noting that libcxx had been waiting on chrome to bump their required minimum version to 10.13, which was apparently the last major "holdout" (from the POV of LLVM)
I just noticed that abseil (and google more broadly[^1]) also raised its minimum to 10.13, and uses as official policy what LLVM did informally, i.e. MacOS support is defined in this document as:
We will support [macOS] back to the oldest macOS target platform needed by Chrome
[^1]: notably protobuf, but also google-cloud-cpp, gtest, and probably several others
There's also an overview in this table.
Reminder: To the degree that the lower bound of the SDK depends on C++'s stdlib, we can continue to circumvent this with _LIBCPP_DISABLE_AVAILABILITY, but on the C side there are no workarounds. From the outside, it's hard to say if a project is relying on C's stdlib functionality, but use of C11's aligned_alloc (including std::aligned_alloc on the C++ side) is a sure sign, for example.
Just want to chime in that several feedstocks are requiring the -D_LIBCPP_DISABLE_AVAILABILITY flag to get passing on osx-64 for the r-base=4.3 migration builds. This is specific to R 4.3 builds, which default to C++17 standard.
It's not widespread, but it is a front where the continued support for 10.9 adds to maintainer workload.
I am not 100% sure that this is the cause, but I believe that if nothing is done, there will be a flurry of problems for conda maintainers to deal with.
If nothing is done about the issue, conda users will start having a lot more problems soon, so I suggest the conda maintainers make some bold moves here.
One of the problems conda users are going to experience is that more and more packages without binary builds for 10.9 will fail to compile. This is already happening with Apache Airflow installing the google-re2 package. If someone has a conda installation set up for ~10.9 (I guess), they cannot install Airflow, because conda tries to compile a 10.9 variant of the package (there is no binary version) and it fails.
Apparently reinstalling conda from scratch helps. This is what we recommend to our users for now.
We do not officially support conda as an installation medium, and if this is not fixed soon and we get more issues, we might want to officially say that Airflow does not work with conda and direct users to pip only. We are at a loss here, because we cannot do anything about it other than telling our users to reinstall conda or move to pip.
I would really appreciate it if the conda maintainers had a solution for this; we are going to redirect our users to this issue, like we did in https://github.com/apache/airflow/discussions/32852#discussioncomment-6557411
@xylar - I believe you were at some point in time a maintainer of the Airflow packages on conda; I would really appreciate it if this problem could be taken off our hands at Airflow and handled by the conda maintainers.
If someone has a conda installation set up for ~10.9 (I guess), they cannot install Airflow, because conda tries to compile a 10.9 variant of the package (there is no binary version) and it fails.
I'm not sure if you're confusing conda and pip here, but conda will not compile anything when a package is not found (this is the default behaviour of pip though), which seems to be consistent with https://github.com/apache/airflow/discussions/32852.
It's also worth noting that this is in some ways the opposite problem than what this issue is about (aside from the fact that cross-use of conda & pip is always tricky, not recommended and not supported), namely that someone who has a new MacOS (13.5) is being wrongly detected as an old one. This issue is about raising the very conservative baseline version (10.9) against which our packages are compiled (successfully).
I have a vague suspicion that what's happening is that someone has a conda environment, tries to pip-install a package, pip doesn't find a wheel for some reason and thus compiles from source, and the compilation within the environment picks up an old MacOS SDK. If this issue is solved by reinstalling (==updating) conda, then this might also just be an old conda bug.
In any case, I'd appreciate if you didn't start casting blame before understanding the mechanics of the situation more deeply.
Unfortunately I do not know how conda works, nor do I have a history with conda under my belt, and I do not think I should have to learn about old conda bugs to help out users who hit them.
As I wrote, this was a suspicion I had, but I clearly stated that I do not know. I just know that conda users can get into this situation and are blaming Airflow. So I am not pointing at a specific cause, just trying to guess. If it is an old conda bug, it's still a conda problem, not an Airflow one :). And it's not about blaming anyone; it's about directing the users to the place where helpful conda maintainers might help them, or maybe even become aware of the issue and fix it permanently.
I just wanted to raise awareness of it, because at the beginning of the issue it was written that this is not a widespread problem - and I am signalling that it might become one soon. And I guess the conda maintainers are the only ones who can diagnose it and help (their own users too - because these are as much conda users as Airflow users we are talking about here).
So yeah, I am not blaming anyone; I am trying to direct the user problem to the place where the users can get help. If the conda maintainers help somehow by fixing conda, cool. If this is a conda bug, cool as well - same place, same people.
In either case, we will keep redirecting similar problems here (after proposing the reinstallation workaround), because we can't do anything else about it.
I hope it is clear and no bad feelings.
@h-vetinari - do let me know if my explanation is clear. I did not mean any blame casting, I just wanted to make sure that those of our users who have problems are properly redirected to a place where they can get help. If I made a wrong assessment, I'd love to be corrected.
@potiuk, there are two issues. The macos deployment target issue in python 3.7 and pybind11 not being found in python 3.8+. We don't maintain python 3.7 support anymore and you should ask users to use python 3.8+ with conda. pybind11 issue is unrelated to this issue.
No idea what you are asking for. Python 3.7 support has been dropped from Airflow (https://github.com/apache/airflow/pull/30963), but Airflow 2.6 still supports Python 3.7 (it would be a breaking change if it did not). The user was installing with Python 3.7, 3.8 and 3.9 via conda and all of them failed - so it is not a 3.7 issue.
pybind11 issue is unrelated to this issue.
I'm not even sure what pybind11 is, so I am not sure what I should ask the user for.
What I see is that when a user uses conda and installs Airflow, it tries to compile google-re2 (our dependency) against osx 10.9 libraries (6 years past end of life). I have no idea what the reason for it is, but we have already had it reported twice by users who use conda to install Airflow (which is, BTW, officially unsupported - we only support pip installation with constraints - and there are no such issues when they do that).
So I am not sure if I can help these users of ours in any way.
For now - I will redirect similar questions from our users to this thread, but if there is anything else I can do to help them - I am all ears.
For now the only solution I have is to ask the users to nuke their conda environment and reinstall it - this is what I am going to tell them.
One might say it's "blaming" conda - but this is the only workaround I can provide, and it has already proven to work for one of them, so I will continue to recommend it. But maybe there is another solution - not sure.
For now - I will redirect similar questions from our users to this thread,
Please don't. Ask them to give all of the information asked in https://github.com/conda-forge/google-re2-feedstock/issues/6 and redirect there.
Sure. It already has a link to this thread anyway, so users will find this thread too if they are curious.
What I am mostly interested in is being able to tell my users "here you will find help". I will also continue mentioning that nuking and reinstalling conda helped some of the users with a similar problem - just to make sure they have an easy workaround ready, rather than waiting for the issue to be solved in a better way.
Sure. I'm going to mark all of this as off-topic, since it is not related to the bumping of the minimum target at all. Also, the user is using the conda defaults channel and is not using conda-forge.
Just to make my intentions very clear: I am not being passive-aggressive, even if it might look like it. I believe you might perceive my answer that way, but it's not meant that way. I am really looking for a solution I can give to my users.
I am really after finding an answer I can give as a "recipe" for my users to solve their problems. I am not at all interested in blaming anyone. I just want to be able to tell my users "here is how to solve your problem".
If you are going to mark it "off-topic" - feel free. But that does not help conda users of Airflow get their problems solved, which is all I am after.
Without more information there's no point talking. Please fill out the information requested at https://github.com/conda-forge/google-re2-feedstock/issues/6
This is exactly what I am going to ask my users to do when they raise similar issues.
I have redirected all the reports of ours that we had - we also had some people on Slack with similar problems; I will direct them there as well.
Newest abseil requires MacOS 10.13 (and fails with 10.9). I'd like to migrate this together with the new libgrpc & libprotobuf, but seeing how widely abseil is used, that's effectively a conda-forge-wide decision (at least as far as C++ deps are concerned).
I saw that @chenghlee had put up the idea of dropping osx-64 support entirely (in the meeting notes of the last core call, though apparently it wasn't discussed yet). Given all of the above datapoints (libcxx, clang, abseil, protobuf, C11's aligned_alloc, how long ago the EOL was, the comparison to the last bump, etc.), is there any serious opposition to just bumping to 10.13?
@chrisburr should chime in. He grabbed a bunch of stats from pypi on what to move to next.
I don’t want this to move any faster than it has to. There’s no reason to leave users on older systems behind if we don’t have to.
...put up the idea of dropping osx-64 support entirely.
This was mostly to spur discussion of when/if we should start considering that, mostly because I expect that at some point, we will no longer have (native) osx-64 CI resources. That said, based on the PPC to x86 transition, I'm guessing Apple will keep supporting macOS on x86_64 for at least another 2-3 years, so that's a discussion we can continue to punt on for a while.
In my opinion, I think we should start considering dropping osx-64 when Apple drops Rosetta 2 support from osx-arm64 systems. Of course, if we stop having osx-64 CI, then we have no choice but to drop it…
I personally totally support increasing the minimum for macOS. I would even advocate going to 11 or even 12 directly. I completely sympathize with the need to support older linux systems (believe me, I have access to many of these gov systems myself) but if I understand things correctly, that simply doesn't apply to the Apple ecosystem (Apple is aggressive in pushing people to upgrade). If people have other experiences with Apple systems, please weigh in so that your voice is heard!
I'd push back on dropping osx-64. It seems premature, but I don't know why it is getting brought up here as it is tangential to the topic of macOS 10.9 support, which is starting to get pretty old
Agree with Matt we should collect data on OS version usage and see what we find
but I don't know why it is getting brought up here as it is tangential to the topic of macOS 10.9 support
If we were to drop osx-64 (not my proposal), then obviously this discussion becomes moot, hence the pretty direct relation. In any case, let's shelve that one, as @chenghlee is clearly not planning that in the short term.
Agree with Matt we should collect data on OS version usage and see what we find
Obviously numbers from our package downloads would be interesting. I already provided some numbers above. 95% of all internet users are on >=10.13, which includes an even longer tail than conda-forge. The fact that chrome dropped support for <10.13 should be a good indicator that even google with all its resources does not see this as worth supporting anymore.
Apple aggressively forces people to upgrade, and we're talking about moving from something that's been EOL almost 7 years to something that's "only" been EOL 3 years. The last upgrade in conda-forge was 7 years ago[^1], which is also staggering.
[^1]: to a version that wasn't EOL at the time; the equivalent of that today would be 11.0
There’s no reason to leave users on older systems behind if we don’t have to.
There are ample reasons in this thread already. We're on a 10-year-old SDK that is not just unsupported but now broken for many projects. We've managed to extend it way past its shelf life by hooking up our own C++ stdlib, but we've reached the end of the road.
Continuing to support <10.13 means we never upgrade abseil anymore, never upgrade LLVM anymore, etc. I have trouble finding neutral words for how bizarre I find this desire to support museum OS versions over current users and packages.
I only meant to say as old as possible. If 10.13 is that number, great!
I didn't realize you had numbers for all internet users above. Sorry about that.
Yeah I think the thing to keep in mind is Apple does a pretty good job of making updates available to fairly old hardware (and as Axel notes they push upgrades pretty assertively)
As an anecdotal example, for a long time I had a 2010 Mac laptop (no longer though) and saw these fairly regularly given out as loaner laptops at a previous employer. These laptops were able to keep upgrading to 10.13, but couldn't go any further than that
Generally agree with upgrading and 10.13 being potentially reasonable. Just wanted to make sure we had good data to back it up
The thing to remember is no matter how conservative the change, we will get some push back (this happened with Windows & Linux 32-bit as well as old Python versions, etc.). We just need to make a really good case for it. In the past we have done user surveys, which could be reasonable here
The thing to remember is no matter how conservative the change, we will get some push back
Sure, and we do our level best to keep things going as long as possible. What I don't understand is how these potential complaints can ever be more important than being able to build current packages.
Our hand is forced by the choices of fundamental packages in the ecosystem, so either we freeze ourselves in time, or those (presumed existing) users are just going to have to live with conda-forge packages appropriate for their ancient OS. I certainly don't see them maintaining patches to LLVM, qt, abseil, protobuf, grpc, etc.[^2]
[^2]: and we'd be carrying all the risk for intrusive patches, aside from presuming that we can maintain the upstream developers' own packages better than they can. It's just not a reasonable demand on feedstock maintainers, even if there were concrete complaints.
We just need to make a really good case for it. In the past we have done user surveys, which could be reasonable here.
Let me quote your own words from https://github.com/conda-forge/toolchain-feedstock/pull/9 for context:
As most people seem in favor of this change, our binaries only support 10.9 anyways, and we are starting to encounter very challenging issues by not trying to set the minimum at 10.7, it only makes sense to bump this to 10.9.
Given that keeping support for <10.13 means "very challenging issues" now[^1], I think a user survey (which, from what I can tell, was also not done for the jump to 10.9) is an unreasonable requirement and would be wasting scarce resources -- all for hypothetical complaints, and more importantly: without providing a feasible prospect for how to keep building current packages.
[^1]: very practically: what to do with abseil, grpc & protobuf, e.g. closing the protobuf migration means requiring >=10.13 for all dependent packages, because a proto4-compatible grpc requires >=10.13.
OK! So it looks like 10.13 is the minimum. I think we had settled on 10.12 or so a few years ago so that seems fine.
My understanding of what we need to do to make this happen:
- [x] make an announcement (https://github.com/conda-forge/conda-forge.github.io/pull/1993)
- [ ] decide how we want to enforce the __osx constraint
- [ ] Decide if we will allow folks to set a min version less than the global minimum.
  - If yes, do nothing.
  - If no, then we need a minimigrator to adjust / remove any custom minimum pins below 10.13
- [ ] adjust ci-setup package to download the right SDK? (it may use whatever is in the pins and so this may not be needed. I need to check.)
- [ ] adjust smithy? IDK if we need this or not.
- [ ] figure out if there are any finicky builds on osx that need adjusting? I recall some but maybe that is out of date by now.
- [ ] patch old builds with 10.9 __osx constraint?
- [ ] bump the minimum target in global pinnings
What else did I miss?