
Consider not capping major versions for many of our dependencies

Open choldgraf opened this issue 3 years ago • 42 comments

Description

Currently, we cap major versions of most dependencies. This helps ensure that we have stable behavior when dependencies release new major versions. When a new major version is released, we manually bump our cap and make sure in the PR that behavior still works as expected.

There are obvious benefits to this approach, but I think there are also significant costs, and I wonder if we can relax this constraint a bit and take a more nuanced approach.

The problem we're introducing for users

By capping major versions, we may introduce headaches for our users. These are particularly problematic when they have complex environments with many packages installed. Specifically:

We run the risk that our dependency chain will be unresolvable due to a conflict with some other package's dependencies.

If this happens, there's nothing that our users can do, other than wait for us to make a new release. Here's an extreme example of this with click. tl;dr: Click released a new major version, one of our dependencies required that version, and it was then impossible to install Jupyter Book in a fresh environment until we made a new release.

There are a lot of blog posts arguing back and forth about this, but here's a nice recent one that our own @chrisjsewell has chimed in on as well: Should You Use Upper Bound Version Constraints? from Henry Schreiner.

Why we shouldn't just un-cap everything

There are definitely benefits to having major version caps for the dependencies that we know will introduce breaking changes with version releases (looking at you docutils) or those that have lots of complex interconnections with our codebase that might lead to unexpected outcomes. When we cap the versions for these dependencies, we help save our users unexpected headaches.

So, I wonder if we can find a compromise and adopt a policy like the following:

Proposal: cap our own stack and unstable dependencies, and do not cap other dependencies

  • We do cap:
    • Our own EBP tools (because we know that we follow semver and can coordinate releases)
    • sphinx and docutils (because they introduce many changes, are very complex, and our stack has many connection points with them)
    • Anything else?
  • By default, we do not cap versions for other dependencies, UNLESS we determine that:
    • A dependency is known to be unstable
    • A dependency has a history of breaking changes with new versions
    • A dependency is interwoven with our stack enough that major versions will likely cause confusion or problems

For the second group, we can selectively add dependency caps if we know there's a breaking change that we can't quickly resolve, and we can make a decision whether to keep that cap if we expect subsequent releases to also introduce breaking changes.
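To make that split concrete, here is a rough sketch of what the resulting constraint set might look like, expressed as pip requirement specifiers. The package names and version bounds below are purely illustrative, not a proposal for exact numbers:

```bash
# Illustrative only -- real bounds would be decided per release.

# Capped: our own EBP tools, plus sphinx/docutils, which are deeply interwoven
# with our stack and known to ship breaking changes.
pip install "myst-nb>=0.13,<0.14" "sphinx>=3,<5" "docutils>=0.15,<0.18"

# Uncapped (lower bounds only): dependencies we use in a targeted way and do not
# expect to break our usage, e.g. click and pyyaml.
pip install "click>=7.1" "pyyaml>=5.4"
```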

I think that this could be a way to balance "dependability of our environment" against "constraints that cause headaches for our users". For example, in the case of click, we likely would not have used a cap because we use it in a targeted fashion, and wouldn't expect it to break our usage in the ways we're utilizing it. This would have saved us the time of then manually bumping click across our repositories, and cutting new releases.

EDIT: After some discussion below, I also propose that we add a CI/CD job for unstable versions to our most heavily-used repositories. These could be canary deployments that run against main/master, and we can use them to know when a breaking change is coming down the pipeline.

Thoughts?

What do others think? Would this be a net-benefit for our time + the time of users? In particular I am curious what @chrisjsewell thinks since I know you care about UX stability for our users!

choldgraf avatar Feb 02 '22 00:02 choldgraf

Firstly, just to clarify:

Here's an extreme example of this with click. tl;dr: Click released a new major version, one of our dependencies required that version, and it was then impossible to install Jupyter Book in a fresh environment until we made a new release.

This issue was due to a deficiency in pip-tools (https://github.com/jazzband/pip-tools/issues/1372; as I just mentioned in https://github.com/executablebooks/jupyter-book/issues/1590#issuecomment-1027480224), not because it was actually impossible to install, e.g. using pip.

Using upper pinnings should never prevent jupyter-book from being installed when a new dependency version is released; in fact, it should be quite the opposite.

chrisjsewell avatar Feb 02 '22 01:02 chrisjsewell

So this is where I stand:

Just looking at jupyter-book for now: it is almost an "end-user tool", whereby you could actually pin every dependency to an exact version and know that it will never break. But it's not quite that, because we also want to allow "power users" to install it alongside other python packages, such as sphinx extensions. So hard pinning is probably out of the question (and also you would never get bug fix updates).

If you remove upper pinnings, jupyter-book will always eventually break: even for a basic user installing only jupyter-book into a fresh python environment, eventually there will be a breaking change in a dependency that causes jupyter-book to fail in some way.

Those users will then come to us and say "hey, jupyter-book is broken": what do we do then? Do we say: well, that's not our problem, you need to create your own requirements file, pinning all the problematic dependencies, until we have time to update jupyter-book and release a new version, and you then also need to remember to unpin these dependencies when a new release of jupyter-book actually lands?

Is this really what you want to be telling every user of jupyter-book, i.e. that it's not enough to simply install jupyter-book and expect it to work?

Now, it's true that pinning major versions does not "guarantee" that a dependency update will never break jupyter-book; it is quite possible for dependencies to (inadvertently or otherwise) introduce breaking changes in minor versions. But it at least gives you a fighting chance, whilst still allowing for some flexibility in version compatibility.

If you have upper pinnings, it is indeed true that eventually there will be (primarily power) users who encounter an issue with wanting to use the latest version of another python package alongside jupyter-book. They will indeed have to wait for us to update/relax our upper pinnings before they can do this. I understand their likely frustration, but at the same time I feel this will affect a significantly smaller portion of the user base.

chrisjsewell avatar Feb 02 '22 01:02 chrisjsewell

Basically, the two extremes here are:

  1. You provide jupyter-book as completely hard/exact pinned, for every dependency (and python version); we can "reasonably" guarantee it will never break, but it probably can't be used alongside any other packages, and also only new jupyter-book versions will get dependency bug fixes, etc. This is probably what a lot of "basic/non-technical" users want (i.e. simplicity/stability).

  2. You provide jupyter-book as completely unpinned for upper bounds, for every dependency (and python version); it basically cannot be used without an accompanying "lock file" (requirements file specifying all the pins), and we would provide no guarantees of it working. It would though allow power users the flexibility to try getting it to work with any python package they wanted.

In principle, you could have these as two corresponding releases of jupyter-book, e.g. jupyter-book and jupyter-book-unpinned
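For extreme (2), the accompanying lock file would be something the user maintains themselves; a minimal sketch with pip-tools (the file names and extra packages listed here are just placeholders):

```bash
# Sketch of the user-maintained lock file that extreme (2) would require.
pip install pip-tools

# requirements.in lists only the top-level packages the user actually cares about.
printf 'jupyter-book\nnumpy\nmatplotlib\n' > requirements.in

# pip-compile resolves and pins the full dependency tree into requirements.txt ...
pip-compile requirements.in

# ... and pip-sync makes the environment match that lock exactly.
pip-sync requirements.txt
```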

chrisjsewell avatar Feb 02 '22 02:02 chrisjsewell

I think that you raise a lot of good points - agreed we don't want Jupyter Book to become unstable for people because our upstream dependencies introduce breaking changes in unexpected ways. As you say, it's a balance between these extremes.

Quick clarification on the dependencies our users likely bring with them

One clarification: maybe I am not understanding the way dependencies work in Jupyter Book, but I think that installing jupyter-book alongside many other dependencies is not restricted to power users. Our target user personas are most likely "data scientist types, often in research and education communities".

In my experience, this user persona does almost no environment management. They just use a single kitchen sink environment and pip/conda install things into it over time, with no thought given to versions, dependency chains, etc. The power users are the ones that use something like multiple conda environments, virtualenv, etc. Even if they did do some environment management, they still need all of the packages to execute their content installed in the same environment as Jupyter Book, right?

So I would think that our users in particular are likely to be vulnerable to "kitchen sink environments causing dependency constraint conflicts". (That said, we have had a few reports of this but not a ton, so I don't know how big an issue it is.)

Investigation into other projects in adjacent ecosystems

I was asking around other projects to see how they handle this. It doesn't seem like many of them use upper limits for their dependencies (e.g. Dask doesn't, scikit-learn doesn't, pandas doesn't, and JupyterLab uses a hybrid approach).

So what do these projects do to avoid this potential problem?

A common pattern was including a CI/CD job that explicitly tests against unstable versions of upstream dependencies. That way they know before releases are made if it will break their test suites. For example:

  • XArray uses this shell script to install from master across a bunch of projects: https://github.com/pydata/xarray/blob/main/ci/install-upstream-wheels.sh
  • Dask follows the same approach: https://github.com/dask/dask/blob/main/continuous_integration/scripts/install.sh
  • The SatPy project has an unstable CI job that runs their test suite against unstable branches of upstreams: https://github.com/pytroll/satpy/blob/e27414fb8c00216f251f9e464d72a8ab62f9ba54/.github/workflows/ci.yaml#L98-L118

Speaking to folks in those communities, it sounds like the workflow is roughly:

  • If their "unstable test" fails:
    • They patch it locally until the test passes (they generally don't block PRs unless the PR causes the failing test).
    • If this is an upstream regression/bug, they file an issue or upstream a PR to fix it.
    • They either delay their release until the unstable test is passing (so that they know the next upstream release won't break it), or they temporarily pin the upstream and un-pin it when the problem is resolved.

Maybe this could be a nice balance between "make sure our users don't have instability" and "don't constrain our users' dependency chains too strongly". Our tech stack is a bit different, so I'm not sure how this would work with, e.g., our HTML regression tests and such, but maybe it's worth a try?
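For concreteness, an "unstable upstreams" step for our repos might look roughly like the sketch below (the choice of packages, the extras name, and the use of pytest are assumptions, not a worked-out list):

```bash
# Normal install of the package under test (extras name "testing" is assumed).
pip install -e ".[testing]"

# Swap in pre-releases of the most important upstreams where they exist ...
pip install --upgrade --pre sphinx docutils

# ... and/or install the development branch of the most critical one directly.
pip install --upgrade "git+https://github.com/sphinx-doc/sphinx.git"

# Then run the regular test suite; failures here signal a breaking change coming
# down the pipeline rather than a bug in the current PR.
pytest
```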

Curious what you/others think about it.

choldgraf avatar Feb 05 '22 21:02 choldgraf

In my experience, this user persona does almost no environment management. They just use a single kitchen sink environment and pip/conda install things into it over time, with no thought given to versions, dependency chains, etc.

Then you can't really use https://iscinumpy.dev/post/bound-version-constraints as a reference point, since the author specifically states in the comments:

if someone knows how to work with code, they should know how to install packages, pin dependencies, and hopefully knows how to use a locking package manager or lock file. If they don't, you probably have more problems than just this one

i.e. a major point of the thesis is that you put the responsibility of version management on the user.

chrisjsewell avatar Feb 06 '22 00:02 chrisjsewell

Even if they did do some environment management, they still need all of the packages to execute their content installed in the same environment as Jupyter Book, right?

Well, your kernel for code execution can be entirely separate from your book-building environment: https://ipython.readthedocs.io/en/stable/install/kernel_install.html#kernels-for-different-environments. It's really the better way to do it, to separate concerns (book building vs code execution), and you could even have different environments specialized for different notebooks. Although I don't think this is possible via ReadTheDocs, unfortunately.
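A minimal sketch of that separation (the environment names and the packages installed into the kernel environment are placeholders):

```bash
# Environment that builds the book: only needs jupyter-book.
python -m venv build-env
build-env/bin/pip install jupyter-book

# Environment that executes the notebooks: everything the code actually imports.
python -m venv kernel-env
kernel-env/bin/pip install ipykernel numpy pandas
kernel-env/bin/python -m ipykernel install --user --name book-kernel

# Notebooks whose kernelspec is "book-kernel" will execute in kernel-env, even
# though the build itself runs from build-env.
```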

chrisjsewell avatar Feb 06 '22 00:02 chrisjsewell

@choldgraf invited me to weigh in here from Twitter, thanks 😄

My perspective is that I am 99% against upper-level pins, and only not 100% because "never say never". I personally struggled with the pins in jupyter-book and related projects as I was developing MyST-NB-Bokeh: I couldn't install a dev version of any of the dependencies because they were pinned so tightly. I had to install everything with --no-deps to get an environment that pip could solve. So that wasn't a fantastic experience, although granted it was a little bit outside the normal user experience.

I'll also reply to a couple of comments from the thread that stood out to me.

Our target user personas are most likely "data scientist types, often in research and education communities".

I suspect this target persona does a fair amount of environment management, especially if they use conda. I think this is generally taught as "best practice" when you're learning conda. That said, they can still run into conflicts between dependencies of jupyter-book and any dependencies to run their code, especially Sphinx/Jupyter extensions as @chrisjsewell noted.

Do we say: well, that's not our problem, you need to create your own requirements file, pinning all the problematic dependencies, until we have time to update jupyter-book and release a new version

Yes, I think this is what you should say. But instead of creating a requirements file, say "please do pip install dependency==version until we fix this. Then run pip install -U jupyter-book dependency when we release the new version" and it's all better.

Even if they did do some environment management, they still need all of the packages to execute their content installed in the same environment as Jupyter Book, right?

Well, your kernel for code execution can be entirely separate from your book-building environment: https://ipython.readthedocs.io/en/stable/install/kernel_install.html#kernels-for-different-environments. It's really the better way to do it, to separate concerns (book building vs code execution), and you could even have different environments specialized for different notebooks.

So your position is that environment management/requirements.txt file is too complicated or not desirable, but multiple execution environments for different notebooks is feasible? That doesn't seem consistent to me...

Now, it's true that pinning major versions does not "guarantee" that a dependency update will never break jupyter-book; it is quite possible for dependencies to (inadvertently or otherwise) introduce breaking changes in minor versions. But it at least gives you a fighting chance, whilst still allowing for some flexibility in version compatibility.

If you have upper pinnings, it is indeed true that eventually there will be (primarily power) users who encounter an issue with wanting to use the latest version of another python package alongside jupyter-book. They will indeed have to wait for us to update/relax our upper pinnings before they can do this. I understand their likely frustration, but at the same time I feel this will affect a significantly smaller portion of the user base.

I doubt this is true. I think if you're at a point where you're sharing enough content that you want to put it into jupyter-book, you're already a reasonably advanced user. I also don't agree that pinning major versions gives you a fighting chance; as @choldgraf noted if any dependency starts to rely on a dependency that you don't support, no one can install JupyterBook. Even without that case, it just defers the headache to your users instead - and especially to power users who could otherwise be evangelizing for your software.

As to the proposal by @choldgraf:

Proposal: cap our own stack and unstable dependencies, and do not cap other dependencies

I think capping the EB stack in jupyter-book is a reasonable stance to take, since it really is meant more as an end-user application, although it's still somewhat annoying. At least I can do pip install jupyter-book && pip uninstall jupyter-book && pip update any deps I need && pip install --no-deps jupyter-book, or conda install --dependencies-only (I think that's a flag...).
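Spelled out as actual commands, that workaround is roughly the following sketch ("some-dep" is a placeholder; with current pip the forced upgrade installs but prints a conflict warning rather than refusing):

```bash
# Get jupyter-book and its capped dependency set installed first.
pip install jupyter-book

# Force the one dependency I need past the cap; pip installs it but warns that
# jupyter-book's requirement is no longer satisfied.
pip install --upgrade "some-dep>=2"  # hypothetical package/version

# Alternatively, reinstall jupyter-book itself without letting pip touch
# dependencies at all.
pip install --no-deps --force-reinstall jupyter-book
```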

Capping dependencies further down the stack is much harder to overcome. Now I need to fork that package, bump its dependencies, and then install from my fork if I want to update anything. Since those (MyST-NB, MyST-parser, etc.) are (to me) meant more as libraries that folks can install, the chances of having dependency conflicts are much higher.

By the way, feel free to pin docutils with == to whatever version works. And then rip it out and replace it wholesale one day 😄

bryanwweber avatar Feb 06 '22 01:02 bryanwweber

Heya,

as @choldgraf noted if any dependency starts to rely on a dependency that you don't support, no one can install JupyterBook

That's incorrect: you can't "start to rely on a dependency" without releasing a new version, so jupyter-book could still be installed with the old version of that dependency.

chrisjsewell avatar Feb 06 '22 02:02 chrisjsewell

I couldn't install a dev version of any of the dependencies because they were pinned so tightly.

Can you give a dev version of what you were trying to install?

chrisjsewell avatar Feb 06 '22 02:02 chrisjsewell

Heya,

as @choldgraf noted if any dependency starts to rely on a dependency that you don't support, no one can install JupyterBook

That's incorrect: you can't "start to rely on a dependency" without releasing a new version, so jupyter-book could still be installed with the old version of that dependency.

Only if that old version is still available... It could get yanked due to a security vulnerability or any other reason, really. But this is a side point anyway.

I couldn't install a dev version of any of the dependencies because they were pinned so tightly.

Can you give a dev version of what you were trying to install?

I don't remember the versions precisely, but it was equivalent to this: jupyter-book had the dependency jupytext~=1.11.3, but I wanted to install jupytext==1.12.0-dev to test against. Something like that; I don't recall exactly which package it was. I can go look if you're interested in more specifics.

bryanwweber avatar Feb 06 '22 02:02 bryanwweber

As a casual user I ran into overly strict pinning by various jupyter-book components as well. In my case it was nothing dramatic, more of a minor nuisance, especially since forcing pip to ignore the pinning worked just fine™.

I think another cost for the end user that isn't mentioned is the inability to get bugfixes in the dependencies if they land in the newer versions. While larger packages would backport bugfixes, in practice this doesn't happen too often in the ecosystem.

Well, your kernel for code execution can be entirely separate from your book-building environment: https://ipython.readthedocs.io/en/stable/install/kernel_install.html#kernels-for-different-environments. It's really the better way to do it, to separate concerns (book building vs code execution), and you could even have different environments specialized for different notebooks.

I agree that it is cleaner to separate book building from code execution, but it adds maintenance complexity, especially to CI. In practice I expect that hardly anybody does this. Is there an EBP repository that can serve as an example of this more systematic approach?

akhmerov avatar Feb 06 '22 10:02 akhmerov

I think another cost for the end user that isn't mentioned is the inability to get bugfixes in the dependencies if they land in the newer versions.

Pinning to major versions means you get every bug fix, from minor and patch releases

chrisjsewell avatar Feb 06 '22 10:02 chrisjsewell

Pinning to major versions means you get every bug fix, from minor and patch releases

Of course. I expect, however, that some bugfixes only land in major releases.

akhmerov avatar Feb 06 '22 11:02 akhmerov

I just landed here and have only read the first post in this thread. When packaging the various Executable Books projects on conda-forge, these upper bounds are often problematic for the conda solver. Taking them away would give the SAT solver more flexibility in finding solutions that are still likely to result in consistent, functioning environments for users. This would mean fewer headaches for downstream packagers, environment managers, and users.

moorepants avatar Feb 12 '22 10:02 moorepants

One option is to not add upper bounds on dependencies in general, but to add them in bugfix releases once you know a dependency update breaks an already-released version.

Example:

You release version 4.1.0 of your package, which depends on NumPy. You know 4.1.0 only works with numpy >=1.3.0 at present, so that is the pin you set. At some point NumPy releases 1.13 and that breaks your package's version 4.1.0. So you go back and release a 4.1.1 with an upper pin numpy >=1.3,<1.13.

So instead of trying to predict, in the present, which future versions might break your package, just wait until a dependency actually breaks one of your old versions and then fix it with a bugfix release.

This would allow flexible package manager solves as time moves on, but also gives you the control to keep the latest bugfix releases working for as many past versions of your software as you want to maintain.

This wouldn't work for anyone installing with exact version pins to your software, but it would work for anyone installing with X.* or X.X.*. If users are installing with exact version pins, then they should be (or are probably) using a lock file type setup so that the whole tree of dependencies is pinned. Otherwise those users would need to watch out for bug fix releases for the version of your software they are using.
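From the user's side, the difference between those two install styles looks roughly like this sketch (the package name and versions are the hypothetical ones from the example above):

```bash
# Tracking the 4.1 bugfix series: the retroactively-capped 4.1.1 (and with it
# the numpy<1.13 bound) is picked up automatically on upgrade.
pip install --upgrade "yourpkg==4.1.*"  # hypothetical package

# Pinning the exact version: 4.1.0 stays installed, so this user also needs a
# lock file (or to watch for the 4.1.1 bugfix) to avoid the broken numpy combo.
pip install "yourpkg==4.1.0"
```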

moorepants avatar Feb 12 '22 16:02 moorepants

At some point NumPy releases 1.13 and that breaks your package's version 4.1.0. So you go back and release a 4.1.1 with an upper pin numpy >=1.3,<1.13.

Why would the solver install 4.1.1 and numpy 1.12, and not just 4.1.0 and numpy 1.13?

chrisjsewell avatar Feb 12 '22 16:02 chrisjsewell

Why would the solver install 4.1.1 and numpy 1.12, and not just 4.1.0 and numpy 1.13?

Because the user would be upgrading your software and the package manager solver tries to give the newest possible versions of all dependencies.

moorepants avatar Feb 12 '22 16:02 moorepants

Why would the solver install 4.1.1 and numpy 1.12, and not just 4.1.0 and numpy 1.13?

Which is exactly what the "Backsolving is usually wrong" section of https://iscinumpy.dev/post/bound-version-constraints/ explains, i.e. once you remove upper pinning, that's it: the only fix for breaking changes is to release a new version that supports the change, or to tell users to pin.

chrisjsewell avatar Feb 12 '22 16:02 chrisjsewell

Because the user would be upgrading your software and the package manager solver tries to give the newest possible versions of all dependencies.

Why is 4.1.0 + 1.13 not newer than 4.1.1 + 1.12, when considering all dependencies?

chrisjsewell avatar Feb 12 '22 17:02 chrisjsewell

Just want to clarify here: I am not advocating for every package upper-pinning; I see the arguments for libraries. But I'm stressing, there is a big difference between something like markdown-it-py and jupyter-book; the specific goal of jupyter-book is to make things as easy as possible for non-technical users. If you are a developer and you are using jupyter-book, IMHO I'd suggest you shouldn't be (I don't); you should be using the "lower level" myst-nb etc. jupyter-book really does nothing but collate them in an opinionated application (and part of that opinion is the pinning). On that note, another point I want to make is that the best way to avoid dependency issues is not to have any 😄 With myst-nb, I will soon remove nbconvert, jupyter-sphinx and ipywidgets; I've already removed nbdime from jupyter-cache, and am working on removing attrs from markdown-it-py.

chrisjsewell avatar Feb 12 '22 17:02 chrisjsewell

Why is 4.1.0 + 1.13 not newer than 4.1.1 + 1.12, when considering all dependencies?

It isn't, but we don't consider all dependencies in Python (at least not for libraries, and often not for apps either). My suggestion assumes the user is trying to keep some small set of packages at the top of their stack up-to-date and functioning, either one package or a collection of packages. If I'm using your package in my environment, I have n packages I directly depend on. I only care that the API from those n packages works for what I'm doing (user or library viewpoints). Everything my n packages depend on can be any version, as long as the API for the n packages is fixed and the dependency stack fits the constraints of all packages involved.

There is the rare situation where I need API from your 4.1.* and API from numpy 1.18.*; at that point I'd be out of luck and would have to modify my code to work with either a newer version of your package or older versions of numpy. There's no way around that.

With npm you could have numpy 1.18 and numpy 1.12 installed in the same dependency tree, but you can't with pip or conda. The latter try to provide the latest versions of all packages in the entire tree, given a desired state for the subset of packages you specify in your requirements.txt, setup.py, pyproject.toml, environment.yml, etc.

I'm suggesting something similar to metadata patches, which are also mentioned in the blog post:

There is actually a solution for this outside of PyPI/pip packages; you can support metadata patches, allowing a release to be modified afterwards to add new compatibility constraints when they are discovered. Conda supports this natively; you can also approximate this with post releases and yanking in PyPI, but it's a pain.

We can't patch the metadata, but we can release a bugfix with the upper bounds. But yes, things could still break for a user, because any dependency in the stack could have an incompatibility that conflicts with the new upper bound you add. That should be rare, though, and honestly, people just have to upgrade their software to versions that are compatible at the present time. It's very hard (sometimes impossible) to maintain compatibility for old software stacks without exact pins; for example, try installing Plone 2 from the early 2000s with buildout, which does use exact pins.

moorepants avatar Feb 12 '22 17:02 moorepants

But I'm stressing, there is a big difference between something like markdown-it-py and jupyter-book

There may be a big difference if you think of them each as the only package you are trying to get working in a given "top of the stack" environment (environment for me = a collection of consistent packages). A user may be using jupyter book to explain concepts that require a large number of software packages, so they also need to have all that software installed along with jupyter book in the same environment because jupyter book executes code cells in the book. So adding hard constraints on jupyter book's dependencies means it could then be impossible to set up the environment you need for your book.

moorepants avatar Feb 12 '22 17:02 moorepants

so they also need to have all that software installed along with jupyter book in the same environment because jupyter book executes code cells in the book

Jupyter book could have fixed pins if you execute the jupyter cells in a different environment. That should be possible through selecting the right kernel (right?). If that can be done, then you can have two environments: 1) an environment with only jupyter book and 2) an environment with all the software you need to execute cells in your book.

moorepants avatar Feb 12 '22 17:02 moorepants

so they also need to have all that software installed along with jupyter book in the same environment because jupyter book

I've already explained why this is not actually the case; a core concept of jupyter is having the kernel (where all the software is installed) separate from the client environment, i.e. where jupyter-book is installed. We should make this easier for users to achieve

chrisjsewell avatar Feb 12 '22 17:02 chrisjsewell

a core concept of jupyter is having the kernel (where all the software is installed) separate from the client environment

It is, but it's also a very confusing and unusual concept for most users. Setting up different virtual environments is a tough concept for average users too. Most expect conda install x and pip install x to just work and most have all their packages in the base environment.

moorepants avatar Feb 12 '22 17:02 moorepants

Setting up different virtual environments is a tough concept for average users too.

Which is exactly why I said we should make it easier 😉

chrisjsewell avatar Feb 12 '22 18:02 chrisjsewell

Sorry, couldn't resist 😅, but what's going on over at https://github.com/pallets/jinja and https://github.com/pallets/markupsafe kind of highlights my "fear": within a few hours of a MarkupSafe release that breaks a version of Jinja, they are now inundated with people telling them their package is broken (many because Read the Docs builds are suddenly failing):

[screenshots of the incoming issue reports on the pallets repositories]

To be fair, they are telling people:

You are using an unsupported version of Jinja, please update to the latest version ... then use a tool like pip-tools to pin your dependencies

But I doubt that's going to stop them from getting many, many more "complaints" before this is over.

chrisjsewell avatar Feb 18 '22 15:02 chrisjsewell

Sorry, couldn't resist 😅, but what's going on over at https://github.com/pallets/jinja and https://github.com/pallets/markupsafe kind of highlights my "fear": within a few hours of a MarkupSafe release that breaks a version of Jinja, they are now inundated with people telling them their package is broken

Yep, that's a fair concern, I don't think anyone disputes that this could happen. However, the EB projects are not nearly at the level of adoption that Jinja is at, and I think you're optimizing for the wrong problem at the current moment in the EB hype cycle. Besides, what's the worst case? You end up with a few dozen issues that you have to close? Even a few hundred could be handled in not too long a time, especially if it's a generic response like "Please upgrade".

bryanwweber avatar Feb 18 '22 15:02 bryanwweber

Even a few hundred could be handled in not too long a time, especially if it's a generic response like "Please upgrade".

As long as one of you guys is willing to man the issue boards when this happens 😅

But it is not just the problem of closing issues with a generic response; even if you think those that opened the issues are "in the wrong", do you really think most of them will see it that way? I just can't see this being a particularly positive experience for users.

chrisjsewell avatar Feb 18 '22 15:02 chrisjsewell

As long as one of you guys is willing to man the issue boards when this happens 😅

Sure, happy to help! 😃

But it is not just the problem of closing issues with a generic response; even if you think those that opened the issues are "in the wrong", do you really think most of them will see it that way? I just can't see this being a particularly positive experience for users.

Yeah, perhaps not ideal, but given specific instructions for how to resolve it, I don't see it as too negative. And anyway, that particular issue is theoretical at the moment for EB users, whereas the problems caused by the capped pins are real and being felt by at least three power users in this thread.

bryanwweber avatar Feb 18 '22 18:02 bryanwweber