Chrome 134 breaks Selenium tests - intermittent failures with visit(visit_uri)
My test suite runs fine with Chrome 133 (Selenium 4.29.0-20250222).
Upgrading to Chrome 134 (Selenium 4.29.0-20250303) results in intermittent failures due to the session's path not updating after a call to Capybara::Session#visit. In isolating the problem I did something like this:
...
visit '/foo/bar'
puts page.current_path # => '/' - i.e. path not updated
...
I've temporarily fixed the problem with a workaround which overrides Capybara::Session#visit like so in my test helper:
def visit(url)
  10.times do
    Capybara.current_session.visit url
    return if page.current_path == url
    sleep 0.1
  end
  abort 'Capybara::Session#visit broken after 1 second wait'
end
Happy to provide an MRE if this is not a known problem.
Note this does not appear to be a duplicate of #2793, as the problem is not observed with Chrome 133.
Ruby 3.4.2
Capybara 3.40.0
selenium-webdriver 4.29.1 (in Docker container)
OS: LMDE faye (based on Debian bookworm)
I'm experiencing the same issue.
I downgraded the following and all of my builds still fail, which makes me suspect there is a sub dependency that is causing the issue somewhere.
- capybara
- selenium-webdriver
- Chrome
- The GitHub workflow runner (tested ubuntu-22.04 and ubuntu-latest)
I even tried to run a build from many weeks ago and that fails with the same errors.
It's not only visit; we've been finding that every interaction that involves a page load will fail due to the browser not waiting for the next page load.
For example, this will fail:
assert_difference -> { Account.count } do
  click_button "Sign up"
end
We're seeing this too. In our case we've only seen it w/ clicks that submit a form. It also seems to be a race condition as to whether the click will return immediately or wait for navigation. The browser still waits for the page to load, so if the next action is another action that uses the browser (another click_on, has_content? ..., etc) then things will still work as the next action will still have to wait for the page to load from the previous click. However if the next action does something outside of the browser (like check the DB, Account.count in the example above), and click_on returned immediately then the action might happen before the next page load.
We can work around the issue by checking for a result in the browser after the form submission, before checking for side effects, but it's still surprising to have inconsistent behavior.
We can make it very obvious by adding something like sleep 5 to the route handler, then we can see the click_on sometimes return after 5s and other times return immediately.
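The "check for a result in the browser first" workaround relies on polling: Capybara's waiting matchers (has_content?, has_current_path?, etc.) retry until the condition holds or a timeout passes. Conceptually, that loop looks like this plain-Ruby sketch (wait_until is my name, not a Capybara API):

```ruby
# Poll a block until it returns truthy or the deadline passes. In a spec you
# would use a browser-side check (e.g. page.has_css?("#welcome"), selector
# hypothetical) before any out-of-browser check like Account.count.
def wait_until(timeout: 2, interval: 0.05)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + timeout
  loop do
    return true if yield
    if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
      raise "condition not met within #{timeout}s"
    end
    sleep interval
  end
end
```

Capybara's matchers do this internally, which is why inserting any waiting matcher between click_button and the database assertion is usually enough.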
We're using Selenium and Chromedriver and these are the logs of the click w/ a 5s delay:
selenium
2025-03-14 15:18:52 DEBUG Selenium [:command] -> POST session/faa1e3e078d1ec72a9223ae77b4cf4c5/element/f.1320DB0BC7087CA385C0B8D07C31977E.d.641C2A4E3A8AED4C38CDA2CE45316E7C.e.59/click
2025-03-14 15:18:57 DEBUG Selenium [:header] <<< {"content-length"=>["14"], "content-type"=>["application/json; charset=utf-8"], "cache-control"=>["no-cache"]}
2025-03-14 15:18:57 DEBUG Selenium [:command] <- {"value":null}
chromedriver
[03-14-2025 15:18:52.307431][INFO]: [faa1e3e078d1ec72a9223ae77b4cf4c5] COMMAND ClickElement {
"id": "f.1320DB0BC7087CA385C0B8D07C31977E.d.641C2A4E3A8AED4C38CDA2CE45316E7C.e.59"
}
[03-14-2025 15:18:52.307472][INFO]: Waiting for pending navigations...
[03-14-2025 15:18:52.307881][INFO]: Done waiting for pending navigations. Status: ok
[03-14-2025 15:18:52.341655][INFO]: Waiting for pending navigations...
[03-14-2025 15:18:57.504210][INFO]: Done waiting for pending navigations. Status: ok
[03-14-2025 15:18:57.504354][INFO]: [faa1e3e078d1ec72a9223ae77b4cf4c5] RESPONSE ClickElement
...and these are the logs when it returns immediately:
selenium
2025-03-14 15:17:28 DEBUG Selenium [:command] -> POST session/9e2f02a196c30375806e6b7b2b547382/element/f.6CEA5FEDD794621E4A99DCAF55FB1F49.d.905E6E26A4B9B078C5A6ED2D5EDF03B7.e.59/click
2025-03-14 15:17:29 DEBUG Selenium [:header] <<< {"content-length"=>["14"], "content-type"=>["application/json; charset=utf-8"], "cache-control"=>["no-cache"]}
2025-03-14 15:17:29 DEBUG Selenium [:command] <- {"value":null}
chromedriver
[03-14-2025 15:17:28.968841][INFO]: [9e2f02a196c30375806e6b7b2b547382] COMMAND ClickElement {
"id": "f.6CEA5FEDD794621E4A99DCAF55FB1F49.d.905E6E26A4B9B078C5A6ED2D5EDF03B7.e.59"
}
[03-14-2025 15:17:28.968896][INFO]: Waiting for pending navigations...
[03-14-2025 15:17:28.969347][INFO]: Done waiting for pending navigations. Status: ok
[03-14-2025 15:17:29.000385][INFO]: Waiting for pending navigations...
[03-14-2025 15:17:29.000834][INFO]: Done waiting for pending navigations. Status: ok
[03-14-2025 15:17:29.000899][INFO]: [9e2f02a196c30375806e6b7b2b547382] RESPONSE ClickElement
...based on those my guess is that the race is in Chrome or Chromedriver and Capybara/Selenium may not be responsible for waiting for navigation after the click.
EDIT: Possibly related bug report for Chrome/Chromedriver: https://issues.chromium.org/issues/402796660
Also experiencing the issue since Chrome 133, capybara (3.40.0), selenium-webdriver (4.29.0) Example:
visit root_path
click_link "Create New user"
fill_in "new_user_form[first_name]", with: "Test"
click_on 'Save'
@user = User.find_by(first_name: 'Test')
visit user_path(@user)
Results in an intermittent failure when finding the record (i.e. an action outside the browser as mentioned above)
Failure/Error: visit user_path(@customer)
ActionController::UrlGenerationError:
No route matches {:action=>"show", :controller=>"users", :id=>nil}, missing required keys: [:id]
Adding a matcher after the click_on, for example expect(page).to have_content "New user created", gets the spec passing, but that's not practical for a large test suite. The override for Capybara::Session#visit is useful, thanks @MatzFan. I've been trying to create a patch for the click method without much luck.
(Not sure this is really a Capybara issue, but since people might find this thread while looking for information, might as well comment here.)
We're experiencing the same issue. As mentioned earlier, downgrading Chromium/Chromedriver didn't solve the issue, and we didn't change anything in pull requests that were previously passing, so I guess there's been a different change in GitHub Actions (which we use to run the tests) 🤔.
We've removed the few database calls we had, like the mentioned .count or .find_by, after starting the browser with a visit in the tests. In some cases, we've seen that there were ambiguous expectations (content we checked after doing an action was already present before that action). However, we're still getting a dozen apparently random failures on every test run, and in many cases we couldn't find anything wrong with either the failing test or the tests that were executed before the one which failed.
I found https://github.com/teamcapybara/capybara/issues/2796 helpful in solving our Chrome 134-related issues:
Specifically adding
browser_options.add_argument 'disable-backgrounding-occluded-windows'
@benalavi Thanks a lot for the info! :pray: We didn't want to add a sleep 5 call after every call to click_button, so for now we're having some success with this incredibly ugly patch. Ideas for simplifications are welcome!
module Capybara
  module DSL
    def click_button_and_check_path(...)
      path = current_path
      original_click_button(...)
      page.has_no_current_path?(path)
    end

    alias_method :original_click_button, :click_button

    def click_button(*, check_path: true, **)
      if check_path
        click_button_and_check_path(*, **)
      else
        original_click_button(*, **)
      end
    end

    alias_method :original_accept_confirm, :accept_confirm

    def accept_confirm(...)
      define_singleton_method :click_button do |*args, **kw|
        original_click_button(*args, **kw)
      end
      original_accept_confirm(...)
      define_singleton_method :click_button do |*args, **kw|
        click_button_and_check_path(*args, **kw)
      end
    end

    alias_method :original_dismiss_confirm, :dismiss_confirm

    def dismiss_confirm(...)
      define_singleton_method :click_button do |*args, **kw|
        original_click_button(*args, **kw)
      end
      original_dismiss_confirm(...)
      define_singleton_method :click_button do |*args, **kw|
        click_button_and_check_path(*args, **kw)
      end
    end
  end
end
Basically, what this does:
- It monkey-patches click_button so it checks that the path has changed after clicking a button. Since we've got dozens of tests where that's not the case (AJAX calls, validation errors in forms, ...), we're using has_no_current_path? instead of expect(page).not_to have_current_path.
- It disables the monkey-patch when clicking buttons that result in modal dialogs; otherwise, the test will fail.
- It adds an optional argument to click_button so we can opt out of checking the path in a couple of specs checking unusual situations.
@axlekb Thanks for the suggestion! Unfortunately, adding those options didn't work in our case 🤔.
I have seen the same recently. The following Capybara setup fixed it for me; --no-sandbox seems to be the crucial part.
Capybara.register_driver :selenium_chrome_headless do |app|
  version = Capybara::Selenium::Driver.load_selenium
  options_key = Capybara::Selenium::Driver::CAPS_VERSION.satisfied_by?(version) ? :capabilities : :options
  browser_options = Selenium::WebDriver::Chrome::Options.new.tap do |opts|
    opts.add_argument('--headless')
    opts.add_argument('--no-sandbox')
    opts.add_argument('--window-size=1280,1024')
  end
  Capybara::Selenium::Driver.new(app, **{:browser => :chrome, options_key => browser_options})
end
Capybara.javascript_driver = :selenium_chrome_headless
The underlying cause seems to be errors like this:
Selenium::WebDriver::Error::UnknownError:
unknown error: unhandled inspector error: {
"code":-32000,"message":"Node with given id does not belong to the document"
}
Depending on your code these can get swallowed up and failures manifest themselves in a variety of ways - for example I thought WebMock was failing with missing stubs but it was a request from the previous spec leaking into the next spec before stubs had been set. Similarly expecting on database records fails with not found errors or nil references.
Searching around, I've seen some comments that the window handles list now contains the DevTools window, and I wonder whether the unhandled inspector error is a hint in that direction. In the meantime (it's a hack), adding Selenium::WebDriver::Error::UnknownError to the list of invalid element errors seems to help with our system specs:
RSpec.configure do |config|
  config.before type: :system do |example|
    if page.driver.respond_to?(:invalid_element_errors)
      unless page.driver.invalid_element_errors.include?(Selenium::WebDriver::Error::UnknownError)
        page.driver.invalid_element_errors << Selenium::WebDriver::Error::UnknownError
      end
    end
  end
end
I was able to resolve our testing issues using the solution implemented in the PR referenced by @mike3985 above. We were already using the --no-sandbox option so the changes were implemented as follows:
- name: Remove image-bundled Chrome
  run: sudo apt-get purge google-chrome-stable
- name: Setup stable Chrome
  uses: browser-actions/setup-chrome@v1
  with:
    chrome-version: 128
    install-chromedriver: true
    install-dependencies: true
You can verify that the version changes were successful:
- name: Print installed Chrome version
  run: chrome --product-version
You can verify that the headless browser driving your system specs is using the desired version by printing out the information from the browser capabilities hash:
puts page.driver.browser.capabilities[:browser_name]
puts page.driver.browser.capabilities[:browser_version]
ℹ We have fixed this by adding more assertions to ensure the page finished loading: https://github.com/rubygems/rubygems.org/pull/5542.
We have many cases of this error that appears to be caused by concurrent navigation:
click_on "Save" # page is reloading and nothing waits on the new page
visit "..." # not happening because of concurrent navigation
In some cases this happened across tests, e.g. the previous test finished with an open page navigation, and the next test failed.
The best hint we got was after enabling BiDi mode, since visit then raised this error:
Selenium::WebDriver::Error::WebDriverError: unknown error: navigation canceled by concurrent navigation
Error
at new UnknownErrorException (<anonymous>:65:4788)
at BrowsingContextImpl.navigate (<anonymous>:599:12416)
at async #processCommand (<anonymous>:453:3697)
at async CommandProcessor.processCommand (<anonymous>:453:9389)
It still is a race between the browser loading the page and the test runner running the next commands.
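That race can be modeled in miniature with plain Ruby threads (illustrative only, no browser involved): if the "click" returns before the background "navigation" finishes, a check outside the browser can observe stale state, while explicitly waiting always sees the new page.

```ruby
# Toy model of the race: "navigation" completes on a background thread while
# the test keeps running. Reading the path immediately may see the old value;
# joining the thread (i.e. waiting for the page load) always sees the new one.
state = { path: "/" }
navigation = Thread.new do
  sleep 0.05                 # simulated page load delay
  state[:path] = "/welcome"
end

racy_read = state[:path]     # races the navigation; often still "/"
navigation.join              # wait for the "navigation" to finish
settled_read = state[:path]  # deterministically "/welcome"
```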
We can work around the issue by checking for a result in the browser after the form submission, before checking for side effects, but it's still surprising to have inconsistent behavior.
I have been running into this exactly too.
Since the failing cases make sense as race conditions, it wasn't clear to me why my old and large test suite only started displaying them recently. I was wondering if something had changed in my app, like it becoming slower.
Glad to find this thread and know I'm not the only one seeing the change: race conditions that never used to happen now happen frequently. I still don't particularly understand why, but the situation @benalavi describes is exactly ours too, with the same "workaround" fix. For me it is not limited to form submissions; ordinary clicks on GET links are affected too.
I don't quite understand why this "workaround" wasn't always required, since it is sensible as a race condition once you see it. It may be a change but not a bug?
Until I fully diagnosed the problem, I had success by downgrading all the way to Chrome 128, like this:
# application_system_test_case.rb
ENV["SE_CHROME_VERSION"] = "128"
Ultimately, I found that all the failures in my tests were due to legitimate race conditions. It seems that when using the latest version of Chrome, methods like click_button return control to the test faster, before the click has actually "happened". I don't think this is a bug, it's just that timing is different.
@mattbrictson Similarly to you, we currently have Chrome pinned to 131 in CI. In Chrome 132, the "old" headless implementation is no longer available, and the new headless implementation causes tests to flake at random. Digging into the issue, it looks like events such as click and fill_in often fail silently, causing the following expectations to raise. This is somewhat similar to the issue with visit, although I have no idea if they're related.
I have absolutely no idea whether the issue is with Capybara, Selenium, Webdriver, Chromedriver, or Chrome; tests seem to work fine locally, and I haven't had any luck producing a repro. So I do agree, it's probably a race condition in our app or test suite which used to be magically handled and no longer is.
The primary mechanism that Capybara uses to ensure the test process remains synchronized with the browser process is through the synchronize method here:
https://github.com/teamcapybara/capybara/blob/0480f90168a40780d1398c75031a255c1819dce8/lib/capybara/node/base.rb#L76-L103
AFAICS, there's no mechanism that Selenium/Chromedriver can use to report back to Capybara that the page load is done so it just tries the next command and if it errors out in a certain way it will retry the command up until the max wait time. The list of exceptions that Capybara will retry for is in invalid_element_errors:
https://github.com/teamcapybara/capybara/blob/0480f90168a40780d1398c75031a255c1819dce8/lib/capybara/selenium/driver.rb#L297-L311
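In simplified form, that retry mechanism looks roughly like this (a plain-Ruby sketch of the idea, not Capybara's actual implementation; the real synchronize also handles things like node reloading):

```ruby
# Run a block; if it raises one of the retryable errors, keep retrying until
# max_wait seconds have elapsed, then let the error propagate. Errors not in
# the list (like Selenium's UnknownError before it's added to
# invalid_element_errors) are raised immediately instead of retried.
class StaleNodeError < StandardError; end

def synchronize_sketch(max_wait: 2, retryable: [StaleNodeError])
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + max_wait
  begin
    yield
  rescue *retryable
    raise if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
    sleep 0.01
    retry
  end
end
```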
My working theory is that there are a number of things happening here:
- Chrome/Chromedriver has changed how it returns control back after a navigation. This may be related to the removal of the old headless mode, but supposedly that happened in Chrome 132. However, it could be that Chrome 134 has exacerbated things - we were commenting before its release that our system specs seemed more flakey than before.
- Selenium is now seeing errors that it didn't see before, which are raised as UnknownError - it uses the error string to map errors, so anything it doesn't know about will be an unknown error.
- The UnknownError exception isn't in the list of exceptions in invalid_element_errors, so what would've been retried before is now being raised.
This is the reasoning behind my suggestion to add UnknownError to the list of invalid_element_errors.
The suggestion to always assert on the response is a good one that's always been true, however I've seen flakes where it was raising an unknown error exception on the assert because it hadn't finished navigating and as a consequence I've been asserting the path as well since that doesn't trigger the error. That's fine when there's a change of path but when you're redirecting back to the same page it causes a problem so I've had to wrap those in a retry block of my own.
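The retry block mentioned above might look something like this sketch (with_navigation_retry is my name; in a real suite the error class would be Selenium::WebDriver::Error::UnknownError, injected here so the sketch stands alone):

```ruby
# Re-run an assertion block a few times when the browser hasn't settled yet,
# re-raising once the attempts are exhausted.
def with_navigation_retry(error_class, attempts: 3, interval: 0.1)
  tries = 0
  begin
    yield
  rescue error_class
    tries += 1
    sleep interval
    retry if tries < attempts
    raise
  end
end

# e.g. with_navigation_retry(Selenium::WebDriver::Error::UnknownError) do
#   expect(page).to have_current_path(some_path)  # hypothetical assertion
# end
```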
A combination of the above has at least made our test suite stable again but it's not a long-term fix. Unfortunately don't have time to dig deeper into this during this week but will look at the weekend if no-one has a full explanation.
Experiencing the same, it's very frustrating, been tearing my hair out. Randomly visit will not actually visit the page.
Claude came up with this workaround that seems to work:
# In spec/support/capybara_extensions.rb
# 2025-03-22 Delete this file when this is fixed: https://github.com/teamcapybara/capybara/issues/2800
module CapybaraSeleniumExtensions
  def visit(url)
    super
    wait_for_navigation_to_complete
  end

  def wait_for_navigation_to_complete(timeout = Capybara.default_max_wait_time)
    browser_instance = browser
    wait = Selenium::WebDriver::Wait.new(timeout: timeout)
    wait.until do
      browser_instance.execute_script('return document.readyState') == 'complete' &&
        browser_instance.execute_script('return window.performance && window.performance.timing && (window.performance.timing.loadEventEnd > 0)')
    end
  rescue Selenium::WebDriver::Error::TimeoutError
    puts "Warning: Navigation may not have fully completed within #{timeout} seconds"
  end
end

# Apply to Selenium driver
if defined?(Capybara::Selenium::Driver)
  Capybara::Selenium::Driver.prepend CapybaraSeleniumExtensions
end

# Now extend Capybara::Node::Element to make clicks reliable
module CapybaraElementExtensions
  def alert_present?
    if Capybara.current_session.driver.browser.respond_to?(:switch_to)
      begin
        Capybara.current_session.driver.browser.switch_to.alert
        true
      rescue Selenium::WebDriver::Error::NoSuchAlertError, Selenium::WebDriver::Error::UnexpectedAlertOpenError
        false
      end
    else
      false
    end
  end

  def click(*keys, **options)
    # Try to get the current path; if an alert is open, ignore it.
    current_path = begin
      Capybara.current_session.current_path
    rescue Selenium::WebDriver::Error::UnexpectedAlertOpenError
      nil
    end
    # Perform the click.
    result = super
    # Wait a short time for any navigation to start.
    sleep 0.1
    # Only run navigation wait logic if no alert is present.
    unless alert_present?
      if current_path && Capybara.current_session.current_path == current_path
        begin
          is_navigating = Capybara.current_session.evaluate_script(<<~JAVASCRIPT)
            (function() {
              return window.performance &&
                window.performance.navigation &&
                window.performance.navigation.type === 1;
            })()
          JAVASCRIPT
          if is_navigating && Capybara.current_session.driver.browser.respond_to?(:switch_to)
            wait = Selenium::WebDriver::Wait.new(timeout: Capybara.default_max_wait_time)
            # evaluate_script expects a bare expression (Capybara adds the return itself)
            wait.until { Capybara.current_session.evaluate_script('document.readyState') == 'complete' }
          end
        rescue => e
          puts "Warning: Error checking navigation status: #{e.message}"
        end
      end
    end
    result
  end
end

# Apply the extension to Capybara elements.
if defined?(Capybara::Node::Element)
  Capybara::Node::Element.prepend CapybaraElementExtensions
end

# Also extend the Session for click_link, click_button, etc.
module CapybaraSessionExtensions
  NAVIGATION_METHODS = [:click_link, :click_button, :click_on, :fill_in, :select,
                        :unselect, :check, :uncheck, :choose, :attach_file]

  NAVIGATION_METHODS.each do |method_name|
    define_method(method_name) do |*args, **kwargs, &block|
      result = super(*args, **kwargs, &block)
      sleep 0.2
      alert_present = if driver.browser.respond_to?(:switch_to)
        begin
          driver.browser.switch_to.alert
          true
        rescue Selenium::WebDriver::Error::NoSuchAlertError, Selenium::WebDriver::Error::UnexpectedAlertOpenError
          false
        end
      else
        false
      end
      unless alert_present
        begin
          ready_state = evaluate_script("document.readyState")
          if ready_state != 'complete'
            document.synchronize(Capybara.default_max_wait_time) do
              evaluate_script("document.readyState") == 'complete'
            end
          end
        rescue => e
          puts "Warning: Error stabilizing page after #{method_name}: #{e.message}"
        end
      end
      result
    end
  end
end

if defined?(Capybara::Session)
  Capybara::Session.prepend CapybaraSessionExtensions
end
I've been getting errors of not finding things after AJAX form refreshes or perhaps Turbo page loads. But when I look at the screenshots, the page looks like it is loaded just fine. Then I started noticing that my assertions that failed before seem to work if I execute save_screenshot just before the assertion, as if save_screenshot synchronizes headless Chrome with Selenium's cached state. All these problems only occur in headless mode; when I run non-headless, there's no issue. I think pixeltrix is on track in theorizing that something is wrong with synchronization.
I'm using the following in my circle config to force chrome version 128, as was mentioned above by @salmincp (https://github.com/teamcapybara/capybara/issues/2800#issuecomment-2731100953):
- run:
    name: Remove image-bundled Chrome
    command: sudo apt-get purge -y google-chrome-stable
- run:
    name: Install stable Chrome 128 and matching Chromedriver
    # A fix adapted from a GitHub Actions suggestion that addresses
    # an issue where Chrome returns immediately after a page visit, as opposed to waiting
    # for the request to complete: https://github.com/teamcapybara/capybara/issues/2800#issuecomment-2731100953
    # Hopefully a future Chrome version fixes this problem, but as it stands current stable Chrome breaks it.
    command: |
      CHROME_VERSION=128.0.6613.137
      wget https://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_${CHROME_VERSION}-1_amd64.deb
      sudo apt install -y ./google-chrome-stable_${CHROME_VERSION}-1_amd64.deb
      rm google-chrome-stable_${CHROME_VERSION}-1_amd64.deb
      # Install matching chromedriver
      wget -N https://edgedl.me.gvt1.com/edgedl/chrome/chrome-for-testing/${CHROME_VERSION}/linux64/chromedriver-linux64.zip
      unzip chromedriver-linux64.zip
      sudo mv chromedriver-linux64/chromedriver /usr/local/bin/chromedriver
      sudo chmod +x /usr/local/bin/chromedriver
      rm -rf chromedriver-linux64 chromedriver-linux64.zip
- run:
    name: Check Google Chrome & Driver Versions
    command: |
      google-chrome --version
      chromedriver --version
@vknightbd I encountered something similar to what you described on some test suites.
I think newer Chrome versions are backgrounding pages more intensively in headless mode, and this can be worked around by setting a few options:
# Disable timers being throttled in background pages/tabs
browser_options.add_argument 'disable-background-timer-throttling'
# Normally, Chrome will treat a 'foreground' tab instead as backgrounded if the surrounding window is occluded (aka
# visually covered) by another window. This flag disables that.
browser_options.add_argument 'disable-backgrounding-occluded-windows'
# This disables non-foreground tabs from getting a lower process priority.
browser_options.add_argument 'disable-renderer-backgrounding'
Source: https://github.com/teamcapybara/capybara/issues/2796#issuecomment-2678172710
I suspect calling save_screenshot before a flaky assertion wakes up pages that headless Chrome is backgrounding. Also for me flaky behaviour disappeared when running non-headless.
Thank you @eliotsykes !!! Adding browser_options.add_argument 'disable-backgrounding-occluded-windows' fixed my test issues that I had to use save_screenshot to hack around
Since we're seeing pretty much exactly the same thing with Selenium, this might be a Chrome/ChromeDriver issue (most possibly a Chrome one, since everyone reporting it sees it after a Chrome update). See this thread for some possible workarounds too: https://groups.google.com/g/selenium-users/c/vfG6A6XVkA0/m/F351RKv5BAAJw And the corresponding Chromium issue: https://issues.chromium.org/issues/405607581 I second disable-backgrounding-occluded-windows and some possibly useful other Chrome switches mentioned there.
@eliotsykes' Chrome config suggestions seem to have significantly reduced the frequency of those Node with given id does not belong to the document race condition errors, but do not seem to have eliminated them altogether. Here's a recent one in GitHub Actions CI in our open-source public repo:
https://github.com/sciencehistory/scihist_digicoll/actions/runs/14175532549/job/39709388068
I guess I'll look into overriding capybara methods to either rescue/retry or wait more for page change or something...
It would definitely be encouraging to know if capybara and/or selenium devs have eyes on this and are considering fixes!
Also, the Chrome switches under https://github.com/GoogleChrome/chrome-launcher/blob/main/docs/chrome-flags-for-tools.md, especially the ones about rendering background/occluded windows, may help. We're using the switches that you can see here: https://github.com/Lombiq/UI-Testing-Toolbox/blob/23f4312d65169a8c84cc16765765d541c0058fc7/Lombiq.Tests.UI/Services/WebDriverFactory.cs#L153 They seem to be useful; our tests have seen very little flakiness since implementing all of these workarounds/config changes.
Apparently the above-mentioned problems have been resolved in Chrome 135.
The ChromeDriver bug for the navigation race condition (click sometimes returns control before navigation has happened) is still open as of Chrome 135 release: https://issues.chromium.org/issues/402796660.
If you're impacted by that one it might be worth voting on the ChromeDriver bug (there is a +1 button at the top of the tracker) for the ChromeDriver team to prioritize.
The only thing that worked for me is the following in my minitest setup:
setup do
  # HACK: needed to make chrome work again.
  # See https://github.com/teamcapybara/capybara/issues/2800
  page.save_screenshot("tmp/hack.png")
  sleep 0.1
end
browser_options.add_argument 'disable-backgrounding-occluded-windows' did not work for me.
Having struggled with this bug for a while and trying various combinations of browser options and retry hacks we ended up adding https://github.com/makandra/capybara-lockstep to our project and so far have not seen the dreaded Node with given id does not belong to the document - even with Chrome 135. Due to the intermittent nature of the bug we can't say for sure that capybara-lockstep has totally fixed it but so far so good.
Having struggled with this bug for a while and trying various combinations of browser options and retry hacks we ended up adding https://github.com/makandra/capybara-lockstep to our project and so far have not seen the dreaded Node with given id does not belong to the document - even with Chrome 135. Due to the intermittent nature of the bug we can't say for sure that capybara-lockstep has totally fixed it but so far so good.
60 suite runs later and we're still good 🚀
Thanks to @hyodyllinentekniikka for pointing out capybara-lockstep, we have also just added it and it has seemingly fixed the issues we were having around this.