
Race condition between chromedriver and selenium with a good stack trace indicating it

Open keithschacht opened this issue 1 year ago • 7 comments

Meta

Capybara Version: 3.39.2
Driver Information (and browser if relevant): selenium-webdriver (4.23.0) using headless chrome

Expected Behavior

I've been using Capybara successfully in Rails for quite some time (many months). But one day, about a month ago, my system tests started sporadically failing in my GitHub CI Actions with `Net::ReadTimeout with #<TCPSocket:(closed)>`. If I re-run the test suite a few times I can eventually get it to pass. I've tried many different workarounds but none of them resolves the issue. I've even tried rolling back all changes in my repo to months ago, when tests were consistently passing, and that doesn't seem to fix it either.

We've spent many hours investigating the cause and we currently think there is a race condition somewhere between chromedriver and selenium. My project is an open source project so here is a direct link to one of the failed CI runs where you can see the full stack trace: https://github.com/AllYourBot/hostedgpt/actions/runs/10533347868/job/29189182499?pr=498

The Net::ReadTimeout comes from Capybara (i.e., Selenium) failing to reach chromedriver when attempting to set up the server. One of my engineers has outlined his read of that stack trace:

  • I think the tests run (and fail) before puma is started by capybara
  • The test hung because the server was still running and ruby wouldn't exit
  • It says the TCP socket was closed -- does this mean the socket was open when it started but closed during the exchange? Or that it was never open? I suspect the former because the stack trace is in the middle of a read loop.
  • The failure is in the area of code which causes chromedriver to build a new session (i.e., start Chrome up):

Another thing that suggests a race condition: when we SSH into the job mid-run and run the tests, they sometimes fail or hang for a bit. But if I interrupt the process (^C) and then re-run, it goes fine.
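One thing we tried while debugging (sharing in case it helps others narrow things down): raising Selenium's HTTP client timeouts when registering the Capybara driver. This is only a sketch, not a fix for the underlying race — the driver name `:headless_chrome_long_timeout` and the specific timeout values are our own choices, not anything Capybara recommends:

```ruby
require "capybara"
require "selenium-webdriver"

Capybara.register_driver :headless_chrome_long_timeout do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--headless=new")

  # Raise Selenium's HTTP timeouts against chromedriver. This doesn't cure
  # the race, but it helps distinguish "chromedriver is slow to come up"
  # from "the socket is genuinely dead" (the latter still raises quickly).
  client = Selenium::WebDriver::Remote::Http::Default.new(
    open_timeout: 10,   # seconds to wait for the TCP connection to open
    read_timeout: 180   # seconds to wait for a response on the socket
  )

  Capybara::Selenium::Driver.new(
    app,
    browser: :chrome,
    options: options,
    http_client: client
  )
end
```

In our case the longer read timeout just made the failures slower, which is part of why we suspect the socket is being closed rather than merely slow.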

Steps to reproduce

  1. Fork this repo on GitHub
  2. The GitHub CI Actions are configured to skip system tests on forks, so (a) delete this line to remove the short circuit, and (b) change the very next "runs-on" line back to ubuntu-latest, the default GitHub Actions runners.
  3. Push a change to the repo to trigger a GitHub CI run

keithschacht avatar Aug 26 '24 15:08 keithschacht

We've also been experiencing this. We initially thought it was some problem with the first request timing out (like asset compilation?) but think we've eliminated those potential causes.

It still seems most likely there is a problem with our app, but maybe there is a regression with newer chrome or selenium versions.

We're also on GitHub Actions, for what it's worth. One other theory we had is that the environment was CPU-constrained on overloaded VMs within the GitHub Actions runner pool.

woodhull avatar Aug 27 '24 12:08 woodhull

@woodhull try reverting your codebase to a point in time when CI was working, push that up to a branch/PR, and see if it passes. Ours doesn't, which is how we determined it's something outside of our codebase or Gemfile.lock.

keithschacht avatar Aug 27 '24 13:08 keithschacht

We resolved this by locking to an older version of Chromium. For us, at least, this started happening when Alpine was upgraded from 3.19 to 3.20 (and Chrome along with it), because we were letting the version float by basing our Docker image on Alpine without specifying an explicit version.

I tried the selenium-webdriver nightlies and the issue was still present there with the latest Chrome version.

woodhull avatar Aug 27 '24 22:08 woodhull

Off-topic: I've lost so many hours to Selenium timing issues over the years, that I'm glad I bit the bullet and converted to Capybara+Playwright -- it's been rock solid so far, with approximately zero flakes in the two months since I switched. Here's the guide I wrote: https://justin.searls.co/posts/running-rails-system-tests-with-playwright-instead-of-selenium/
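For anyone weighing that option, the registration with the capybara-playwright-driver gem looks roughly like this (a minimal sketch — see the linked guide for the full Rails system-test wiring; `headless: true` and the `:playwright` driver name are just the conventional choices):

```ruby
# Gemfile: gem "capybara-playwright-driver"
require "capybara"
require "capybara/playwright"

# Register a Playwright-backed driver in place of Selenium. Playwright
# manages its own browser lifecycle, which sidesteps the
# selenium <-> chromedriver handshake entirely.
Capybara.register_driver :playwright do |app|
  Capybara::Playwright::Driver.new(app, browser_type: :chromium, headless: true)
end

Capybara.default_driver = :playwright
Capybara.javascript_driver = :playwright
```

Most existing Capybara assertions and finders work unchanged on top of this driver, which is what makes the migration relatively painless.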

searls avatar Aug 28 '24 11:08 searls

@woodhull did you pin alpine and chromium+chromedriver, or just alpine?

robacarp avatar Aug 28 '24 15:08 robacarp

If there's a race condition between selenium and chromedriver, shouldn't this be reported on one of those projects? Capybara doesn't really control their timing.

twalpole avatar Aug 28 '24 18:08 twalpole

I opened this on Selenium's side too: https://github.com/SeleniumHQ/selenium/issues/14454

keithschacht avatar Aug 28 '24 22:08 keithschacht

I may be experiencing something similar after switching from Ubuntu to Alpine to run my tests: I'm no longer depending on CircleCI's Chrome and am instead installing Chromium from Alpine (version 131.0.6778.108).

Has anyone in this thread found success in the meantime, either with a new version of Chrome or a workaround?

jjb avatar Feb 11 '25 03:02 jjb

It's been a few months since we fixed it, but we ultimately fixed this by getting our CI system to use a previous version of one of these tools. I believe we had to pin a Docker image to an older tag which had the previous version of Chromium, but I could have that wrong. All I remember for sure is that we figured out how to revert part of our stack back to the working state.

keithschacht avatar Feb 11 '25 17:02 keithschacht

Gotcha, thanks! @woodhull (and @krschacht if relevant), how did you lock to an older Chrome/Chromium on Alpine? It seems like they don't make it easy to view historical versions.

jjb avatar Feb 12 '25 01:02 jjb

I think this is the PR where we fixed it on my project. We changed this line in our CI file: https://github.com/AllYourBot/hostedgpt/pull/513/files#diff-eae6f64c55dd40e3ea55d4d46a85edc20a8ed045a5816e3fa75e97817c45888dR77

keithschacht avatar Feb 12 '25 01:02 keithschacht

Thanks! I can't seem to hack something analogous on Alpine.

jjb avatar Feb 12 '25 01:02 jjb