
Fix issue where VRT tests fail before all screenshots are captured


Bug Description

VRT runs in CI are intermittently failing before all the screenshots are captured, with the following error:

      COMMAND | Command "test" ended with an error after [309.764s]
      COMMAND | Error: Failed to launch the browser process!

Steps to reproduce

  • Watch for failing VRT test runs in CI.
  • They will intermittently fail with the above error.
  • See this example test run: https://github.com/google/site-kit-wp/actions/runs/7104955680/job/19341063414

Screenshots

This error has been seen with a couple of permutations - note there is an additional log message in the 2nd screenshot.

[Two screenshots of the CI error output; the second includes an additional log message.]

Do not alter or remove anything below. The following sections will be managed by moderators only.

Acceptance criteria

  • VRT test runs should not fail with an error before all the screenshots are captured and the test has run to completion.

Implementation Brief

Test Coverage

QA Brief

Changelog entry

techanvil avatar Dec 08 '23 09:12 techanvil

I'm not seeing what the cause could be; this seems to be CI related, since it doesn't fail locally. Unassigning myself.

zutigrm avatar Dec 19 '23 12:12 zutigrm

@techanvil pinging you here after many days of testing, tweaking, testing some more (because you are the issue creator and may have more insight).

First off, there are not a ton of documented cases of this happening online. For the cases that do arise, the solutions vary, but the common suggestions and/or reasons include:

  • Needing to install Chromium as an OS package manually, via the Dockerfile. We already do this.
  • Manually adding the openrc and dbus APK packages within the Alpine Linux powered OS, and adding dbus as a runtime service with rc-update. I have done this via the linked POC PR.
  • Disabling Backstop's headless mode, but I did not think that was feasible in this case.
  • Adding the following flags to the Backstop config: --no-sandbox, --disable-gpu, --disable-setuid-sandbox, --no-zygote. We already had --no-sandbox defined in our config; I added the other flags as part of the linked PR (see the sketch below this list).
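For illustration, the flag changes might look roughly like this in tests/backstop/config.js. This is a sketch only, assuming the Puppeteer engine and that Chromium flags are passed via engineOptions.args; the actual file structure may differ:

```js
// Rough sketch only, not the actual project config.
module.exports = {
	// ...other BackstopJS settings...
	engine: 'puppeteer',
	engineOptions: {
		args: [
			'--no-sandbox', // Already present in our config.
			'--disable-gpu',
			'--disable-setuid-sandbox',
			'--no-zygote',
		],
	},
};
```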

Another topic that came up a few times was disk availability. To test whether this was indeed a disk space issue, I updated the visual-regression.yml config to include steps that check disk space usage. In my testing, disk usage seldom exceeded 69-72%, so we can rule that out entirely.

When I made the above changes, I saw an instant improvement in the occurrence of this error. In fact, for many runs and re-runs, I simply was not able to observe the error again. Unfortunately, it did return every now and then, but occurrences were few and far between (I ran many, many test runs).

What I did observe, however, is that when I ran the VRT workflow before around 9am my time, which I'd argue is when most of the US and EU tech industries are still relatively inactive compared to business hours, I had a 100% success rate. This could simply be a coincidence, but the later in the day that I (re-)ran the workflow, the more likely I was to encounter the error again, although arguably still less frequently than before.

Using the same Linux config in local Docker based testing, I was never able to reproduce the issue, but again this too could be a coincidence.

This leads me to conclude, albeit without much supporting information, that this is likely a GitHub resource availability or usage issue, as the runners are shared.

My suggestion would be to apply the following changes and see whether the error occurs less frequently in general:

  • Update the Backstop Dockerfile to include manual installation of the openrc and dbus APK packages.
  • Add the following flags to the Backstop config: --no-sandbox, --disable-gpu, --disable-setuid-sandbox, --no-zygote.

Beyond that, I would not discount the possibility that this is only happening on the Alpine Linux image. We could try another distro to test this theory.

Thoughts?

10upsimon avatar Jan 31 '24 07:01 10upsimon

Thanks @10upsimon, this sounds like a tricky one to diagnose so nice work on the R&D so far.

TL;DR: Let's try what you've suggested for now, see how we get on, and refine things/diagnose further in a separate issue.


As mentioned, nice work so far. The fact you've seen an apparent improvement makes me think we should take the pragmatic option of applying these changes in the hope we can immediately improve our CI reliability and thus save time in the development lifecycle.

That said, digging into the specific changes, it's a bit of an assortment of seemingly unconnected tweaks. Did you do enough testing of each in isolation to be sure they are all having an effect? At present it's not clear to me whether each of them is strictly necessary, and it would be nice to be more certain that we are only applying changes with a quantifiable effect. For example, I'm not sure the GH Action VM actually has a GPU for --disable-gpu to be relevant (unless the flag is introduced for some side effect).

The results are also a bit unquantified in terms of how much of a difference the changes make. It's good you've seen an apparent positive change, but we could standardise our testing methods for some clearer results.

What I'd suggest, as mentioned, is to apply these changes for now, but introduce a followup issue where we do something along these lines:

  • Create a script to run a GH action repeatedly and capture the results (success/failure at a minimum, and a deeper capture of logs etc. would be nice). A rough sketch of such a script follows this list.
  • Fork the repo to minimise disruption to regular development.
  • Using our script, run the VRT action on the fork for a set period of time, e.g. 1 day. Analyze the results for frequency and distribution of success/failures and potentially dig in for deeper analysis.
  • Repeat the above for various permutations to help to quantify which are having an effect.
  • Consider logging additional details - does Backstop have a verbose/debug logging mode? Or Docker etc? It could also be useful to log additional system resource usage e.g. memory, file handles.
  • I'd also try a newer version of the Alpine base image as well as other distros as you've suggested.

With the above testing framework in place we'd be at liberty to test ideas that occur to us in a relatively time-efficient manner. The scripts that come out of it could be useful for future GH action debugging too.

How does that sound to you?

techanvil avatar Jan 31 '24 18:01 techanvil

Removing the Next Up label for now; after discussion with the team, it seems more thought and investigation is needed on this one.

binnieshah avatar Jul 02 '24 08:07 binnieshah

Thanks both. FWIW the --no-sandbox argument is already used, so there's no change there, only the others are added. https://github.com/google/site-kit-wp/blob/97816cc39b6cda34e459a863b2ec857608e48b45/tests/backstop/config.js#L30

IB LGTM, but we do need an estimate here @techanvil. Looking at the changes, it should be very quick to do. The suggested testing of the workflow sounds good too, but this should be tracked in an internal task.

Fork the repo to minimise disruption to regular development.

Agreed – all the other GH workflows can be disabled to avoid unnecessary GHA activity

Create a script to run a GH action repeatedly and capture the results (success/failure at a minimum, and a deeper capture of logs etc would be nice).

We can do this with an added workflow that runs on the schedule trigger, and maybe update the VRT workflow to include the workflow_call trigger so we can invoke it from the new one.

The new workflow should be able to collect information about the run and write that to a file that could be committed back into the repo, since the repo is for debugging anyway. Analysis can be done later by copying the data over into a spreadsheet, etc.
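
As a rough sketch of the data-collection step (not a final design), the workflow could run a small Node script after the VRT job that appends one CSV row per run, using the standard GITHUB_* environment variables. VRT_RESULT here is an assumed variable that the calling workflow step would need to set.

```js
// Hypothetical helper invoked by the debugging workflow after the VRT job.
// Appends one CSV row per run; VRT_RESULT is assumed to be set by the workflow.
const fs = require( 'fs' );

const row = [
	new Date().toISOString(),
	process.env.GITHUB_RUN_ID,
	process.env.GITHUB_RUN_ATTEMPT,
	process.env.VRT_RESULT, // e.g. 'success' or 'failure'.
].join( ',' );

fs.appendFileSync( 'vrt-run-log.csv', row + '\n' );
```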

aaemnnosttv avatar Jul 10 '24 15:07 aaemnnosttv

Thanks @aaemnnosttv, sorry I forgot to estimate :facepalm:

This one shouldn't take long, I've estimated it as a 7 to accommodate our estimation bump "experiment".

The suggested testing of the workflow sounds good too but this should be tracked in an internal task.

I had already created https://github.com/google/site-kit-wp/issues/8998 as a followup issue for this, would you prefer this to be migrated to an Asana task?

techanvil avatar Jul 10 '24 15:07 techanvil

I had already created #8998 as a followup issue for this, would you prefer this to be migrated to an Asana task?

Yeah I think so, particularly if we're going to get into testing on a fork and all. We could keep that issue as a placeholder for changes that will come out of the investigation though since it doesn't have much going on there yet.

IB ✅

aaemnnosttv avatar Jul 17 '24 20:07 aaemnnosttv

I had already created #8998 as a followup issue for this, would you prefer this to be migrated to an Asana task?

Yeah I think so, particularly if we're going to get into testing on a fork and all. We could keep that issue as a placeholder for changes that will come out of the investigation though since it doesn't have much going on there yet.

Thanks @aaemnnosttv, SGTM. I've created an Asana task for the investigative side and updated #8998 to be a placeholder for applying the findings.

techanvil avatar Aug 06 '24 14:08 techanvil

Moving to approval since there is nothing to QA.

eugene-manuilov avatar Aug 15 '24 14:08 eugene-manuilov

VRT now seems to run consistently, although it is consistently failing one scenario on mobile. This should be fixed in #9101.

[Screenshot of the VRT run showing the single failing mobile scenario.]

aaemnnosttv avatar Aug 23 '24 16:08 aaemnnosttv