
Lighthouse Cypress Tests Proposal

Open preda-bogdan opened this issue 3 years ago • 11 comments

Description:

I was thinking of integrating Lighthouse with our Cypress tests. It looks like there is plugin support for this: https://github.com/mfrachet/cypress-audit

From my research, you can run audits on specific pages with custom thresholds per page, e.g.:

cy.lighthouse({
  performance: 85,
  accessibility: 100,
  "best-practices": 85,
  seo: 85,
  pwa: 100,
});

We could also have a global threshold template that is used by default.

We can minimise inconsistencies from network conditions by having the tests run on a local instance of the project.

From my research, each cy.lighthouse call performs three runs and averages each metric. You can also target specific cases (desktop-only or mobile-only).
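For illustration, the averaging behavior described above could be reproduced outside Cypress with a small helper. This is a sketch only; the `averageScores` function and the shape of `runs` are assumptions for this example, not the plugin's internals:

```javascript
// Average each Lighthouse category score across several runs.
// Each run is a plain object mapping category -> score (0-100).
function averageScores(runs) {
  const totals = {};
  for (const run of runs) {
    for (const [category, score] of Object.entries(run)) {
      totals[category] = (totals[category] || 0) + score;
    }
  }
  const averages = {};
  for (const [category, total] of Object.entries(totals)) {
    averages[category] = total / runs.length;
  }
  return averages;
}

// Example: three runs of the same page.
const runs = [
  { performance: 90, seo: 100 },
  { performance: 84, seo: 100 },
  { performance: 87, seo: 100 },
];
console.log(averageScores(runs)); // { performance: 87, seo: 100 }
```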

One thing that would need further research is how report values could be reused in subsequent tests, for cases where we would like to compare results between pages.
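The page-to-page comparison could look something like the sketch below, assuming the reports are reduced to plain score objects first (the `compareScores` helper and the page names are hypothetical):

```javascript
// Compare two pages' Lighthouse category scores and report the deltas.
// A positive delta means pageB scored higher than pageA for that category.
function compareScores(pageA, pageB) {
  const deltas = {};
  for (const category of Object.keys(pageA)) {
    if (category in pageB) {
      deltas[category] = pageB[category] - pageA[category];
    }
  }
  return deltas;
}

const home = { performance: 92, accessibility: 100 };
const shop = { performance: 85, accessibility: 98 };
console.log(compareScores(home, shop)); // { performance: -7, accessibility: -2 }
```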

Specific to Neve, I think it should also be possible to have a process for comparing results between starter templates.

All feedback is welcome.

preda-bogdan avatar May 04 '21 17:05 preda-bogdan

Another thought on this: the PHP server we would run inside the GH Action might not be very reliable, and we might see big variations between runs due to TTFB, which could heavily influence the Speed Index.

selul avatar May 23 '21 16:05 selul

We might set up one WP instance on the staging server and change the configuration and layout for whatever we want to test using the Customizer API. That should give more reliable results.

gutoslv avatar May 23 '21 21:05 gutoslv

@gutoslv yes, this could be one solution, but how would we tackle tests per PR?

selul avatar May 24 '21 06:05 selul

@selul by having an extra GH Action that deploys that code to the staging environment and generates a new URL for it. Maybe each branch could have its own staging site.

gutoslv avatar May 25 '21 20:05 gutoslv

@gutoslv yes, but I'm not aware of any service that could do that, and the technical complexity of building one ourselves is considerable: basically, we would need to create ephemeral environments per branch.

selul avatar May 26 '21 08:05 selul

@selul we could do this with Kubernetes, firing up a new instance for each branch. Alternatively, we could run the performance tests against a single site tracking the development branch, after each commit to it, and use that to monitor performance over time.

gutoslv avatar May 26 '21 15:05 gutoslv

Yes, Kubernetes is a good idea. We ran some experiments with this kind of workflow in the past but, unfortunately, the complexity we reached kept us from coming up with a ready-to-use version.

I agree that we could use a staging version for development only and target the Lighthouse audits at that branch alone, which is better than what we have right now (nothing).

However, I'm thinking out loud about what the workflow would look like, so please let me know what you think. E.g. we open a PR, merge it to development, see that the metrics are failing, open another PR, test the metrics against development, and the result might be good or not; it could end up as trial and error, repeatedly opening and merging PRs. It might not turn out that way, but I'm trying to outline a concern I had.

A solution to ☝️ might be to run the deployment to the staging environment on demand, along with the metrics test. I.e. someone comments on the PR with @pirate-bot test:lighthouse, and the workflow that would have run on development runs on that branch instead. So, basically, the staging deploy + Lighthouse test would run both on commits to development and on PR comments.
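The comment trigger could be detected in a workflow step (for example via a small script step). A sketch, where the bot name and command are the ones proposed above and `shouldRunLighthouse` is a hypothetical helper:

```javascript
// Decide whether an issue comment should trigger the Lighthouse run.
// "@pirate-bot test:lighthouse" is the command proposed in this thread.
function shouldRunLighthouse(commentBody) {
  return /@pirate-bot\s+test:lighthouse\b/.test(commentBody);
}

console.log(shouldRunLighthouse('@pirate-bot test:lighthouse')); // true
console.log(shouldRunLighthouse('looks good to me'));            // false
```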

selul avatar May 26 '21 19:05 selul

Also, we have Calibre on development, which basically does the same thing, minus the comment trigger: it pushes the development branch to staging, triggers Calibre, and runs the tests. You can see this here: https://github.com/Codeinwp/neve/blob/master/.github/workflows/sync-qa.yml

selul avatar May 26 '21 19:05 selul

I like the approach of triggering the test through a comment on the PR. Maybe we could fire up a new Kubernetes instance when the comment is made on a PR and make it an on-demand test.

My idea for the workflow would be something like this:

Open a PR > Trigger e2e tests > trigger performance test > if failed, block merge or warn that the performance is under the threshold > if it passes, it's allowed to merge.

I've made a flowchart showing this.
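The gating step of the workflow above could be sketched as follows. The `gatePullRequest` helper, the thresholds, and the two-point warn band are assumptions for illustration, not a decided policy:

```javascript
// Gate a PR based on measured Lighthouse scores versus per-category thresholds.
// Returns 'pass' when every score meets its threshold, 'block' when any score
// falls more than `tolerance` points below, and 'warn' for the band in between.
function gatePullRequest(scores, thresholds, tolerance = 2) {
  let result = 'pass';
  for (const [category, threshold] of Object.entries(thresholds)) {
    const score = scores[category] ?? 0;
    if (score < threshold - tolerance) return 'block';
    if (score < threshold) result = 'warn';
  }
  return result;
}

const thresholds = { performance: 85, seo: 85 };
console.log(gatePullRequest({ performance: 86, seo: 90 }, thresholds)); // 'pass'
console.log(gatePullRequest({ performance: 84, seo: 90 }, thresholds)); // 'warn'
console.log(gatePullRequest({ performance: 80, seo: 90 }, thresholds)); // 'block'
```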

Since we have Calibre testing the staging site on the development branch, I think there's more value in testing the branches before merging. What do you think? I can start researching K8s and how to fire up instances for testing; since we already have the scripts and Docker environments for automated tests, we can use them as the base.

gutoslv avatar May 30 '21 22:05 gutoslv

@gutoslv I agree with the workflow, it seems solid; however, we should digest the idea a bit before going forward with any work on it.

  1. Where will the environments be hosted? Kubernetes is just an orchestration service; we will need to bootstrap the instances somewhere, and if you are thinking we could bootstrap them on CI, that is not a good idea: I've already detailed this here https://github.com/Codeinwp/neve/issues/2827#issuecomment-846590051
  2. I don't consider this a priority right now, but we could evaluate it for the next Q, review the work effort required, and do detailed planning on how to approach it.

selul avatar May 31 '21 09:05 selul

For the first point, we could host it in Docker containers on AWS; I really don't know much about hosting services for this. The last time I worked with something like this, it was hosted on AWS. For the second point, we can make it a goal for the next Q; it looks like it'll take a good amount of time and effort, since we'll have to explore new tech.

gutoslv avatar Jun 02 '21 00:06 gutoslv