[Feature]: Ramp-up time before spec starts to mitigate DDoS
🚀 Feature Request
As the test suite grows, one adds more CPU to run tests in parallel so the suite passes faster. More CPU means more workers, which effectively DDoSes the server (which is usually not as beefy as prod and not meant to withstand load tests) because all test specs start simultaneously. Having 16 workers sometimes makes CI suffer :) Is there a way to start each spec with a small random delay to spread out the initial pressure on the server?
Example
No response
Motivation
more stable runs
Do you think running a server per worker would be a better solution? We are exploring ideas in this area.
@dimkin-eu Any thoughts on the above?
@dgozman, the best solution would be for the test runner to delay the first tests by some predefined pause:
- the user has 20 specs
- the user has 5 workers: tests start, the 1st test starts immediately, the 2nd after 1s, the 3rd after 2s, and so on until all 5 workers are utilized. The remaining tests start as soon as a worker is free.
@dimkin-eu I understand your proposal. However, we still would like to know whether you have considered running a server per worker?
As a workaround, you can achieve the pause behavior with an auto-worker fixture:
import { test as base } from '@playwright/test';

export const test = base.extend<{}, { delay: void }>({
  delay: [async ({}, use, info) => {
    // Stagger worker startup: worker N waits N seconds before running its first test.
    const seconds = info.workerIndex < info.config.workers ? info.workerIndex : 0;
    await new Promise(r => setTimeout(r, seconds * 1000));
    await use();
  }, { scope: 'worker', auto: true, timeout: 0 }],
});
Let me know whether that works for you.
@dgozman can you elaborate on "server per worker"? We are running our tests on k8s, so a new pod with some CPUs (depending on some calculations) is created. Is "server per worker" some kind of sharding?
@dimkin-eu Conceptually, you can have a single server for all the tests, either external or started by Playwright with the webServer option, or you can have multiple servers where each test talks to one server, effectively balancing the load. We do not have anything built-in for this right now, so we are interested in whether someone is already doing that, and/or what problems they have encountered or foresee with such an approach.
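To make that concrete, here is a rough sketch of what "server per worker" could look like with a worker-scoped fixture. The startServer helper is hypothetical (it stands in for however you boot one instance of the SUT), and the port scheme is only an assumption:

import { test as base } from '@playwright/test';

// Hypothetical helper: boots one instance of the app under test on the given
// port and resolves once it is ready. Replace with your real startup logic.
declare function startServer(opts: { port: number }): Promise<{ close(): Promise<void> }>;

export const test = base.extend<{}, { serverURL: string }>({
  serverURL: [async ({}, use, workerInfo) => {
    // One server per worker, each on its own port derived from the worker index.
    const port = 3000 + workerInfo.workerIndex; // assumption: these ports are free
    const server = await startServer({ port });
    await use(`http://localhost:${port}`);
    await server.close();
  }, { scope: 'worker' }],
});

// Usage in a spec: every test in this worker talks only to "its" server.
test('home page loads', async ({ page, serverURL }) => {
  await page.goto(serverURL);
});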
Our SUT is a bunch of services (created for every test run in most MRs), so running a bunch of local ones isn't possible. The only thing we are considering now is digging deeper into sharding possibilities, but that is something different. IIRC there was something similar to our needs in JMeter:
- 1000 target threads with 1000 seconds ramp-up: JMeter will add one user each second
- 1000 target threads with 100 seconds ramp-up: JMeter will add 10 users each second
- 1000 target threads with 50 seconds ramp-up: JMeter will add 20 users each second
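Translated to the worker-delay idea above, a JMeter-style ramp-up would simply spread the worker start times over a configurable window; RAMP_UP_SECONDS and the worker count below are assumed values, not anything Playwright provides:

const RAMP_UP_SECONDS = 50; // assumed total ramp-up window
const WORKERS = 16;         // assumed to match the `workers` setting

// Worker i starts after i * (RAMP_UP_SECONDS / WORKERS) seconds,
// i.e. one new worker every RAMP_UP_SECONDS / WORKERS seconds.
const delayForWorkerMs = (workerIndex: number) =>
  workerIndex * (RAMP_UP_SECONDS / WORKERS) * 1000;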
@dimkin-eu I see, thank you for the information. Let me know whether the workaround above works for you. Meanwhile, we'll think about your use case and similar ones, and any possible solutions we can provide.
@dgozman do I get it right that I need to change the import in every spec from
import { test } from "@playwright/test"
to the overridden one in some file, and then hope that every spec uses the proper one?
And will
import { test, expect } from "@playwright/test"
be split into 2 lines?
@dimkin-eu Yep, that's right. You can also re-export expect from the helper file, so that you write:
import { test, expect } from './my-fixtures.ts';
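The helper file would then look roughly like this, reusing the delay fixture from above and re-exporting expect so each spec keeps a single import line (the my-fixtures.ts name matches the import example above):

// my-fixtures.ts
import { test as base, expect } from '@playwright/test';

export const test = base.extend<{}, { delay: void }>({
  delay: [async ({}, use, info) => {
    const seconds = info.workerIndex < info.config.workers ? info.workerIndex : 0;
    await new Promise(r => setTimeout(r, seconds * 1000));
    await use();
  }, { scope: 'worker', auto: true, timeout: 0 }],
});

// Re-export expect so specs only need to change the module path.
export { expect };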
That is very cumbersome and unsafe (with 100+ specs) :(