
Functional Testing with k6

oleiade opened this issue 6 months ago

Problem space

k6 lacks a user-friendly API to mark a test as failed and to surface clear failure information for functional-testing use cases. The existing constructs have limitations that force workarounds and produce unclear error reporting.

Why does it matter?

Grafana Synthetic Monitoring, k6 Browser, and other internal Grafana projects all need a way to reliably fail tests and provide detailed information about failures. Current workarounds produce junk data, are cumbersome, and lack clear error context, which hinders effective testing and debugging. Users want to write assertions that can fail and stop tests with clear information, and to reuse scripts without relying on workarounds.

What we want to build

A robust functional testing experience in k6 with the ability to write assertions that can mark a test as failed, abort the test immediately on failure, and provide detailed, parsable error messages. This includes:

  • ✅ A mechanism to mark a test as failed and exit with a non-zero exit code: #4062
  • ✅ An expect() API with hard and soft expectations, compatible with Playwright’s assertions library: jslib-testing.
  • ✅ Improved error messages with context (expected/actual values, code location): jslib-testing.
  • A k6/test package for new assertion APIs (including the items above).
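
The issue does not pin down the final API surface, but the hard/soft distinction it asks for can be sketched in plain JavaScript. All names below (makeExpect, ExpectationError, the failures array) are illustrative, not the actual k6/test or jslib-testing API: a hard expectation throws and aborts the iteration, while a soft one records the failure and lets the script continue.

```javascript
// Minimal sketch of hard vs. soft expectation semantics.
// Hypothetical names; not the real k6/test API.

class ExpectationError extends Error {
  constructor(expected, actual) {
    super(`expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`);
    this.expected = expected;
    this.actual = actual;
  }
}

function makeExpect({ soft = false } = {}, failures = []) {
  return function expect(actual) {
    return {
      toEqual(expected) {
        if (actual !== expected) {
          const err = new ExpectationError(expected, actual);
          if (soft) {
            failures.push(err); // soft: record the failure, keep going
          } else {
            throw err; // hard: abort immediately
          }
        }
      },
    };
  };
}

// Usage: soft expectations collect failures instead of throwing.
const failures = [];
const softExpect = makeExpect({ soft: true }, failures);
softExpect(1 + 1).toEqual(3); // fails, but only recorded
softExpect("a").toEqual("a"); // passes
console.log(failures.length); // 1
```

In a real implementation the recorded soft failures would mark the test as failed at the end of the run (the #4062 mechanism), whereas a hard failure would stop it on the spot.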

Considerations

  • Collaboration across teams (engine, backend, synthetics, frontend).
  • Avoiding major architectural changes; we prefer introducing a new API that becomes the officially supported one, superseding some of the existing workarounds.
  • Should we emit assertion-related metrics, along the lines of what is done for checks?
  • End-of-test summary relevance for functional testing?
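
On the metrics question above: k6's built-in checks aggregate into a pass rate, and assertion results could conceivably be aggregated the same way. A tiny illustrative sketch (plain JavaScript, not k6 metric code; the record shape is assumed):

```javascript
// Hypothetical aggregation of per-assertion results into a rate,
// similar to how the built-in `checks` metric works.

function assertionRate(results) {
  // results: array of { name, passed } records, one per assertion
  if (results.length === 0) return 0;
  const passes = results.filter((r) => r.passed).length;
  return passes / results.length;
}

const results = [
  { name: "status is 200", passed: true },
  { name: "body has token", passed: false },
  { name: "latency < 500ms", passed: true },
];
console.log(assertionRate(results)); // 2 of 3 passed
```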

Nice to have

  • ✅ Matchers negation in expect() API.
  • ✅ Configurable timeout for retrying expectations.
  • ✅ Custom messages for assertions and expectations.
  • ✅ Configuration options for expect() API (display format, colorization).
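
Two of the items above, matcher negation and a configurable retry timeout, combine naturally. The sketch below shows one way they could work together, in plain JavaScript with hypothetical names (retryingExpect is not the jslib-testing API): the matcher is re-evaluated until it passes or the timeout elapses, and a negate flag flips its outcome.

```javascript
// Sketch of a retrying expectation with a configurable timeout
// and matcher negation. Hypothetical API, illustration only.

async function retryingExpect(
  getActual,
  expected,
  { timeout = 500, interval = 25, negate = false } = {},
) {
  const deadline = Date.now() + timeout;
  for (;;) {
    const actual = await getActual();
    const pass = (actual === expected) !== negate; // negate flips the matcher
    if (pass) return actual;
    if (Date.now() >= deadline) {
      throw new Error(
        `timed out after ${timeout}ms: expected ${negate ? "not " : ""}` +
          `${JSON.stringify(expected)}, last saw ${JSON.stringify(actual)}`,
      );
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}

// Usage: a value that only becomes ready after a short delay.
let ready = false;
setTimeout(() => {
  ready = true;
}, 100);
retryingExpect(() => ready, true, { timeout: 1000 }).then(() =>
  console.log("became true within the timeout"),
);
```

This is the same shape as Playwright's auto-retrying web-first assertions, which the expect() API above aims to be compatible with.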

What success looks like

Users can write assertions in k6 scripts that reliably fail tests and provide clear, detailed information about the failure. Scripts can be reused for load and functional testing with minimal modification. Error messages are easily parsable for integration with automation and provide sufficient context for debugging. The k6/test package feels like a first-class citizen and integrates seamlessly with k6.
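
"Parsable for automation" could mean something as simple as emitting failures as structured records rather than free-form strings. The field names below (matcher, expected, actual, location) are illustrative; the issue only asks that failures carry this kind of context.

```javascript
// Sketch of a machine-parsable assertion failure record.
// Field names are assumptions, not a spec.

function formatFailure({ matcher, expected, actual, file, line }) {
  return JSON.stringify({
    level: "error",
    matcher,
    expected,
    actual,
    location: `${file}:${line}`,
  });
}

const msg = formatFailure({
  matcher: "toEqual",
  expected: 200,
  actual: 503,
  file: "checkout.test.js", // hypothetical script name
  line: 42,
});
console.log(msg);
// A CI tool can JSON.parse the line and extract expected/actual directly.
```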

oleiade — May 22 '25 10:05