nemo-docs
Nemo 3 and Nemo Runner value prop
nemo-core@1 (formerly nemo)
- selenium@3 support (async/await, no promise manager, headless chrome support)
- nemo-view@3 with destructure support
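For a sense of what this enables, here is a minimal sketch of an async/await mocha test using the injected `this.nemo` described under the runner below; the `login` view and its locator names are hypothetical, and `nemo.data.baseUrl` assumes a `data` block in your configuration.

```js
// A sketch, not official docs: async/await + nemo-view@3 destructuring.
describe('login page', function () {
  it('signs in without the promise manager', async function () {
    const nemo = this.nemo; // injected by the runner (see below)
    await nemo.driver.get(nemo.data.baseUrl);

    // destructure locator methods straight off a view object
    const { username, password, submit } = nemo.view.login;
    await username().sendKeys('someone@example.com');
    await password().sendKeys('secret');
    await submit().click();
  });
});
```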
nemo (formerly nemo-runner)
- single dependency (`nemo` will require `nemo-core` and `nemo-view`)
- profiles configuration (see the config sketch after this list)
- test runner
- wraps programmatic mocha.js
- injects configured nemo instance (`this.nemo`)
- includes nemo-view plugin automatically (`nemo.view` is the project-supported "page" abstraction)
- automatic screenshots (afterEach, and programmatic via `nemo.runner.snap()`)
- parallel runs (by file, data, grep, profile)
- debug easily (single process)
- historical reporting (influxdb)
  - configure external (future: automatic local if not configured)
  - each mocha test run associated to
    - date/time
    - tags (uuid[, grep, profile, file, key, custom])
    - test run configuration
  - data also associates parent runs/parallel runs
- graphQL/RESTful interface (WIP)
  - deploy as a service (docker image to cloud server) or run locally
  - query historical data
  - manage configuration/test assets
  - kickoff test runner
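To make the profiles idea concrete, here is a rough sketch of what a configuration could look like; key names such as `tests`, `driver`, and `parallel` are assumptions to check against the real schema, not a definitive reference.

```js
// config/config.js -- illustrative only; consult the runner docs for the real schema.
module.exports = {
  plugins: {
    view: { module: 'nemo-view' } // the runner is meant to include this automatically
  },
  profiles: {
    base: {
      tests: 'test/functional/**/*.js', // glob of mocha spec files
      mocha: { timeout: 180000 },
      driver: { browser: 'chrome' },
      parallel: 'file' // or data/grep/profile, per the list above
    },
    firefox: {
      driver: { browser: 'firefox' } // overrides merged on top of "base"
    }
  }
};
```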
Release and Support
- important plugins updated (see #19)
- docs needed
- examples needed
- website update needed
- screencast training needed
- RESTful interface features
- RESTful interface UI
Just want to share some thoughts based on our experience of setting up Nemo recently, and the accessories we built around it to make it especially useful for us.
- Could we consider adding some `nemo init` functionality to the CLI, instead of requiring someone to find the `generator-nemo` tool? It may add some bloat to the project, but would ostensibly reduce friction to getting started. It took us several weeks to get set up with everything we have now. It would be great if that took 2 minutes.
- It's been incredibly important for us to stuff arbitrary data in the reports and in what we store in influxdb (for us it's the same thing). Here is a list of what we keep (with a sketch, after the list, of how we push it in):
title
profile
state <-- like PASS/FAIL
——————
sluggishStage
jawsFailure
(These two are just helper flags, based on the stack trace, which we use to easily identify our two biggest sources of flakiness: fake user creation and stage sluggishness.)
——————
file
startTime
endTime
metadata <-- err, maybe we should have put more in here, but this is random data like email, CAL ID, etc.
duration <-- helps us identify slow tests
errorMessage <-- makes for useful queries to see how often some flake issues are happening
errorStack
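For context, here is a stripped-down sketch of how we push those values into influx from a global afterEach hook, using the `influx` npm client; the measurement, tag, and field names are our own conventions (the flake flags and start/end times are omitted for brevity).

```js
const Influx = require('influx');

const influx = new Influx.InfluxDB({
  host: 'localhost',
  database: 'nemo_results'
});

afterEach(async function () {
  const test = this.currentTest;
  const err = test.err || {}; // populated by mocha on failure
  await influx.writePoints([{
    measurement: 'test_runs',
    tags: {
      title: test.title,
      profile: process.env.profile || 'base',
      state: test.state === 'passed' ? 'PASS' : 'FAIL',
      file: test.file || 'unknown'
    },
    fields: {
      duration: test.duration || 0,
      errorMessage: err.message || '',
      errorStack: err.stack || '',
      metadata: JSON.stringify((this.nemo && this.nemo.data) || {})
    }
  }]);
});
```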
- Include a default reporter (which supports aggregation), but allow use of third-party solutions. Perhaps the biggest issue we had getting value out of Nemo was simply seeing the results of a suite that, for example, ran 150 tests against 6 different browsers. Because we run each test in parallel, we originally had the choice of peeking at 900 report.json files (literally) or quickly glancing at the console output to get some sense of the damage (very hard). We ended up making our own reporter, which aggregates the tests by file/test and by browser run (roughly sketched below). This seems like an extremely common use case and should be considered. If it were covered by a default reporter, that could be really helpful out of the box.
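Roughly, the aggregation looks like the sketch below; the per-file report shape (a `browser` plus a `tests` array) is our own output format, not something nemo defines.

```js
const fs = require('fs');
const path = require('path');

// Collapse the per-process report.json files into one map keyed by test title,
// with a PASS/FAIL entry per browser.
function aggregate(reportDir) {
  const summary = {}; // { [title]: { [browser]: 'PASS' | 'FAIL' } }
  for (const file of fs.readdirSync(reportDir)) {
    if (!file.endsWith('.json')) continue;
    const report = JSON.parse(fs.readFileSync(path.join(reportDir, file), 'utf8'));
    for (const test of report.tests || []) {
      summary[test.title] = summary[test.title] || {};
      summary[test.title][report.browser] = test.state === 'passed' ? 'PASS' : 'FAIL';
    }
  }
  return summary; // render green if every browser passes, red if all fail, yellow if mixed
}
```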
Additionally, our reporter lets you drill down into the health of a particular test/browser combination within the html itself. The data as you drill down comes from influxdb. It's really helpful for us.
(e.g. here are two tests I just ran in chrome and firefox — if both fail the test shows red, if one fails the test shows yellow)

👍 for how easily storage in influx would enable everyone to set up a dashboard. Our grafana dashboard helps us monitor our regression runs at a high level. We can see the most failing tests, the slowest tests, how browsers are performing, etc. We could save a custom grafana image that plugs into whatever we want to do officially with the influxdb image.
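For example, the "most failing tests this week" panel boils down to a query like this (same measurement and tag names as the write sketch above; illustrative, not a drop-in):

```js
const Influx = require('influx');
const influx = new Influx.InfluxDB({ host: 'localhost', database: 'nemo_results' });

// Failure counts per test title over the last 7 days.
influx.query(`
  SELECT count("duration") AS failures
  FROM "test_runs"
  WHERE "state" = 'FAIL' AND time > now() - 7d
  GROUP BY "title"
`).then((rows) => {
  rows.forEach((row) => console.log(row.title, row.failures));
});
```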