Looking into TestRunners 🧐

Market Share

Test Runner market share, as per developer survey:

[chart: test runner usage among surveyed developers]

(source: State of JavaScript 2019)

NPM Downloads:

[chart: npm downloads over time for test runner packages]

(source: npm trends)

Github stats:

[chart: GitHub stats for test runner projects]

(source: npm trends)

Take-Aways:

  • Enzyme uses jest internally, so jest downloads include all Enzyme downloads as well
  • Mocha is on par with Jest
  • There are 4 "just test runners" in use today: Jest, AVA, Mocha and Jasmine. Other "testing solutions" rely on one of these test runners and/or provide a very different core value (like Cypress).

For the rest of this document, we'll look into Jest, AVA, Mocha and Jasmine.

General TestRunner Ergonomics

For test runner ergonomics, I find the most-upvoted issues in each project fascinating to read.

Below is a rough summary of the concerns with the current test-running solutions.

  1. Fast & Lean! Fast to install, fast to run: #6 jest bug, #9 jest bug, make jest small! bug
  2. First-class support for ESModules: #1 jest bug, #3 ava bug
  3. Capable programmatic API that allows fully inspecting the set of tests, their expected results, timeouts, environments and so on. This is to power future test-related tooling (linters, flakiness dashboards, test explorers, etc.): jest bug, jest blogpost (see the hypothetical sketch after this list)
  4. Sourcemaps support: #3 jasmine bug
  5. Optional opt-out from global poisoning: #5 mocha bug
  6. Suites support: #1 ava bug
  7. Async suites: popular closed jest bug
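
To make point 3 concrete, here is a hypothetical sketch of what such a programmatic API could look like. None of the names below (`TestDescription`, `TestRunner`, `collectTests`, `runTests`) belong to an existing runner; they only illustrate the kind of introspection that linters, flakiness dashboards and test explorers would need.

```ts
// Hypothetical API sketch -- not part of any existing test runner.
interface TestDescription {
  name: string;         // full test name, including the suite path
  file: string;         // absolute path to the spec file
  timeout: number;      // per-test timeout in milliseconds
  environment: string;  // e.g. 'node' or 'jsdom'
  skipped: boolean;     // whether the test is marked as skipped
}

interface TestRunner {
  // Discover tests without executing them, so external tooling can inspect the suite.
  collectTests(patterns: string[]): Promise<TestDescription[]>;
  // Run a subset of previously collected tests.
  runTests(tests: TestDescription[]): Promise<{ passed: number; failed: number }>;
}

// Example consumer: a "test explorer" that lists every test in a project.
async function listAllTests(runner: TestRunner): Promise<void> {
  const tests = await runner.collectTests(['**/*.spec.ts']);
  for (const test of tests)
    console.log(`${test.file}: ${test.name} (timeout ${test.timeout}ms)`);
}
```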

e2e Testing Perspective

These come from my experimentation with running Playwright & Puppeteer tests, as well as the client interviews that we participated in together with Arjun.

The following tasks are pretty common yet are hard to accomplish with any of the 4 test runners out there:

  1. Set up cross-browser testing to satisfy the following scenarios:
    • write a single test that captures a screenshot and run it across 3 browsers.
    • configure some subtests to run in certain browsers only ("regression tests")
    • support running tests with playwright-webkit, playwright-firefox and playwright-chromium (according to NPM download stats, at least 1/3 of our users use per-browser playwright packages)
  2. Image and text expectations are first-class citizens.
    • easy to save image and text snapshots per-browser, per-platform
    • easy to configure custom snapshot strategies that depend on my environment / test specifics (e.g. Playwright snapshots are the same across OSes)
    • generate snapshot names automatically based on the test in a reliable way. Make sure snapshots won't clash across tests!
  3. Test runner artifacts are a first-class citizen.
    • make all snapshot diffs, HTML reports, screencasts, DOM snapshots, etc. go to a single //artifacts folder.
    • make all plugins write to the same //artifacts folder as well
    • the //artifacts folder is what we upload on CI after every run
  4. HTML & JUNIT reports are first-class citizens and are generated after each run into the same //artifacts folder.
  5. Run tests in parallel in a highly efficient manner (e.g. re-using a browser instance and using browser contexts as a test isolation primitive).
    • It should be trivial to re-use a browser across parallel workers as well (this is one of the top-requested issues in AVA); see the sketch after this list.
  6. Collect fail-over artifacts. For example, it's tedious to set up Jest to take a page screenshot only after all 3 retries of a test have failed.
  7. Per-test retries. Even though we believe that tests should not be flaky, pragmatic retries are still very useful.
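
To illustrate points 1 and 5, here is a minimal sketch that uses Playwright's public API (the chromium/firefox/webkit launchers, newContext, newPage). The way the test is parameterized over browser types and the artifacts path are my own illustration, not a feature of an existing test runner.

```ts
import { chromium, firefox, webkit } from 'playwright';

// Sketch: launch each browser once, then give every test a fresh, cheap
// browser context so contexts (not processes) act as the isolation primitive.
async function screenshotTestAcrossBrowsers() {
  for (const browserType of [chromium, firefox, webkit]) {
    const browser = await browserType.launch(); // launched once, meant to be shared by many tests
    try {
      const context = await browser.newContext(); // per-test isolation
      const page = await context.newPage();
      await page.goto('https://example.com');
      // Hypothetical per-browser artifact path under a single //artifacts folder.
      await page.screenshot({ path: `artifacts/example-${browserType.name()}.png` });
      await context.close();
    } finally {
      await browser.close();
    }
  }
}

screenshotTestAcrossBrowsers();
```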

Debugging

Ideally, test debugging is no different from any other Node.js debugging. If there's already Node.js debugging set up in VSCode, or a CLI-based approach with the --inspect-brk node flag, having the same experience in tests would benefit users.

Today, this is not the case.

  • JEST: VSCode debugging requires a custom launch config in VSCode; the --inspect-brk docs are tedious.
  • Mocha implements a custom "debug" command for the framework. According to StackOverflow, "how do I debug mocha tests?" is still a popular question.
  • AVA implements the "debug" command as well
  • Jasmine doesn't have any debugging story and falls back to manually running the jasmine CLI with node and --inspect-brk.

However, across all of these test runner debugging experiences:

  • when using the --inspect-brk approach, the initial breakpoint landed in test runner internals
  • everywhere besides Jest, the test runner wouldn't disable the timeout, so the test times out while I'm debugging it

TypeScript Support

TypeScript support is very important, given how big TypeScript adoption is. For example, AVA faced huge demand to run .ts files: avajs/ava#1109 (this is the #2 overall AVA bug, now closed).

Today's TypeScript support across test runners is powered by 3rd-party modules.

However, almost everywhere besides Jest, TypeScript-running solutions are very unpopular:

[chart: npm downloads of TypeScript integrations for each test runner]

It's unclear why, but one of the reasons might be the jest-jsdom-react-typescript combination, which is very popular and brings TypeScript into Jest testing land.
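
For reference, a minimal sketch of the common setup on the Jest side, assuming the third-party ts-jest package (newer Jest versions can read a TypeScript config file directly; with older versions the same options go into a plain jest.config.js):

```ts
// jest.config.ts -- minimal ts-jest setup sketch.
import type { Config } from '@jest/types';

const config: Config.InitialOptions = {
  preset: 'ts-jest',        // compile .ts/.tsx test files through ts-jest
  testEnvironment: 'node',  // e2e tests drive a real browser, so jsdom isn't needed
};

export default config;
```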

Parallelization

Test parallelization is a very popular topic in the front-end world:

  • jest runs in parallel by default (that's why it's important to pass --runInBand for e2e tests on CI)
  • ava runs tests in parallel by default
  • mocha recently released long-awaited parallel support: announcement

All current test runners use child processes to parallelize jobs.
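
As a rough sketch of the child-process model (my own illustration, not any runner's actual implementation): test files are split across workers forked with Node's built-in child_process module, where worker.js is a hypothetical script that runs whatever files it receives as arguments.

```ts
import { fork } from 'child_process';
import * as os from 'os';

// Spread test files round-robin across N forked worker processes.
function runInChildProcesses(testFiles: string[]): void {
  const workerCount = Math.min(os.cpus().length, testFiles.length);
  for (let i = 0; i < workerCount; i++) {
    const filesForWorker = testFiles.filter((_, index) => index % workerCount === i);
    const worker = fork('./worker.js', filesForWorker); // hypothetical worker script
    worker.on('exit', code => {
      if (code !== 0)
        console.error(`worker ${i} exited with code ${code}`);
    });
  }
}

runInChildProcesses(['a.spec.js', 'b.spec.js', 'c.spec.js']);
```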

PROs:

  • the only "real" way to parallelize work. CPU-heavy node.js tests can be effectively parallelized

CONs:

  • hard to share resources (e.g. a running browser instance) across worker processes
  • process startup and inter-process communication add overhead

However, browser e2e tests are not CPU-heavy: the majority of the work happens in the driven entity (browser, database, other microservices). Node.js tests are purely orchestrating (e.g. with Playwright) and running assertions (e.g. with jest-expect or chai).

This makes it possible to parallelize tests in a single Node.js process, relying on async I/O. This approach is currently being explored in Playwright's test runner.
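
A minimal sketch of that idea (my own illustration, not Playwright's actual test runner code): run async, I/O-bound test functions concurrently in a single process, bounded by a simple concurrency limit.

```ts
type TestFn = () => Promise<void>;

// Run up to `limit` async tests at a time inside one Node.js process.
// While one test awaits browser I/O, the event loop makes progress on others.
async function runWithConcurrency(tests: TestFn[], limit: number): Promise<void> {
  const queue = [...tests];
  const workers = Array.from({ length: limit }, async () => {
    while (queue.length > 0) {
      const test = queue.shift()!; // safe: no await between the length check and the shift
      await test();
    }
  });
  await Promise.all(workers);
}

// Usage: 20 fake I/O-bound "tests", at most 5 in flight at once.
const fakeTests: TestFn[] = Array.from({ length: 20 }, (_, i) => () =>
  new Promise<void>(resolve => setTimeout(() => {
    console.log(`test ${i} done`);
    resolve();
  }, 100)),
);

runWithConcurrency(fakeTests, 5);
```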

PROs:

  • easy for users to share resources across workers
  • fastest performance

CONs:

  • might not be efficient if there's a lot of CPU work in the Node.js process