@aaronj1335, last active December 15, 2015
unit testing -- from test.js to continuous integration

unit testing

unit tests are good for a number of reasons, but it's important to know when writing them is worth the time and effort. we're not dogmatic about TDD, but we try to make it as easy as possible for the developer to write meaningful tests.

when deciding whether or not to write a test, keep the following in mind:

  • javascript is INSANELY dynamic, so refactoring can introduce a lot of subtle bugs, and doing it safely is nearly impossible without good unit tests. inasmuch as you expect a piece of code to live for a while and change a lot, you should write unit tests for it.

  • when developers are collaborating on a project, it's easy to screw each other up. the most effective way of avoiding that is to unit test your work such that when someone else writes something that messes up your hard work, you have a failing unit test you can use to keep them in check.

the basics

making a unit test starts with creating a .js file with 'test' somewhere in the name. the convention we try to follow is putting a test.js file in a directory with the same name as the file you're testing, for example:

views/
├── myview/
│   └── test.js
└── myview.js

1 directory, 2 files

the test file is a require.js module just like any other, except QUnit will be pre-loaded into the environment (more on that later).

/*global test, asyncTest, ok, equal, deepEqual, start, strictEqual, notStrictEqual, raises*/
define([
    './../myview'
], function(MyView) {
    asyncTest('instantiating myview', function() {
        var myView = MyView();
        myView.somethingAsync().then(function() {
            ok(myView);
            start();
        });
    });
    start();
});

breaking it down:

  • line 1: some people use js{lint,hint}, so it helps to declare the globals provided by QUnit at the top.

  • line 5: here we're declaring a typical QUnit test named 'instantiating myview'. if this doesn't look familiar, then read the QUnit cookbook, keeping in mind that you only need to focus on the javascript, since we don't embed that in the markup.

  • line 6: this is where the actual testing happens. specifics about assertions and stuff you can use are available in the api docs.

  • line 9: note that because this is an asyncTest and not just a test, we have to call start() to let QUnit know we're done. if you have a test with many different exit points, it's easy to forget to call start() at the end of one of them, and debugging that can get really frustrating, so don't forget start()!

  • line 12: because of how we set up our unit test environment, this needs to be at the end of every file. it's a long boring story, so just put it at the end of your files.
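the "multiple exit points" pitfall from the breakdown above can be sketched as a runnable snippet. the tiny asyncTest/ok/start stubs below stand in for QUnit, and MyView is a made-up fixture -- none of this is the real project code:

```javascript
// minimal stand-ins for the QUnit globals (assumption: for illustration only)
var started = 0;
var assertions = [];
function ok(value, message) { assertions.push({ pass: !!value, message: message }); }
function start() { started += 1; }
function asyncTest(name, fn) { fn(); }

// made-up view whose save() invokes a success or failure callback
function MyView(shouldFail) {
    return {
        save: function() {
            return {
                then: function(onSuccess, onError) {
                    if (shouldFail) { onError('network error'); }
                    else { onSuccess({ saved: true }); }
                }
            };
        }
    };
}

asyncTest('saving myview', function() {
    MyView(false).save().then(function(result) {
        ok(result, 'save succeeded');
        start();  // exit point #1
    }, function(error) {
        ok(false, 'save failed: ' + error);
        start();  // exit point #2 -- the one that's easy to forget
    });
});
```

whichever branch runs, start() fires exactly once; drop it from the error branch and a failing save would leave QUnit hanging until the timeout.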

real-world unit tests

that's about all you need to know to get started, but most of the meaningful tests you write will need things like setup code and some kind of data mocking. we also have several singleton classes that manage state on the page, and these need to be reset between each unit test.

setup code

we seemed to run into race conditions when we used the QUnit module() setup/teardown functions, so instead we typically define a setup() method in each test file that returns a deferred that resolves once it has done all of the typical prerequisite tasks. a good example of this is the model consistency tests.
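the shape of that convention might look like this hypothetical sketch -- `stores` and its mockReset()/load() methods are invented for illustration, and a native Promise stands in for whatever deferred implementation the app actually uses:

```javascript
// stand-in for the app's shared data stores (assumption: not real project code)
var stores = {
    _data: null,
    mockReset: function() {
        // restore pristine fixture data so one test can't poison the next
        this._data = [{ id: 1 }, { id: 2 }];
    },
    load: function() {
        return Promise.resolve(this._data);
    }
};

// each test file defines one of these and calls it at the top of every test
function setup() {
    stores.mockReset();
    return stores.load().then(function(fixtures) {
        return { fixtures: fixtures };
    });
}
```

a test then begins with `setup().then(function(env) { ... start(); })` so the assertions only run once the prerequisites have resolved.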

mocking data

this is a big topic that's covered in another write-up. the important thing to remember is to reset the data stores and managers between tests, which is typically accomplished through the mockReset() method.

under the hood

the csi unit test server takes care of setting up the QUnit environment and serving your unit tests at a different URL for each test.js file. start the server at the root of your repository directory with:

$ csi test -l
[csi test] serving at http://localhost:1335

this will let you know the hostname and port at which it is serving. you can get a list of all the available test URLs by running:

$ csi test --listtests
http://localhost:1335/components/lookandfeel/test
http://localhost:1335/components/vendor/test
http://localhost:1335/components/vendor/test_requirejs_shim
...

the URL of a unit test tells you where the test can be found in your development directory: just add /static right after the port number, so http://localhost:1335/components/lookandfeel/test can be found at ./static/components/lookandfeel/test.js.
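the mapping is mechanical enough to spell out as a tiny helper (a hypothetical function, not part of the tooling): swap the scheme and host for ./static and append .js.

```javascript
// translate a test-server URL into the test file's on-disk path
function testUrlToPath(url) {
    return url.replace(/^https?:\/\/[^/]+/, './static') + '.js';
}

console.log(testUrlToPath('http://localhost:1335/components/lookandfeel/test'));
// -> ./static/components/lookandfeel/test.js
```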

there are a lot of unit tests, and if you're working on library code, you'll often want to run through them quickly. this process is designed to be scriptable, and there are examples of scripts that automate the process of opening up the tests in browser tabs here and here.

csi's unit test server also provides hooks for things like interposing on requests and adding html template snippets to the environment it serves up. these are some of the dark corners of the system, and if you feel like you need to be using this stuff you should either (a) know what you're doing or (b) find some other way to do it by augmenting your setup() function or mocking more data.

continuous integration

we have a jenkins job that spins up the test server and then uses the phantomjs headless web browser to visit (nearly) all of the unit test URLs. the job runs after every build, so if you're a UI developer and a unit test failed, you'll get a super verbose email informing you that something is broken. follow the link to the job's run page in jenkins and you'll see something like this:

[screenshot: jenkins unit test failure]

that 'test result' section tells you the failure was:

components.gloss.widgets.powergrid.test.timeout

if jenkins were more flexible about how it parses and understands junit xml, we might be able to show something more descriptive, but as it is you'll need to figure out for yourself that the failure corresponds to the unit test at:

http://localhost:1335/components/gloss/widgets/powergrid/test
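the translation from junit name to URL can be written down as a hypothetical helper. it assumes the last dotted segment is the failure name ('timeout' here) and the rest is the module path -- that assumption matches the example above but isn't guaranteed by the tooling:

```javascript
// turn a jenkins junit test name into the local test-server URL
function junitNameToUrl(name, host) {
    var parts = name.split('.');
    parts.pop();  // drop the trailing failure/assertion name
    return host + '/' + parts.join('/');
}

console.log(junitNameToUrl('components.gloss.widgets.powergrid.test.timeout',
                           'http://localhost:1335'));
// -> http://localhost:1335/components/gloss/widgets/powergrid/test
```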

you should be able to spin up a local server, reproduce the issue, and address it. occasionally (rarely) a test works fine locally but fails in the jenkins environment, in which case you'll need to understand a bit more about the machinery used to automate the process.

automation machinery

we have a repo creatively named csutr (client side unit test runner), which jenkins uses to clone repos and run the tests. i can't overstate how much easier this should be, but it's just not. the jenkins job kicks off run-jenkins-tests.sh, which roughly:

  • clones the app repo (daft, glad, etc.) TODO: we need to add dashboard to this list
  • runs npm install to get the js dependencies
  • runs npm install <path to csutr> so that it can later call the csutr command from within the app directory
  • runs csi install to set up the static directory within the app
  • runs the csutr command, outputting the results into testResults.xml

note that if you'd like to run csutr locally, you can go to your app directory, npm install git://github.com/siq/csutr.git and kick it off with csutr > testResults.xml.

now it gets complicated

csutr directs a phantomjs process to each of the unit test URLs. that's straightforward enough, but it also has to figure out which tests passed, which ones failed, and when to call it a timeout and move on. this is complicated because it's difficult to marshal data and control flow back and forth between the browser and the node.js process that's writing it out to disk. csutr uses spookyjs from within node to control the phantomjs process via the casperjs api. the tricky part is that functions defined in the node.js process are serialized and eval'ed in the phantomjs process and browser contexts, which breaks all of the static lexical scoping that you would expect to hold. for this reason anything that runs outside the node.js context lives in the lib directory, to try to minimize confusion.
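the scoping breakage can be demonstrated without spooky or casper at all -- this standalone sketch serializes a function with toString() and re-evaluates it, which is roughly what happens when a function is shipped to the phantomjs side:

```javascript
var demo = (function() {
    // `limit` lives in this closure -- functions defined here can see it
    var limit = 10;
    function inBrowser() { return typeof limit; }

    // roughly what spookyjs does: only the source text travels across the
    // boundary, then it's re-evaluated in a fresh (global) scope
    var source = '(' + inBrowser.toString() + ')()';
    var rehydrated = new Function('return ' + source);

    return { direct: inBrowser(), rehydrated: rehydrated() };
})();

console.log(demo.direct);      // 'number' -- the closure still sees `limit`
console.log(demo.rehydrated);  // 'undefined' -- serialization severed the scope
```

this is why closed-over node.js variables silently vanish inside functions evaluated in the browser context, and why the code that crosses the boundary is quarantined in lib.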

hopefully you won't need to mess with any of that.

configuration and quirks

there are several hooks for configuring what unit tests are included in the csutr report. for instance you can instruct it to include or exclude certain files, modules within those files, or even specific tests. you configure which tests run by setting the package.json csi.test{Exclude,Include,Ignore} keys (examples in the list below).
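the package.json hooks might look something like the following -- the key names come from the text above, but the values and their shapes are invented for illustration, so check an actual app's package.json for the real format:

```json
{
  "name": "someapp",
  "csi": {
    "testInclude": ["components/"],
    "testExclude": ["components/vendor/test"],
    "testIgnore": ["components/gloss/widgets/powergrid/test"]
  }
}
```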

the way we've configured our apps is to have csutr:

design moving forward

in the past we took a couple approaches that complicated things:

generated javascript

the js files for the mesh api bindings were generated at build time, so we had a bunch of code that depended on classes that couldn't be installed with the typical npm and csi commands, and we essentially needed a bunch of python build machinery just to run client-side unit tests. since then we've made the api-js repo, which can be installed by npm/csi, and that solves the problem.

mock servers

originally we thought that api implementors could provide mock servers for unit tests because:

  • since mesh had all this metadata about the api, it could easily mock this up in a server
  • the implementors were the ones that knew the api details, so they were in a good position to mock the behavior

it turned out to be a lot of effort to mock even basic functionality in mesh, and at the end of the day, api implementors are busy enough with their own work that they're not going to spend time mocking data as well. beyond that, test servers require a massive amount of infrastructure (just have a look at a build manifest). including them as a dependency kills the unit tests' portability and makes it harder to justify their maintenance costs, especially where continuous integration is concerned.

suggestions

  • keep the client-side dependencies within the npm/csi toolchain
  • mock data on the client side
  • don't write unit tests that rely on proxying to dev or mock servers