@justin-vanwinkle
Last active July 14, 2017 15:50
Best viewed in raw since gist only supports 3 levels of nesting

Test-Driven Development -- by Craig Oliver (https://github.com/PurpleGuitar)

  1. What is it?
  • A software development approach that emphasizes short, rapid cycles where tests are written before implementation.
  2. How is it done?
  • Start with requirements
  • For each requirement, write one or more test cases that will demonstrate correct behavior when they pass
    • (At this point, all new test cases probably fail, or don't even compile. That's OK; in fact, that's part of the point.)
  • Write code until all tests pass, both new tests and existing regression tests.
  • Refactor code as needed, making sure all tests still pass.
  • Add new tests to the pool of regression tests.
  • STOP. :)
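The cycle above can be sketched in a few lines of Python. This is a minimal, hypothetical example (the requirement and the function name `shout` are not from the notes): the test is written first, fails, and then just enough code is written to make it pass.

```python
# Step 1 (red): write the test first. Before the implementation
# exists, this test fails -- that's expected and part of the point.
def test_shout():
    assert shout("hello") == "HELLO!"

# Step 2 (green): write just enough code to make the test pass.
def shout(text):
    return text.upper() + "!"

# Step 3: run the test; once it passes, refactor while keeping it green,
# then add test_shout to the pool of regression tests.
test_shout()
```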
  3. Why do it?
  • Keeps focus on requirements / user stories
  • Forces developer to think about what correct behavior is
  • Often surfaces bugs in the requirements or design before code ever gets written
  • Forces consideration of the new code's API and client contract
  • Incentivizes writing small, modular, testable pieces of code
  • Naturally builds a suite of regression tests that can catch future bugs
  4. Example
  • Requirement: a function that returns the first n characters of its input
  • Create simple test for a single use case
  • Create slice implementation of function
  • Demonstrate test passes
  • Expand test for a few use cases
  • Demonstrate tests fail
  • Create better implementation of function
  • Demonstrate tests pass
  • Refactor test into data-driven version
  • Demonstrate tests pass
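The example above might end up looking like this. The function name `first_n` and the specific test cases are assumptions for illustration; the notes don't name them. The test shows the final data-driven form: a table of inputs and expected results driving one small core test.

```python
# Requirement: return the first n characters of the input.
def first_n(text, n):
    return text[:n]

# Data-driven test: a table of (input, n, expected) cases,
# including boundary cases that the first naive test would miss.
CASES = [
    ("hello", 3, "hel"),
    ("hello", 0, ""),
    ("hi", 10, "hi"),  # n longer than the input
    ("", 5, ""),       # empty input
]

def test_first_n():
    for text, n, expected in CASES:
        assert first_n(text, n) == expected, (text, n)

test_first_n()
```

Adding a new case is now a one-line change to the table, which is what makes the data-driven refactor worthwhile.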
  5. Best practices

    5.1. Tests should be automated

    • Unit tests should pretty much always be automated
    • Integration tests can sometimes be cost/time prohibitive to automate

    5.2. Tests should not depend on each other

    • Start with the assumption the system is in a neutral state
    • Set up the system to match the prerequisites of the test
    • Execute the test
    • Validate the results
    • Restore the system to a neutral state
    • Sometimes it's more practical to do this cycle for a small group of tests
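The setup/validate/restore cycle above maps directly onto `unittest`'s `setUp`/`tearDown` hooks. A sketch, assuming a test that needs a scratch file as its prerequisite:

```python
import os
import tempfile
import unittest

class ScratchFileTest(unittest.TestCase):
    def setUp(self):
        # Set up the system to match the prerequisites of the test.
        fd, self.path = tempfile.mkstemp()
        os.close(fd)

    def tearDown(self):
        # Restore the system to a neutral state so no other
        # test depends on what this one left behind.
        os.remove(self.path)

    def test_write_then_read(self):
        # Execute the test and validate the results.
        with open(self.path, "w") as f:
            f.write("data")
        with open(self.path) as f:
            self.assertEqual(f.read(), "data")
```

Because setup and teardown run around every test method, the tests stay independent and can run in any order.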

    5.3. Code functions should have as few dependencies and side effects as possible

    • Every dependency makes testing the function harder
    • Example: function that accepts a file name
    • To test this, we have to create the external file that the function reads every time we want to test it
    • Instead, consider accepting a file object; then the test can (for example) use a StringIO object to create the contents and pass it to the function without having to create the file
    • Example: function that reaches out to a 3rd party API and queries it
    • This is very hard to test -- we can't control the 3rd party API, and it may be slow, buggy, unstable, or a host of other problems; verifying correct behavior is difficult
    • Instead, consider passing in a proxy object to the 3rd party API; we can verify that the function called the correct proxy methods with the correct parameters.
    • Example: function that internally writes status data to the system log
    • To test correct logging behavior, we would have to open the log file after the function call and try to verify that the writes were correct
    • Instead, consider passing a logging object into the function; we can then query the object after the function run to see if we got the data we expected
    • Passing dependencies in through parameters is much easier to test than internal dependencies
    • Object mocking is a great strategy to use when external dependencies are unavoidable; but writing smaller, less dependent functions can often avoid the need for mocking
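The file-object example above is easy to show concretely. A sketch (the function `count_lines` is hypothetical): because the function accepts any file-like object, the test can inject an in-memory `StringIO` instead of creating a real file on disk.

```python
import io

def count_lines(stream):
    # Accepts any file-like object -- a real file or an in-memory stream.
    return sum(1 for _ in stream)

def test_count_lines():
    # No file on disk needed: StringIO provides the contents directly.
    assert count_lines(io.StringIO("a\nb\nc\n")) == 3

test_count_lines()
```

In production code the caller passes `open(path)`; in tests, a `StringIO`. The function itself never knows the difference, which is exactly what makes it easy to test.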

    5.4. Develop data-driven tests where possible

    • A data-driven test is a small core test driven by a table of parameters and expected results
    • A great fit for testing the bounds of a function's behavior
    • Makes writing new tests easy, which encourages more tests
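In the standard library, `unittest.TestCase.subTest` gives data-driven tests per-case failure reporting, so one bad row doesn't hide the others. A sketch, with a hypothetical `clamp` function under test:

```python
import unittest

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

class ClampTest(unittest.TestCase):
    # The table of parameters and expected results.
    CASES = [
        (5, 0, 10, 5),    # inside the range
        (-1, 0, 10, 0),   # below the lower bound
        (99, 0, 10, 10),  # above the upper bound
    ]

    def test_clamp(self):
        for x, lo, hi, expected in self.CASES:
            # subTest reports each failing row separately.
            with self.subTest(x=x):
                self.assertEqual(clamp(x, lo, hi), expected)
```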

    5.5. Test for negative paths as well as positive ones

    • Make sure the system responds appropriately to error conditions
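A negative-path test asserts that the error condition is handled, not just that the happy path works. A sketch, reusing the hypothetical `first_n` function and assuming (for illustration) that its contract is to raise `ValueError` on negative `n`:

```python
def first_n(text, n):
    if n < 0:
        raise ValueError("n must be non-negative")
    return text[:n]

def test_first_n_rejects_negative():
    # The test passes only if the expected error is actually raised.
    try:
        first_n("hello", -1)
    except ValueError:
        pass
    else:
        assert False, "expected ValueError for negative n"

test_first_n_rejects_negative()
```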