Even without refactoring, writing good tests increases my effectiveness as a programmer.
He also mentions that "Agile" is built on the following factors:
- CI
- Self-testing code
- Refactoring
Fixing the bug is usually pretty quick, but finding it is a nightmare. And then, when you do fix a bug, there's always a chance that another one will appear and that you might not even notice it till much later. And you'll spend ages finding that bug.
I was nodding a lot at this. And I've felt blessed whenever there was a good test.
When I need to add a feature, I begin by writing the test. This isn't as backward as it sounds. By writing the tests, I'm asking myself what needs to be done to add the function. Writing the test also concentrates me on the interface rather than the implementation (always a good thing).
^ Interface First!
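To make that concrete, here's a minimal sketch of what writing the test first can look like, in the Mocha/Chai style the book's JavaScript examples use. The `Province` module path and the sample numbers are my own placeholders, not the book's code; the point is that the test forces me to decide the interface before any implementation exists.

```js
const { assert } = require('chai');
// This module doesn't exist yet - writing the test first is what forces
// me to decide what its interface should be.
const Province = require('../src/province'); // hypothetical path

describe('province', () => {
  it('reports the shortfall as demand minus total production', () => {
    // The test pins down the interface: a constructor taking plain data
    // and a `shortfall` getter - nothing about how it's computed.
    const asia = new Province({
      name: 'Asia',
      demand: 30,
      producers: [{ name: 'Byzantium', cost: 10, production: 9 }],
    });
    assert.equal(asia.shortfall, 21); // 30 demand - 9 production
  });
});
```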
When I investigate a bug, I sometimes take the following steps (a sketch follows the list):
- Write a test that confirms the bug. This is sometimes the fastest way to reproduce it.
- Add test items that don't fail. Add some that you know are actually wrong but pass because of the bug.
- Fix the bug. Now the test items you added in the previous step must fail.
- Update the test items that you knew were wrong.
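Here is roughly how those steps look in test code (Mocha/Chai assumed; the bug, the `sampleProvince` fixture, and the numbers are all hypothetical):

```js
const { assert } = require('chai');

describe('profit calculation (hypothetical bug: producer cost is ignored)', () => {
  // Step 1: confirm the bug - this test fails until the bug is fixed.
  it('subtracts producer cost from revenue', () => {
    const asia = sampleProvince(); // assumed fixture
    assert.equal(asia.profit, 230); // the correct expectation
  });

  // Step 2: encode the buggy behaviour on purpose - it passes today.
  // Step 3: after the fix, this one must start failing.
  // Step 4: then update it to the correct expectation (or delete it).
  it('currently returns revenue without subtracting cost (known wrong)', () => {
    const asia = sampleProvince();
    assert.equal(asia.profit, 300); // the wrong value the bug produces
  });
});
```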
A suite of tests is a powerful bug detector that decapitates the time it takes to find bugs.
(...) I'm only going to concentrate on the business logic part of the software - that is, the classes that calculate the profit and the shortfall, not the code that generates the HTML and hooks up the field changes to the underlying business logic. This chapter is just an introduction to the world of self-testing code, so it makes sense for me to start with the easiest case - which is code that doesn't involve user interface, persistence, or external service interaction. Such separation, however, is a good idea in any case.
When you write code, keep asking yourself which parts are business logic and which parts have to be involved with displaying the UI or handling interaction. In backend code, ask which methods need to rely on the HTTP request object and which methods can be independent, so that you can test them.
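A small sketch of that separation (my own example, with an Express-style handler assumed): the business logic is a plain function with no knowledge of `req`/`res`, so it can be tested without any HTTP machinery.

```js
// Pure business logic: easy to unit test in isolation.
function calculateShortfall({ demand, producers }) {
  const totalProduction = producers.reduce((sum, p) => sum + p.production, 0);
  return demand - totalProduction;
}

// Thin handler: the only place that knows about the HTTP request object.
function shortfallHandler(req, res) {
  const provinceData = req.body; // parsing/validation stays at the edge
  res.json({ shortfall: calculateShortfall(provinceData) });
}

// The test never touches req/res:
// assert.equal(calculateShortfall({ demand: 30, producers: [{ production: 9 }] }), 21);

module.exports = { calculateShortfall, shortfallHandler };
```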
Always make sure a test will fail when it should.
The following command is equivalent to our looptest command for frontend unit tests: `yarn run test:with <test file path> -w`. The `-w` flag is the "watch" option.
The style I follow is to look at all the things the class should do and test each one of them for any conditions that might cause the class to fail. This is not the same as testing every public method, which is what some programmers advocate. Testing should be risk-driven; remember, I'm trying to find bugs, now or in the future.
I think it's OK to add tests for some private methods if they are worth testing. I appreciate keeping the code testable and paying attention to the risks a private method could carry.
It is better to write and run incomplete tests than not to run complete tests.
The `const` keyword in JavaScript only means the reference to `asia` is constant, not the content of that object. Variables defined with `const` are not immutable. Even with this simple test example, you can tell how difficult coding with mutable variables is.
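A quick illustration of that point (mine, not the book's code):

```js
const asia = { name: 'Asia', demand: 30 };

asia.demand = 20;          // allowed: the object's contents are mutable
// asia = somethingElse;   // TypeError: the binding `asia` cannot be reassigned

// This is why a shared fixture mutated in one test can quietly affect another.
// Rebuilding the fixture in a beforeEach block (or using Object.freeze) avoids that.
```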
In this test, I'm verifying two different characteristics in a single `it` clause. As a general rule, it's wise to have only a single verify statement in each `it` clause.
This is the ideal pattern ^. But if it's easy to tell what failed, I think it's OK not to follow this pattern.
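For comparison, here is the split version next to the compact one I sometimes accept (Mocha/Chai assumed; `sampleProvince` and the expected values are hypothetical). The catch with the compact form is that the second assert never runs when the first one throws, so one failure can hide another.

```js
const { assert } = require('chai');

describe('province', () => {
  // The "ideal" pattern: one verify per it clause, so the failure message
  // says exactly which property broke.
  it('reports the shortfall', () => {
    assert.equal(sampleProvince().shortfall, 5);
  });
  it('reports the profit', () => {
    assert.equal(sampleProvince().profit, 230);
  });

  // The compact variant: fine when it's obvious which assert failed,
  // but the profit check is skipped whenever the shortfall check throws.
  it('reports shortfall and profit', () => {
    const asia = sampleProvince();
    assert.equal(asia.shortfall, 5);
    assert.equal(asia.profit, 230);
  });
});
```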
Think of the boundary conditions under which things might go wrong and concentrate your tests there.
The same goes for writing code. If a parameter has an unexpected value, the method or function should fail fast, with a clear message. We tend not to throw an error immediately but instead set a default value and let other functions deal with it, which makes the code complicated. The author also says:
One approach is to add some handling that would give a better error response - either raising a more meaningful error message, or just setting `producers` to an empty array.
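A sketch of those two responses (illustrative names, not the book's exact code):

```js
class Province {
  constructor(doc) {
    // Option 1: fail fast with a clear message instead of letting bad
    // data wander around the program.
    if (!Array.isArray(doc.producers)) {
      throw new Error(`Province "${doc.name}": producers must be an array`);
    }
    this._producers = doc.producers;

    // Option 2 (laxer): quietly fall back to an empty array instead:
    // this._producers = Array.isArray(doc.producers) ? doc.producers : [];
  }
}
```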
Adding assertions to the production code is encouraged. It helps developers see what parameters the function expects.
if this error could lead to bad data running around the program, causing a failure that will be hard to debug, I might use Introduce Assertion (302) to fail fast. I don't add tests to catch such assertion failures, as they are themselves a form of test.
We have some tests like this ^ in our unit tests. I think the quote above is right.
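For reference, a minimal sketch of what Introduce Assertion can look like in production code (my example, using Node's built-in assert; the book's mechanics may differ in detail). The assertion documents what the function expects and makes bad input fail fast, and no unit test is written for the assertion itself.

```js
const assert = require('assert');

function applyDiscount(price, discountRate) {
  // States the expectation for the next reader as well as for the runtime.
  assert(discountRate >= 0 && discountRate <= 1,
         `discountRate must be between 0 and 1, got ${discountRate}`);
  return price * (1 - discountRate);
}
```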
when you get a bug report, start by writing a unit test that exposes the bug.
I think writing testable code is really, really important. If the code is not testable, we can't even write this type ^ of test.
A common question is, "How much testing is enough?" There's no good measurement for this. Some people advocate using test coverage as a measure, but test coverage analysis is only good for identifying untested areas of the code, not for assessing the quality of a test suite.
This is true. We still want to track test coverage, though, because the difference between 0% and 10% is huge, and the difference between 10% and 50% is huge as well. I just don't advocate that it has to be more than 80% or some other threshold.
It is possible to write too many tests. One sign of that is when I spend more time changing the tests than the code under test - and I feel the tests are slowing me down. But while over-testing does happen, it's vanishingly rare compared to under-testing.