A couple of years ago, I had the opportunity to join a development team that was being expanded to create new product features. As I started to build out parts of the product, I wrote supporting unit tests as I went along. A month or so into the project, our Team Lead approached me and asked (though not in these exact words), "Why are you creating automated tests?" This wasn't a practice within the rest of the team.
Whether or not this comes naturally to you, I highly recommend asking this post's question to a team member or friend you respect. "Why do we create automated tests?" Whether for or against, do you agree with your team member or friend? Do you disagree? Discuss why over bagels or donuts. After you've done this, please read on and see if the following generates any further debate.
Having my project lead ask such a direct question led to a great conversation. He was able to share the reasons why the team had chosen not to write automated tests and I was challenged to justify why I felt compelled to include them. The discussion was very positive and stuck with me as something that should perhaps be done more often.
Here are some benefits that I have found come from carefully writing automated tests. These observations come from my experience using a test-first practice of development; your mileage may vary. I find that we put automated tests in place for two main purposes: to characterize new requirements, and to prove the continued quality of existing ones.
Characterizing New Requirements
The first is to fulfill a new requirement. This provides a host of benefits in how we do our work.
- Each test embodies how a requirement can be validated. Communicating the requirements of a piece of software often starts in the form of a text document shared between stakeholders. This is very helpful at the beginning of a project to describe ideas in a form everyone can understand. As the project progresses, though, requirements are refined and must conform to the actual releasable product. It is then much more maintainable to keep the description of requirements as close to the product as possible. Since the automated test suite is written in code, it is often stored in the same repository as the product code, making it easy to find when needed. Since it is designed to run frequently, the tests quickly return a status on whether they are up to date with the software product (or vice versa), giving a level of assurance that a text document cannot provide.
- If tests are designed in such a way as to isolate each atomic piece of logic, the architecture of our software is naturally guided by the Single Responsibility Principle.
- With each new test comes a discrete amount of work to focus on. Development efforts can often be complex. Even the implementation of a single new feature can include multiple problems to solve. Since our Developer Brains are just like everybody else's (though we often don't accept it), having a single task or problem to focus on at a time is important. It allows us to work creatively. It allows us to work linearly. It reduces the stresses of a large project by giving us small, consumable tasks that can be accomplished with each new test we write. The new test starts out failing. We know we're done when the test finally passes along with all the others. Each little success may feel laughable while we're doing it. That is, until we realize the scope and quality that it helped us accomplish.
- The quality feedback loop is kept as short as possible. When an unrelated test fails on a new feature, perhaps we have fragility in our system. Knowing this as we're working instead of weeks or months down the road guides us to a better design. The earlier any issue can be detected and fixed, the cheaper it is.
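The red-then-green rhythm described above can be sketched with a minimal example. The `parse_price` function and its requirements here are hypothetical, purely for illustration; each test below would be written first, fail, and then drive the implementation that makes it pass:

```python
import unittest


def parse_price(text):
    """Convert a price string like "$4.50" to cents (hypothetical example)."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)


class ParsePriceTest(unittest.TestCase):
    # Each test characterizes one small, discrete requirement. Written before
    # the implementation, it starts out failing and defines "done".
    def test_whole_dollars(self):
        self.assertEqual(parse_price("$4"), 400)

    def test_dollars_and_cents(self):
        self.assertEqual(parse_price("$4.50"), 450)


if __name__ == "__main__":
    unittest.main()
```

Because each test names a single requirement, a failure pinpoints exactly which behavior broke, which is what keeps the feedback loop short.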
Proving Continued Quality
We also keep those tests in place throughout the lifecycle of the software to prove that old requirements are still being satisfied. This leads to:
- Continual regression testing of the code base. There is no need to guess at the health of the software when we can rerun our suite of tests at a moment's notice. The test results are concrete numbers that can be used by the team and by the company supporting the software.
- Confidence to refactor whenever necessary. No requirements are ever forgotten when a suite of tests enforces each one. This allows us to rework any implementation logic without fear of unintended side effects. If the original design paradigm no longer suits the requirements, we should be comfortable enough in our tests to be able to change it. This greatly minimizes code rot.
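A small sketch of that refactoring safety net, with a hypothetical `slugify` function invented for illustration: the tests pin down the requirement, so the implementation underneath can be reworked while the same suite keeps answering "is it still healthy?" with concrete pass/fail results.

```python
import re
import unittest


def slugify(title):
    """Original implementation: join lowercased words with underscores."""
    return "_".join(title.lower().split())


def slugify_refactored(title):
    """A later rework of the same requirement using a regular expression.

    The regression tests below, not the old code, define correct behavior.
    """
    return re.sub(r"\s+", "_", title.strip().lower())


class SlugifyRegressionTest(unittest.TestCase):
    # Rerunning this suite after any change replaces guesswork about the
    # health of the software with concrete numbers.
    def test_lowercases_and_joins(self):
        self.assertEqual(slugify("Hello World"), "hello_world")

    def test_refactor_preserves_behavior(self):
        self.assertEqual(slugify_refactored("Hello World"),
                         slugify("Hello World"))


if __name__ == "__main__":
    unittest.main()
```

If the refactored version drifted from the requirement, the suite would catch it immediately rather than weeks later.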
Quick Note: Llewellyn Falco has grouped the benefits of good tests into four categories: Specification, Feedback, Regression, and Granularity. The above touches on some of these but he's really insightful so I encourage you to read some of his articles on the subject.
Time to Talk
The reasons above appear to be powerful forces for good that can be taken advantage of when we write automated tests with discipline. But do you agree? Do you strongly suspect that the author is off his gourd? Perfect! Spend some time over coffee tomorrow morning, over lunch, or in your next team retrospective talking about automated testing and what your response was to this article.
Do you ask your team or yourself questions like this every once in a while? Do you question the value of the habits and practices in your professional life to ensure that they are still serving their intended purpose?
Want to dig deeper?
Here's a bunch of respected professionals writing about subjects touched on in this article. Great for discussing over bagels or donuts.
- Belshee, Arlo. "Llewellyn Falco – What makes a good test suite?"
- Clear, James. "The Myth of Multitasking." James Clear. 26 February 2015
- Csaba, Patkos. "SOLID: Part 1 – The Single Responsibility Principle." Envato Tuts+. 13 December 2013
- Falco, Llewellyn. "BDD vs TDD (explained)." YouTube. 23 January 2013
- Fowler, Martin. "Unit Test." Martin Fowler. 5 May 2014
- Martin, Robert C. "Design Principles and Design Practices." Military Open Source Software. 2000
- Martin, Robert C. "The Single Responsibility Principle." 8th Light. 8 May 2014
- Wells, Dan. "Surprise! Software Rots!" Agile Process. 2009