When discussing food, an expiration date is a date by which a supplier suggests you consume their product. Some foods, like dairy or meat, go bad and can cause us harm. Other foods, like tortilla chips or crackers, just get stale; they may taste OK and not kill us, but usually, we don’t enjoy stale food because part of the value we derive from food is enjoyment. When food items have an expiration date, we say they have a shelf life; the shelf life ends once this date has passed.

Some of the tests that we perform release-over-release may also have a shelf life. Eventually, these tests provide so little value that they are no longer worth running, or at least not worth running frequently. If we view automation as assistance for testers, it makes sense that some of our automation has an expiration date as well – our automation also has a shelf life.

What do I mean? Most organizations don’t deliver a software release with fewer features than the previous version. Even if we remove a few features, we generally deliver more features than we remove. Since we are adding more features, we need to add more tests; adding more tests usually means adding more automation. Over time, our ever-growing automation takes longer to run and requires more effort to maintain.

As a real-world example, I once worked with an organization that had created a proprietary tool to help test the complex algorithms in one of their products. Based on the product and its interface, creating a proprietary tool was quite appropriate. The issue was how they added test scenarios: in addition to adding scenarios when new features were created, they added a scenario to reproduce each bug found. As we can imagine, over the years many scenarios were added, eventually numbering in the thousands. As they aged, many of the scenarios were no longer valuable because the regulatory requirements they were checking had changed or been rescinded. Also over time, the data on which the tests relied aged until it was no longer representative of real-world data. These tests and their associated data had exceeded their shelf life.

Ideally, automation of this sort would be executed on each deploy to each environment, but that was no longer feasible. Each automation run had to execute all of these scenarios; the run itself took several hours, and assessing the data it produced took several more. The more scenarios that were added, the longer the automation run took.

With this situation, some obvious questions come to mind…

Why not just run the scenarios in parallel? Unfortunately, the tool was not designed to run test scenarios in parallel.

Why not just remove the stale tests now? Many of the tests were old, and their authors had moved on long ago. This made it difficult to know which scenarios had “gone bad”; unlike expired milk, these stale tests don’t always have a foul odor.

Why not start over from scratch with a new tool and test scenarios? Risk, mainly. The set of algorithms under test provided the core value to the customers; sometimes there were even financial penalties for incorrect algorithm performance. The amount of effort required was also quite large. The product leadership was unwilling to undertake such a risky and effort-intensive initiative.

What if they created new and relevant data, at least? The test scenarios and the oracle files were tightly coupled to that data, so the team would have had to update the data, the scenarios, and the oracles together. Such an initiative carried a huge actual cost as well as an opportunity cost.

Though this is an extreme case, many organizations will find themselves in a similar situation if they follow the “new test on every bug” path. Likewise, when automating “in-sprint”, such as with a BDD or ATDD approach, we can wind up in the same place if we have no strategy for handling aging or stale tests.

Clearly, we can reduce the time it takes to run our automation by running some of it concurrently, provided we’ve written and configured it appropriately. This approach, however, can require additional automation creation time and additional resources: unique user data, more computing power, and, for non-shareable systems, more system-under-test resources. We’re also merely deferring a problem that we will likely face again in the future.
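As a minimal sketch of what “written and configured appropriately” can mean (this is illustrative, not the tool described above): with pytest and the pytest-xdist plugin, each worker process can claim its own pre-provisioned user data so concurrently running scenarios never collide. The account pool below is a hypothetical example.

```python
# conftest.py -- a minimal parallelization sketch (assumes pytest + pytest-xdist).
# Run with:  pytest -n 4   to spread scenarios across four worker processes.
import pytest

# Hypothetical pool of pre-provisioned test accounts, one per worker,
# so concurrently running scenarios never share mutable user data.
TEST_ACCOUNTS = {
    "gw0": "test_user_0@example.com",
    "gw1": "test_user_1@example.com",
    "gw2": "test_user_2@example.com",
    "gw3": "test_user_3@example.com",
}

@pytest.fixture
def test_account(worker_id):
    # worker_id is provided by pytest-xdist ("gw0", "gw1", ...);
    # it is "master" when the suite runs serially without -n.
    return TEST_ACCOUNTS.get(worker_id, "test_user_serial@example.com")
```

The point isn’t the specific plugin; it’s that isolation has to be designed in (unique data, separate environments) before parallelism buys us anything.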

What can we do about this? The main thing we can do is create and execute on a plan to manage our automation growth. One approach is to perform periodic automation audits: any automation that is not providing value is a candidate for refactoring or removal. We can do this on a cadence (e.g. once a month), on an event (e.g. our smoke suite has exceeded a duration of X minutes), or on any other trigger that is appropriate for our organization.
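To make the audit idea concrete, here’s a rough sketch of a script that checks both kinds of trigger; the results file, its columns, and the thresholds are assumptions for illustration, not a prescription:

```python
# audit_automation.py -- a rough sketch of a periodic automation audit.
# Assumes a CSV of historical results with columns:
#   test_name, duration_seconds, last_failure (ISO date of the most recent failure).
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=365)    # hasn't failed in a year -> candidate for human review
SMOKE_BUDGET_SECONDS = 15 * 60       # event trigger: smoke suite exceeds 15 minutes

def audit(results_path="test_results.csv"):
    now = datetime.now()
    total_duration = 0.0
    review_candidates = []

    with open(results_path, newline="") as f:
        for row in csv.DictReader(f):
            total_duration += float(row["duration_seconds"])
            last_failure = datetime.fromisoformat(row["last_failure"])
            if now - last_failure > STALE_AFTER:
                # No failures in a long time: it might still be valuable, or it might be
                # checking a rescinded requirement -- flag it for a person to decide.
                review_candidates.append(row["test_name"])

    if total_duration > SMOKE_BUDGET_SECONDS:
        print(f"Suite takes {total_duration:.0f}s; budget is {SMOKE_BUDGET_SECONDS}s -- time for an audit.")
    for name in review_candidates:
        print(f"Review candidate: {name}")

if __name__ == "__main__":
    audit()
```

Note that the script only surfaces candidates; deciding whether a scenario has truly expired still needs someone who understands what it was protecting.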

Obviously, even with this kind of plan in place, our automation will grow over time; it’s the nature of the beast. We need additional strategies and plans to address the execution, results, and maintenance of our ever-growing automation; don’t forget that we need to throw away things that no longer provide value. I don’t want to eat stale potato chips and I don’t want to mess with stale automation.

Like this? Catch me at an upcoming event!