While at a previous job, I was invited to an automation meeting by three product development managers and their leader, the VP of product development. It was a rather uncomfortable meeting because I’d underestimated an aspect of our relationship; I wrote a bit about that here.
Underestimation aside, an unexpected subtopic came up during the meeting: automation wasn’t finding any bugs. Development leadership’s opinion was that the automation effort had been of little value because it found no bugs. I was astounded that the sole value assigned to automation was whether or not it found bugs; automation provides other value as well.
First, let’s think about when automation is most likely to find bugs: upon automation’s creation and upon breaking code changes. Why? Here are some scenarios:
- The product or feature being tested is new as well; with new software comes new bugs
- The product or feature being tested is being modified; bugs here too
- New automation is being created, and that automation is performing activities with the product that differ from what a human was previously performing; humans and machines are different entities, at least as of this writing
This raises the question “did we automate the wrong thing?”; after all, we weren’t finding bugs. A better question, though, is this: based on the organization’s goal, did we automate the activity that seemed most valuable at the time? In this case, yes, we did.
Part of the product organization’s desire to automate was to move to a more frequent release cadence; they determined automation would help with that. Specifically, the test team chose smoke testing as the initial target. The definition we used for “smoke testing suite” was “a set of scripts that ran in a reasonable amount of time and checked that the system was not egregiously broken”; in other words, we wanted to know if the core features of the product could work, but we didn’t want to wait forever to know.
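That definition can be made concrete with a minimal sketch. The `Product` class below is purely hypothetical, a stand-in for whatever interface actually drives the application under test; the point is the shape of the suite: broad, shallow checks that core features work at all, plus an explicit time budget so the suite stays fast.

```python
import time
import unittest

# Hypothetical stand-in for the product under test; in practice these
# calls would go through the real application's API or UI driver.
class Product:
    def login(self, user, password):
        return user == "demo" and password == "demo"

    def search(self, term):
        return [term.upper()]

    def checkout(self, items):
        return {"status": "ok", "count": len(items)}

class SmokeSuite(unittest.TestCase):
    """Checks that the system is not egregiously broken: breadth over
    depth, one shallow check per core feature."""

    def setUp(self):
        self.app = Product()

    def test_login_works(self):
        self.assertTrue(self.app.login("demo", "demo"))

    def test_search_returns_results(self):
        self.assertTrue(self.app.search("widget"))

    def test_checkout_completes(self):
        self.assertEqual(self.app.checkout(["widget"])["status"], "ok")

if __name__ == "__main__":
    start = time.time()
    unittest.main(exit=False, verbosity=0)
    # Enforce the "reasonable amount of time" part of the definition
    # (60 seconds here is an arbitrary illustrative budget).
    assert time.time() - start < 60, "smoke suite exceeded its time budget"
```

The time budget matters as much as the checks themselves: a smoke suite that takes hours answers the “is it broken?” question too late to support a faster release cadence.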
Now, let’s think about when we usually execute our automation: when there is a change in the application being tested. My talk on periodic automation notwithstanding, most teams do not execute their automation against an application that “hasn’t changed”. If the automation checks code that doesn’t change frequently, the likelihood of finding a bug is rather low.
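That reasoning can be sketched as a small gate in front of the suite. Everything here is hypothetical (the artifact, the state file, and `run_suite`, which stands in for whatever launches the automation); the idea is simply that “has the application changed?” is cheap to answer, and an unchanged build is rarely worth re-testing.

```python
import hashlib
import pathlib

def build_fingerprint(artifact: pathlib.Path) -> str:
    # Hash the build artifact so change detection is a string comparison.
    return hashlib.sha256(artifact.read_bytes()).hexdigest()

def run_smoke_if_changed(artifact: pathlib.Path,
                         state: pathlib.Path,
                         run_suite) -> bool:
    """Run the smoke suite only when the artifact differs from the last run.

    Returns True if the suite was executed, False if the build was unchanged.
    """
    current = build_fingerprint(artifact)
    if state.exists() and state.read_text() == current:
        return False  # unchanged application: low chance of new bugs, skip
    run_suite()
    state.write_text(current)
    return True
```

This is the same economics the development leadership missed: when the code under test rarely changes, the automation rarely fires on anything new, so “no bugs found” is the expected outcome rather than a sign of failure.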
Back to the issue at hand: why wasn’t the automation finding bugs? The answer to this question has several parts:
- The product was mature and many of the core features did not change frequently
- The developers proceeded with extra care when modifying these features
- The pesticide paradox: rerunning the same tests over the same code eventually stops revealing new bugs, much as repeated use of the same pesticide stops killing pests
While it’s true the automation didn’t find any bugs, I submit that it was not valueless. It enabled the delivery teams to proceed faster and therefore aided in achieving the ultimate goal of a more rapid release cadence.
It’s important for us to not get hung up on a single value proposition for our automation. Automation provides value in multiple forms including bug finding (especially regressions), increased pace, and increased coverage. We should keep in mind our ultimate goals and use our tools as appropriate to achieve those goals.