I was a speaker at the 2019 Automation Guild Conference. As part of the conference, I participated in a live Q&A session, but we ran out of time before I answered all the questions. I decided to blog the answers to some of those questions.

I have seen the term ‘test coverage’ used too many times. Is investing in automation just because I can run 500 test cases instead of 100 worth it? Or, is it better to look at what current changes are and only plan to run that automation?

I, like many others of my generation, grew up watching Sesame Street; it might even have been on twice a day, but I’m not sure about that. My favorite character was Mr. Snuffleupagus, but my second favorite was The Count. Maybe it was the accent, or maybe it was the counting. I loved the counting: one test script, ah, ah, ah…two test scripts, ah, ah, ah… You get my drift.

But, could we have watched too much Sesame Street? We have developed a fascination with counting things. Sometimes that’s good; knowing the quantity of something can certainly be valuable. That said, sometimes counting things is not beneficial. Generally, test case or test script counting is not beneficial.

The first of the two Automation Guild questions above makes me think of counting: “is investing in automation just because I can run 500 test cases instead of 100 worth it?” As usual, the answer depends on your context, your situation. Deciding to automate is a business decision because there’s an opportunity cost there: if you’re spending time automating, you’re not spending time doing some other activity. The value that the automation provides must be greater than the effort to create and maintain the automation; if there’s insufficient value, then don’t automate. We need to recoup that opportunity cost in some way. Also, remember that automation is in support of testing activities. If creating, running, and maintaining 500 test scripts is not helping your testing effort, then don’t do it, even if it’s possible to do it.

The second question, though related to the first, requires a different answer. The second question is: “is it better to look at what current changes are and only plan to run that automation?” There’s a tendency today to think “we have all this automation, so we may as well run everything on each deployment”. We’ll certainly get more coverage more often, but this additional coverage comes with a cost.

The more scripts we execute on each deployment, the longer it takes to get feedback about the quality of the deployment. We can somewhat mitigate this by first running the scripts that are “likely to exercise the changed product code” and then running the remaining scripts. Note, however, there’s an opportunity cost associated with running the remaining scripts; someone must review the results of the executions. For some organizations, the opportunity cost is sufficiently low, or the value is sufficiently high, to make running all their scripts on every deployment a valuable endeavor; for other (dare I say “most”) organizations, the opportunity cost will likely outweigh the value.
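The “run the likely-affected scripts first” idea can be sketched in code. This is a minimal, hypothetical example, not a prescription: the module-to-tests mapping and all names below are invented for illustration, and a real project might derive that mapping from coverage data or a dependency graph instead of hand-maintaining it.

```python
# Hypothetical sketch: order the automated test scripts so that those
# likely to exercise the changed product code run first, followed by
# everything else. TESTS_BY_MODULE is an assumed, hand-made mapping;
# real teams might generate it from coverage or build-dependency data.

TESTS_BY_MODULE = {
    "checkout": ["test_cart", "test_payment"],
    "search": ["test_search_basic", "test_search_filters"],
    "profile": ["test_profile_edit"],
}

# Flatten the mapping into the full suite (deduplicated, stable order).
ALL_TESTS = sorted({t for tests in TESTS_BY_MODULE.values() for t in tests})


def prioritize(changed_modules):
    """Return every test, with tests covering changed modules first."""
    first = []
    for module in changed_modules:
        for test in TESTS_BY_MODULE.get(module, []):
            if test not in first:
                first.append(test)
    rest = [t for t in ALL_TESTS if t not in first]
    return first + rest


# A deployment that touched only the (assumed) checkout module:
order = prioritize(["checkout"])
print(order)
```

The fast feedback comes from the front of that list; whether running the tail on every deployment is worth the review effort is exactly the opportunity-cost question discussed above.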

Getting back to The Count, counting things is valuable in some cases. From a testing standpoint, the question of “how many automated test scripts do we have?” is generally unhelpful and may, in fact, detract from important testing considerations. From a standpoint of execution duration or code hygiene, however, the number of automated scripts can be interesting because that number may affect business decisions pertaining to owning and executing automation. So, let’s keep counting, but let’s be responsible about it; it’s what The Count would want.

Like this? Catch me at an upcoming event!