In the United States, the IRS is the Internal Revenue Service; these are the tax people. I’m oversimplifying the process here, but once per year, we submit tax documentation related to our income. Based on the amount of money we made, and the taxes already paid, we either owe additional taxes or we are due a refund. If, however, your income tax documentation looks “suspicious”, you’ll be flagged for an audit: a formal, structured examination of your tax records that can go back several years; I hear they can be rather unpleasant. But I’m heading off topic.

In a previous blog post, Don’t Eat Stale Automation, I wrote about performing audits on our automation so that we can get rid of automated scripts that are no longer providing value; if the scripts are not providing value, they’re likely costing us too much to execute and maintain. In this post, I write about how automation audits help with other maintenance activities as well.

As I’ve been known to say in some of my talks, we don’t release the exact same software twice, meaning we’ve changed one or more things between Release N and Release N+1. Most often, these changes are fixes of or additions to existing code, but they may also be removals of capabilities.

So, how does that affect our automation?

We have several questions to ask when our application’s code is changed:

  • Should we add any automation to help us test this changed code? If applying some technology to help us do our jobs is valuable, then yes, yes, we should.
  • Are all our automation scripts still valid? If not, what should we do about it? Should we fix the scripts that are no longer valid, or should we remove them?
  • Did we expect the changed code to break any of our existing automation? If so, did those scripts, in fact, break? If not, perhaps we have a flawed understanding of what our automation is doing for us.
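One lightweight way to approach that last question is to record which scripts we expect a change to break and compare that list with the scripts that actually failed. Here’s a minimal sketch in Python; the script names are invented for illustration, and a real team would likely pull actual failure data from their test runner or CI system:

```python
# Compare the scripts we expected a code change to break
# against the scripts that actually failed in the last run.
# All script names below are hypothetical examples.

def audit_breakage(expected_to_break, actually_failed):
    """Return the scripts that surprised us, in either direction."""
    expected = set(expected_to_break)
    failed = set(actually_failed)
    return {
        # Failed, but we didn't expect it: possible product regression.
        "unexpected_failures": sorted(failed - expected),
        # Expected to fail, but passed: our understanding of what
        # the automation does for us may be flawed.
        "unexpected_passes": sorted(expected - failed),
    }

report = audit_breakage(
    expected_to_break=["login_smoke", "checkout_flow"],
    actually_failed=["checkout_flow", "search_results"],
)
print(report)
```

Either bucket coming back non-empty is a cue to dig deeper: the first suggests the change broke something we didn’t anticipate, the second suggests our scripts aren’t checking what we think they’re checking.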

We cannot answer these questions unless we know and understand what our automation is doing for us. Automation audits help us obtain this information in two ways:

  • Script reviews when the automation is first created. These reviews can range from structured code reviews to informal script walkthroughs. The intent of these reviews is to ensure, as best we can, that we understand what the scripts are doing for us.
  • Audits of our scripts at appropriate times. “Appropriate times” is a context-dependent term, but it will usually be associated with times of change: change of the product, change of the scripts, change of the automation tool version, change of supporting infrastructure (e.g. operating system or database versions).
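One way to make those “appropriate times” actionable is to tag each script with the date it was last reviewed and flag any script whose review predates the most recent relevant change. A small sketch, assuming we track that metadata ourselves; the field layout, script names, and dates are invented for illustration, and a real project might keep this in a test-management tool instead:

```python
from datetime import date

# Flag scripts whose last review predates a relevant change
# (a product release, a tool upgrade, a database version bump, etc.).
# The metadata and dates below are hypothetical examples.

def scripts_due_for_audit(scripts, last_change):
    """Return names of scripts last reviewed before `last_change`."""
    return sorted(
        name
        for name, last_reviewed in scripts.items()
        if last_reviewed < last_change
    )

scripts = {
    "login_smoke": date(2023, 11, 2),
    "checkout_flow": date(2024, 3, 15),
    "search_results": date(2024, 1, 20),
}

# Say the product (or its supporting infrastructure) changed on this date:
print(scripts_due_for_audit(scripts, last_change=date(2024, 2, 1)))
# -> ['login_smoke', 'search_results']
```

The exact trigger and threshold will vary by context; the point is simply that “audit when things change” becomes easy to act on once the review dates are recorded somewhere queryable.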

We must obtain and maintain this knowledge for our automation to continue being valuable. If this knowledge is not current, we cannot trust that the automation is performing the expected tasks for us; this lack of trust will cause us not to use the automation and we’ll receive no value from the effort used to create it. That’s bad business.

We should also audit the logs, results, and error messages that are produced by our automation. Though those topics could have been included here, I think they are important enough to warrant their own blog post. Stay tuned!

Like this? Catch me at an upcoming event!