One of the more popular segments from The Ren & Stimpy Show was the “commercial” for Log (from Blammo). In large part, the popularity came from the commercial’s jingle, which riffed on the jingle for Slinky and went, in part, as follows:

It’s log, it’s log,

It’s big, it’s heavy, it’s wood.

It’s log, it’s log, it’s better than bad, it’s good.

When it comes to a discussion of automation’s logging and results, I’m always reminded of this song, especially the line “better than bad, it’s good”. Sadly, so much of our automation’s logging and results are not better than bad; they’re bad…or worse. How can we go about addressing this condition?

In my previous blog post, Be the Automation IRS – Auditing Your Automation, I wrote about auditing our automated scripts to make sure they are still valid and current, as well as to make sure we know what they are and aren’t doing for us. While we’re at it, it’s a good idea to review our logs and results as well. These execution artifacts contain our trail of breadcrumbs showing what happened during an automation run; these artifacts are what we will use to triage problems uncovered by our automation. It behooves us to try to make sure these artifacts are fit for use. We can only really do that via a review.

When we review our logs, we must remember the three -ables of logging: logs must be available, applicable, and understandable.

We run our tests and they, hopefully, generate some results: report files, log files, trace files, error messages, etc. If no data is generated, we can’t determine what did and didn’t happen, making our automation useless. We must also investigate the generated data; if we don’t, we might as well not have run the tests because we’re getting no value from the automation. To investigate the data, we must know where to find it. To accomplish all of this, our logs must be available to everyone who needs to consume them. This sounds like a trivial matter, but in practice it often is not. Logs might be written on remote servers or in data storage that not everyone has access to; sometimes, those who need the logging information simply don’t know where it is.
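To make availability concrete, here’s a minimal sketch using Python’s standard logging module; the directory, environment variable, and logger name are hypothetical stand-ins for whatever shared, team-reachable location your team agrees on:

```python
import logging
import os

# Hypothetical shared location the whole team knows about; adjust to
# your environment (network share, CI artifact directory, etc.).
LOG_DIR = os.environ.get("TEAM_LOG_DIR", "/mnt/shared/automation-logs")
os.makedirs(LOG_DIR, exist_ok=True)

logger = logging.getLogger("checkout_tests")
logger.setLevel(logging.INFO)

# Write to the shared directory AND echo to the console, so the logs
# are reachable both as run artifacts and during a local run.
file_handler = logging.FileHandler(os.path.join(LOG_DIR, "run.log"))
console_handler = logging.StreamHandler()
formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
for handler in (file_handler, console_handler):
    handler.setFormatter(formatter)
    logger.addHandler(handler)

logger.info("Logging to %s", LOG_DIR)
```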

I once worked with a client whose previous automation attempt was unsuccessful. Among the reasons it failed was that the people who needed easy access to the logs and results did not have that access. We cannot fully realize the value of the data in these files if they remain on some virtual machine where we must go on an Easter egg hunt to find them. Some team members may not even have access to those VMs (or wherever the files are stored). The result and log information must be in a location that is known and reachable by anyone who needs to access that data.

What we include in logs and result reports must be applicable to helping us understand what the test script did, what it didn’t do, and whether there might be issues that require further investigation. If there is too little information available, we can’t gain that understanding; if there’s too much, we will have a harder time distilling out the nuggets that are valuable.

Log applicability will be different for different teams and different applications. A few questions we can ask about our logs (illustrated in the sketch after the list) are:

  • Does the needed data exist in the logs?
  • Is it sufficiently straightforward to attain?
  • Is each log actionable, meaning, is each entry something that can lead us to the data we need?
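As a before-and-after sketch of those questions in action (the check, names, and values here are invented for illustration), compare a message that gives us nothing to act on with one that carries the data we’d need to begin triage:

```python
import logging

logger = logging.getLogger("order_tests")

def check_order_total(order_id, expected_total, actual_total):
    if actual_total != expected_total:
        # Not applicable: tells us something failed, but nothing we can act on.
        logger.error("Order check failed")

        # Applicable: identifies which order and includes the expected
        # vs. actual values, so the needed data exists and is easy to attain.
        logger.error(
            "Order %s total mismatch: expected %.2f, got %.2f",
            order_id, expected_total, actual_total,
        )
```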

Having available logs with applicable information is great, but if those logs are not understandable, we are still limited in our ability to do our jobs. Logs, errors, and results should be delivered in a vocabulary that is understandable by everyone who needs to consume them. Now, that doesn’t mean each log message should be instantly intuitive to every human alive. After all, many of us are testing complex systems; training in the domain we’re testing is essential, and that’s the point: the logs must mean something to those who are working in that domain.
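A hypothetical illustration (the error codes and domain terms are invented): the first message means something only to its author, while the second speaks the team’s domain vocabulary:

```python
import logging

logger = logging.getLogger("claims_tests")

# Cryptic: understandable only by whoever wrote the check.
logger.error("ERR_4402: st=7 node=cx-12")

# Understandable: the same event in the team's domain vocabulary.
logger.error(
    "Claim 4402 failed adjudication: stuck in state 'pending-review' "
    "on processing node cx-12"
)
```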

So now that we know, in general, what makes a good log, how do we make sure ours are, in fact, good?

Looking at logging, error, and result statements during code review is a great opening bid, but that only gets us part of the way there. We can check for errors (“hey, we’re logging the wrong value here”), but we can only imagine how the logs will look when they are in their respective files or locations.
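Here’s a sketch of the sort of slip a code review can catch (names and values invented): the message promises both values, but the expected value is logged twice and the actual value never appears:

```python
import logging

logger = logging.getLogger("review_example")
expected, actual = "shipped", "pending"  # invented values for illustration

# The slip: "actual" never makes it into the log, which will
# mislead whoever triages the failure later.
logger.info("expected=%s actual=%s", expected, expected)

# What the review comment asks for:
logger.info("expected=%s actual=%s", expected, actual)
```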

The only way we can really know what our logs will look like is to, well, actually look at them in their “written state”. Two important ways to review that written state are:

  • Run the scripts. This will, or at least it should, generate logs and reports that can be reviewed.
  • Make the scripts fail. If we don’t have failures, how can we tell what the failure logs and reports will look like? (A sketch of forcing a failure follows this list.)
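One way to exercise the failure path, sketched here with pytest (the test, names, and values are invented), is a throwaway test whose only job is to fail so we can inspect the resulting logs and reports:

```python
# test_forced_failure.py -- deliberately failing test, used only to
# review what our failure logs and reports actually look like.
def test_forced_failure():
    expected_status = "shipped"
    actual_status = "pending"  # deliberately wrong for this exercise
    assert actual_status == expected_status, (
        f"Order status mismatch: expected {expected_status!r}, "
        f"got {actual_status!r}"
    )
```

Running it (for example, `pytest test_forced_failure.py`) and then reading the generated report tells us whether a real failure would give the team something they could actually triage.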

By remembering the three -ables of logging, we can strive to make our results and logs like Blammo’s Log, better than bad…let’s make them good!

Like this? Catch me at an upcoming event!
