In 1980, Ozzy Osbourne, of whom I’m a huge fan, released his first post-Black Sabbath solo album entitled Blizzard of Ozz. When I first heard that album, I was hooked. The opening song, with its slow volume build-up followed by an infectious guitar riff, is called I Don’t Know and includes the lyrics:
How am I supposed to know
Hidden meanings that will never show
These days, the lyrics often remind me of a conversation I had with the manager of the automation support team that I worked with; the manager’s name was Charles (Hi, Charles!). Part of their job was testing the features that the base automation team (the one I was on) had created. I’d implemented a feature to set a timeout for test scripts; if a script didn’t complete in the specified timeout, it was supposed to be marked as failed.
When it came time for Charles’s team to test my new feature, he came by and asked, “What’s the maximum value I can specify for the timeout?” I, being a bit of a know-it-all, said, “There’s no maximum.” Charles responded, “There has to be a maximum.” I said, “Fine, it’s MAXINT.”
Then, Charles said something I carry with me to this day, “How am I going to test this? If you don’t tell me the boundaries, how do I test it to make sure it does what you say it’s supposed to do?”
Certainly, my feature would have barfed if someone entered a sufficiently large number, i.e. a number larger than the maximum value that can be stored in an int32. But what was intended to happen if a “too large” number was provided? A software crash was not the right answer, of course, but what was the right answer? Was MAXINT even an appropriate maximum value? I hadn’t really considered these things.
I didn’t know it at the time, but this was my introduction to testability and, by association, automatability. When I say testability, I mean the extent to which an application or feature can be tested. To have higher testability, the application or feature needs to provide facilities by which a tester can manipulate the software and observe how it responds to that manipulation. Similarly, when I say automatability, I mean the extent to which testing-related activities can be performed by some automated mechanism, be that mechanism traditional test scripts or other mechanisms that help testers be more effective or efficient at their jobs. Generally, the more manipulation points a piece of software exposes, the more testable it is; the more of those points that are programmatically accessible, the more automatable it is.
So, back to Charles. What he wanted to know was the expected minimum and maximum values for the timeout. This was totally reasonable: he wanted to test that the intended boundaries were also the actual boundaries, but he also wanted to be able to give feedback on whether those boundaries were appropriate.
My solution was to change the implementation to enforce an explicit maximum value; this made the feature easier to test and also let me choose a far more reasonable maximum for the timeout. Making this change was also my first exposure to modifying software to make that software more testable: not only could the minimum and maximum values now be tested, but the acceptability of those values could be discussed.
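A minimal sketch of that kind of fix might look like the following. The names, limits, and error-handling choices here are illustrative assumptions, not the original implementation; the point is that explicit, documented bounds give a tester exact values to probe (min − 1, min, max, max + 1).

```typescript
// Illustrative bounds; the real values would be a deliberate, documented choice.
const MIN_TIMEOUT_SECONDS = 1;
const MAX_TIMEOUT_SECONDS = 24 * 60 * 60; // e.g. one day

function setScriptTimeout(seconds: number): void {
  if (!Number.isInteger(seconds)) {
    throw new RangeError(`timeout must be an integer number of seconds, got ${seconds}`);
  }
  if (seconds < MIN_TIMEOUT_SECONDS || seconds > MAX_TIMEOUT_SECONDS) {
    // Reject out-of-range values up front instead of crashing later;
    // the boundary is now part of the feature's testable contract.
    throw new RangeError(
      `timeout must be between ${MIN_TIMEOUT_SECONDS} and ${MAX_TIMEOUT_SECONDS} seconds`
    );
  }
  // ... store the timeout and schedule the failure check ...
}
```

With the check in place, “what happens with a too-large number?” has a defined, discussable answer rather than an accidental one.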
But what about the developers’ time? Won’t adding testability and automatability increase their effort? Generally, yes, but that increase is usually insignificant or is smaller than the effort required to test or automate in the absence of sufficient testability and automatability. One typical example is adding appropriate, stable locators to HTML elements. The effort for this activity is generally small but it makes the automation easier to create and significantly less costly to maintain; stable locators usually don’t change when styling or structural HTML changes are made.
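To make the locator point concrete, here is a small sketch. The `data-testid` attribute is a common convention I’m assuming for illustration, not something specific to my team’s codebase; the markup strings and regular expressions simply stand in for a page before and after a restyling.

```typescript
// The same button before and after a styling overhaul changes its classes.
const before = '<button class="btn btn-primary" data-testid="submit-order">Place order</button>';
const after  = '<button class="button button--cta" data-testid="submit-order">Place order</button>';

// A class-based locator is coupled to styling...
const brittleLocator = /class="btn btn-primary"/;
// ...while a dedicated test hook survives the change.
const stableLocator = /data-testid="submit-order"/;

console.log(brittleLocator.test(before), brittleLocator.test(after)); // matches, then breaks
console.log(stableLocator.test(before), stableLocator.test(after));  // matches both
```

Adding the attribute costs the developer one line; not adding it costs the automation a repair every time the styling team renames a class.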
Yes, as is well documented, we can test without requirements; we can and should use our innate expectations, applied learnings, and comparisons to other applications. That said, we must develop software with testability and automatability in mind; part of that work is assessing the developers’ decisions and assumptions. Software that is easier to test and automate is typically less expensive to test and automate. Additionally, when we are planning our software development, we must ask, “How are we going to test this?” and make appropriate implementation decisions based on how we answer that question. We don’t want our software to have “hidden meanings that will never show”.
Like this? Catch me at an upcoming event!