
The principal attributes of an automated software test

I really like interviewing if it's done well.  The reason is that a good interview feels more like an interesting conversation than like a date with a person you met online who you could swear said they were taller than they appear now.  A good interview sparks debate about subjects you feel passionate about.  You should end up learning something or walking away with a few points or references you mean to research as soon as you get home.  I prefer these sorts of interviews because they really let both of us in on what it would be like to actually work together.  Answering a couple of technical questions the interviewer probably Googled up themselves fifteen minutes prior only proves the interviewee was lucky enough to get a question they know.  It's like the contestants on the game show Jeopardy.  Winning the day's contest simply means you were lucky and either got categories about which you knew something or that your competitors were slightly more dense than you that day.  It also means you remembered to phrase your answer in the form of a question, which actually comes up during bad interviews more often than one would think.  The only way winning at Jeopardy proves you are smart is if you win consistently over time against a variety of other winners.

I bring this up partly to wedge another pop culture reference into my blog (and thus hopefully drive up my page views) but also because I had one of those good interviews a few weeks ago.  Well, it wasn't really so much an interview as a questionnaire they sent me to fill out.  To end the suspense, I didn't even make it to a technical phone screening because I don't have any Android or Windows Mobile experience.  But I once again digress.  The questions were nearly all those simple-seeming, one-sentence things that end up requiring an answer in book form to address completely.  The one I'd like to discuss further here simply asked what I thought were the qualities of a good automated test.

Good question, eh?

It surprised me how quickly I was able to respond, which tells me my trusty subconscious has been mulling this over for a while.  Let me begin by referring you all to the eminent Michael Bolton's answer on the nature of automation and why it is not "testing" so much as "checking."  As usual he makes a very compelling case in a thousand words or fewer, so I'll simply let his statements stand and pretend you've all clicked on the link and read it already.

So now that we’ve established what automation can and cannot do (again, click the link if you skipped ahead … and shame on you for skipping ahead) I’d like to make the case for better practice when designing an automated check or test case.

There are five primary attributes of an automated test case and, so far, four secondary attributes.  The five primary attributes are the setup, the conditions, the check, the teardown, and the log.  The four (so far) secondary attributes are a common logging mechanism and repository, a common data library, a common runtime engine separate from the application under test, and a queuing mechanism for sorting, organizing, and scheduling test execution without direct human initiation.  I'll expand on the primary attributes for the purposes of this blog.  Let me know if you'd like to discuss the secondary attributes later.
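To make the five primary attributes concrete before walking through them, here's a minimal sketch in Python (my choice of language; the argument isn't language-specific).  Every helper here — create_test_user, submit_form, delete_test_user — is a hypothetical placeholder, not a real API:

```python
import logging

log = logging.getLogger("suite")

def test_submit_rejects_overlong_name():
    status = "not run"
    # Setup: reach a known, testable state.
    user = create_test_user()                # hypothetical helper
    try:
        # Conditions: the actions we actively want to verify.
        result = submit_form(user, name="WWWWWWWWW")   # hypothetical helper
        # Check: exactly one boolean verdict per test.
        assert result.status == "rejected"
        status = "pass"
    except AssertionError:
        status = "fail"
        raise
    finally:
        # Teardown: clean up regardless of the verdict.
        delete_test_user(user)               # hypothetical helper
        # Log: record the check's outcome somewhere durable.
        log.info("test_submit_rejects_overlong_name: %s", status)
```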

In order to run an automated test/check, we first need something to check or test.  This is the realm of the setup attribute.  Setup encompasses anything we need to do in order to arrive at a state where we can reasonably test.  This may include everything from navigating through a user interface up to completely rebuilding an entire IT ecosystem from scratch.  In some cases the teardown and setup attributes can get blurry if checks are included just to make sure things like test data and environments are the way we expect, but I'm splitting them out for the sake of cohesion.
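As one way to picture it, here's a setup sketch using a pytest fixture (pytest is my assumption; the database helpers are hypothetical).  The point is that setup owns everything needed to reach a testable state, rather than trusting whatever a prior run left behind:

```python
import pytest

@pytest.fixture
def clean_database():
    # Rebuild from scratch and seed known data so the test starts
    # from a state we can reason about. All db helpers are hypothetical.
    db = connect_test_database()
    db.reset_schema()
    db.load_fixture("baseline_users")
    return db
```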

After we’ve set up our environment, we need something to verify either positively or negatively.  I call this primary attribute the conditions.  Conditions are the patterns or actions we’d like to actively verify.  The active portion is very important, as automation cannot passively verify anything since it is only a check.  In manual testing the conditions are usually used to define and organize the testing.  Exploratory testing is primarily a condition set with no checks, since the checks are experienced in the moment and cannot be known ahead of time.  In terms of a simple manual test, the conditions would read something like “enter the text string ‘WWWWWWWWW’ and press the submit button.”
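That manual step translates almost word for word into an automated conditions phase.  A sketch, assuming a hypothetical page-object wrapper around the UI:

```python
def exercise_conditions(page):
    # The automated equivalent of "enter the text string 'WWWWWWWWW'
    # and press the submit button". `page` and its methods are
    # hypothetical placeholders for whatever UI driver you use.
    page.fill("name_field", "WWWWWWWWW")
    page.click("submit_button")
    return page.read("status_message")
```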

Which brings us to the actual check.  The check attribute is the binary/boolean value the computer uses to mark a test case as passing or failing.  The check is the heart of an automated test/check, and like a heart it works best when there is only one.  It’s crucial that an automated test have only one check, because the point of automation is to limit the need for human involvement as much as possible.  If a test case has fifteen checks, it then requires a great deal of human involvement to reason out which of the fifteen checks failed (if not all of them) when a failure status is registered.
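In code, the one-check rule looks like splitting one fat test into several thin ones.  A sketch, where form_result stands in for a hypothetical shared fixture that ran the conditions once:

```python
# Rather than one test carrying fifteen asserts, each test carries
# exactly one check, so a red result points at a single question.

def test_status_is_rejected(form_result):
    assert form_result.status == "rejected"

def test_error_message_names_the_field(form_result):
    assert "name_field" in form_result.error_message
```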

After the check has been performed, a good automated test cleans up after itself with a teardown or “garbage collection” attribute.  Better practice separates the cleanup from the setup in order to take advantage of reuse and aid in reverse engineering, but often you will see a sort of built-in fail-safe mechanism in the setup.  This mechanism checks the state of the environment and tears it down before building it out if the prior test did not clean up after itself.  While this redundancy reduces your risk of false failures, it sacrifices performance, reuse, and scalability.  It’s better practice to break these two actions out so you can call them separately at will.
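Here's a sketch of both ideas at once — teardown as its own callable, plus the fail-safe wired into setup.  All helpers are hypothetical:

```python
def teardown_environment(db):
    # Cleanup as a separate callable, reusable apart from setup.
    db.drop_fixture("baseline_users")
    db.disconnect()

def setup_environment():
    db = connect_test_database()       # hypothetical helper
    if db.has_leftover_state():        # fail-safe: the prior test didn't clean up
        teardown_environment(db)
        db = connect_test_database()
    db.load_fixture("baseline_users")
    return db
```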

Finally, all messages and statistics judged meaningful should be recorded somewhere using the log attribute.  The only required message is the state of the check, or the “pass/fail/not run” status determined by the binary check attribute.  A case could be made for a historical log not being necessary, but I don’t personally believe this is valid.  Remember, the goal of automation is no human involvement during the test run, which I take to include monitoring the tests in order to record the results of the entire suite.  If a test were run in a complete atomic vacuum, then a simple output window would suffice.  Automation is nearly never run in this manner, however.  This means we’ll need a physical log somewhere we can review at our leisure.
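A minimal sketch of that physical log, using Python's standard logging module (the file name and record function are my own invented examples):

```python
import logging

# Write to a file rather than a transient output window, so an
# unattended suite run can be reviewed at leisure.
logging.basicConfig(filename="suite_run.log",
                    format="%(asctime)s %(message)s",
                    level=logging.INFO)

def record_result(test_name, status):
    # `status` is the verdict of the single check: "pass", "fail", or "not run".
    logging.info("%s: %s", test_name, status)
```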

So that’s my thesis in a nutshell.  There’s another question from that interview I’d like to address, since it sent me down a very profitable pathway.  But that’s for my next entry.



Discussion

One thought on “The principal attributes of an automated software test”

  1. I love that you included “Jeopardy” as one of the tags. 🙂


    Posted by Max Guernsey, III | July 2, 2010, 1:10 pm
