Sunday, 31 October 2010

Test Automation: What Are The Real Costs?

For quite some time now I've found myself questioning the adoption of test automation. Is it that I'd seen no value in it, or believed it had no valid applications? No, but I had witnessed a rise in its adoption that left me questioning the broad embrace of automation that appeared to be happening.

In the cost-cutting world we live in today it should come as no real surprise that companies, and thus people, are looking for ways to cut costs, reduce overheads and streamline operations. It seems quite logical really; the alternatives could be seen to result in a loss of jobs. But is that the real cost, and likewise is the approach of automation a real benefit in the wider scope of things?

The answer to this, of course, is not straightforward, and much depends on what is being worked upon. Why? Because a project's scale, the people behind its testing, its time frame and other factors all contribute to making a case either for or against test automation.

A simple example would be a smaller-scale project, or one with a short time frame, where manual execution of testing would be quicker than any automation could be. Whilst I'm sure many test automation fans would agree that automation is not appropriate in this situation, that does not prevent the organisations employing these testers from requesting it anyway.

A more complex example is where manual testing lets someone perform testing without requiring a completed framework in place, thanks to the cognitive skills being applied. Where an ever-developing product is being tested, manual testing will not fail if elements are presently missing; likewise, when revisiting the same area with new content, it allows a distinction between spending time covering existing content versus covering new content only. An automation suite may look to cover everything in a particular area, so separating the automation into separate scripts may potentially double the overhead involved in test automation design, implementation and maintenance for that area.

Another example is where an ever-changing (dynamic) product is in development. For anything automated, responding to constantly moving goalposts would likely impose such a large ongoing overhead that, with the turnaround times involved, the test automation may be unable to match the pace at which development on the product is occurring.

Critically, there are two primary areas of risk I would identify that automation introduces. The first is that it only confirms existing beliefs and existing knowledge; to rephrase it, it doesn't know what it doesn't know. When we manually test something, this hands-on approach allows us, through a process of cognitive analysis, to identify things that were never documented and that may not even have existed in the application during a previous iteration of testing.
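
A minimal sketch may make this concrete. Everything here is hypothetical (the page structure, the expected values), but it shows how a scripted check can only ever confirm what it was told to expect:

```python
# Hypothetical scripted check: it confirms documented expectations and
# nothing else -- it has no way to notice what it was never told about.
EXPECTED_TITLE = "Welcome"                    # documented behaviour
EXPECTED_FIELDS = {"username", "password"}    # documented fields

def check_login_page(page):
    """Passes as long as the documented elements are present."""
    assert page["title"] == EXPECTED_TITLE
    assert EXPECTED_FIELDS <= set(page["fields"])
    return "check passed"

# A stray, undocumented "debug_mode" field sails straight through,
# where a human tester would immediately ask what it is doing there.
page = {"title": "Welcome", "fields": ["username", "password", "debug_mode"]}
print(check_login_page(page))  # prints "check passed"
```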

Think about this for a moment: what do we do when we test? We look to inform on issues with the programming of others. So yes, through review a tester can test the automation that is used to do the testing, by testing the automation tests, but surely that sounds like a whole lot of extra overhead introduced by this approach (..and is quite a mouthful to say too!).

This is the second risk: if not properly tested, the automation becomes just as fallible as what we are attempting to test in the first place, so we end up with the risks the product itself may hold in addition to the risks the automation in place may hold. To assume that the automation is any less fallible is like a programmer claiming their code has no defects. And considering that these people are primarily testers and not programmers, it also likely means a tester cannot claim to be as refined in that skill as they might be in their testing.
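
As an illustration (entirely made up, but a pattern that does occur), here is an automated check that is itself defective: a careless exception handler means it can never fail, whatever the product does.

```python
# Hypothetical defective check: the broad except was meant to keep the
# suite running past flaky steps, but it silently swallows the
# AssertionError too, so this check reports PASS unconditionally.
def buggy_check(response):
    try:
        assert response["status"] == 200
        assert "error" not in response["body"]
    except Exception:
        pass  # bug: failures are swallowed here
    return "PASS"

# The product is plainly broken, yet the automation is green.
print(buggy_check({"status": 500, "body": "error: database down"}))
```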

Due to the significantly clearer traceability that manual testing often involves, the time needed to maintain tests and identify what they do and do not cover is likely far shorter than the time spent debugging, rewriting, or removing and adding the code used within automation.

The answer to many of the points above then often turns to 'well, we can do exploratory testing too', which raises the question of how much of what is covered in exploratory testing merely duplicates what the automation is doing, meaning the same area may now be covered twice (..if not more). The exploratory process only goes to confirm the validity and importance of both cognitive and emotive approaches to testing.

Now, in saying all this, it is not that I believe there is no value in automation. Automation to my mind makes for a handy and useful tool for sanity / smoke checking, or for a simplified regression check on a longer-term, larger-scale project. It streamlines this area to provide a cursory impression of the state of a product, as well as allowing people to quickly re-confirm their existing beliefs and knowledge about a product and the state of previously known issues. In addition, automation can also be utilised for things such as concurrency checks and data creation (where a large volume of test data is required for testing to be performed).
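
The data-creation use is perhaps the least contentious, so here is a small sketch of it. The record layout and names are illustrative only, not from any particular project:

```python
# Minimal sketch of automation used for bulk test-data creation.
# Field names and the CSV layout are made up for illustration.
import csv
import io
import random
import string

def make_users(n, seed=0):
    """Generate n reproducible dummy user records (seeded RNG)."""
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        rows.append({"id": i, "username": name,
                     "email": f"{name}@example.com"})
    return rows

def to_csv(rows):
    """Serialise the records to CSV, ready to load into a test system."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "username", "email"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

users = make_users(1000)
print(len(users))  # 1000 records, generated in well under a second
```

Producing a thousand records by hand would be a poor use of a tester's time; this is exactly the kind of mechanical, well-specified task where automation earns its keep.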

Whilst the results of automation can be interpreted and explored by people, who can then investigate further, it must be remembered that due to the absence of cognitive and emotive application during the actual process, all automation is able to achieve during its execution is checking; and even that checking is more fallible than that of the manual tester, for various of the reasons listed above.

So when automation introduces new risks into something being tested, one must always properly evaluate whether what it provides really is of greater benefit to the project or not.
