Exploring The Digital Divide - TestingQA Blog
A blog for items that can't be posted to Twitter: a collection of concise (and not so concise) thoughts!
Monday, 2 December 2013
The Curious Behaviour
You can check out my latest blog piece, examining the relationship between curiosity and testing and the impact that behaving in a curious fashion can have on a tester and their behaviour. Click the following link to view the article via the Ministry of Testing's website: The Curious Behaviour
Thursday, 26 September 2013
Let's Test 2013: Sweden - Experience Report
This year I had the opportunity to attend the Let's Test conference, held not far from Stockholm in Sweden. The idyllic Scandinavian backdrop helped create the kind of environment other testers have told me is unique to Let's Test: a conspicuous absence of the cliques and ego that are, I'm told, often quite present at some of the other bigger test conferences around the globe. The result was an atmosphere that felt both friendly and laid back.
This friendly environment gave the event a real community-driven feel, with many of the testers I encountered more than happy to simply have a chat or hang out. The same inclusive nature was reflected in the talks themselves: every talk was required to include an 'open' part, a period set aside by the presenter for open discussion with those attending. These open sessions often gave rise to friendly banter and related conversations of their own.
This open environment was hardly the conference's only merit, though. More importantly, what drew many of the attendees (and even the speakers) was the progressive nature of the talks themselves. Rather than getting lost in academics, technicalities or discussions of specific practices, a significant number of the talks focused on ideas and perspectives. More specifically, and where I felt the greatest value was to be found, they focused on the re-framing of specific ideas and perspectives.
Several of the talks I attended raised subjects that reflected ideas of my own, but presented those ideas in a new light, re-framing them. Re-framing is a skill of great value to the craft of testing (and one whose value extends well beyond the craft too), because the ability to appropriately frame an idea or perspective can be the difference between successfully advocating or selling an idea and having it fall on deaf ears.
So much of the effectiveness of the testing craft can be measured by our ability to communicate ideas and have those ideas understood. Given this, having a new way to frame an existing idea provides the tester with another potential approach for sharing the idea with others. In addition, an idea that has been re-framed in a way that connects with other testers has a greater potential to spread and be adopted by those testers.
Whilst the conference did have a focus on ideas from the Context-Driven community, I found that many of those ideas aligned with, or presented variations of, ideas I had developed independently, so I still found plenty of common ground here.
The conference also held Test Lab sessions run by James Lyndsay, which provided a fun, hands-on way to do some actual testing, without any real formalities and in an environment that was not too serious, giving testers an opportunity to flex their skills without having to burn too many brain cells in the process!
For any tester looking for a conference with progressive ideas on testing, an opportunity for honest and open dialogue on the subject, and a pretty sweet location, I'd definitely recommend checking it out next year, either in Australia (for the first time) or back over in Europe.
Friday, 5 July 2013
The Assumption Bias and Testing: How Does It Influence You?
When testing a product, I aim to perform the kind of testing that will cover all of the areas I believe are of value to the business. There have been times in my own testing where I have only discovered late in the piece additional areas to cover, areas I had not originally thought to scope in because of assumptions of my own. Whilst I have been grateful to pick up on such things before the product ships, there always remained the risk that failing to do so, or having issues arise around them late in the piece, could impact the ability to deliver the product on time and to the expected specification.
Contrary to what some might assume, this is not a product of inexperience. It is instead a product of becoming so familiar with what you are testing, often through extensive experience testing similar things (and an extensive level of domain knowledge), that the increased confidence shapes the perspective we bring to the testing we perform.
We can try to shape our perspectives by focusing on the bigger picture and asking questions, questions that guide the tester towards the knowledge about the product that may offer the greatest value to the business. We can then use this information to help guide which areas we focus on when testing.
This, however, still does not eliminate the assumptions we hold when performing this task, because it too remains an externally facing exercise: we tend not to include ourselves fully in the equation when performing such analysis.
The issues that stem from this are comparable to what is labelled tacit knowledge: knowledge that is shared on a social level but has not yet been documented so as to exist, at least on some level, in an explicit form.
As with tacit knowledge, there is an undocumented aspect here, one that can equally be influenced by the social but is far more centred on the individual. In this case it is the absence of any evaluation of the biases and assumptions we bring to the table when performing test design and deciding what we consider relevant test coverage.
If we take the time to analyse and document these biases and assumptions before launching into evaluating the testing we intend to perform, we can use this knowledge to help shape our testing, so that what is and is not covered is no longer as influenced by such factors.
Such information gives us an opportunity to identify additional areas where test coverage might otherwise have been missed, and it becomes a further source we can draw on for future test planning. It also serves as an opportunity to gain a greater awareness and understanding of the influences we hold.
Being mindful of this very human element and its influences when creating test plans can give us greater confidence that the testing we perform is less likely to fall short because of those influences, helping us achieve the kind of coverage that better supports the delivery of a quality product.
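To make this concrete, here is a minimal sketch in Python of what recording assumptions alongside a test plan might look like. The structure and field names are entirely hypothetical (the post prescribes no particular format); the point is simply that the assumptions become explicit, reviewable artefacts:

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """One explicit assumption captured before test design begins."""
    statement: str            # the belief being taken for granted
    basis: str                # why we hold it (past releases, domain knowledge, ...)
    risk_if_wrong: str        # what coverage is lost if the assumption fails
    challenged: bool = False  # has anyone actively questioned it yet?

@dataclass
class TestPlanPreamble:
    feature: str
    assumptions: list = field(default_factory=list)

    def unchallenged(self):
        """Assumptions nobody has questioned yet: candidates for extra coverage."""
        return [a for a in self.assumptions if not a.challenged]

# Hypothetical usage: record an assumption, then review it before sign-off.
plan = TestPlanPreamble(feature="checkout")
plan.assumptions.append(Assumption(
    statement="Payment flows behave as they did in the previous release",
    basis="Three prior releases with no changes in this area",
    risk_if_wrong="A regression in card validation goes untested",
))
for a in plan.unchallenged():
    print(f"Review before sign-off: {a.statement}")
```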
Tuesday, 19 June 2012
Has Microsoft learnt from history with the new "Surface" tablets?
Microsoft announced their new Surface tablets this week. Having spent the last week or so living the 'Post PC' reality (with my primary computer away for repairs), I was interested to see the details of what Microsoft was planning to offer here. Reading the specs for the Surface, the internals and parts listed appear, for the most part, to be a solid offering.
As things go, what Microsoft has delivered here may well turn out to be a case of one step forward and two steps back for the Windows platform in the mobile realm. Microsoft's investment in supporting its existing developers has clearly never been a bridge it has been willing to do more than singe, but it's also a bridge that has kept the company underwater on far too many occasions, time and time again preventing it from making a clean break and moving forward.
One brand to rule them all?
With one of its new tablets, Microsoft is doing what may previously have been inconceivable: making a break from previously available legacy titles with the Windows RT, ARM-based 'Surface' tablet. Unfortunately, this is not the only offering here. In a move that is sure to dilute the new brand, there is also an Ivy Bridge Core i5-based Windows 8 Pro mobile device which, as things go, is also called the 'Surface'.
As such, for consumers looking to purchase this device, the distinction between the two may prove frustrating, particularly where a spouse, partner or company purchases the device on their behalf and orders the wrong one. Further, when a developer says their app or game supports 'Microsoft Surface' tablets, will consumers end up purchasing something that does not even support their hardware?
Ports, Ports, Ports!.. Keyboards, trackpad and more!
As anyone who's used a touch screen device will be aware, one of the primary factors perceived to affect productivity is the lack of a physical keyboard integrated into the device. The integration of the keyboard into the cover is, in concept, a really smart idea.
How useful that keyboard will ultimately be remains to be seen, but as anyone who has ever purchased one of those iPad covers with an integrated keyboard can testify, more often than not they turn out to be better in theory than in practice. In fact, a real wirelessly connected keyboard would be a far better option in terms of genuine productivity here.
With the Surface, though, Microsoft didn't stop there: they also integrated a trackpad, a couple of USB ports, a Mini DisplayPort, a card reader (and more?!). All of a sudden you end up with a mobile device acting like a laptop computer, except for its odd lack of any kind of cellular support (being Wi-Fi only).
Whilst some competing tablets could, in my opinion, offer more than a single port, the number of ports here has gone to the opposite extreme. Anyone seeking a truly portable laptop whose primary purpose is a desktop-oriented system would be far better served by a PC Ultrabook or a MacBook Air. A well-designed mobile device is not one that tries to be all things to all people, but a focused device that delivers a specific experience.
So what's with the trackpad?!
The presence of the trackpad (and any potential support for USB mice) is, however, what I consider most troubling for any device that claims to be a modern tablet. A modern tablet requires content designed primarily for touch screens; by providing the option of returning to a mouse, it removes any requirement for developers to create anything touch-screen specific at all. As such, it is still bringing the old desktop paradigm along for the ride.
Windows RT, Windows 8 Pro, Desktop, Metro, ARM, x86?! Surface??
Microsoft's refusal to drop legacy support in Windows 8 on tablets, even if it limits the number of applications that can run at once, still drags the desktop paradigm along. Anyone who has attempted to use a Windows 7 tablet will know what limited joy the touch screen experience with desktop-designed applications brings; likewise, anyone with an iPad or Android tablet who has used a remote desktop/VNC tool such as 'Splashtop' will have experienced how 'natural' that experience isn't on a touch screen.
You would have hoped they had learnt from this and supported only 'Metro' applications on tablet devices, but as things go, for the Windows 8 Pro tablet the hybrid mess remains, with both legacy and Metro apps available.
Had Apple attempted such a thing with its own devices, it would have been a fair bet that the lack of focus would have negatively impacted the success of the iOS platform; instead, they kept the worlds of iOS and OS X separate. Likewise, the Kindle Fire's focus on providing a superior reading experience has (as with the rest of Amazon's Kindle line) driven the popularity of those devices.
Also, what of developers who have delivered apps on competing tablet platforms, platforms that utilise ARM-based CPUs? Porting between operating systems is one thing; re-coding for a different CPU to support a device such as the x86-based Surface is potentially another matter altogether. And will they support a legacy-oriented UI so they can maintain a universal UI across mobile and desktop devices, or go all in with Metro? One has to wonder whether this division will result in an instantaneous fragmentation of Microsoft's own newborn platform, and whether the lack of focus on a unified platform with unified hardware will come back to bite them.
And so we wait...
As these new devices are released to publications we will undoubtedly learn more about them, and as Windows 8 grows closer to release Microsoft may still have a trick or two up its sleeve to address some of the concerns listed above. Only time will tell whether this is another case of history repeating itself or whether Microsoft really is taking a brave step forward into the Post-PC world.
Tuesday, 18 October 2011
A look into the world of HTML5 support and browsers
When it comes to figuring out which browsers support HTML5, and since when, you discover things are as grey as grey can get.
Various browsers had early support for features that made it into the HTML5 specification, including even IE6-IE8, but this support represented a comparably limited subset of the features in the current version of HTML5 (hence these browsers score rather low in HTML5 browser tests).
2008 is when HTML5 started seeing its first semi-official support.
Safari 3.1 (March 2008) and Opera 9.5 (June 2008) introduced initial support for the standard under the HTML5 banner. Firefox 2.0 had some limited support too; Mozilla never announced it as HTML5 support, but it included a few additions covering some key areas of HTML5.
Opera 9.6 (June 2008) / Safari 3.2 (November 2008) / Firefox 3.0 (June 2008) all extended support for these features too. Opera 9.6 also introduced HTML5 audio support in a limited form.
By version 2.0 (January 2009), Chrome had a fair subset of HTML5 support, though that support paled compared to the browser's current support for the standard.
With the release of Firefox 3.5 (June 2009), the HTML5 video and audio tags were supported, helping HTML5 'go mainstream' as Mozilla announced them to all new users of the browser. Likewise, Safari 4 (June 2009) greatly enhanced HTML5 support and added video/audio support. Opera 10 (June 2009) was the highest-rated browser in HTML5 browser tests, but had no HTML5 video support until Opera 10.6 (late 2009).
Chrome continued to grow its HTML5 support at a rapid rate throughout its releases (making specific support harder to pinpoint), but by Chrome 6 (May 2010) it had gained HTML5 video/audio support, and by Chrome 8 (October 2010) it already supported all but two of the key elements of HTML5.
The first version of IE to officially support HTML5 (despite some incidental support since the IE7/8 days) was IE9, which only hit final release in March 2011.
So, as things have progressed, there have been browsers with significant HTML5 support since mid-2009 (with the exception of IE, which didn't hit the market with proper support until this year): Safari 4.0, Opera 10.6, Firefox 3.5 and Chrome 8 onwards all offer support covering the majority of the HTML5 standard, and HTML5 support continues to evolve across all mainstream browsers.
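For illustration, those cut-off versions can be captured in a simple lookup table. This is a minimal sketch in Python using the version numbers from the timeline above (treat them as approximations, as the post itself does):

```python
# Minimum versions offering majority HTML5 support, per the timeline above.
MIN_HTML5_VERSION = {
    "safari": 4.0,
    "opera": 10.6,
    "firefox": 3.5,
    "chrome": 8,
    "ie": 9,
}

def has_majority_html5_support(browser: str, version: float) -> bool:
    """Return True if this browser/version pair meets the post's cut-offs."""
    minimum = MIN_HTML5_VERSION.get(browser.lower())
    return minimum is not None and version >= minimum

print(has_majority_html5_support("Firefox", 3.6))  # True
print(has_majority_html5_support("IE", 8))         # False
```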
Additional References:
Those interested in learning more can check out a few references to see more details on this subject:
http://caniuse.com/
http://html5readiness.com/
http://www.deepbluesky.com/blog/-/browser-support-for-css3-and-html5_72/
http://www.findmebyip.com/litmus
Browser Market Share:
For those interested in the approximate market share of browsers offering significant degrees of HTML5 support, as of August 2011 the stats sites show:
W3C Counter: Supporting browsers 54.38%~
Firefox 3.6+ - 24%~
IE 9 - 6.53%
Chrome 12+ - 18.52%
Safari 5 - 5.33%
W3Schools: Supporting browsers 78.7%~
IE 9 - 4.2%
Firefox 3.5+ - 39.6%
Chrome 8+ - 29.3%
Safari 4+ - 3.8%
Opera 10+ - 1.8%
StatCounter: Supporting browsers 55.3%~
Firefox 3.5+ - 22%~
Chrome 8+ - 23%~
Safari 5 - 2.25%
IE9 - 8.05%
StatOwl: Supporting browsers 57.09%~
Firefox 3.x - 8%~
Firefox 4+ - 14%~
Opera 11 - 0.32%
IE9 - 10.7%
Chrome 9+ - 13.37%
Safari 4+ - 10.7%~
NetApplications: Supporting browsers 52.75%~
Firefox 3.5+ - 23%~
Chrome 8+ - 16%~
Opera 10.x+ - 1.54%
IE9 - 7.91%
Safari 4.0+ - 4.3%~
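As a quick arithmetic check, each 'supporting browsers' total above is simply the sum of the per-browser shares listed beneath it. For example, with the W3C Counter figures:

```python
# Per-browser shares for W3C Counter, as listed above (percent).
w3c_counter = {
    "Firefox 3.6+": 24.0,   # approximate (~)
    "IE 9": 6.53,
    "Chrome 12+": 18.52,
    "Safari 5": 5.33,
}

total = sum(w3c_counter.values())
print(f"Supporting browsers: {total:.2f}%~")  # 54.38%~
```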
Monday, 27 June 2011
Test Automation: Let’s Break It Down
Whilst having a conversation with a friend of mine recently, we got onto the topic of test automation. During the discussion they suggested that if automation requires a programmer to produce the code so that it can be executed, should a programmer not also be the person responsible for creating that code?
Now, the person I was speaking to was not a tester themselves, but it was precisely that outside-the-box thinking that allowed them to come up with an idea which, on reflection, appeared to be a real no-brainer.
Why should the testers not be the ones to use their skill in working out which areas to test, what to test and how? Likewise, why should the programmers not be the ones to use their experience and knowledge to produce the code that automates it?
As I covered in my piece examining the real costs of test automation, as individuals we most likely possess only a single skill we can truly consider our primary skill: our core strength, the area of greatest focus. To use the cliché, those who attempt to become a jack of all trades often become the master of none.
As such, an approach that enables a development team to let people draw on their greatest strengths and areas of experience, while still producing the desired results through working together, could allow the team to amplify the quality of what it produces.
How this is broken up within the development team will vary with the resources and skill sets available, but where possible there could be three groups within the team: the testers, the test automation programmers and the application/web programmers. Where resources are more limited, the application/web programmers could take on the creation of the test automation suites.
Breaking the team up by core strengths allows people to really focus on their responsibilities. It minimises the need for a tester to sidetrack work they are in the midst of just to update an existing automation script or suite, losing focus and potentially overlooking or forgetting something they might otherwise have covered.
It also reduces the risks created by having someone less trained in, or less focused on, coding produce the automation code. As is often already the practice, the automation that requires testing could still be tested by the testers to verify that the intended functionality has been implemented (...and to ensure there is greater than zero degrees of separation between creator and tester).
Much in the way a good development team has testers working directly with programmers, in such an environment the testers and the test automation programmers would be even more intrinsically linked. As a tester often has an evolving knowledge of the product, its quirks and its recurring issues, they can feed this knowledge back to the automation programmers throughout the process. Likewise, any issues encountered during automation can be communicated back the other way.
This feedback loop allows automation suites to keep improving and providing more meaningful coverage without interfering with the responsibilities of the tester, helps reduce risk in the development process, and lets all team members really concentrate on their core areas and thus produce a superior product.
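As a sketch of how that division of labour might look in practice (the names and the stand-in login function below are hypothetical; the post doesn't prescribe any particular format), the tester owns the 'what' as plain data while the automation programmer owns the 'how' as code:

```python
# Tester-owned: WHAT to check, expressed as plain data (no coding required).
login_checks = [
    {"name": "valid login", "username": "alice", "password": "correct", "expect": "dashboard"},
    {"name": "wrong password", "username": "alice", "password": "wrong", "expect": "error"},
]

# Automation-programmer-owned: HOW the checks are executed.
def run_checks(checks, login):
    """Run each tester-specified check against the supplied login function."""
    for check in checks:
        outcome = login(check["username"], check["password"])
        status = "PASS" if outcome == check["expect"] else "FAIL"
        print(f"{status}: {check['name']}")

# A stand-in implementation so the sketch runs end to end.
def fake_login(username, password):
    return "dashboard" if password == "correct" else "error"

run_checks(login_checks, fake_login)
```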
Sunday, 31 October 2010
Test Automation: What Are The Real Costs?
For quite some time now I've found myself questioning the adoption of test automation. Is it that I'd seen no value in it, or believed it had no valid applications? No, but I had witnessed a rise in its adoption that left me questioning the broad embrace of automation that appeared to be happening.
In the cost-cutting world we live in today it can be no real surprise that companies, and thus people, are looking for ways to cut costs, reduce overheads and streamline operations. It seems quite logical really; the alternative could be seen to result in a loss of jobs. But is that the real cost, and is automation a real benefit in the wider scope of things?
The answer, of course, is not straightforward, and much depends on what is being worked upon. Why? Because a project's scale, the people behind its testing, its time frame and other factors all contribute to making the case for or against test automation.
A simple example is a smaller-scale project, or one with a short time frame, where manual execution of the testing would be quicker than any automation could manage. Whilst I'm sure many test automation fans would agree automation is inappropriate in that situation, it does not prevent the organisations employing these testers from requesting it anyway.
A more complex example: the cognitive skills applied in manual testing allow someone to test without a completed framework in place, so when an ever-developing product is being tested, the testing does not fail simply because elements are presently missing. Likewise, when revisiting the same area with new content, the tester can distinguish between spending time re-covering existing content and covering only what is new. An automation suite may look to cover everything in a particular area, and the implications of separating the automation into separate scripts may potentially double the overhead involved in automation design, implementation and maintenance for that area.
Another example is an ever-changing (dynamic) product in development: for anything automated, responding to constantly moving goal posts would likely impose such a large ongoing overhead that, with the turnaround times involved, the automation may be unable to match the pace at which development of the product is occurring.
Critically, there are two primary areas of risk I would identify that automation introduces. The first is that it only confirms existing beliefs and existing knowledge; to rephrase, it doesn't know what it doesn't know. When we test something manually, the hands-on approach and the process of cognitive analysis allow us to identify things that were never documented, and that may not even have existed in the application during a previous iteration of testing.
Think about this for a moment: what do we do when we test? We look to inform on issues with the programming of others. So yes, through review a tester can test the automation that is used to do the testing by testing the automation's tests, but surely that sounds like a whole lot of extra overhead introduced by this approach (...and is quite a mouthful to say too!).
The second risk is that this approach, if not properly tested, becomes just as fallible as what we are attempting to test in the first place; instead of one set of risks we end up with both the risks the product itself may hold and the risks the automation may hold. To assume the automation is any less fallible is like a programmer claiming their code has no defects, and when people are primarily testers and not programmers, it likely also means the tester cannot claim to be as refined in that skill as they are in their testing.
Due to the significantly clearer traceability that manual testing often involves, the time needed to maintain tests and identify what they do and do not cover is likely far less than the time spent debugging, rewriting, or removing and adding the code used within automation.
The answer to many of the points above often becomes 'well, we can do exploratory testing too', which begs the question of how much of what exploratory testing covers merely duplicates what the automation is doing, meaning the same area may now be covered twice (...if not more). The exploratory process only goes to confirm the validity and importance of both cognitive and emotive approaches to testing.
It's not that, in saying all this, I believe there is no value in automation. To my mind automation makes a handy and useful tool for sanity/smoke checking, or for a simplified regression check on a longer-term, larger-scale project. It streamlines this area to provide a cursory impression of the state of a product, and allows people to quickly re-confirm their existing beliefs and knowledge about a product and the state of previously known issues. In addition, automation can be utilised for things such as concurrency checks and data creation (where a large volume of test data is required for testing to be performed).
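As one small example of the data-creation use just mentioned (a hypothetical sketch; the record fields are purely illustrative), generating a large volume of test data is exactly the kind of mechanical task that suits automation:

```python
import csv
import random
import string

def random_user(i):
    """Build one synthetic user record; the fields here are illustrative."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {"id": i, "username": name,
            "email": f"{name}@example.com",
            "age": random.randint(18, 90)}

# Generate 10,000 records for volume or load testing.
with open("test_users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "username", "email", "age"])
    writer.writeheader()
    for i in range(10_000):
        writer.writerow(random_user(i))
```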
Whilst the results of automation can be interpreted and explored by people who can then investigate further, it must be remembered that, in the absence of cognitive and emotive engagement during the actual process, all automation can achieve during its execution is checking, and even that checking is more fallible than that of the manual tester, for various of the reasons listed above.
So, when automation introduces new risks into what is being tested, one must always properly evaluate whether what it provides really is of greater benefit to the project or not.