Friday, 5 July 2013

The Assumption Bias and Testing: How Does It Influence You?

When testing a product, I aim to perform the kind of testing that will cover all of the areas I believe are of value to the business. There have been times in my own testing where I discovered additional areas to cover only late in the piece, areas I had not originally thought to scope in due to assumptions of my own. Whilst I have been grateful to pick up on such things before the product shipped, there always remained the risk that failing to do so, or having issues arise around them later on, could impact the ability to deliver the product on time and to the expected specification.

Contrary to what some might assume, this is not a product of inexperience. Rather, it is a product of becoming so familiar with what you are testing, often through extensive experience with testing similar things and a deep level of domain knowledge, that the resulting confidence can skew the perspectives we bring to our testing.

We can try to shape our perspectives by focusing on the bigger picture: asking questions that guide the tester as to what knowledge about the product may offer the greatest value to the business. We can then use this information to help decide which areas to focus on when testing.


This, however, still does not eliminate the assumptions we hold when performing this task, because it too remains an externally facing exercise: we tend not to include ourselves fully in the equation when performing such analysis.

The issues that stem from this are comparable to what is labelled tacit knowledge: knowledge that is shared on a social level but has not yet been documented so as to exist, at least on some level, in an explicit form.

Like tacit knowledge, there exists an undocumented aspect here, one that can equally be influenced by the social but is much more centred on the individual. In this case it relates to the absence of any evaluation of the biases and assumptions we bring to the table when performing test design and deciding what we feel constitutes relevant test coverage.

If we take the time to first analyse and document these biases and assumptions before launching into evaluating what testing we intend to perform, we can use this knowledge to shape our testing, so that what is and is not covered is no longer as influenced by such factors.
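As a concrete sketch, such an assumptions log need not be elaborate; something as simple as the structure below would do. The fields and the example entries are purely illustrative, not a prescribed format.

```typescript
// Illustrative sketch only: one possible shape for an assumptions log
// kept alongside a test plan. Every name and entry here is hypothetical.
interface Assumption {
  statement: string; // what we are taking for granted
  basis: string;     // why we believe it (experience, docs, hearsay)
  ifWrong: string;   // extra coverage suggested if it proves false
}

const assumptionsLog: Assumption[] = [
  {
    statement: "Only the web client ever calls this endpoint",
    basis: "Experience with similar products in this domain",
    ifWrong: "Exercise the endpoint from mobile and third-party clients",
  },
  {
    statement: "Input files will always be under 10 MB",
    basis: "No large files seen in previous releases",
    ifWrong: "Add oversized and empty file cases to the test scope",
  },
];

// Reviewing the log yields candidate areas the original plan may have missed.
const candidateCoverage = assumptionsLog.map((a) => a.ifWrong);
```

Even this much, reviewed before test design begins, turns an invisible bias into a visible line item that can be challenged.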

Such information gives us an opportunity to identify areas where test coverage might otherwise have been missed, and it becomes an additional source we can draw on for future test planning. It also serves as an opportunity to gain a greater awareness and understanding of the influences we hold.

Being mindful of this very human element and its influences when creating test plans gives us greater confidence that the testing we perform will be less likely to fall short because of them. This in turn helps us achieve the kind of coverage that better assists with the delivery of a quality product.

Tuesday, 19 June 2012

Has Microsoft learnt from history with the new "Surface" tablets?

Microsoft announced their new Surface tablets this week. Having spent the last week or so living the 'Post PC' reality (my primary computer being away for repairs), I was interested to see the details of what Microsoft was planning to offer here. Reading the specs for the Surface, the general internals and parts listed appear, for the most part, to be a solid offering.

As things go, what Microsoft appears to have delivered here may result in a case of one step forward and two steps back for the Windows platform in the mobile realm. Microsoft's investment in supporting its existing developers has clearly never been a bridge it has been willing to do more than singe, but it is also a bridge that has held the company back on far too many occasions, preventing it time and time again from making a clean break and moving forward.

One brand to rule them all?
With one of its new tablets, Microsoft is doing what may previously have been inconceivable: making that break from legacy titles with its Windows RT ARM-based 'Surface' tablet. Unfortunately, this is not its only offering. In a move that is sure to dilute the new brand, it is also offering an Ivy Bridge Core i5-based Windows 8 Pro mobile device which, as things go, is also called the 'Surface'.

As such, for consumers looking to purchase this device, the poor distinction between the two may well lead to frustration, particularly where a spouse, partner or company purchases the device on someone's behalf and orders the wrong one. Further to this, when a developer says their app or game supports 'Microsoft Surface' tablets, will consumers end up purchasing something that does not even support their hardware?

Ports, Ports, Ports!.. Keyboards, trackpad and more!
As anyone who's used a touch screen device will be aware, one of the primary factors perceived to affect productivity is the lack of a physical keyboard integrated into the device. The integration of the keyboard into the cover is, in concept, a really smart idea.

How useful that keyboard will ultimately be remains to be seen, but as anyone who has purchased one of those iPad covers with an integrated keyboard can testify, more often than not they turn out to be better in theory than in practice. In fact, a real wirelessly connected keyboard would be a far better option in terms of genuine productivity here.

With the Surface, though, Microsoft didn't stop there: they also integrated a trackpad, a couple of USB ports, a Mini-DisplayPort, a card reader (and more?!). All of a sudden you end up with a mobile device acting like a laptop computer, except for its odd lack of any kind of cellular support (being Wi-Fi only).

Whilst some competing tablets could, in my opinion, offer more than a single port, the number of ports here has gone in the opposite direction. Anyone seeking a truly portable laptop would be far better served by a PC Ultrabook or a MacBook Air if their primary purpose is to use a desktop-oriented system. A well-designed mobile device does not try to be all things to all people; it is a focused device that delivers a specific experience.

So what's with the trackpad?!
The presence of the trackpad (and any potential support for USB mice) is what I consider most troubling for any device that claims to be a modern tablet. A modern tablet requires content designed primarily for touch screens; by providing the option to return to a mouse, Microsoft negates the requirement for developers to create anything touch-screen specific at all. As such, it is still bringing the old desktop paradigm along for the ride.

Windows RT, Windows 8 Pro, Desktop, Metro, ARM, x86?! Surface??
Microsoft's refusal to drop legacy support in Windows 8 on tablets, even if it limits the number of applications that can run at once, still drags the desktop paradigm along. Anyone who has attempted to use a Windows 7 tablet will know what limited joy the touch screen experience with desktop-designed applications offers; likewise, anyone with an iPad or Android tablet who has used a remote desktop/VNC tool such as 'Splashtop' will have experienced how 'natural' that experience isn't on a touch screen.

As such, you would have hoped they would have learnt from this and supported only 'Metro' applications on tablet devices, but as things go, on the Windows 8 Pro tablet this hybrid mess remains, with both legacy and Metro apps available.

If Apple had attempted such a thing with their own devices, it would have been a fair bet that this lack of focus would have negatively impacted the success of the iOS platform. As such, they kept the worlds of iOS and OS X separate. Likewise, the Kindle Fire's focus on providing a superior reading experience has (as with the rest of Amazon's Kindle line) driven the popularity of those devices.

Also, what of developers who have delivered apps on competing tablet platforms, platforms that utilise ARM-based CPUs? Porting between operating systems is one thing; re-coding for a different CPU to support a device such as the x86-based Surface is potentially another matter altogether. Will they support a legacy-oriented UI so they can maintain a universal UI across mobile and desktop devices, or go all-in with Metro? One has to wonder whether this division will result in an instantaneous fragmentation of Microsoft's own newborn platform, and whether the lack of focus on a unified platform with unified hardware will come back to bite them.

And so we wait...
As these new devices are released to publications we will undoubtedly learn more about them, and as Windows 8 grows closer to release Microsoft may still have a trick or two up its sleeve to address some of the concerns listed above. Only time will tell whether this will be another case of history repeating itself or whether Microsoft really is taking a brave step forward into this Post-PC world.

Tuesday, 18 October 2011

A look into the world of HTML5 support and browsers

When it comes to figuring out which browsers supported HTML5 and when they supported it, you discover things are as grey as grey can get.

Various browsers, including even IE6-IE8, initially supported features that later made it into the HTML5 specification, but this support represented a comparatively limited subset of the features in the current version of HTML5 (thus these browsers score rather low in HTML5 browser tests).

2008 was when HTML5 started seeing its first semi-official support.

Safari 3.1 (March 2008) and Opera 9.5 (June 2008) had introductory support for the standard under the HTML5 banner. Firefox 2.0 had some limited support too, though Mozilla never announced it as HTML5 support; it included a few additions covering some of the key areas of HTML5.

Opera 9.6 (June 2008) / Safari 3.2 (November 2008) / Firefox 3.0 (June 2008) all extended support for these features too. Opera 9.6 also introduced HTML5 audio support in a limited form.

By the time Chrome hit version 2.0 (January 2009) it had a fair subset of HTML5 support, though that support paled in comparison to its current support for the standard.

With the release of Firefox 3.5 (June 2009), the HTML5 video and audio tags were supported, which helped HTML5 'go mainstream' as Mozilla announced them to all new users of the browser. Likewise, with Safari 4 (June 2009), HTML5 support was greatly enhanced and video/audio support was added. Opera 10 (June 2009) was the highest-rated browser in HTML5 tests, but had no HTML5 video support until Opera 10.6 (late 2009).

Chrome continued to grow its HTML5 support at a rapid rate throughout its releases (so specific support is harder to pinpoint), but by Chrome 6 (May 2010) it had gained HTML5 video/audio support, and by Chrome 8 (October 2010) there was already support for all but two of the key elements of HTML5.

The first version of IE to officially support HTML5 (despite some incidental support since the IE7/8 days) was IE9, which only hit final release in March 2011.

So, as things have progressed, there have been browsers with significant HTML5 support since mid-2009 (with the exception of IE, which didn't hit the market with proper support until this year). Safari 4.0, Opera 10.6, Firefox 3.5 and Chrome 8 onwards all offer support covering the majority of the HTML5 standard, and HTML5 support continues to evolve across all mainstream browsers.
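This per-feature patchwork is why sites such as caniuse.com test individual features rather than trusting version numbers. The sketch below illustrates that general approach; the probe list and the shape of the global object are assumptions for illustration, not any site's actual implementation.

```typescript
// Hypothetical sketch of per-feature detection: probe for each API
// rather than inferring support from a browser's version number.
type Probe = (win: any) => boolean;

const probes: Record<string, Probe> = {
  // Each probe checks whether the relevant constructor/API is exposed.
  video: (w) => typeof w.HTMLVideoElement !== "undefined",
  audio: (w) => typeof w.HTMLAudioElement !== "undefined",
  canvas: (w) => typeof w.HTMLCanvasElement !== "undefined",
  localStorage: (w) => typeof w.localStorage !== "undefined",
};

function detectFeatures(win: any): Record<string, boolean> {
  const supported: Record<string, boolean> = {};
  for (const [feature, probe] of Object.entries(probes)) {
    supported[feature] = probe(win);
  }
  return supported;
}

// A mocked-up 'older browser' global: storage present, media elements absent.
const olderBrowser = { localStorage: {} };
const support = detectFeatures(olderBrowser);
// support.localStorage === true, support.video === false
```

The point is the same one the history above makes: "supports HTML5" is not a yes/no question, so support has to be asked feature by feature.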

Additional References:
Those interested in learning more can check out the following references for more detail on this subject:
http://caniuse.com/
http://html5readiness.com/
http://www.deepbluesky.com/blog/-/browser-support-for-css3-and-html5_72/
http://www.findmebyip.com/litmus

Browser Market Share:
For those interested in the approximate market share of browsers offering significant degrees of HTML5 support, as of August 2011 the stats sites show:

W3C Counter: Supporting browsers 54.38%~
Firefox 3.6+ - 24%~
IE 9 - 6.53%
Chrome 12+ - 18.52%
Safari 5 - 5.33%

W3Schools: Supporting browsers 78.7%~
IE 9 - 4.2%
Firefox 3.5+ - 39.6%
Chrome 8+ - 29.3%
Safari 4+ - 3.8%
Opera 10+ - 1.8%

StatCounter: Supporting browsers 55.3%~
Firefox 3.5+ - 22%~
Chrome 8+ - 23%~
Safari 5 - 2.25%
IE9 - 8.05%

StatOwl: Supporting browsers 57.09%~
Firefox 3.x - 8%~
Firefox 4+ - 14%~
Opera 11 - 0.32%
IE9 - 10.7%
Chrome 9+ - 13.37%
Safari 4+ - 10.7%~

NetApplications: Supporting browsers 52.75%~
Firefox 3.5+ - 23%~
Chrome 8+ - 16%~
Opera 10.x+ - 1.54%
IE9 - 7.91%
Safari 4.0+ - 4.3%~

Monday, 27 June 2011

Test Automation: Let’s Break It Down

Whilst having a conversation with a friend of mine recently, we got onto the topic of test automation. During the discussion they raised a suggestion: if automation requires a programmer to produce the code that gets executed, should a programmer not also be the person responsible for creating that code?

Now, the person I was speaking to was not a tester themselves, but it was precisely that thinking-outside-the-box mentality that allowed them to come up with an idea which, on reflection, appeared to be a real no-brainer.

Why should the testers not be the ones to use their skill in working out which areas to test, what to test and how? Likewise, why should the programmers not be the ones who use their experience and knowledge to produce the code to automate it?

As I covered in my piece examining the real costs of test automation, as individuals we will most likely possess only a single skill that we can truly consider our primary skill: our core strength, the area of greatest focus. To use the cliché, those who attempt to become a jack of all trades often become the master of none.

As such, an approach that enables a development team to let people draw on their greatest strengths and areas of experience, and that through collaboration still produces the desired results, could allow the team to amplify the quality of what is produced.

How this breakdown would look in practice will vary with the resources and skill sets within the team, but where possible there could be three groups: the testers, the test automation programmers and the application/web programmers. Where resources are more limited, the application/web programmers could also create the test automation suites.

Breaking the team up into their core strengths allows people to really focus on their responsibilities. It minimises the need for a tester to sidetrack the work they are in the midst of just to update an existing automation script or suite, losing focus and potentially overlooking or forgetting something they might otherwise have covered.

It also reduces the risks created by having someone less trained in, or less focused on, coding produce the automation code. As is often already the practice, the automation itself could still be tested by the testers to verify that the intended functionality has been implemented (...and to ensure there is greater than zero degrees of separation between creator and tester).

Much in the way a good development team has testers working directly with programmers, in such an environment the testers and the test automation programmers would be even more intrinsically linked. As a tester often has an evolving knowledge of the product, its quirks and its recurring issues, they can feed this knowledge back to the automation programmers throughout the process. Likewise, any issues encountered during automation could be communicated back.

This feedback loop allows automation suites to keep improving and to provide more meaningful coverage without interfering with the responsibilities of the tester. It helps reduce risk in the development process and, through this approach, lets all team members concentrate on their core areas and thus produce a superior product.

Sunday, 31 October 2010

Test Automation: What Are The Real Costs?

For quite some time now I've found myself questioning the adoption of test automation. Is it that I'd seen no value in it, or believed it had no valid applications? No, but I had witnessed a rise in its adoption that left me questioning the broad embrace of automation that appeared to be happening.

In the cost-cutting world we live in today it should come as no real surprise that companies, and thus people, are looking for ways to cut costs, reduce overheads and streamline operations. It seems quite logical really; the alternatives could be seen to result in a loss of jobs. But is that the real cost, and likewise, is automation a real benefit in the wider scheme of things?

The answer, of course, is not straightforward, and much depends on what is being worked on. Why? Because a project's scale, the people behind its testing, its time frame and other factors all contribute to making a case for or against test automation.

A simple example would be a smaller-scale project, or one with a short time frame, where manual execution of testing would be quicker than any automation could provide. Whilst I'm sure this is a situation many test automation fans would agree is inappropriate for automation, that does not prevent the organisations employing these testers from requesting it anyway.

A more complex example is where manual testing lets someone test without a completed framework in place, thanks to the cognitive skills applied. Where an ever-developing product is being tested, the testing will not fail if elements are presently missing; and when revisiting the same area with new content, it allows a distinction between spending time covering existing content versus covering only what is new. An automation suite may look to cover everything in a particular area, and the implications of separating the automation into separate scripts may potentially double the overhead involved in automation design, implementation and maintenance for that area.

Another example is an ever-changing (dynamic) product in development. The overhead of keeping anything automated in step with constantly moving goal posts would likely be so large, given the turnaround times involved, that the automation may be unable to match the pace at which development is occurring.

Critically, there are two primary areas of risk that I would identify automation as introducing. The first is that it only confirms existing beliefs and existing knowledge; to rephrase it, it doesn't know what it doesn't know. When we manually test something, the hands-on approach allows us, through a process of cognitive analysis, to identify that which was never documented and may not even have existed in the application during a previous iteration of testing.

Think about this for a moment: what do we do when we test? We look to inform on issues with the programming of others. So yes, through review a tester can test the automation that is used to do the testing, but surely that sounds like a whole lot of extra overhead introduced by this approach (...and it is quite a mouthful to say, too!).

Likewise, the second risk: if not properly tested, this approach becomes just as fallible as what we are attempting to test in the first place, so we end up with both the risks the product itself holds and the risks the automation holds. To assume the automation is any less fallible is like a programmer claiming their code has no defects; and considering that people who are primarily testers are not programmers, a tester likely cannot claim to be as refined in that skill as they are in their testing.

Due to the significantly clearer traceability that manual testing often involves, the time needed to maintain tests and identify what they do and do not cover is likely far less than the time spent debugging, rewriting, or removing and adding code within automation.

The answer to many of the points above often becomes 'well, we can do exploratory testing too', which begs the question of how much of what is covered in exploratory testing merely duplicates what the automation is doing, meaning the same area may now be covered twice (...if not more). The exploratory process only goes to confirm the validity and importance of both cognitive and emotive approaches to testing.

Now, it's not that in saying all this I believe there is no value in automation. To my mind, automation makes a handy and useful tool for sanity/smoke checking or a simplified regression check on a longer-term, larger-scale project. It streamlines this area to provide a cursory impression of the state of a product, and allows people to quickly re-confirm their existing beliefs and knowledge about a product and the state of previously known issues. In addition, automation can be utilised for things such as concurrency checks and data creation (where a large volume of test data is required for testing to be performed).
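To illustrate the data creation point: a small script is often all it takes to mass-produce a volume of test data that would be tedious to create by hand. This is a minimal sketch with made-up field names, not a recommendation of any particular tool.

```typescript
// Minimal sketch: generating bulk test data for load or concurrency checks.
// The record shape and naming scheme are invented for illustration.
interface TestUser {
  id: number;
  name: string;
  email: string;
}

function makeTestUsers(count: number): TestUser[] {
  // Produce `count` distinct, predictable records.
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    name: `Test User ${i + 1}`,
    email: `test.user${i + 1}@example.com`,
  }));
}

// A thousand distinct users, ready to feed into whatever needs them.
const users = makeTestUsers(1000);
```

This is the kind of mechanical, repetitive work automation genuinely excels at, precisely because no cognitive judgement is required during its execution.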

Whilst the results of automation can be interpreted and explored by people who can then investigate further, it must be remembered that, due to the absence of cognitive and emotive application during the actual process, all automation can achieve during its execution is checking; and even that checking is more fallible than that of the manual tester, for various of the reasons listed above.

So, when automation introduces new risks into something being tested, one must always properly evaluate whether what it provides really is of greater benefit to the project or not.

Thursday, 9 September 2010

Think For Yourself, Question Everything

Earlier this year I wrote a piece entitled The Importance of Being Independent. In it I covered my belief that those who are new to the field of testing, or have limited experience, would be best served by avoiding social networks and any form of community group.

I suggested instead that they would be best served by purely observing and experiencing on their own the various systems and methodologies out there, not just at a theoretical level but within a real workplace environment where their decisions ultimately impact the outcome of what they are working on. In addition, I suggested they seek to learn as much as they can, both by reading up on topics that relate to or assist them in their roles and by reading articles, opinion pieces and tutorials that relate to testing.

It is my belief that these foundations give us a structure, a basis from which to judge the experiences we have after that point, a basis where we can come from knowledge and not just opinion as to what we believe is most appropriate. Experiences formed in an environment largely free of the social influences that might otherwise pollute or bias one's belief system allow us to become better independent thinkers. Only at that point, I believe, should a tester explore the wider world of testing and join the social structures that exist around an interest in it.

The Internet is full of self-appointed and socially appointed experts; whether by merit or otherwise, these people have become known figures within their respective communities. Whether they were simply making the most of an existing system or looking to break the rules, those who succeeded became figureheads that others would listen to or follow.

The emperor's new clothes...
It is common enough that the rebels, the fighters and the rule setters become the people's new leaders, particularly amongst those disillusioned with existing systems. Frequently, however, this creates an illusion: that those following the new leader are themselves the rebels, the individuals, the defiant ones.

The reality is that the rebels simply become the new conformists, whose greatest point of distinction is that their rules differ from those of the group(s) that came before them. Nevertheless, these new collectives end up with rules of their own, ultimately contradicting the whole point of being rebels in the first place.

Those of us who work to address quality are aware that there is no such thing as perfection, and thus all any of us can do is find something that works for ourselves and abide by our own beliefs. Subsequently, people who primarily just follow others, whether new thinkers or existing ones, are placing a level of trust in them that essentially implies these systems are somehow perfect models or methodologies in their own right: a flaw in itself.

Always the rebel...
A true rebel will never stop questioning and never simply follow others, but instead takes inspiration and ideas from them, allows themselves to be challenged and is willing to challenge others, embraces failure for the opportunity it can be, and never stops learning.

As such, whether you question or agree with this piece, I believe that to be a real critical thinker one should form one's own opinion, hold one's own view, and never let the views of others dictate or cloud one's objectivity. To conform and follow is something some choose to do, and that is their right, as much as it is for the rebels who go against it.

So never stop questioning authority and never stop thinking for yourself. Those who become true leaders in this world, those who change it, are not those who merely rest on the shoulders of others and accept the status quo, but those who seek to challenge it.

Tuesday, 13 July 2010

The Case of the Cases, an iPhone 4 and Your Protection

There's been a lot of whinging in recent weeks around the launch of the iPhone 4, Apple's golden child (well, its newest one). It comes from the line of products that much of the world fell in love with, that drove Apple's profit margins to new heights and turned the mobile industry on its head, as well as helping facilitate the rebirth of the bedroom programmer.

Apple's iPhone 4, though, unfortunately came along with a design defect, one that, in fashion quite typical of their previous behaviour, they have been in denial over. Whilst there is no question the aerial issue reflects some poor testing, what strikes me is the reaction to it, given what the solution to the problem actually is: a case.

It struck me as odd that people would, with such fervour, be so quick to strike Apple down so harshly over this issue. Whilst Apple could be criticised for their (poor) PR in handling the matter, the remedy is actually something users should be adhering to anyhow.

The whole incident made me cast my mind back to an experience of mine in the Apple Store. I was there to book a Mac in for repairs, and a number of those around me had brought in iPhones, having either cracked the device or had it cease to function after being dropped. The first thing that passed through my mind was what kind of fools these people were to buy something as fragile as a smartphone yet not get some kind of case for it. A year later, when finally purchasing one myself, the first things I bought with it were a case and a screen protector. Why? Because I like to protect my investments.

I understand the value of aesthetics to many people, but in a world where one can purchase a near-infinite variety of phone cases, it can hardly be said that there isn't a case to match people's tastes. So is the issue price? Considering they are spending up to £600 on the device, is investing a further £15, £20 or £30 really spending that much more?

As someone with a lot of technology, I like to protect my investments. I have, not often, but on occasion, dropped items I own, and the cases can in part be thanked for saving these devices from otherwise potentially being destroyed or damaged. So when reports say people can remedy the one usability issue of a device that Consumer Reports says is otherwise the best smartphone on the market with a measure that protects their investment, is it really such a problem?

Would some of these same people go to equal lengths with other devices they own? Let me list some examples here...
  • dSLR / compact cameras - The screens on these can quite easily be scratched, so a screen protector makes sense; it also offers some protection against impact if the device is dropped and lands on its back. In addition, most people buy cases for their cameras, which protect them further
  • dSLR and other lenses - These are extremely sensitive pieces of glass, much more so than any screen on an electronic device. It is both advised and extremely common to fit a UV/skylight filter matching the thread size of the lens, which protects it from impact and prevents the glass from being scratched. Lenses also come with cases too
  • Laptops - There's nothing preventing people from carrying these around without a case or a laptop bag, but you don't see many who do. It's not just a matter of practicality; a case or bag ensures this piece of machinery doesn't get battered or damaged as easily when transported
  • iPods - Anyone who has ever owned an iPod knows they scratch, and like any other device with a portable hard drive they can be damaged by shock or impact, so there was no shortage (if not a plethora) of cases and screen protectors one could buy for them
  • Portable gaming devices - These too have a variety of cases available; as with the other electronic devices, it is simply good practice to have one
But let's take the analogy one step further, to the software level. Would you put a Windows-based machine onto the Internet without some form of anti-virus/anti-spyware software? Would you do online banking or purchases without a secure connection?

Those who do not take these measures, I would take a moment to thank. These people can be credited with keeping companies buoyant through the frequency with which they replace items after some kind of accident, and through the cash they subsequently hand over to repair, and often replace, items they did not wish to spend a minor amount protecting.

So this whole situation with the iPhone 4 and the remedy to its primary issue leaves one question begging to be asked: if the remedy is something that will protect your investment, is of relatively minor expense, and at the same time provides you with greater satisfaction, then what REALLY is the issue?