Friday 23 September 2011

To automate or not to automate, that is the question

Test tools can be a valuable resource in software testing – but they are not a substitute for testers

The key fits beautifully and turns like a dream. But did you check if it locks the door?
(Photo: Alan Cleaver, Flickr)

There is a lot of talk and excitement about the use of test tools in software testing. A whole chapter of the ISEB foundation syllabus is devoted to them. Test tools range from small-scale open source applications to comprehensive commercial packages. A favourite selling point is how test tools eliminate human error. Some vendors promise you savings beyond your wildest dreams (one company even promises benefits within an hour of use). But a good tester needs an eye for what can go wrong in software, and the software they're using for testing is no exception. So: are these tools any good?

Obviously there are cases where you'd have to use a test tool (such as testing whether a website can cope with 10,000 people logging on at once – unless you happen to have 10,000 people at your disposal), and cases where you'd never use one (such as testing user-friendliness for IT novices). But there is a vast range of test tools out there covering every kind of activity you can imagine, so it would be impossible to cover them all in one blog entry. Instead, I'll concentrate on Selenium IDE, which I've been using over the last few weeks. It's an open-source extension to Firefox which allows you to test websites automatically; you can either record yourself clicking through the links and entering data into forms, for automatic replay later, or program the tests yourself.
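
For the curious, Selenium IDE saves a test as a plain HTML table: one row per step, each with a command, a target and an optional value. Here's a rough flavour only – the page path and link text below are invented for illustration:

    <table>
      <tr><td>open</td><td>/</td><td></td></tr>
      <tr><td>clickAndWait</td><td>link=About us</td><td></td></tr>
      <tr><td>verifyTitle</td><td>glob:About us*</td><td></td></tr>
    </table>

Recording gives you the navigation rows for free; rows like the verifyTitle check are the sort of thing you typically add by hand.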

I must stress that I have no issues with Selenium itself: the makers are clear about what Selenium does and doesn't do, they don't make extravagant claims about commercial benefits, and anyone who can develop a free application that competes with commercial rivals has my respect. But no computer program, however good, is safe from being used badly.

An easy mistake in software testing is to declare a feature in working order when the tests used didn't actually check what mattered. You could test a lock by reproducing the steps to use it: close the door, put the key in the lock, turn it until it goes "clunk" and take the key out again. But if you don't check whether the door can still be opened afterwards, the test is almost worthless. That hypothetical situation would require unimaginable stupidity, but in software testing the flaws are more subtle. It's not always easy to explain, when software runs perfectly every time you test it, that the tests may still be leaving serious bugs undetected. And if you're trying to explain this to a manager who wants the product declared ready for live deployment in four weeks, the temptation must be there to just agree that it's all ready to go.

It's not quite fair to blame this problem on test tools – it's perfectly possible to conduct a series of non-tests without a test tool in sight. But test automation can make this mistake a lot easier to fall into. At the risk of stating the obvious, computers have zero common sense and can fail to notice something a human would spot in an instant. Since there's only so much you can explain with non-computer analogies (I've already used black swans, hot air balloons and door locks), here's an example of a real test I did and how test automation could have gone wrong...

Over the last two weeks or so I’ve been devising tests for Autism Works’s own website, including, amongst other things, testing all the links on this page:

[Screenshot: the links page on the Autism Works website]

Unfortunately, external web pages have an annoying habit of moving after you’ve linked to them. It’s not always practical or necessary to test every single link on a website to perfection, but when you’re a software testing company, it doesn’t look good if your own website doesn’t work properly. Link checking is actually an excellent example of useful test automation, because the process of laboriously clicking on every link is something you can easily instruct a computer to do.
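
In Selenium IDE's terms, "instructing the computer" to walk the links boils down to rows like these – a sketch only, with an invented /links path and link names standing in for the real ones:

    <tr><td>open</td><td>/links</td><td></td></tr>
    <tr><td>clickAndWait</td><td>link=Passwerk</td><td></td></tr>
    <tr><td>goBackAndWait</td><td></td><td></td></tr>
    <tr><td>clickAndWait</td><td>link=Autism Centre</td><td></td></tr>
    <tr><td>goBackAndWait</td><td></td><td></td></tr>

You could type rows like these in by hand, but you don't have to.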

The simplest way of doing this is with a capture/replay test. This involves clicking through all the links whilst Selenium (or whichever tool you're using) records your actions. At a later date, you can then get Selenium to retrace your actions automatically, and if all is well and nothing's changed, it will get through the test fine. But before recording the actions, it makes sense to click through the links manually and check that they all still make sense. So I clicked through all the links, and everything was fine until I tried this link to the Autism Centre at Sheffield Hallam University:

[Screenshot: a Sheffield Hallam page reading "Unfortunately the page you have requested could not be found"]

That doesn’t look right. Presumably someone moved this page since we first linked to it. A quick search of Sheffield Hallam’s site tracked down the page we were looking for here:

[Screenshot: the Autism Centre page at its new location]

One quick tweak to our page later and the link was fixed, no harm done. But suppose this page had been moved after I'd written the automated test – would the problem have been picked up then? Possibly not, if I'd simply relied on the capture/replay technique. I knew something was wrong the moment I arrived at a page titled "Unfortunately the page you have requested could not be found", but a computer would have gone straight to this page and straight back to the links page without reporting any problems. An error would have been reported had no page loaded at all, but this was not one of those errors. With the wrong test script, half the links on the page could have broken over time and we would have been none the wiser.
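
The root of the problem is that a raw recording only navigates – nothing in it says what a destination should look like. One extra row per link would have turned that silent pass into a reported failure. Here's a sketch using Selenese's negated title check and a glob wildcard pattern (not the script I actually used):

    <tr><td>clickAndWait</td><td>link=Autism Centre</td><td></td></tr>
    <tr><td>verifyNotTitle</td><td>glob:*could not be found*</td><td></td></tr>
    <tr><td>goBackAndWait</td><td></td><td></td></tr>

Checking for text that should be there, as described in footnote [1] below, is sturdier still than checking for a title that shouldn't.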

Thankfully, the automated test script I had in mind safeguarded against this scenario, so I now have a test script that both follows the links and checks that the link destinations make sense.[1] With the test scripts now checking hundreds of links in less than ten minutes on a daily basis, and one out-of-date link already detected within a fortnight of beginning these tests (Passwerk, whose front page switched its default language from English to Dutch), everybody's happy.

Testing tools can be extremely useful in the right situation, but they are no substitute for actual software testers. You cannot reduce testing to pressing the big green Start button, much as we might wish you could. People are needed who understand what is actually being tested, how the test tools contribute to the task, and whether anything has been left unaccounted for. No company worth its salt would put its faith in a newly-acquired piece of software without testing it first, so it is reckless for software testers to put their faith in test automation tools without properly examining what they do. Yes, humans are prone to make human errors, but don't underestimate their ability to spot them.

[1] For those who really want to know how I did this: I instructed Selenium to check each destination for text that should be present on the correct page – for this page I used the phrase "We are an evolving and developing centre dedicated to enabling people on the autism spectrum, parents, families and professionals to access information about the autistic spectrum." Should the text not be found, Selenium would flag this and give me the chance to investigate the problem. Naturally, this doesn't cover every scenario – the website might change the text, or the old URL might end up pointing to a page which quotes the text I'm searching for but isn't the actual page. But good test plans are based on risk management, and the chance of the test misleading me either way is low enough to let it go.
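
In Selenese rows, that check looks roughly like this – the URL below is a made-up placeholder, but the shape is what matters: open the destination directly, then verify that the expected text is present:

    <!-- the URL below is a placeholder, not the real address -->
    <tr><td>open</td><td>http://example.org/autism-centre</td><td></td></tr>
    <tr><td>verifyTextPresent</td><td>glob:We are an evolving and developing centre*</td><td></td></tr>

Using verify rather than assert means a failure is logged and the run carries on to the remaining links, instead of stopping at the first problem.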
