Test Drive

Despite all the horror stories for which application errors have been responsible throughout the history of computing, many organisations have traditionally underestimated the effort involved in software testing and subsequently not prioritised and resourced it appropriately.

In addition, there is constant pressure on developers to deliver and "get it out the door" - ready or not. When things consequently go wrong, the result is usually selective memory loss by those who exerted the pressure (more often than not, senior management) and punishment of the innocent.

Error-free software may remain an unrealistic expectation, but in the days of back-end-only systems the problems were not so visible. Now that so many systems, particularly Web-based ones, are in the public eye and have a direct impact on their customers, organisations stand to lose a lot of credibility, and ultimately business, if they don't work as they should. Failures can even have legal implications. As Australia Post's chief information officer Valda Berzins puts it, "If the systems aren't tested properly, I'm in so much trouble it's not funny".

According to Colin Mackinnon, Deloitte Consulting's regional leader for technology integrations, the consequences of poorly developed software are now many times greater than they were in the past. "If you got a module of code wrong in a big mainframe development, someone in the organisation processing some transaction would potentially suffer some inconvenience while the bug was sorted out. There may have been delays or problems with processing in the interim, but generally they [users] could be accommodated.

"But in a Web-based application the chances are you may never know of the problem end users encounter with your software. And you certainly may not know about it quickly enough and be able to respond to it in time to stop those customers clicking off to a competitor's site and losing them. So the stakes are much higher."

In addition, development cycles are now much shorter. Instead of throwing 100 people into a back room to beaver away for two to three years producing the large applications of the past, modern development requires a production-line approach in order to churn out new releases on an almost monthly basis. This, says Mackinnon, presents another set of problems.

"The Web environment involves very different development tools that provide a far less structured and disciplined development environment than the older tools, and they're often used by people with less formal training in structured techniques. So if you look at how the development environment has changed, the first question you have to ask is: how does testing have to change in response to that?"

Enter automated testing tools: tools that use software themselves to assist in the testing process. Or, more accurately: re-enter automated testing tools.

Everything Old is New Again

While not exactly new, automated testing has, according to Mackinnon, traditionally been perceived as expensive, especially when organisations tried to justify the cost against a single project. Even though organisations would spend many millions of dollars on implementing ERP, he says, most systems went in without formal automated testing, because there was resistance to spending the extra money on such tools.

"There was a general feeling that you could get away with old-fashioned user acceptance testing in a far less disciplined environment. Organisations expected packages to be more reliable than custom-built systems and saw testing as being the implementer's job, not theirs," Mackinnon says.

However, when Colonial State Bank underwent a systems upgrade in 1998, replacing the bulk of its legacy systems with state-of-the-art core applications, it realised what a huge task it would be and that it needed to be smarter, more disciplined and structured in how it operated.

The bank consequently purchased and implemented Mercury Interactive's TestDirector and WinRunner to help in the process. TestDirector organises and manages the entire testing process by enabling organisations to coordinate their testing efforts and control testing projects at each step of the cycle, from planning to managing defects. WinRunner verifies that applications work as expected. It identifies defects by capturing, confirming and replaying user interactions automatically.
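
To give a rough sense of the capture/replay approach such tools take, the sketch below shows the idea in generic Python. It is not WinRunner's own scripting language or API; the Step and Recorder classes, and the ui driver object passed to replay, are hypothetical stand-ins used only to illustrate how recorded interactions can be stored, replayed and checked.

```python
# A minimal sketch of the capture/replay idea behind GUI test tools.
# The ui driver and its click/type_text/read_text methods are hypothetical
# stand-ins, not any vendor's actual API.
import json
from dataclasses import dataclass, asdict


@dataclass
class Step:
    action: str      # e.g. "click", "type", "verify"
    target: str      # identifier of the UI control
    value: str = ""  # text typed, or expected value for a checkpoint


class Recorder:
    """Captures user interactions into a list of replayable steps."""
    def __init__(self):
        self.steps = []

    def capture(self, action, target, value=""):
        self.steps.append(Step(action, target, value))

    def save(self, path):
        with open(path, "w") as f:
            json.dump([asdict(s) for s in self.steps], f, indent=2)


def replay(path, ui):
    """Replays a saved script against a UI driver and reports checkpoint failures."""
    with open(path) as f:
        steps = [Step(**s) for s in json.load(f)]
    failures = []
    for step in steps:
        if step.action == "click":
            ui.click(step.target)
        elif step.action == "type":
            ui.type_text(step.target, step.value)
        elif step.action == "verify":
            actual = ui.read_text(step.target)
            if actual != step.value:
                failures.append((step.target, step.value, actual))
    return failures
```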

Colonial has a centralised user acceptance testing team. According to its manager of business testing services, Joe Bertuccio, the main benefit TestDirector and WinRunner have brought is standardisation. "We now use the tools for all of the projects we test across Colonial: from a complete core systems replacement, to maintenance projects; or implementing Internet banking or electronic bill payment - anything that requires applications development," Bertuccio explains.

"While development is under way, we're using the signed-off specifications to construct our test cases. At the same time we get both the business side of the project and the developers to review them, so that everyone knows and understands what we're going to do."

Another benefit is reuse. Tests can be saved and then, much like a Word document, amended as required, resaved as a new file at a later date and used as a new test.

Bertuccio admits that quantifying the return on investment in the tools is somewhat subjective. However, he points out that Colonial has a long tradition of user acceptance testing before applications go into production, and says senior management understands that his team must focus much of its energy on preparing for testing so that the test runs smoothly. Also, given the overall cost of the systems upgrade, according to Bertuccio, the expenditure wasn't even "a blip on the screen".

"Executive management of this organisation were very keen about us being clever in how we went about our testing. It was well understood that we would gain benefits in both the short and long term from having these tools, and this was accepted from the outset. Now, if I didn't have them tomorrow, we'd be in big trouble," he says.

Passing the Test

Australia Post also used TestDirector and WinRunner in its Y2K and GST compliance testing. In addition, the organisation used LoadRunner in its initial deployment of SAP R/3 in 1997 and, more recently, in testing SAP upgrades. LoadRunner helps developers predict how well applications will perform under a variety of load conditions by emulating large numbers of users.

Like Bertuccio, Berzins finds that automated testing brings consistency, in that systems are tested in a similar fashion, which is especially important given that they are usually interrelated. In addition, because Australia Post's systems are so large, she says manual testing would be impractical.

According to Berzins, applications are initially tested by the area that develops them, together with the intended user group. They are then subject to a production assurance test to see if they are going to have an impact on other systems. If they don't pass this test, Berzins says, they don't go anywhere. A quality assurance group is also involved in applications development from the outset, as are Australia Post's internal audit people.

As Berzins puts it: "Obviously, it's a lot better and more efficient to have them [testing tools] involved in the beginning in order to get things right, rather than suffer the pain and extra cost and time of having to repeat work."

Berzins admits she and her team are under pressure to deliver and that sometimes people suggest they cut down on testing time.

However, this is an area in which she won't compromise, because of the huge risk involved. From discussions with her peers in other organisations, she believes testing is being taken more seriously these days.

As was the case with Australia Post's SAP implementation and upgrades, load or stress testing is a typical application of automated testing tools. The reason is that it is usually impossible to organise and replicate the workload of hundreds or even thousands of users of a large application without putting it into the production environment. Automated load-testing tools, on the other hand, use software robots to generate that kind of transaction traffic, pit it against the application and see how it performs.
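
As a simple illustration of what those software robots do, the generic Python sketch below spawns a number of virtual users, has each issue requests against a test endpoint and collects response times. It is not any particular vendor's tool, and the URL and user counts are placeholder values.

```python
# A minimal sketch of a load-testing "software robot": many virtual users
# issue requests concurrently and response times are recorded.
# TARGET_URL, VIRTUAL_USERS and REQUESTS_PER_USER are illustrative values.
import time
import statistics
import concurrent.futures
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # hypothetical test endpoint
VIRTUAL_USERS = 100
REQUESTS_PER_USER = 10


def virtual_user(user_id):
    """One simulated user issuing a series of requests, returning response times."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
        except Exception:
            timings.append(float("inf"))  # record a failed request
            continue
        timings.append(time.perf_counter() - start)
    return timings


def run_load_test():
    all_timings = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        for timings in pool.map(virtual_user, range(VIRTUAL_USERS)):
            all_timings.extend(timings)
    ok = [t for t in all_timings if t != float("inf")]
    print(f"requests: {len(all_timings)}, failures: {len(all_timings) - len(ok)}")
    if ok:
        print(f"median response: {statistics.median(ok):.3f}s, "
              f"95th percentile: {sorted(ok)[int(0.95 * len(ok)) - 1]:.3f}s")


if __name__ == "__main__":
    run_load_test()
```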

Withstanding the Load

Load testing played a key role before Westpac's January launch of its online stock trading service, Westpac Broking, because the system needed to be able to withstand heavy trading periods with a high number of concurrent users.

Westpac Broking customers can trade all ASX-listed shares by phone or Internet and access free company profiles, market depth information and portfolio reports. At the end of the first week of March this year, Westpac Broking had 12,000 registered customers, 90 per cent of whom used the Internet, with trades peaking at around 900 a day.

"This is a highly competitive, transparent business - if the system were to go down, the impact on our customers and our reputation would be tremendous," says Dorai Sundaram, senior IT manager for Westpac Broking. "We offer Westpac Broking as part of a total solution to our existing bank customers. Any downtime could potentially alienate customers valuable to other areas of the bank and impact every division of Westpac.

"Therefore, we needed to be absolutely confident that the system could not only perform at optimum levels during peak periods of trade, but that it could scale up with customer demand to handle more concurrent users."

Westpac began testing the Westpac Broking system in August 1999. Compuware provided Westpac with its load-testing tool, QALoad, for trial under a proof-of-concept scheme that lets organisations pilot the tool and confirm it meets their criteria. Westpac's criteria centred on delivery of a working solution within two days of testing, with the load-testing tool emulating both the volume of trades and trading patterns.

"We needed to simulate appropriate load conditions, measure response times and compare them against acceptable levels. QALoad allowed us to simulate a load test on a client/server application without involving end users or their equipment. It performed the load test by creating virtual users that simulate physical users. It then generated graphs and reports to help us understand the performance of the system," Sundaram says.

On day one, Westpac tested up to 10,000 concurrent users to ensure the system could perform well, even during the heaviest trading periods. Testing included a combination of typical investor actions, such as 10 per cent of them trading, 70 per cent examining their portfolios and 20 per cent making general enquiries.
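
That kind of workload mix can be expressed very simply. The sketch below, again generic Python rather than QALoad itself, assigns each virtual user an action according to the proportions Westpac describes; the scenario names and counts are placeholders.

```python
# A minimal sketch of the workload mix described above: each virtual user is
# assigned an action according to the stated proportions (10 per cent trading,
# 70 per cent viewing portfolios, 20 per cent making general enquiries).
import random

SCENARIO_WEIGHTS = [
    ("trade", 0.10),
    ("view_portfolio", 0.70),
    ("general_enquiry", 0.20),
]


def pick_scenario():
    """Selects a scenario name according to the configured traffic mix."""
    return random.choices(
        [name for name, _ in SCENARIO_WEIGHTS],
        weights=[w for _, w in SCENARIO_WEIGHTS],
    )[0]


def simulate_users(n_users=10_000):
    """Counts how many virtual users would run each scenario in one test pass."""
    counts = {name: 0 for name, _ in SCENARIO_WEIGHTS}
    for _ in range(n_users):
        counts[pick_scenario()] += 1
    return counts


if __name__ == "__main__":
    print(simulate_users())  # roughly 1,000 / 7,000 / 2,000 for 10,000 users
```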

Sundaram says he expects online trading to supersede traditional stockbroking as the preferred method of trading, and that he will consider using the tool again for future projects.

A Word of Caution

Mackinnon agrees with Berzins that organisations are now waking up to the reality and cost of getting software development wrong. However, he offers a word of caution about automated testing tools.

"Unless you have someone developing the testing strategy and plan, who understands software quality assurance and testing, then you're not going to use automated testing tools appropriately," he says. "They don't solve the problem of actually understanding how to go about testing complex environments, what the general threats are that you should be looking to address, and how to make sensible judgements about what amount of effort you'll put into testing what.

"That requires someone with skills and understanding of application architectures and what can go wrong during development and implementation, and who can convert that into a set of testing plans that will provide adequate assurance in the most cost-effective manner. You're never going to test everything, and you'd be stupid to even if you could, because it would take far too long and cost far too much."

The ROI of Testing

Attempting to cost-justify the purchase of automated testing tools against a single project or implementation can be difficult. However, ERP vendors, for example, tend to release a new version of their software every 12 or 18 months. Unless an organisation conducts regression testing on each new version before going live to ensure it works appropriately, it leaves the business exposed.
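
As a rough illustration of what such a regression suite looks like, the sketch below re-runs the same business-level checks against each new release. It uses plain pytest-style tests in Python; the InvoiceModule class is a placeholder standing in for whatever interface the real package exposes, not any vendor's actual API.

```python
# A minimal sketch of regression testing a new package version before go-live:
# the same suite of business-level checks is re-run against every release.
# InvoiceModule is a placeholder; a real suite would target the ERP system
# (or a test instance of it) instead.
import pytest


class InvoiceModule:
    """Stand-in for the module under test."""
    def __init__(self):
        self.balances = {"ACC-TEST-001": 0.0}

    def post_invoice(self, account, amount):
        if amount <= 0:
            raise ValueError("invoice amount must be positive")
        self.balances[account] += amount

    def balance(self, account):
        return self.balances[account]


@pytest.fixture
def module():
    # Re-created for each test so checks stay independent across releases.
    return InvoiceModule()


def test_invoice_updates_balance(module):
    """Posting an invoice should increase the account balance by its amount."""
    before = module.balance("ACC-TEST-001")
    module.post_invoice("ACC-TEST-001", 250.00)
    assert module.balance("ACC-TEST-001") == pytest.approx(before + 250.00)


def test_zero_amount_invoice_rejected(module):
    """A business rule that should survive every upgrade unchanged."""
    with pytest.raises(ValueError):
        module.post_invoice("ACC-TEST-001", 0.00)
```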

According to The Standish Group in its 1997 report, "Making Quality Pay: What's Your ROI?", organisations should not measure the return on investment in automated testing by time, cost or shortened delivery cycles, but by the increased quality achieved in an equal time slot.

The Standish Group advises conservatism in calculating a tool's ROI as investments are traditionally large and the payback period can be lengthy. The total investment, the report continues, includes the price of the testing tool, quality assurance staff salaries, training, test equipment and test maintenance time. The bottom line, it says, is that testing applications entails a lot of work, even with an automated testing tool, and estimates that a successful testing process requires more than half the total application development cost.
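
As a back-of-the-envelope illustration of that cost composition, the sketch below simply adds up the investment components the report lists. The dollar figures are placeholders chosen for illustration, not figures from the report.

```python
# Illustrative cost composition only; all figures are placeholders.
# The point is that the tool licence is one line in a much larger total.
testing_investment = {
    "testing_tool_licences": 150_000,
    "qa_staff_salaries": 400_000,
    "training": 40_000,
    "test_equipment": 60_000,
    "test_maintenance_time": 100_000,
}

application_development_cost = 1_400_000  # illustrative total project figure

total_testing_cost = sum(testing_investment.values())
share_of_development = total_testing_cost / application_development_cost

print(f"total testing investment: ${total_testing_cost:,}")
print(f"testing as a share of development cost: {share_of_development:.0%}")
# With these placeholder numbers testing comes to roughly half the development
# cost, consistent with the report's estimate that a successful testing process
# requires more than half the total application development cost.
```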

According to the report, a common mistake organisations make is underestimating the amount of effort needed to maintain test libraries. After tests are created for the first release of an application, they must be updated for subsequent releases.

This, the report says, is where organisations fall short in reaping the rewards of automated testing tools, because the greatest returns come with testing subsequent releases of applications.

- K Power
