CIO

Testing times for slack software development

Shrinking time-to-market deadlines degrade software quality

Local businesses are wasting up to 50 percent of software development budgets because they lack the resources, tools and knowledge to conduct efficient software testing, according to market analysts.

Gartner application development analyst Partha Iyengar said shrinking time-to-market project deadlines and naivety about the benefits of software testing mean that businesses are missing out on an opportunity to streamline and improve the quality of software development.

"The quicker turnaround of application development is affecting quality [and] regulatory issues and pressure to meet customer demand can directly impact the testing effort, which is why software testing will become centre stage," Iyengar said.

"A strong [software testing life-cycle] moves from testing the quality of software, to building quality into software, but this type of change is beyond most enterprises and most will need some form of third-party assistance.

"Software testing typically consumes about 25 percent of resources, including time frames and budgets; however, most companies spend 45 to 50 percent in a typical testing life-cycle," he said.

He said most money is lost in rectifying post-implementation mistakes which originate in the initial requirements phase.

Post-implementation application errors are more damaging to business because of shrinking time-to-market deadlines, which increase regulatory, customer and competitive demands.

While large US and European corporations have recognized the importance of software testing, those that outsource the process are struggling to tie it to software development.

Iyengar said this failure undermines the effectiveness of software testing because the testers will not fully understand the project requirements, and the disconnect is heightened when the process is outsourced.

"Software testing is only successful in a mature organization which has strong knowledge of software development methodology," Iyengar said.

He said the attitude Australian businesses have to software testing is so far behind the US that they "do not even consider [software] testing as separate from development".

However, testing is set to take off in Australia over the next few years beginning with the largest enterprises, as quality improvements become more obvious in larger projects.

Companies losing the most on lax project development processes will spearhead the shift, according to Iyengar, who said resilient businesses will augment in-house testing through slow, incremental changes, while enterprises stressed by market demands will opt for an outsourced quick fix.

Businesses must first create a solid benchmarking strategy if they want to re-engineer their software testing processes. Projects should be measured against a small number of critical metrics throughout the entire development phase, while the few industrial benchmarking controls that do exist can be referenced for anecdotal feedback.

"The lack of [industrial] benchmarking controls is a big problem for software testers because there isn't a culture of benchmarking in software development," Iyengar said, adding that benchmarking can also be used to measure the performance of outsourced contracts.

"Don't crawl over bugs-per-function-point or per capture; you need five or 10 basic metrics to get meaningful results, like the [speed of] development versus quality [output], and they should be taken over six months and balanced against industrial data," he said.
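Iyengar's advice (a handful of basic metrics, tracked over six months and balanced against industry data) could be sketched as follows; the metric names, monthly figures and baseline values below are hypothetical illustrations, not Gartner data:

```python
# A minimal sketch of tracking a small set of testing metrics over six
# months and comparing their averages against an industry baseline.
# All names and numbers here are invented for illustration.

from statistics import mean

# Six months of project measurements (illustrative values only).
monthly_metrics = {
    "defects_per_kloc":  [4.2, 3.9, 4.5, 3.6, 3.1, 2.8],
    "test_coverage_pct": [61, 63, 66, 70, 72, 75],
    "defect_fix_days":   [5.5, 5.0, 4.8, 4.1, 3.9, 3.5],
}

# Hypothetical industry baselines to balance the results against.
industry_baseline = {
    "defects_per_kloc":  3.5,
    "test_coverage_pct": 70,
    "defect_fix_days":   4.0,
}

def benchmark(metrics, baseline):
    """Return each metric's six-month average and its delta vs. the baseline."""
    report = {}
    for name, values in metrics.items():
        avg = mean(values)
        report[name] = {
            "average": round(avg, 2),
            "delta_vs_industry": round(avg - baseline[name], 2),
        }
    return report

for name, result in benchmark(monthly_metrics, industry_baseline).items():
    print(f"{name}: avg={result['average']}, delta={result['delta_vs_industry']}")
```

The point of keeping the metric set this small is that each number stays interpretable over time, which is what makes the six-month trend (rather than any single reading) the meaningful signal.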

The need for software testing is determined by the context, scale and risk environment of the project, according to the Software QA and Testing Resource Center. The Web site claims testing may not be required for small projects if software programmers are experienced and skilled, and initial testing is employed.

"In some cases [where] an organization is too small or new to have a testing staff, it may be appropriate to use contractors or outsourcing, or to adjust the project management and development approach," the site states.

"Inexperienced managers sometimes gamble on the success of a project by skipping testing or getting programmers to do post-development functional testing of their work, [which is] a high-risk gamble."

Greg Sherwood, unit head of Sydney-based Squiz.net's open source content management tool MySource Matrix, said software testing should be retained in-house because the business has a better understanding of its client and product requirements than an outsourcing company does.

"We know our product best and we know how our clients use it and what they are most concerned about [and] we tailor our tests and testing frameworks based on that knowledge," Sherwood said, adding that testing the surface functionality of its flagship product is inadequate because it is highly complex.

"You need to measure the effectiveness of your software testing processes and constantly improve them. Each organization needs to find metrics that suit their product, like code coverage or database performance, and measure them constantly because of product changes during the development phase."

However, Sherwood said that while automation tools are available for most programming languages and unit testing has grown in popularity, open source products are "notorious for being badly tested". He said automation is the best way to ensure consistent quality of testing.
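The kind of automated unit test Sherwood describes can be sketched in a few lines of Python; the helper function and test case below are hypothetical illustrations, not code from MySource Matrix:

```python
# A minimal sketch of an automated unit test using Python's standard
# unittest framework. The function under test is an invented example.

import unittest

def normalise_asset_name(name: str) -> str:
    """Collapse repeated whitespace and lower-case an asset name."""
    return " ".join(name.split()).lower()

class NormaliseAssetNameTest(unittest.TestCase):
    def test_collapses_whitespace(self):
        self.assertEqual(normalise_asset_name("  Home   Page "), "home page")

    def test_clean_name_unchanged(self):
        self.assertEqual(normalise_asset_name("news"), "news")

# Run the suite programmatically; in practice a build or CI system would
# run tests like these on every change, so regressions surface immediately.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormaliseAssetNameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite runs without human intervention, the same checks are applied identically on every build, which is the consistency-of-quality argument Sherwood makes for automation.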

Sherwood disputed figures claiming that 85 percent of defects in software projects originate in the requirements phase, saying software defects also creep into the product during the system architecture design and development phases.

"It's easy to see that fixing bugs in post-implementation is significantly more expensive than bug fixes made before the product is released to users, when you consider the time it takes to upgrade and test a client's system after a bug fix," Sherwood said.

"Is it 100 times more costly? That depends on how many client systems are affected by the bug out in the wild and how those systems are patched."