Inside the Software Testing Quagmire

Software testing reveals the human failings behind the code. That's why it can become a never-ending exercise in denial. Here are five questions that you can ask to help you cut through to testing's root problems.

There are few things worse than being responsible for a software project mired in testing. To those waiting to use the software, the project seems done. But it isn't. The software needs to be tested to ensure it functions properly and is stable and reliable. And the project manager's frustration mounts as days turn into weeks, weeks turn into months, and - heaven forbid - months turn into years.

This process is doubly frustrating for CIOs removed from the action. Testing managers - who may not be skilled at communicating with CIOs - can distract attention from the real problems by being overly detailed or focusing on irrelevancies.

CIOs must assess the situation for themselves, asking the testing manager the following five questions face to face and observing how widely his pupils dilate.

Question #1: Is the software's functionality clear, complete, documented and subject to a formal change process?

You're really asking: Are we trying to hit a moving target?

You're trying to determine: Whether the problem is that the software is poorly defined or that the project's scope has changed.

Interpreting the response: If the software's functionality is not fully documented or is not clear, testers will have difficulty determining whether it meets the project's goals. When functionality is subject to interpretation, test cases might not reflect what was originally intended. If functionality changes because the organization continually adds, modifies or deletes functions, testers will have difficulty keeping up. Only changes critical to the integrity of the software should be allowed.

A related symptom to check: Intense debate about requirements and test results.

Question #2: Is development complete?

You're really asking: Are the testers essentially starting over with each new release because there are so many changes?

You're trying to determine: Whether the software has been released for testing prematurely, or whether changes are uncontrolled.

Interpreting the response: Software released prematurely will differ markedly from the previous release. With all the changes, testing performed on a previous release might no longer be relevant to the new one. If testing of one release is not completed before the next one arrives, there will be no comprehensive understanding of release defects.

After each release, the software will change due to user feedback. But problems will occur if developers and testers do not agree about which changes will be made. If developers decide to implement sweeping design changes or to improve software already functioning correctly, the testers will be the dubious beneficiaries of releases that behave very differently from previous ones. Again, testing efficiency will be very low.

A related symptom to check: Complaints about the frequency of releases, about releases being delivered without notice or about significant changes in a release.

Question #3: Are test cases comprehensive and repeatable; are they executed in a controlled environment?

You're really asking: Is testing ad hoc or disciplined?

You're trying to determine: Whether testing is effective.

Interpreting the response: There should be a set of repeatable test cases and a controlled test environment where the state of the software being tested and the test data are always known. Without these, it will be difficult to discern true software defects from false alarms caused by flawed test practices.
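To see what "repeatable" means in practice, consider the minimal sketch below, written in Python with the pytest framework. It is illustrative only: the apply_discount function, its discount rates and the REQ-xx requirement IDs are hypothetical stand-ins, not drawn from any real project.

```python
# A minimal sketch of a repeatable test case, not any project's actual
# suite. The function under test and the requirement IDs are
# hypothetical, used only to illustrate the discipline.

import pytest

def apply_discount(subtotal: float, customer_tier: str) -> float:
    """Hypothetical function under test: tiered order discounts."""
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(subtotal * (1 - rates[customer_tier]), 2)

# Known, version-controlled test data: a controlled environment in
# miniature. Each case fixes the input, the expected output and a
# reference to the requirement it verifies.
CASES = [
    # (subtotal, tier, expected, requirement)
    (100.00, "standard", 100.00, "REQ-12"),
    (100.00, "silver", 95.00, "REQ-13"),
    (100.00, "gold", 90.00, "REQ-14"),
]

@pytest.mark.parametrize("subtotal, tier, expected, req", CASES)
def test_apply_discount(subtotal, tier, expected, req):
    # Repeatable: the same input always yields the same checked result,
    # so a failure signals a real defect, not drifting test data.
    assert apply_discount(subtotal, tier) == expected, f"violates {req}"
```

Because the inputs, expected outputs and requirement references are fixed and version-controlled, any tester can rerun a case and get the same answer; a failure then implicates the software rather than the test.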

A related symptom to check: If temporary testers are conscripted from other parts of the organization to "hammer" the software without using formal test cases, it means the organization is reacting to poor testing by adding resources to collapse the test time, rather than addressing the problem's root causes.
