Les Fondy is confident that DHL Airways' fleet of 100 planes will take to the skies on Saturday, Jan. 1, 2000, as fully loaded as they would any day this year or next.
Albert Kern is sure that when Commercial Union Insurance's underwriters come to work Monday, Jan. 3, it will be as easy for them to write policies as it was the week before.
Brian D. Robbins is staking his reputation on every single one of Chase Manhattan Bank's approximately 1,600 locations opening for business on the same Monday morning without a hitch.
But how can these executives be so confident as they shepherd their Y2K conversion projects toward completion? Because they're not simply crossing their fingers and hoping for the best. They're deploying massive test efforts, often with the aid of outside consultants and third-party software tools, to make sure broad-ranging, complex Y2K projects will be up and running when the clocks tick to 2000.
Tests are no more fun now than they were in school. But now, as then, your future depends upon them. With your Y2K project, there are different kinds of tests you need to run, and they involve a set of logistics almost as complicated as the remediation process itself. Pencils ready? Let's begin.
If You Test, You're in the Minority
For most companies, unfortunately, testing is at best a rusty skill set. The Software Engineering Institute at Carnegie Mellon University in Pittsburgh estimates that 67 percent of organisations have no formal Y2K testing program in place. Even the praiseworthy 33 percent are finding Y2K testing to be knottier and more problematic than anything they've done before. "Dates are just so all-encompassing that when you start messing with them, every aspect of the system -- and all the interrelationships between systems -- needs to be tested," says Albert Kern, an assistant vice president and the Y2K project leader at Boston's Commercial Union Insurance.
To add to the burden of making sure all systems are functional, there's the necessity of documenting every last step of the process. "You can't just run these tests," says Greg Pope, president of Azor Inc., a test-tool supplier in Palo Alto, California. "You've got to have evidence that you've run the tests and gotten the correct results, because if you face litigation [later], you need to be able to prove that you did the testing and were in good shape."

With Y2K projects, testing dovetails with documentation. The two are longtime bugaboos of the IT industry. "What's the first thing that you skimp on when you're running up against a tight time frame and a tight budget?" asks Brian Keane, co-president of systems integrator Keane Inc. in Boston. "Testing, with documentation a close second."

Yet testing is expected to represent half a Y2K project's effort and expense, according to the Gartner Group Inc. Some project leaders, like Nabisco CIO Joseph Farrelly, see the test phase growing to 60 or even 75 percent of the total work. He points out that testing involves third-party software, embedded systems, PCs, custom-written software and, of course, mainframe operating systems and hardware. Even brand-new packages that purport to be Y2K-ready need to be poked and prodded for faults.
All that can be dubbed internal testing. Its various stages include unit testing (looking at individual program components), system testing (ensuring that the units work together) and integration testing (verifying that all the systems interact correctly). An entirely different facet is external testing -- making sure the organisation's remediated systems will work with those of its business partners, regulators and customers.
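The unit-testing stage above can be made concrete with a small example. Here is a minimal sketch, assuming a hypothetical "windowing" routine of the kind many remediation projects introduced to interpret two-digit years; the function name and pivot year are illustrative, not taken from any system in this story:

```python
def expand_year(yy, pivot=50):
    """Expand a two-digit year using a fixed window.

    Years below the pivot map to 20xx, the rest to 19xx -- a common
    Y2K remediation technique known as "windowing".
    """
    return 2000 + yy if yy < pivot else 1900 + yy

# Unit tests exercise the routine in isolation; system and integration
# testing would later combine it with the programs that call it.
assert expand_year(0) == 2000    # "00" must not collapse to 1900
assert expand_year(49) == 2049
assert expand_year(50) == 1950
assert expand_year(99) == 1999
print("unit tests passed")
```

Only after every such unit behaves correctly on its own does it make sense to move up to system and integration tests, where failures are far harder to localise.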
"You have to devise a sound strategy for getting everything tested on time," says Nabisco's Farrelly, whose company will be spending between US$25 million and $30 million on its Y2K project. "And you're going to need as much talent dedicated to testing as to any other part of the project."

How are Farrelly and his fellow IT executives grappling with the testing challenge?

Start Early and Test in Tandem

Testing can't be the last task on a Y2K project time line. "It starts in the beginning, not somewhere in the middle or at the end," says Les Fondy, senior manager for IS process solutions at DHL, the Redwood Shores, California-based U.S. division of DHL Worldwide, a $4 billion air-freight company.
"It takes a long time to set up the hardware and software environment for testing," adds Joseph Allegra, president of Princeton Softech, a software development company in Princeton, New Jersey, that makes a product that "ages" data for Y2K testing purposes. "It also takes a long time to develop your testing plan."

Testing should begin during the initial assessment phase, when systems are evaluated for their ability to function in a post-2000 world. Systems that are dubbed compliant need to be thoroughly put through their paces. "We've never found a system that was 100 percent Y2K compliant," says William Ulrich, president of Tactical Strategy Group Inc. in Soquel, California. He mentions a major travel reservation system that had been given a seal of approval -- only it didn't recognise 2000 as a leap year. "If you wait until the year 2000 to fix the bugs in all those systems you thought were OK, you're going to be in deep, deep water," Ulrich says.
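The leap-year defect Ulrich describes is easy to reproduce. The Gregorian rule has three parts -- every fourth year is a leap year, except century years, except century years divisible by 400 -- and code that stops after the second part wrongly treats 2000 as a common year. A minimal sketch of the bug and the fix (illustrative code, not the reservation system's actual logic):

```python
def is_leap_buggy(year):
    # Incomplete rule: assumes century years are never leap years.
    return year % 4 == 0 and year % 100 != 0

def is_leap(year):
    # Full Gregorian rule: centuries divisible by 400 ARE leap years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap(2000)             # 2000 is a leap year
assert not is_leap_buggy(2000)   # the shortcut gets it wrong
assert not is_leap(1900)         # 1900 genuinely was not
print("Feb. 29, 2000 exists:", is_leap(2000))
```

A system built on the buggy version would reject Feb. 29, 2000 as an invalid date -- exactly the kind of flaw that survives a superficial "compliance" review.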
There's another kind of testing that can be done at the front end of a project: baseline tests, also known as pretests. Applications that are determined to be unprepared for Y2K probably have other problems, too, which project managers may or may not want to address during the conversion phase. Baseline tests are also useful for gauging the state an application is in before it gets handed off to a service provider for remediation.
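In practice, a baseline test amounts to recording an application's outputs on a fixed set of inputs before remediation, then re-running the identical cases afterward and flagging any drift. A minimal sketch, with hypothetical stand-in functions and cases:

```python
def run_baseline(app, cases):
    # Capture the application's output for each case before remediation.
    return [app(c) for c in cases]

def diff_from_baseline(app, cases, baseline):
    # Re-run the same cases and report which ones no longer match.
    return [c for c, expected in zip(cases, baseline) if app(c) != expected]

# Stand-ins for an application before and after Y2K conversion;
# behaviour unrelated to dates should be identical.
legacy = lambda qty: qty * 10
remediated = lambda qty: qty * 10

cases = [1, 5, 100]
baseline = run_baseline(legacy, cases)
assert diff_from_baseline(remediated, cases, baseline) == []
print("no regressions against baseline")
```

The same comparison also documents, before hand-off, exactly which defects an application already had, so a service provider can't later be blamed for them.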
Once conversion begins, other testing tasks can proceed in parallel: the design of test cases, the development of an appropriate Y2K testing environment and the assembly of a testing team. And as soon as code is converted, it should be run through the testing process. "Early tests help validate the remediation process you're using," says DuWayne Peterson, president of IT consultancy DuWayne Peterson Associates in Pasadena, California, and the former CIO at Merrill Lynch & Co. Inc. "It may be that there's a flaw, and you'd like to know that sooner rather than later."

How Much Testing, Done by Whom?

A host of questions about testing need to be asked in the early stages of the Y2K project. How much is necessary? Who will do it? Where will it be done? Who decides when an application has been satisfactorily tested? While it's not necessary to have all the answers before proceeding with conversion, it's important to begin exploring the issues.
Most IT executives are feeling the quicksilver passage of time and sense that the best use of it is to focus on their critical applications. "Time is just spinning by," says Nabisco's Farrelly. "We've realised that all we have time to do is test the critical stuff really well." At Nabisco, noncritical applications like decision support systems will be tested last because, as Farrelly puts it, "If the date in a report is incorrect, we'll survive." DHL's Fondy says his company has prioritised its applications portfolio into four categories: critical, high, medium and low. Top priorities include things like aircraft maintenance, package tracking, EDI, customer service and logistics.
This approach can be termed "risk-based testing." The concept: identify the systems that, should they fail, pose the biggest threat to the enterprise's operations. Chase Manhattan's Brian Robbins offers a colourful metaphor for the concept: Risk-based testing is similar to the way white blood cells know how to attack the body's worst injuries first. "You don't want to be floating around looking at small bruises instead of major injuries," he says.
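One simple way to operationalise risk-based testing is to score each system by the impact of a failure and the likelihood of one, then test in descending order of the product. A sketch with a made-up inventory -- the names echo DHL's priorities, but the scores are invented for illustration:

```python
# Hypothetical inventory; impact (1-10) and likelihood (0-1) are invented.
systems = [
    {"name": "decision support",     "impact": 3,  "likelihood": 0.4},
    {"name": "aircraft maintenance", "impact": 10, "likelihood": 0.8},
    {"name": "package tracking",     "impact": 9,  "likelihood": 0.6},
]

def risk(s):
    # Expected cost of failure: how badly it hurts, times how probable it is.
    return s["impact"] * s["likelihood"]

# Test the systems whose failure would hurt most, first.
test_order = sorted(systems, key=risk, reverse=True)
for s in test_order:
    print(f'{s["name"]}: risk score {risk(s):.1f}')
```

However the scores are derived, the point of the exercise is the ordering: scarce testing time goes to the "major injuries" before the "small bruises".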
Unfortunately, there's no consensus on who should actually perform the testing.
Some, like Fondy at DHL, have hired full-time testers to do the work. His reasoning? "You might as well train them and keep them, and turn them into applications engineers later on." He doesn't see the fiscal or organisational sense in simply contracting for testing assistance. But others say that service providers or contractors can bring the kind of specialised expertise to the test phase that would be tough to find in full-time employees.
Still others contend that no additional resources are necessary. The same programmers who remediated the applications can do the testing. That's the view held by David Mounce, vice president of production operations at Options Clearing Corp. of Chicago, a government-regulated monopoly that clears every options trade for all four U.S. options markets. "I had the guys who were making the changes do the tests," Mounce says. "We started doing it with two separate groups -- programmers and testers -- but we stopped. It was totally inefficient. We didn't have time for the transition of knowledge from one staff to another."

But having programmers perform testing isn't something everyone recommends.
"You need a dedicated quality assurance team that's just doing the testing," argues Peterson, the former Merrill Lynch CIO. "Don't try to commingle it with everything else your programmers are doing."

Regardless of who does the testing, everyone agrees that one part of a test team is indispensable: the application experts. "The application experts need to be the intellectual leaders in testing," says Phil Carrai, president and COO of McCabe & Associates, a Columbia, Maryland, company that makes assessment, visualisation and testing software for the year 2000. "You need their expertise to develop test cases."

Even the companies that enlist service providers to assist with testing won't be able to skimp on the amount of time required of their application experts.
"The service providers aren't trying to understand your applications," says Princeton Softech's Allegra. "They're doing a mechanical remediation. It's really up to the application experts to supervise the testing."

The experts' services will be in high demand as the test phase gears up, since there's a finite (and, by the way, insufficient) number of people who thoroughly understand a given application. Chase Manhattan, for one, has run into bottlenecks as a result. "This isn't something you can throw money at," says Robbins. "These are people who have to know the business and know what the results should be."

Where Should You Test?

Options Clearing Corp. didn't want to run the risk of testing its debugged code on its production system. Mounce's solution: rent a near-replica of its mainframe from Comdisco Inc., a Rosemont, Illinois, company whose primary business is disaster-recovery services. Other companies choose to buy their own backup equipment for testing, outsource the job or test on their own production systems using logical partitions (LPARs). For Mounce, though, none of those was a viable option. "We needed an iron-clad guarantee that the testing wouldn't disrupt our operations," he says. "It's our job to make sure the options industry can perform, and we don't take that lightly."

Not surprisingly, Allan Graham, senior vice president for operations at Comdisco, endorses such an approach. "You want to be able to approximate your production system as closely as you can afford to," he says. But he notes that other tacks work, too, like temporarily taking down the production system, if that is an option, or augmenting existing technology for testing.
Most consultants working in the testing field seem to back Graham's view and advise their clients to create independent testing environments to avoid inadvertently contaminating live production data. Other drawbacks to testing on a production system include diminished performance and scheduling problems.
"You have to schedule testing around your peak utilisation periods, which prolongs the project and makes it more difficult to manage," says Roy Carter, president of Computer Related Concepts Inc., a Fairfax, Virginia, IT consultancy.
How Do You Know When You're Done?
The IS executives we've cited all aim to have testing completed by late 1998 or early 1999. How will they be sure they're done?

"The answer is, when you have enough documentation to tell your board, your audit committee and your trading partners that you have tested to the best of your abilities," says McCabe & Associates' Carrai. Azor's Pope advises that that milestone is rarely reached after just one or two test cycles. "People think they're just going to run one test and it'll work," he says. "They don't realise that fixes always cause new problems."

End users and internal auditors can help define the finish line for Y2K testing. Additionally, Fondy at DHL recommends something he calls "cross-peer inspections" -- getting programming and testing teams to examine each other's deliverables. "The idea is, let me take Joe from Application A and Pete from Application B and have them both inspect what the other has done," says Fondy.
Each of those three parties has a role in determining when testing is complete.
Users themselves need to ensure that the application is producing the desired output. Auditors must verify that there were no gaps in the testing process that could expose the company to liability. And peers within IT can hold each other to high standards on a project with plenty of potential to become a deadline-driven mad rush. But what's important is that all three groups share a set of criteria that establish what the organisation's goals are for Y2K.
"You need common definitions of compliance," says Robbins at Chase. "When is something tested and ready for the real world?"

What Happens After New Year's Day?

Is thorough testing something companies will resort to only in times of crisis, as with the millennium bug, or will it become standard operating procedure? In other words, can we expect the Software Engineering Institute's 67 percent figure to shrink as a result of Y2K? Again, there's no consensus.
"My hope is that this will be a retained competency, and it should be," says Howard Rubin, president of Rubin Systems and chairman of the computer science department at New York City's Hunter College. "Y2K is a wake-up call for bad technology habits."

The skeptics, however, say that some things never change. "A lot of people are treating this as a necessary nuisance that we have to put up with," says Commercial Union's Kern. "They're not trying to look beyond it and see what we can learn and use for the future."

Whether or not testing wedges its way into every organisation's culture, there's no denying that it has achieved something of a celebrity status as a result of Y2K. The stakes are high, and testing is crucial to a successful conversion effort. And some of the biggest testing hurdles -- for most companies -- still remain to be discovered. "We need to get a dialogue going around testing," says Judith List at Morristown, New Jersey-based Bellcore, which consults with telephone companies all over the world. "We're only beginning to scratch the surface to see what some of these challenges are going to be."

No matter how thorough an organisation's testing effort, there's always the chance that something unexpected will go wrong. Bad data from a business partner could corrupt a database, or an embedded system could wreak havoc on a plant floor. How to prepare for such unpleasant surprises? Contingency planning.
"You need to run the 'what ifs,'" says Pam Fredette, a solutions division president at Computer Horizons Corp. in Mountain Lakes, New Jersey. "If certain critical business applications fail, how will you get around that?"

Some IT executives scoff at the thought of both doing rigorous testing and constructing a Plan B. "Our contingency planning basically says we'll make it work," says Options Clearing Corp.'s Mounce. "We're not planning on failure."

That kind of attitude irks Tactical Strategy Group's Ulrich. "The guys who know about the problem won't admit that you need a contingency plan. It would be acknowledging the possibility of defeat," he says. "On the other side, the business people aren't aware of the risks of this problem. Because of that, no one's driving."

Someone needs to get in the driver's seat and begin drafting plans for workarounds and backups. "You need to take a look at having a fallback strategy for everything that's critical to the business," says Comdisco's Graham. "They may be Band-Aids, but they're better than nothing." Graham also recommends having a SWAT team on standby over the long weekend of Jan. 1, 2000, ready to attack unanticipated snafus.
At Brown & Williamson Tobacco Corp., the Louisville, Kentucky-based company's Y2K steering committee is responsible for outlining contingency plans. "I've asked everyone in the group to investigate their disciplines and areas," says CIO John Hanaberry. "What if that bank isn't ready, or this distributor or that government agency? How do we respond?" Spreading the work among multiple business units is a good way to make sure that all the important scenarios are explored, says Hanaberry; a small group doing contingency planning on its own is often too myopic.
And contingency planning, like every other aspect of the Y2K project, needs to begin as soon as possible. As Ulrich says, "The folklore suggests that the year 2000 problem is isolated to a brief period near the end of December 1999," but that's simply not the case. Applications will fail before that date as well as after it. Solid contingency plans ensure that any failures won't cripple the organisation.
(Scott Kirsner, a Boston-based writer and consultant, is working on a series of articles for CIO on the Y2K problem. He can be reached at email@example.com.)