The Truth About Cyberterrorism

Reader ROI

- Gain a clear and concise definition of cyberterrorism.

- Learn which threats are plausible and which are not.

- Understand what the threat of cyberterrorism means to the future of security.

The real threat is to critical data, not to property. That's what CIOs should be focusing on.

It is a crisp winter morning. The sun rising over Boston Harbour is blinding as it strikes the white oil drums on the industrial edge of Chelsea. Marcus Kempe, director of operations support at the Massachusetts Water Resources Authority (MWRA), is showing off the crescent-shaped bank of computers that controls the flow of water pumped into 2.5 million faucets across eastern Massachusetts every day.

These are the computers that would have to be hacked in order to carry out a cyberattack. And these days, most of Kempe's job involves planning against such an eventuality. But he is not particularly worried about it.

"You're talking about ridiculous barriers," says Kempe, who is a 25-year veteran of the MWRA and oversees its computer infrastructure. "Could a computer attack get us to a high-consequence event? Probably not."

First, Kempe says, a hacker would have to worm into the IT infrastructure. Then, he'd have to hop over a firewall and slip into the MWRA's SCADA (supervisory control and data acquisition) system (the crescent-shaped bank of computers) through one of two very narrow access points. Finally, he would have to plant surreptitious code that would allow remote control of the chemical distribution or even the flow of water itself. "You're talking about three hacks," says Kempe. "To us, cyberterrorism is a lower-level threat."

In the US, since September 11, it's been almost unpatriotic to suggest that the threat of cyberterrorism is anything other than dire. But CIOs and security experts are beginning to challenge the assumption that a hack on that nation's critical infrastructure will be the next big terrorist outrage. In fact, cyberterrorism may not be nearly as worrisome as some would make it. That's because it is utterly defensible. And CIOs can play a crucial role in the defence.

Defining the Threat

As was the case with so many New Yorkers, September 11 inspired Ed Cannon to get involved. Within a couple of weeks of the attack, Cannon, executive vice president and CIO of the global marketing communications company Grey Global Group in New York City, had formed the Information Civil Defence Group (ICDG). He envisions ICDG as a sort of neighbourhood watch group, where the neighbourhood is the private sector's critical infrastructure and the residents are concerned CIOs. ICDG will stage seminars for CIOs and work with Washington on security standards around critical infrastructure.

So far, Cannon has signed up 100 volunteer CIOs and has met with Richard Clarke, the special adviser to the president for cyberspace security in the Office of Homeland Security. An estimated 80 to 90 per cent of critical technology infrastructure resides in the private sector, which puts private sector CIOs in a unique position of leadership: they run the systems that need to be protected against terrorist threats.

"We CIOs have the responsibility for managing this," Cannon says. "We have a real role to play."

The first order of business is defining cyberterrorism. Since September 11, threats once considered digital aggravations have been tagged cyberterrorist provocations. Suddenly, encryption was not a software feature but a weapon in the cyberterrorist's arsenal. Knocking out e-mail was cyberterrorism.

One widely quoted security consultant warned of the threat posed by the fans on computers, which can "breathe" and spread deadly bacteria. Is this, then, a form of "bio-cyberterrorism"? The term lost its meaning as it stretched to keep pace with flights of anxiety.

This is how the National Infrastructure Protection Centre (NIPC) under director Ron Dick, a key figure in the US government's infrastructure protection scheme, defines cyberterrorism: a criminal act perpetrated through computers resulting in violence, death and/or destruction, and creating terror for the purpose of coercing a government to change its policies.

So to qualify as cyberterrorism, an act must fulfil two criteria: a political motivation and a destructive result. But computer attacks usually satisfy only one - the motivation. It is far more difficult to cause destruction with computers. If the phones don't work, that's annoying, perhaps costly, even dangerous, but not in and of itself destructive. And consider the most often cited cyberterrorist threat - shutting down the Internet. Is that really destructive, or just a massive inconvenience? Most experts believe it's the latter. Very few malicious uses of technology qualify under Dick's definition of cyberterrorism.

"It's a bad word. Cyberterrorism is not terrorism in cyberspace because there is no terror there," says security expert Bruce Schneier, CTO and founder of California-based Counterpane Security. He distinguishes between the term cyberterrorism and what he calls "cyberhooliganism", which would include viruses, Web site defacement and so forth. "Computers can be a vector for terrorism just as the mail system has become a vector for terrorism. But if the mob goes and shoots up a convenience store, we don't call that terrorism.

"Think of the horrible crimes we don't call terrorism," says Schneier. "So if you shut down the Internet" - a feat Schneier and others warn is plausible, even likely - "yes, it's a huge malicious attack, but it's not terrorism."

Both Schneier and Dick agree that cyberterrorism encompasses two clear subcategories of threat.

1) The physical infrastructure threat: compromising critical systems to severely affect critical physical infrastructure, such as power grids, water and sewer systems, dams, hospital equipment, pipelines, communications, global positioning satellites, air traffic systems or any other networked system, which would result in death and/or destruction.

2) The critical data threat: compromising critical computer systems to steal or irreversibly damage vital data, such as the Social Security database, a large financial institution's records or secret military documents, which would result in death, destruction and/or catastrophic economic turmoil.

Of these scenarios, the first - hacking to terrorise utilities - has dominated the cyberterrorism dialogue. But are these systems really networked, and if they are, why did companies and public agencies open critical infrastructure to obvious vulnerabilities?

Seduced By The Web

To answer those questions, we have to go back and look at how infrastructure adapted to the introduction of computers. In the 1970s, computers made it possible to network command and control functions of systems like the power grid or dams or communications switches. Remote control was considered a boon to routine maintenance; it created new efficiencies. Ironically, it was also deemed a good defence against terrorism; the government feared onsite attacks. The notion that someone could manipulate a computer to affect infrastructure was considered a bit of science fiction.

In the 1980s and early 90s, SCADA systems matured and came to dominate critical physical infrastructure. With SCADA, power companies could remotely control functions like load dispatching (balancing transformers so that no one power station gets overloaded). Networked SCADA looked like the future of utility maintenance and control.

The Nuclear Regulatory Commission (NRC) was the exception. After a short evaluation, the NRC decided to forbid remote control at nuclear plants. Then in the late 90s, critical utilities were, like everyone, barraged by consultants promising unprecedented cost savings through the Internet.

Kempe at the MWRA recalls meetings during the Web's headiest days, when he and his staff discussed the merits of opening up his systems to the Internet. "It was so tempting," he recalls. "It looked so wonderful - the cost savings, the efficiencies." In the end, the MWRA resisted temptation.

Not everyone did. Many utilities - particularly smaller outfits and, for whatever reason, power companies - embraced the Web. Those that dove in either didn't recognise or didn't care at the time that they were also embracing the security weaknesses inherent in the public network.

Today, there remains an unsettling lack of understanding about just how safe utilities are from cyberattack. Even NIPC director Dick seems to have no idea as to the overall state of networked infrastructure. Asked if he knows which utilities are vulnerable, Dick says: "I don't know that anyone knows."

Why Cyberterrorism Is Not Worth the Worry

This much is known: some critical computers are vulnerable. In 1997 a hacker shut down control tower services at the Worcester, Massachusetts, airport. The incident didn't cause any accidents, though service was affected. Three years later, a General Accounting Office report suggested the Federal Aviation Administration's computers were vulnerable. Locally, in Maroochy Shire, Queensland, in April 2000, a disgruntled consultant-turned-hacker compromised a waste management control system and loosed millions of gallons of raw sewage on the town.

The hacker had deep knowledge of the system, and he had stolen sewage-control software on his laptop. He spent two months getting into the system from the outside.

The good news - besides the lack of human casualties (marine life died) - is it took this former insider 46 tries to unleash the waste; the bad news is that those managing this critical infrastructure missed his first 45 attempts.

It wasn't cyberterrorism. And even with such incidents on the record, many view the cyberterrorist threat to physical infrastructure as largely implausible.

Why? For one, experts say terrorism is like lightning. It takes the path of least resistance to its end. And, right now, it's easier to blow something up than to figure out how to damage it by hacking into and manipulating a computer system.

Take the MWRA. After September 11, Kempe's first order was not to lock down computers; it was to erect Jersey barriers, weld manhole covers and call in the National Guard. Terrorists want to make an immediate impact, and cyberterrorism is largely quiet.

"Terrorists need to make a big splash, to draw headlines," says Mike Hager, vice president of security at Colorado-based Oppenheimer Funds, who was at the World Trade Centre on September 11 and escaped after both planes hit. "The type of cyberterrorist attack pulled off would have to be huge." According to Hager, a hacker turning out some lights wouldn't convey any terrifying message. "The terrorists [on September 11] could have hacked into the power system of the World Trade Centre." They didn't, because that wouldn't have made a statement.

Security expert Rob Rosenberger feels much of the rhetoric about cyberterrorism is political posturing to gain funding. "The information-war people say this cyberterrorist threat is out there, but they never provide any plausible scenarios," says Rosenberger, director of an independent Web site that squelches virus myths and general computer security hysteria. "I'm asking for reality, and I'm not getting it."

The Real Threat

Rosenberger fears the second scenario far more - cyberterrorist attacks that destroy critical data. And he's not the only one.

The general state of data security is woeful - again, thanks to the Web. Despite unprecedented spending on security in the past three years, more hacks than ever succeed, they are easier to create and carry out, and they produce ever more devastating results. Most of these threats work not through disablement but through corruption: tricking a system into doing the wrong tasks while it appears to be operating normally.

Parasites - tiny computer programs that live in databases and slowly corrupt the data and its backups - could wreck a crucial database like Social Security. Or a hacker could penetrate a chemist chain's network or hospital database, causing fatal medical errors when a patient takes a prescription drug. If you want to raise hell on airlines, you hack the reservation system, says Schneier. If you want to cyberterrorise airlines, you hack the weights and measures computers that control planes' fuel and payload measurements.

Such "fringe systems" are seen as the most vulnerable to data corruption. "The threat to data is absolutely more of a concern," says Hager. "It's so much easier to attack, and there are so many more targets."

In any case, the threat of cyberterrorism is deemed most plausible as a supplement to a larger terrorist attack. In other words, we shouldn't think about cyberterrorism as the next great threat after the physical horror of aircraft hijacking and the biological horror of anthrax. Rather, cyberterrorism is something smaller that will be used to amplify those far greater horrors.

"I keep going back to September 11 and wondering how bad it would have been if the Code Red worm hit at the same time - the level of anxiety and panic that would have caused," says Grey Global's Cannon. "Having e-mail was one of the saving graces of that day."

The good news is that protecting against any security threat protects against cyberterrorism. Kenneth Niemi, CIO of the Minnesota State University System, learned that recently when he faced a two-and-a-half-week employee strike. It turned into a de facto antiterrorist exercise. Niemi found himself planning a defence against disgruntled employees who possessed the two keys to any security breach - knowledge and access.

Niemi's greatest takeaway from this exercise was how much physical and IT security should and can intersect (for more information on this, read "How to Plan for the Inevitable"). Since September 11, the trend toward combining aspects of IT security with onsite security has accelerated. "We made key card access enforceable 24 hours a day. We require certain employees to take their laptops home in case we need to deal with a situation remotely," Niemi says.

Niemi also formalised the process of registering guests who enter his building and is adding computer lab surveillance.

Cannon has also tightened physical security by revoking several employees' access to the data centre. He also moved many consoles out of the data centre, all in an effort to reduce traffic near critical systems.

The MWRA already has tight integration of physical and network security. To begin with, the computers are in a locked room, which is accessible by key card and manned 24/7. Visitors check in and check out at the front desk, and after visitors leave, their host sends a memo to senior management detailing the visit.

SCADA connects through a private line (soon, via microwave) to pump stations and reservoirs. If something goes wrong at a water facility, an alarm sounds both on-site and at the SCADA operations centres. The alarm also flashes on the computers, and it can't be shut off until a formal acknowledgement of the alarm is made and physically logged by a person with clearance to do so.

"Roving crews" periodically go to MWRA pump stations and storage sites to check the integrity of the facilities and their connection to the control computers. Most of the sites are under surveillance.

"I see IT and these physical security rules meshing more and more," says Cannon. "Especially when you talk about disgruntled employees and screening. But it's a fine line. We want to treat employees like adults. Cut off too much access and you're saying you don't trust them."

Besides meshing physical and IT security, two other measures CIOs can take are to get involved and share information with each other. Joining Cannon in the Information Civil Defence Group, meeting with government groups like the National Infrastructure Protection Centre, raising awareness of the cyberterrorist threat within one's own company and opening security dialogues with peers are all important steps to take.

Six months after the September 11 attacks, there's a great deal of optimism among technology professionals about their ability to deflect the cyberterrorist threat.

"[Awareness] is a big reason for optimism," says Alan Paller, security expert and director of research at the Maryland-based SANS Institute. "The operations guy is getting a call from the vice chairman, someone really high up, who's asking what the company is doing about this threat. That conversation has never happened before. Underneath there are still a lot of vulnerable systems out there, but I believe cyberterrorism is very hard to pull off.

"My newest speech is about this topic, and it's not ‘look how life is ending'," Paller adds. "It's optimistic. There are many more reasons for optimism now than there were six months ago."

Debunking the Threat to Water Utilities

"CONSIDER THIS: an operator at a water treatment facility presses a button to add a certain measurement of chemicals to untreated water. Instead of doing so, the computer dumps twice the amount of chemicals, an amount way above the maximum safety zone. The resulting excess creates poisonous toxins in the water, and when it is distributed to individual homes, entire communities fall ill. Investigators and the public are left asking: ‘How did this occur?' The answer: a computer bug known as a Trojan horse."

From Hardening America's Public Utilities Against the Threat of Cyberterrorism, by Jason B Lee and Steven E Roberts.

Jason Lee and Steven Roberts, risk mitigation and security experts, postulate that a simple Trojan horse hack can result in cyberterrorism. Is this credible or simple fear-mongering? CIO (US) went to the operations centre of the Massachusetts Water Resources Authority (MWRA) to find out.

What we found was a system that would be difficult to hack into and any number of best practices for securing systems against cyberterrorism. Here's the skinny. First, a hacker would need access to the MWRA computers. We are in a locked room accessible by key card and manned 24/7. To get in, you must check in at the facility's front desk (and then check out later), offer your credentials, wear a temporary badge and be with an escort at all times. After you leave, your host will send a memo to senior management detailing the visit for the record.

The computers we're looking at distribute water throughout much of eastern Massachusetts. An hour or so west - near the Wachusett Reservoir - is an identical crescent of computers that monitor water quality and control the chemicals that enter the water, according to Marcus Kempe, director of operations support at the MWRA.

Together, these two banks form the MWRA's Scada system. Scada (pronounced "scay-da") stands for supervisory control and data acquisition; most public utilities rely on a highly customised Scada system. No two are the same, so hacking them requires specific knowledge - in this case, knowledge of the MWRA's design and access to that customised software.

Scada is not networked, except in two places. One, a dial-up modem, is offline. Only one person has clearance to use it. Turning it on must be done manually by someone with clearance at the facility. And two, there is a link to the MWRA's general IT infrastructure through a program called Plant Information (PI). PI gives a small set of supervisors with the highest clearance a one-way view of data about the water system. They can look, but they can't touch. This data can also be piped into a war room around the corner from us in the operations centre, which is used for incident response.

If a hacker somehow got into Scada, he would need user names and passwords to gain control of the command and control computers; he would need a way to either make changes undetected - though someone is watching the system around the clock - or hide the fact that he is making changes. And he would need to work fast: systems lock after a few minutes of inactivity and can't be reactivated without a password.
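That inactivity lockout works like a screen lock with a hard timeout. Here is a rough sketch in Python of the idea; the class, the five-minute figure and the password check are invented for illustration, not the MWRA's actual software:

```python
import time

# Hypothetical sketch of an inactivity lockout: the console locks itself
# after a quiet period and refuses commands until the password is re-entered.
# The five-minute timeout is an invented figure, not an MWRA parameter.
LOCK_AFTER_SECONDS = 300

class Console:
    def __init__(self, password):
        self._password = password
        self._last_activity = time.monotonic()
        self.locked = False

    def _touch(self):
        # Lock if too much time has passed since the last activity.
        if time.monotonic() - self._last_activity > LOCK_AFTER_SECONDS:
            self.locked = True
        self._last_activity = time.monotonic()

    def run_command(self, cmd):
        self._touch()
        if self.locked:
            raise PermissionError("console locked; password required")
        return "ran " + cmd

    def unlock(self, password):
        if password == self._password:
            self.locked = False
            self._last_activity = time.monotonic()
```

The point of the design is that the lock is checked on every command, so a console left idle cannot be picked up and driven by whoever happens to reach the keyboard next.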

Scada connects through a private line (soon, via microwave) to Programmable Logic Controllers, or PLCs, at the water facilities, which churn 250 million gallons of water per day from the reservoir to faucets. PLCs are dumb, rugged chips that basically never fail. They follow the lowest-level, most basic instructions (such as turn on and turn off) and report them to Scada ("I just turned on"). If something is wrong, the PLC says "Help me" in the form of an alarm. The alarm sounds at the water site and at the Scada operations centres. The alarm also flashes on the computers, and it can't be shut off until a formal acknowledgement of the alarm is made and physically logged by a human being.

Every month, about 1700 samples of the water are tested for unusual characteristics. "Roving crews" periodically visit MWRA pump stations and storage sites to check the integrity of the facilities and the electronics at them, such as the chlorine monitoring devices. Most of the water facilities are under surveillance and, currently, under the watch of the US National Guard.
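The latching behaviour described above - an alarm that flashes until a cleared human formally acknowledges and logs it - can be sketched in a few lines. This is a hypothetical Python illustration; the class, names and clearance check are invented, not the MWRA's actual Scada code:

```python
from datetime import datetime

# Hypothetical sketch of a latching alarm: once raised, it stays active
# until a cleared operator formally acknowledges it, and every raise and
# acknowledgement is written to a permanent log.
class LatchingAlarm:
    def __init__(self, source):
        self.source = source      # e.g. a PLC identifier
        self.active = False
        self.log = []             # permanent record of raises and acks

    def raise_alarm(self, message):
        self.active = True        # flashes until acknowledged
        self.log.append((datetime.now(), self.source, "RAISED", message))

    def acknowledge(self, operator, has_clearance):
        # The alarm cannot be silenced without a cleared human on record.
        if not has_clearance:
            raise PermissionError("operator lacks clearance to acknowledge")
        self.active = False
        self.log.append((datetime.now(), self.source, "ACKED by " + operator, ""))

alarm = LatchingAlarm("PLC-7")
alarm.raise_alarm("Help me")   # the PLC's distress signal
# alarm.active stays True until a cleared operator acknowledges it
alarm.acknowledge("duty-operator", has_clearance=True)
```

The design choice worth noting is that silencing the alarm and logging the acknowledgement are a single operation: there is no code path that clears the flashing state without leaving a record of who cleared it.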

But suppose a hacker got by all this and, through the use of a computer either at the operations facility or remotely, planted a Trojan horse that at some point ordered the system to dump too many chemicals in the water.

That water, chlorinated, leaves the reservoir and enters the pipes, where it receives pH adjustment and fluoridation.

Scada receives data about the water 10 minutes after it enters the pipes. It's checking for wild fluctuations in chlorine levels, which would indicate a reaction with some bacteria or foreign agent. There are several more chlorine checkpoints, at two hours downstream, three hours, and so forth. If the Wachusett Reservoir were in one endzone of a football field and your faucet were in the other, your water would be checked at its own one yard line, its 20, its 40, your 40, and then it would be stored at another facility at your 20 yard line and tested there too. It also receives a goal-line chlorine treatment as an extra safety measure. It would take your water anywhere from 12 hours to three days to go endzone to endzone.
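The checkpoint logic amounts to comparing successive readings and flagging any wild swing between them. A toy sketch of that check, in Python; the threshold and the sample readings are invented for illustration, not MWRA parameters:

```python
# Hypothetical sketch of downstream chlorine monitoring: successive
# checkpoint readings are compared, and a wild swing between any two
# adjacent checkpoints flags a possible reaction with a foreign agent.
# The 0.5 mg/L threshold and the readings below are invented figures.

def check_chlorine(readings, max_swing=0.5):
    """readings: chlorine levels (mg/L) at successive checkpoints."""
    alerts = []
    for i in range(1, len(readings)):
        swing = abs(readings[i] - readings[i - 1])
        if swing > max_swing:
            alerts.append((i, swing))  # checkpoint index, size of swing
    return alerts

# Gradual decay between checkpoints raises no alert;
# a sudden mid-stream drop would flag checkpoint 2:
normal = check_chlorine([2.0, 1.8, 1.7, 1.6])
suspect = check_chlorine([2.0, 1.8, 0.9, 0.8])
```

Because the water takes 12 hours to three days to cross the "field", even a late-tripping checkpoint leaves time to convene the war room before tainted water reaches a faucet.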

If, after all of this, toxic water made it to faucets because of a computer hack, and people got ill, the MWRA would convene in its war room, and proceed with a detailed emergency incident response plan that includes shutting down pumping facilities, and sending out emergency broadcasts, among other steps.

- S Berinato

The Spectrum of Cybermalfeasance

Level of threat to national security: greatest to least

1. INFORMATION WARFARE State-sponsored use of computers in military action.

2. INFORMATION COUNTER INTELLIGENCE State-sponsored use of computers to gain knowledge on a foe.

3. CYBERTERRORISM Use of computers to cause terror, death, destruction or massive economic turmoil, often by a party not affiliated with any state.

4. CYBER ORGANISED CRIME Use of computers by a cartel-like group for the purpose of stealing or trafficking, usually money.

5. INFORMATION VENDETTAS Use of computers, usually by an insider or sanctioned by an insider, to sabotage an organisation to create public embarrassment or to gain at the expense of that organisation.

6. CYBERCRIME Use of computers to steal money, credit card data or personal information for use in extortion schemes or to gain notoriety as a hacker.

7. CYBERHOOLIGANISM Using computers for digital vandalism and low-level destruction, such as Web site defacement, virus propagation or "hacktivism" - that is, using those tools to get a message across.

