When an employee turns on their own company, the results, from damaged networks and stolen data to halted operations, can be devastating.
It could rock the company even more than an outside attack because the insider knows where sensitive data is kept, what the passwords are and exactly how to hurt the company the most.
That's the driving force behind the work that Daphne Yao, associate professor of computer science at Virginia Tech, is doing on cybersecurity.
Yao, who received an NSF Career award for her human-behavior inspired malware detection work, is developing algorithms that will alert companies when an employee might be acting maliciously on their network.
And the Army Research Office has awarded her US$150,000 to continue her research into finding new ways to detect anomalies caused by system compromises and malicious insiders.
"The challenge is to understand the intention of the user and what the user is trying to do," Yao said. "Most are doing legitimate work and they're working on their own project and minding their own business. You need a detection system that can guess what the user is trying to do."
The crux of Yao's work is to figure out which employees are simply downloading sensitive files or logging onto the network in the middle of the night because they're trying to get their work done and which employees may be doing the same things because they're trying to sell proprietary information or crash the network.
According to a 2012 Symantec report, 60% of companies said they had experienced attacks on their systems to steal proprietary information. The most frequent perpetrators were current or former employees or partners in trusted relationships.
In 1996, for instance, a network administrator at Omega Engineering Inc. planted a software time bomb that eradicated all the programs that ran the company's manufacturing operations at its Bridgeport, N.J. plant.
The trusted IT administrator, Tim Lloyd, effectively halted the company's ability to manufacture, causing $12 million in damages and costing the firm its footing in the high-tech instrument and measurement market. Eighty workers lost their jobs as a result.
Lloyd was tried and convicted of computer sabotage in federal court.
More recently, in 2013 Edward Snowden leaked classified documents about global surveillance programs that he acquired while working as an NSA contractor.
The same year, Pfc. Bradley Manning, an Army intelligence analyst, was sentenced to 35 years for leaking the largest cache of classified documents in U.S. history.
These are the kinds of insider attacks Yao is working to stop.
The Army Research Office did not respond to a request for comment, but Dan Olds, an analyst with The Gabriel Consulting Group, said he's not surprised that the military is supporting research into detecting insider threats.
"The U.S. military is very concerned about security these days," added Olds. "The Bradley Manning leaks highlighted the massive damage that even a lowly Pfc can wreak if given access to a poorly secured IT infrastructure. The Snowden and Manning leaks have had a very severe impact on U.S. intelligence activities, disclosing not only the information gathered, but also showing the sources and methods used to get US intelligence data."
He also said insider attacks may not get as much media attention as most hacks, but they can cause far greater damage because the attacker knows where the keys to the castle are hidden. And if that attacker works in IT, he or she might even have the keys.
"Insider threats are many times the most devastating, as they are the least expected," said Patrick Moorhead, an analyst with Moor Insights & Strategy. "Companies spend most of their security time and money guarding against external threats.... So that sometimes leaves the inside exposed."
To combat this, Yao is combining big data, analytics and security to design algorithms that focus on linking human activities with network actions.
Typical computer systems monitor things like network traffic, file system events and email activities. They also look for specific warning signs, like someone uploading large amounts of data. The problem is that anyone who knows what the warning signs are can easily adjust their actions -- uploading data in smaller increments, for instance -- to avoid detection.
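The evasion problem can be sketched in a few lines. This toy rule, its 500MB threshold and the function name are invented for illustration; the point is only that a fixed per-upload threshold misses the same data moved in small chunks.

```python
# Hypothetical signature-style rule: alert only when a single upload
# exceeds a fixed size threshold (names and numbers are illustrative).
UPLOAD_THRESHOLD_MB = 500

def flags_upload(size_mb: float) -> bool:
    """Return True if one upload is large enough to trip the alarm."""
    return size_mb > UPLOAD_THRESHOLD_MB

# One 2 GB upload trips the alarm...
print(flags_upload(2000))  # True

# ...but the same 2 GB split into 50 MB chunks never does.
chunks = [50] * 40  # 40 chunks x 50 MB = 2,000 MB total
print(any(flags_upload(c) for c in chunks))  # False
```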
Yao is taking a different approach; her algorithms learn what normal activity looks like and then detect anything unusual.
"We build a model of normal behaviors and then detect a deviation from normal behaviors," she explained. "If you see a user logging in and accessing a database or doing a file read or write in the middle of the night..., then you ask, 'Is this a legitimate sequence of actions or is this an anomaly?'"
She also said part of the idea behind her detection system is to corroborate the user's actions with what's happening on the network.
If, for instance, a military team is on a reconnaissance mission, then it makes sense that they would be accessing maps from a backend server and pulling various data off the network.
It's largely about putting network actions into context.
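One way to picture that corroboration, as a hedged sketch rather than a description of Yao's system: accept a network event only if a plausible user action precedes it within a short window. The event names and the five-second window are assumptions invented for the example.

```python
WINDOW_SECS = 5.0  # hypothetical: how recently a user action must have occurred

def in_context(network_event, user_actions):
    """True if some user action happened shortly before the network event."""
    t = network_event["time"]
    return any(0 <= t - a["time"] <= WINDOW_SECS for a in user_actions)

# A map fetch right after the user clicks a map tile is in context;
# a bulk export with no triggering user action is not.
user_actions = [{"action": "click_map_tile", "time": 100.0}]
print(in_context({"event": "fetch_map", "time": 102.5}, user_actions))    # True
print(in_context({"event": "bulk_export", "time": 300.0}, user_actions))  # False
```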