Bruce Wignall, CISO of Teleperformance, which operates 300 call centers across some 50 countries, has a nickname for his biggest torment: “Fraud 2.0.”

Thanks to robust perimeter technologies and stringent legislation and industry guidelines that have forced organizations to become better equipped to handle the external attacker, cybercriminals have begun shifting their modus operandi to leveraging insiders to perpetrate data heists.

Combine this new hacker strategy, Wignall says, with a sputtering economy that has some people desperate for a buck – and for a 120,000-employee company such as Teleperformance that serves hundreds of clients, many in the banking and health care verticals, a particularly dangerous prospect emerges.

“Frankly, it is frightening,” Wignall says. “It has forced me to say, ‘We’ve got some pretty good technologies and laws that we comply with, but it is certainly not enough. Let’s start predicting how bad things can happen and what we can proactively do to either prevent it or detect it early.’”

The last two years, really, have been a perfect storm for the insider threat risk. With the economy still in tatters, the rise of sophisticated cyberespionage rings and the arrival of WikiLeaks as, love it or hate it, a viable outlet for sensitive information exposure, never before have organizations had so much reason to care about the motives of their employees, contractors and partners.

Most studies, in fact, now point to security professionals being more concerned about internal threats than external attackers. According to the 2010 Verizon Data Breach Investigations Report, which studied some 900 cases of data leakage incidents, 48 percent were attributed to users who, for malicious purposes, abused their right to access corporate information. Studies also conclude that these types of breaches typically are more costly than an outside attack.

“Definitely people are very concerned about insiders,” says Dawn Cappelli, technical manager of the Computer Emergency Response Team (CERT) Insider Threat Center, a federally funded research-and-development entity at Carnegie Mellon University’s Software Engineering Institute in Pittsburgh.

“The technology has become really good at keeping outsiders out, but your insiders walk right in every day,” she says.

For more than a decade, she and her team have been studying the problem, beginning when the U.S. Secret Service approached CERT to be a partner on securing a number of major public events, such as political conventions, the presidential inauguration and the 2002 Olympics in Salt Lake City.

“Traditionally, they had looked at gates, guards and guns, and then they realized they had to start looking at cyber issues,” says Cappelli. “We realized that insiders are a big threat. If you wanted to bring down an event, you could use a disgruntled insider or financially motivated insider to do that.”

Cappelli says she and her team embarked on a project never done before in the cyber era: studying the insider threat from both a technical and behavioral standpoint.

“We worked with the Secret Service to find every insider threat case we could find,” she says. “We tracked everything we could think of about those cases.”

The group divided the caseload – believed to comprise only a small fraction of the actual numbers because many intentional insider incidents go unreported or undiscovered – into four categories: IT sabotage, theft of intellectual property, fraud and national security espionage.

“We’ve talked to some vendors out there,” Cappelli says, “and from what we’ve seen, nobody has really done a functional requirement analysis for insider threat detection. Different vendors have their niche…but we’re looking across 550 cases in our databases. So based on what has happened in the past, if we could stop the crimes that already have happened, that would go a long way to stopping and detecting the insider threat.”

A deeper analysis

By 2008, the Insider Threat Center was ready to offer countermeasures.

CERT developed its first model for IT sabotage, defined as an incident in which an employee intentionally attacks IT systems. The culprits are almost always disgruntled employees with a deep technical skill set, usually system administrators.

Sometimes they plant “logic bombs,” which are pieces of malicious code set to execute on a specific date. Other times, they set up unknown access points, which allow them entry to the network even after their privileges have been revoked. On still other occasions, they devise backdoor accounts or password crackers.
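
A planted logic bomb ultimately shows up as an unauthorized change to code or configuration on a critical system, which is why file-integrity monitoring is a common countermeasure. Below is a minimal sketch of that idea, not any particular product: hash every file in a trusted directory to build a baseline, then re-hash later and report anything added or modified. Paths and directory layout are illustrative.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(root: str) -> dict:
    """Record a digest for every file under a trusted directory."""
    return {str(p): hash_file(p) for p in Path(root).rglob("*") if p.is_file()}

def detect_changes(root: str, baseline: dict) -> list:
    """Report files added or modified since the baseline was taken --
    the footprint an inserted logic bomb typically leaves behind."""
    current = build_baseline(root)
    return sorted(
        path for path, digest in current.items()
        if baseline.get(path) != digest
    )
```

In practice the baseline itself must be stored where the same privileged insiders cannot alter it, or the check proves nothing.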

CERT’s model determined that most of these cases carry a “distinct pattern”: Usually the employees either have announced their resignation or have been formally reprimanded, demoted or fired, Cappelli says. In other words, the human resources department is aware of these high-risk personnel.

“We try to tell organizations,” Cappelli says. “You need to recognize that when someone is on the HR radar, you need to have controls in place to look at what they’ve been doing. You can’t look at everything everyone does, but when you have someone on the HR radar, you need to go in and say, ‘What has this person been doing?’”
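
The targeted review Cappelli describes can be as simple as cross-referencing HR's flag list against existing audit logs, rather than monitoring everyone. A rough sketch, assuming hypothetical in-memory data shapes (a map of flagged users to the date they hit the HR radar, and an audit log of timestamped actions):

```python
from datetime import timedelta

def review_flagged_activity(hr_flags, audit_log, window_days=30):
    """Pull the recent actions of each employee on the HR radar
    (resigned, reprimanded, demoted, fired) for targeted review.
    hr_flags: {username: date_flagged}
    audit_log: list of (timestamp, username, action) tuples
    Both formats are illustrative, not a real system's schema."""
    report = {}
    for user, flagged_on in hr_flags.items():
        since = flagged_on - timedelta(days=window_days)
        report[user] = [
            (ts, action) for ts, who, action in audit_log
            if who == user and ts >= since
        ]
    return report
```

The point is the workflow, not the code: the HR event narrows the haystack before anyone looks at a log line.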

The center also has devised a model investigating those employees who steal intellectual property. In these cases, Cappelli says, the offenders typically are scientists, engineers, programmers or salespeople whose motive is not sabotage, but belief that they are the owners of the data on which they have worked.

Traditionally, they strike within 30 days of resignation – either a month before or after leaving the organization, Cappelli says. The malefactors can fall into two groups: either those who are moving to a new job and want to take their work with them or, more maliciously, those who are part of a well-coordinated spy ring bent on ripping off the crown jewels, such as entire product lines, usually for the benefit of a foreign government or organization.

The CERT Insider Threat Center Lab, which opened last year, is working on offering technology that can assist organizations in their efforts against IT vandalism and intellectual property pillaging. The lab leverages CERT’s caseload to simulate actual events.

At this month’s RSA Conference in San Francisco, lab representatives plan to demonstrate “how configuration management controls could have detected and thwarted an insider’s attempt to plant a logic bomb in critical systems and modify logs in order to conceal his activity,” Cappelli says. The lab also has previously created scripts that can be integrated with email logs within an account management system to detect incidents of intellectual property theft.
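
CERT has not published those scripts here, but the shape of the check follows from its case data: flag large messages to external domains sent by employees within 30 days of their departure date. The sketch below is an assumption-laden illustration of that logic; the message fields, the internal domain name and the size threshold are all made up for the example.

```python
from datetime import timedelta

def flag_exfil_candidates(messages, departures, window_days=30,
                          size_threshold=10 * 1024 * 1024,
                          internal_domain="corp.example"):
    """Flag large external emails sent within 30 days of a sender's
    departure -- the window in which, per CERT's cases, most
    intellectual-property theft occurs.
    messages: (sent_at, sender, recipient_domain, attachment_bytes)
    departures: {sender: departure_date}
    All field names and the threshold are illustrative."""
    window = timedelta(days=window_days)
    flagged = []
    for sent_at, sender, domain, size in messages:
        left = departures.get(sender)
        if left is None:
            continue
        if (abs(sent_at - left) <= window
                and size >= size_threshold
                and domain != internal_domain):
            flagged.append((sent_at, sender, domain, size))
    return flagged
```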

“The last thing we want to do is tell an organization they have to go out and spend millions of dollars on a new tool,” Cappelli says. “You already have these technologies in place. Here’s how you can use them differently.”

Another academic organization, the Institute for Information Infrastructure Protection (I3P), part of Dartmouth College in New Hampshire, recognizes that the insider threat is a complex problem that no silver-bullet policy or technology can solve, and that empirical studies are the only ways to unearth answers.

“We don’t think there is a one-size-fits-all approach to the insider threat without understanding the nature of the threat,” says Shari Lawrence Pfleeger, I3P’s director of research. “Without understanding the nature of the threat, we don’t know what an appropriate response is.”

Specifically, the 27-member consortium, consisting of universities, national laboratories and nonprofits, has developed a taxonomy used to classify the nature of insiders and the undesired actions they may commit. This has allowed I3P to come up with hundreds of insider threat scenarios.

Among their current efforts, consortium members are studying the effectiveness of awareness and training and researching how to design non-security systems so that security fits “naturally into the functionality of what users need in the first place,” Lawrence Pfleeger says.

In addition, I3P partners at Columbia and Cornell universities are devising a language that specifies certain actions security teams want to know about if they happen on their networks. To complement this, the researchers are creating software that can record when these actions take place.

“A lot of existing [commercial leakage technologies] generate so much data, so the problem becomes: How do you find the needle in the haystack,” Lawrence Pfleeger says. “They are trying to specify what the needle looks like.”
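
The Columbia and Cornell work is a research specification language, not shown here; but the underlying idea, specifying the needle instead of collecting the haystack, can be illustrated with a toy version in a few lines. Rules are declarative descriptions of actions of interest, compiled into predicates and matched against an event stream. Everything below is a hypothetical stand-in, not the researchers' actual language.

```python
def compile_rule(spec):
    """Turn a declarative rule -- a dict of field: required value --
    into a predicate over event records."""
    def matches(event):
        return all(event.get(field) == value for field, value in spec.items())
    return matches

def find_needles(events, rules):
    """Return only the events matching at least one rule, instead of
    recording everything and searching the haystack afterward."""
    predicates = [compile_rule(spec) for spec in rules]
    return [e for e in events if any(p(e) for p in predicates)]
```

A real specification language would add ordering, time windows and correlation across events; the gain is the same either way, because the analyst states up front what the needle looks like.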

Perhaps most interestingly, the organization is now turning to social scientists for help.

“Employees have misbehaved for a lot longer at work than computers have existed,” Lawrence Pfleeger says. “We’re just trying to shed more light on the nature of the insider threat and find solid ways to evaluate the technologies and the approaches so we have some science underpinning the decision-making about how to deal with the insider.”

Profiling the insider

At Teleperformance, one of Wignall’s proudest accomplishments has not been the implementation of a particular solution. Instead, it has been his introduction of a fraud risk assessment conducted for each prospective call center.

“I don’t think you’re going to catch people with technology,” he says. “You need to go out and be part of your business and understand what’s going on.”

The assessments have turned up some major vulnerabilities, including internal banking applications that can be accessed publicly or ones that allow call center employees to drop money – pennies at a time – on their own debit cards.

The investigations also have enabled Wignall and his team to implement what he believes is the most effective antidote to the insider threat – policy and procedure changes that force employees to fear punishment should they act maliciously.

For example, at call centers in which employees deal with warranty exchanges, Wignall says there have been instances where workers have delivered new products to their own homes if the application they are using failed to “tie warranty replacements back to the original purchasers.”

“If there is a flaw in our client’s applications and controls, you can count on not-so-honest employees to eventually find it,” he says. As a result, Teleperformance managers now sit down with employees each week to review each warranty exchange they have processed.

“I want them to immediately think that on Friday, they are going to be questioned about that particular transaction,” he says. “I’m proud to have people quit that are fraudulently minded.”

A new type of insider

But it is not just the employee desiring riches with whom businesses must be concerned. Whistleblower website WikiLeaks has forced organizations to look beyond the traditional profile of a malicious insider.

In a way, Bradley Manning, the U.S. Army private who leaked roughly 250,000 secret U.S. State Department diplomatic cables to WikiLeaks, revealed a new type of high-risk insider: the one with morals that can’t be repressed.

“Nobody assumed that anybody in the military would have a conscience, a different kind of motivation,” says John Kindervag, a senior analyst at Forrester Research. “Everybody assumed [Manning] would do the right thing because he was a trusted user. People might have a different morality than you. They might see trust and righteousness differently than you.”

Indeed, in a partial release of chat logs between Manning and Adrian Lamo, the hacker whom the Army soldier confided in, but who later turned him in, Manning explains his reasons for lifting the data to which he had access.

“[I] want people to see the truth…regardless of who they are…because without information, you cannot make informed decisions as a public,” wrote Manning, according to Wired.

Ted Julian, principal analyst at the Yankee Group, says the WikiLeaks episode has created a new channel for data leakage, one that most security professionals had never considered.

“It can really turbocharge data loss,” he says. “You now have WikiLeaks and others like them that can get this out to a mass market incredibly quickly. There is no putting the genie back in the bottle now.”

Julian says he expects to see “dramatic spending” this year on technologies, such as data leakage prevention [DLP], that are designed to sniff out and prevent information exposure. DLP, in particular, has matured to the point where most solutions now offer discovery and categorization functionality.
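
The discovery and categorization step Julian refers to boils down to scanning content for recognizable sensitive-data patterns and labeling what is found. A minimal sketch of that step follows; the patterns are deliberately naive (production DLP validates matches, for example with Luhn checks on card numbers, rather than trusting a bare regex).

```python
import re

# Illustrative patterns only, not production-grade detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def categorize(text: str) -> list:
    """Discovery step of DLP: report which categories of sensitive
    data appear in a piece of content."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))
```

Once content is categorized, policy decides what happens next: block the transfer, encrypt it, or alert an analyst.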

Back at the CERT Insider Threat Center, lab personnel are trying to create solutions that make life easier on businesses. In addition, researchers have published a best-practices guide and recently began maintaining a blog devoted entirely to the threat.

“Our mission is to raise awareness of the risks of insider threat and to help identify the factors influencing an insider’s decision to act, the indicators and precursors of malicious acts, and the countermeasures that will improve the survivability and resiliency of the organization,” Cappelli wrote in an introductory post.

 

[sidebar]

Zero-trust: A network overhaul

When it comes to battling the insider threat, part of the reason organizations have been so unsuccessful is that they are treating the symptoms, not the disease, says John Kindervag, senior analyst at Forrester.

For example, he says, businesses are often quick to take drastic measures, such as eliminating removable media usage, but fail to recognize that an aging network model is the underlying cause of the problem. But many information security professionals don’t care to investigate, choosing to take a “plausible deniability” mindset by ignoring what goes on in their network.

“In all my years of being an engineer and consultant, I’ve never been in a network where people adequately looked at their internal traffic,” Kindervag says. “Everyone wants to solve this on the edge, and you have to solve it on the center.”

To stem the risk of malicious insiders, organizations must drop their dependence on perimeter controls, such as network access control, and invoke a network refresh – known as zero-trust – that entails accessing all resources securely, inspecting all traffic and gaining situational awareness for analysis and visibility, Kindervag says.

“We can do it with existing technology,” he says. “[It’s about] taking building blocks off our network and putting them in more logical places so your network is more structurally sound and secure so we can solve some of these problems before they actually become problems. It’s all vendor neutral and essentially technology agnostic.”