Mitigating the next WikiLeaks: Insider threats

WikiLeaks as poetry? With lines like “If neither foes nor loving friends can hurt you...,” Rudyard Kipling's bracing poem “If” challenges the reader to face uncertainties, including the uncertainty of never knowing where a threat may come from. WikiLeaks does the same, with one exception: It exemplifies the vulnerability from within.

The calculus of threat is the probability of an event's occurrence multiplied by the downside consequence if it does occur. An insider attack may well be rare, but because data comprises an increasing fraction of total corporate wealth, the consequences of an attack on a corporation's data grow in severity as the value of that data grows.
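The calculus above can be made concrete with a small sketch. The figures below are hypothetical, chosen only to show how a rare event with a large downside can outweigh a frequent, cheap one:

```python
def expected_loss(annual_probability: float, consequence_usd: float) -> float:
    """Expected annual loss for one threat scenario:
    probability of occurrence times downside consequence."""
    return annual_probability * consequence_usd

# A frequent but minor incident (hypothetical numbers, e.g. a lost laptop):
common_minor = expected_loss(0.5, 50_000)        # 25,000

# A rare insider leak against high-value data (also hypothetical):
rare_severe = expected_loss(0.01, 10_000_000)    # 100,000

# The rare event dominates as the value of the data grows.
assert rare_severe > common_minor
```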

In this fully connected world, a growing body of civil law demands the public shaming of any corporation that leaks other people's data. Financial regulations already in place are beginning to treat data leakage events as inherently material, thereby elevating such matters to the boardroom level. Furthermore, designers of national policy are proposing to make protection against insider attacks a mandate that includes a periodic inspection regime.

To be an insider, an individual must already have passed through an access control gate. That is what takes them “inside,” of course. Therefore, access control is not, and never can be, a deterrent to the insider threat. To do their job, inside perpetrators must hold authorization credentials congruent with the task at hand, so they are either trusted individuals or have discovered a way in. In either case, authorization systems are not deterrents to the insider threat, though they may bound the downside consequence, to a degree.

Some members of staff must have special authority simply because keeping the IT plant running requires interventions that cannot be anticipated, such as when parts fail. Such special authorities may also be available to any internal investigations team that may be in place, and similarly to any internal “red team.”

It is not necessarily bad – but it is a reality – that there will always be people in positions that make them capable of posing an insider threat. The question is how to control this by some means that is not itself subject to the very legitimate capabilities of the determined and knowledgeable insider.

“In this fully connected world, a growing body of civil law demands the public shaming of any corporation that leaks other people's data.”

– Dan Geer, chief scientist emeritus at Verdasys

The answer is that the operating environment itself must be altered. Of all possible design goals for any security system, perhaps the most important, the one with the highest value, is “no silent failure.” If we must alter the operating environment in a manner that prevents an expert insider attack from producing an invisible or silent failure, then the engineering problem is at least well specified.

The most cost-effective solution to this engineering problem is to instrument the operating environment so that data does not move without that movement being observed by the instrumentation. The transition from data-at-rest to data-in-motion always involves the operating environment, and does so in a way that is directly subject to instrumentation. That the instrumentation is difficult to do without side effects is a given. That the instrumentation – the event-detection scheme – implies the existence of a mechanism to receive and act on the detected data events in real time is likewise a given.

Only when this type of mechanism is in place can enterprises focus on the other half of the problem: human behavioral issues and the policies governing data handling. Neither technology alone nor human oversight alone can solve the problem.

Dan Geer is a well-known computer security analyst and chief scientist emeritus at Verdasys, a Waltham, Mass.-based provider of enterprise information protection solutions.