
Playing mind games with the enemy: How businesses can socially engineer threat actors

A JetBlue plane is about to take off from Fort Lauderdale-Hollywood International Airport in 2020. (Photo by Joe Raedle/Getty Images)

Getting owned by a malicious hacker early in your IT career might be enough to make some folks question their choice of profession. But for Tim Rohrbaugh, chief information security officer (CISO) at airline carrier JetBlue, the experience actually fueled his passion to better understand not only how to defend against digital threat actors, but how to actively undermine them.

When hiring job applicants for his security team, or communicating with key stakeholders in his organization, Rohrbaugh often asks individuals to recall negative security experiences from their own lives. It doesn’t necessarily have to be a network breach; they may have been burglarized or had their privacy invaded.

Essentially, the point is to tap into one’s brain and apply the relevant life lessons and rational perspectives that the person gained from being victimized in some fashion. “Priming the amygdala,” Rohrbaugh calls it, referring to the fight-or-flight part of the brain responsible for emotional responses to stimuli.

Often, when security professionals talk about the intersection of psychology and cybersecurity, it seems to have something to do with the principles behind social engineering and phishing — how cybercriminals use persuasive tactics and the illusion of urgency to trick fallible users into bad decisions. The so-called “human element.”

But there is so much more to cyber-psychology than that — and clearly, psychology plays a central role in Rohrbaugh’s approach toward both leadership and security strategy.

In fact, Rohrbaugh says he likes to flip the script and actually use psychological tactics against enemy hackers. That might mean developing an insightful behavioral profile on a threat actor to help make informed decisions during a future engagement with them, deploying deception technology that tracks adversaries’ movements in your system, or presenting visibly stout defenses that deter attackers from picking on you in the first place.

SC Media spoke at length with Rohrbaugh to learn more about his philosophies.

When did you start seriously incorporating psychology concepts into your offensive and defensive cyber practices?

It started somewhere around 2015. I was the CISO of a public company called Identity Guard [Intersections, Inc.]. And one of the things that the board had tasked me to do was build version 2.0 of the product — which was about trying to get consumers notified when there was identity theft. 

And when you looked at the history of what was happening, [these consumers] were contributing to their own breaches through misuse. And so the question was, could we actually derive the information and repackage it in a way that would change their motivations, change their behaviors?

So I started looking at incentives and gamification, and it took me in two different directions: One is to apply psychology to human motivation to overcome things like cognitive biases.

And No. 2 is to use psychology as a tool against malicious actors?

Especially with respect to [their] perception and motivation. Criminality — it's just a business, right? They happen to be on the wrong side, but in that business, they have the same failings that we have. Humans sometimes have apathy. Sometimes they perceive value that's not there. Sometimes they can be manipulated just as our employees can be with respect to social engineering…

Security people are the only ones paid to actually make somebody else's life miserable — it just happens to be the criminal. 

From a strategy perspective, how do you actually go about doing that?

Tim Rohrbaugh, CISO at JetBlue.

The threat intel group, I think, is critically important today. It hasn't been emphasized the way it should be. In fact, here [at JetBlue], I've taken the threat intel group and have them orchestrating everything that we do. That way, it’s not just a compliance mindset. We're actually making changes to defense and monitoring based on who is coming after us, with the knowledge of why they're coming after us and how they're coming after us.

If the threat intel team can’t [name the] threat actor group, then they define a persona. And if you do that… and you know the motivations and the tactics that they've used in the past, then you can start to look at how you change [the way] you appear to them…

When people come knocking on your door for an initial compromise, what do you do? Do you just let them do it? Do you deceive them? Do you call their provider? Knowing who it is, who the threat actors are, more than likely will tell you which of those paths to take.

Take fraudsters, for instance. They have a cost structure. And for them to get a call from their merchant account on chargebacks, or for them to get a call from their network provider that there's been a complaint levied against them — all of a sudden, the cost of doing business can get driven up, and you can push them someplace else. I'm not saying you can turn them away from a life of crime, but you can get them to go to your competitor that's not as diligent…

I started a group a long time ago: Deception Practitioners. Deception, or lying, is a basic human right… Most businesses out there don't take advantage of deception… to play into the perception by the criminals of how much value is there [in attacking] and how much effort they're going to have to put out.

What type of information do you want in your threat intel reports to create a clearer picture of who the malicious actor is and how best to respond if you're engaged with them?

What we're trying to do is threat-informed defense… [which] requires us looking at the tactics and techniques that [adversaries] used in the past… This starts to give us insight into how much effort they're willing to put out…
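One way to make "how much effort they're willing to put out" concrete is to score the techniques an actor has used historically, e.g. by MITRE ATT&CK technique ID. The IDs below are real ATT&CK identifiers, but the effort weights and the scoring scheme are invented here for illustration — a real threat-informed defense program would derive them from intel reporting.

```python
# Illustrative effort weights per observed ATT&CK technique.
# Higher = more investment/skill implied on the attacker's part.
EFFORT_WEIGHTS = {
    "T1566": 1,  # Phishing: cheap, commodity tooling
    "T1190": 2,  # Exploit Public-Facing Application
    "T1078": 3,  # Valid Accounts: implies prior credential work
    "T1195": 5,  # Supply Chain Compromise: significant investment
}

def estimate_effort(observed_techniques: list[str]) -> int:
    """Rough effort score for an actor, summed over techniques seen in
    past campaigns. Unknown techniques get a conservative weight of 1."""
    return sum(EFFORT_WEIGHTS.get(t, 1) for t in observed_techniques)
```

A persona built this way — even without a named group attribution — lets defenders prioritize: an actor scoring high on effort warrants deception and active monitoring, while a low-effort opportunist may be deterred simply by appearing harder to attack.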

And what's happening now with, like, the Aviation ISAC and some of the other ISACs… is we're getting enough sharing and cooperation to where we can actually put some real details and history to their activity.

You work in aviation. Airline carriers deal in certain types of information (like passport numbers) that other consumer-facing businesses don’t. I imagine part of profiling an adversary is knowing precisely what assets they’re after.

I go back to the psychology of it. Why would [they] put out that much effort [to attack you]? …They have to have very good knowledge that there's something of worth to them… And if you know what they want, and you make a change to make it look like all that data is gone, or the systems have changed, or [your] business partners have changed… all of a sudden you change… their motivation…

They [might] want to monetize your data. [But] it's not necessarily financially [motivated]. It depends on where you fly, and whose constituents [you serve] or whether you service [the] U.S. government or things like that. There's a lot of reasons why they would want that data.

[It’s also about] understanding where the threat actors go when they don't find what they want. Is one of their modi operandi to do a ransomware event just to get some cash on the side? Do they hand it off to another group? I find it fascinating to understand the economic models of criminal business and try to figure out how to break it apart, how to undermine it, and how to frustrate the actors themselves — or simply to appear much less valuable than they perceived.

I would imagine that profiling your enemy’s behavioral tendencies is especially beneficial when negotiating with ransomware attackers.

If you go into a ransomware event and don’t know who the actor is, what their real motivations are, you’re at a real disadvantage. Hiring real negotiators — ones that maybe came out of the FBI — you can’t go wrong, as long as you can afford them and it doesn’t break the bank.

But you have to have… a lot of details on who you think it is and what the motivation is, because it very well may be that ransomware is not really what they want to monetize. They may really be using it as a distraction for something else.

What was it that initially piqued your interest in applied psychology?

The first book that probably had the most impact on me was by the Heath Brothers: “Made to Stick.” And that took me down a whole path of neuroscience… neuroplasticity and neurogenesis.

I’ll tell you something crazy: A very long time ago… I was “owned” [by a hacker] as a system administrator… I was in COMSEC in the military, then left and went to [IT services provider] CSC and was working on an air defense system. This was in the mid ’90s. And at that point, we were over in another country installing the system and — because we were away from our families for a year to three years — we ended up putting in email systems... There were very few of them out there at that time. And sure enough, [a malicious actor] was living inside the mail system.

I was just appalled; I was offended. And it drove me… into the commercial security space. It definitely had an influence on my passion… trying to figure out how not to be owned.

[Early on], I thought that I had to hire people who had [similar] trauma, whose amygdalae were already primed. They had gone through the irrational phase of risk analysis. Like if you had your house broken into, you can't sleep every night for months, depending on who you are, because you have this irrational fear of somebody watching you.

But what happens after that is that the irrational phase calms down and you end up in a very rational state. And so as part of the interview process, I used to look for people [who] had some kind of trauma — like having been owned, cyber-wise.

Then I came to realize… that that really cuts down on the pool of people. So my goal was to try to figure out how to… prime the amygdala, but without the trauma… I want to understand what people have experienced in their life, and then just try to find one of those experiences… something that you can start to craft messages around.

In what other ways do you apply psychology toward communicating with key security stakeholders in your organization, including board members and employees, whose buy-in is essential to maintaining a strong cyber posture?

When I'm communicating, whether it's with my staff or it’s board level, you’ve got to understand who that person is, what existing schema that they have in their mind and then get your message to match up so that you’re not stuck, so there's not a barrier.

I believe that everybody who's living here today has the capacity to be a risk expert, just as I am. There is no difference between us. The problem is that we haven't translated it, we haven't done our job to explain it to them.
