Social Engineering: Telling the good guys from the bad

No matter how sophisticated computer security technology becomes, the human desire for connection and friendship appears to be an endless opening for social engineering attacks.

The costs of socially engineered attacks remain considerable, running from $25,000 to $100,000 per person per incident, says Mark Bernard, principal of Secure Knowledge Management, a security consulting firm. Multiply that by millions of individuals annually, and the costs are in the stratosphere.

Whether online or in person, bad guys look for and exploit any opening they can find.

“One of my clients was a financial services firm and extremely paranoid about security,” says security consultant Steve Hunt. “They had upgraded their physical and IT security with surveillance cameras, guards armed with tasers walking around doing rounds of the facility and upgrading their access controls and digital security controls.”

As part of a “red team” penetration test, Hunt’s team lured the guards away from the building with some equipment inside a parked car: a rogue Wi-Fi access point whose signal wandered from point to point, thanks to being placed inside a potato chip can tied to an oscillating fan. While the guards were distracted outside, the red team entered the building by tailgating an employee, then photographed whiteboards, inserted USB sticks into servers and carried out all manner of other attacks.

There are also the bits of information that attackers can glean off the Dark Web from previous breaches – data from Equifax, Anthem and elsewhere – all of which could fill in gaps in personal information about a target.

While working at a payment company as an enterprise security manager, Bernard says, one of the company’s controllers received a memo purportedly from the chief executive officer, requesting that the controller set up and transfer money into an account belonging to someone outside the company. “The thing was, we thought it was a little suspicious, even though it looked legit,” Bernard says. “Usually in finance, there’s several layers of signatures and reviews in order to verify and validate a major transaction.”

In this case, the outsider was someone who had been blocked from the payments company previously for laundering money there and had hired someone to execute this phishing attack, Bernard says.

Social engineering works as well as it does for a variety of reasons, one of them being the mindset that it could never happen to you. But it does happen, sometimes when people least expect it.

“Sometimes when you’re involved in your work and you’re like heads down, and all of a sudden something comes out of left field,” Bernard says. “Sometimes you don’t pay it much attention, and you just do whatever just to get rid of it, so you answer the question or give the information up and then you just keep moving on.”

Bernard says C-suite executives have gotten savvy about the threat of spear phishing attacks, but vice presidents and directors are more likely to suffer such attacks these days.

Another factor lulling individuals into a false sense of security is their increasing use of two-factor authentication, which they believe will protect them from social engineering attacks. In fact, these individuals may be just as vulnerable if they believe the attacker is a friend, not a foe, Bernard says.

Organizations need to do their risk assessments, to know where gaps are, and to segregate duties, Bernard says. For instance, “you don’t allow the same person who signs the check to actually print it out,” he says. “As they go through those checks and balances, there should be a set of criteria that needs to be checked off, and reviewed.”
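Bernard’s checks-and-balances point can be made concrete. Below is a minimal sketch, in Python, of a segregation-of-duties rule in which the person who initiates a transfer can never be among its approvers, and large transfers require two distinct sign-offs. The names and the threshold are illustrative assumptions, not any particular company’s controls.

```python
# Minimal sketch of a segregation-of-duties check: the person who
# initiates a payment may not also approve it, and large transfers
# require a second, distinct approver. All names and thresholds here
# are illustrative, not from any particular system.

DUAL_APPROVAL_THRESHOLD = 10_000  # transfers above this need two approvers


def validate_transfer(initiator: str, approvers: list[str], amount: float) -> None:
    """Raise ValueError if the approval chain violates separation of duties."""
    if initiator in approvers:
        raise ValueError(f"{initiator} cannot approve a transfer they initiated")
    if len(set(approvers)) != len(approvers):
        raise ValueError("each approver must be a distinct person")
    required = 2 if amount > DUAL_APPROVAL_THRESHOLD else 1
    if len(approvers) < required:
        raise ValueError(f"transfer of ${amount:,.2f} requires {required} approver(s)")


validate_transfer("controller", ["cfo", "treasurer"], 50_000)  # passes
# validate_transfer("controller", ["controller"], 50_000)      # raises ValueError
```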

The recently enacted General Data Protection Regulation (GDPR) offers guidelines on the kind of personal information that should be handed out only with great care, Bernard says. If a caller asks for such information and you cannot yet verify their identity and authorization to receive it, “tell them a little story,” he says. “Say, ‘you know what, I’m kind of busy right now, and I can get back to you. Give me a phone number I can call you back on,’ or set up an alternative channel. A lot of them will just hang up.”

Phishing represents its own social engineering threat, arguably the most common one these days. A standard precaution is to avoid clicking links within emails, and to train employees to do the same. Software also routinely monitors web addresses, comparing them to blacklists shared by security vendors and the security community.
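As a rough illustration of how that screening works, here is a minimal sketch that extracts the domain from a link and compares it against a shared blocklist; the domains and the feed format are placeholders, not a real vendor list.

```python
# Minimal sketch of the URL screening the article describes: extract the
# domain from each link and compare it against a blocklist shared by
# security vendors. The file name and entries are placeholders.
from urllib.parse import urlparse


def load_blocklist(path: str) -> set[str]:
    """One known-bad domain per line, e.g. from a shared threat feed."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}


def is_blocked(url: str, blocklist: set[str]) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Match the exact domain or any subdomain of a blocklisted entry.
    return any(host == bad or host.endswith("." + bad) for bad in blocklist)


blocklist = {"evil-payments.example", "phish-login.example"}
print(is_blocked("https://login.phish-login.example/reset", blocklist))  # True
print(is_blocked("https://intranet.example.com/docs", blocklist))        # False
```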

More recently, public-private partnerships such as the High Technology Crime Investigation Association have been building intelligence hubs about fraudsters and criminals, with the ultimate goal of incorporating that intelligence into the technology we use, Bernard says. “When you click on your web browser, your software will warn you that this site has been known to post phishing attacks, and you should back out,” he says.

Enterprises need to continue to build out defense in depth, implementing a variety of strategies, such as restricting access control depending on roles and situations, Bernard says.

“I published a security architecture framework back in 2010,” he says. “It’s been downloaded more than 90,000 times. It’s got 11 layers, and certainly the architecture of most enterprises needs to have those layers.”

Another staple of enterprise security these days is the phishing drill – deploying emails designed to test employees’ ability to avoid being phished.

“At Morgan Stanley [a one-time client of Bernard’s], as part of our security program, we did quarterly phishing tests,” Bernard says. “Compliance was mandatory. If somebody failed a phishing test, they would have to do it again until they got it right.”

Such tests must be ongoing, because current employees forget to be on guard against social engineering, and new employees need to be educated in the first place.

Bernard has used Bloom’s Taxonomy to develop a curriculum to educate employees about social engineering threats.

In this case, Bloom’s Taxonomy breaks down social engineering knowledge transfer, as it does all knowledge transfer, into six stages: knowledge of the subject; comprehension of the threat; methods of prevention; analysis to facilitate changes to processes; evaluation; and synthesis.

Another aspect of phishing drills is to not warn employees and IT staff that the red team drill is occurring, Hunt says.

“If you do continual social engineering testing, your employees learn pretty quickly that they’re being tested,” Hunt says. “That constitutes their warning.”

These drills rely on programs that send mock phishing emails to employees and display prominent warnings on users’ screens when they click on one of the simulated lures.

“Now people are on high alert every time they open any email,” Hunt says. “They think any email could be a test, so they’re looking at it more carefully.”
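The mechanics behind such drills are straightforward. Here is a minimal sketch of the click handler a drill platform might run, assuming a hypothetical per-recipient token in each lure link and simple in-memory storage.

```python
# Minimal sketch of a phishing-drill click handler: each mock email
# carries a per-recipient token in its link; clicking records a failed
# test and returns the on-screen training warning the article mentions.
# The route, token format and storage are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

CLICKS: set[str] = set()  # tokens of recipients who clicked the lure


class DrillHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        token = parse_qs(urlparse(self.path).query).get("t", [""])[0]
        if token:
            CLICKS.add(token)  # record the failure for follow-up training
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>This was a phishing drill.</h1>"
                         b"<p>You clicked a simulated lure; training follows.</p>")


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DrillHandler).serve_forever()
```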

Immediate prior warning is not recommended if an organization is doing any sort of comprehensive social engineering test, Hunt says. “The only exception is possibly the CFO or COO knows that the testing is going to happen,” he says.

With the permission of the client, Hunt has performed phishing attacks, telephone attacks with spoofed identities, and physical attacks at facilities, including breaking into buildings or extracting information face to face from an employee. “They all have different tricks for success, but they’re all useful, especially if the client’s security department does not know they’re happening,” he says.

Despite Bernard’s contention, Hunt says one of the most common and surprisingly successful phishing attacks is someone fabricating an email from the CEO to the CEO’s assistant or someone in finance, requesting authorization of a wire transfer to an account. “Nowadays, many companies have caught onto that and don’t fall for it anymore, but still it’s successful,” Hunt says. The best defense is employing some form of secondary verification, he adds.

Many criminal hackers pose as consultants, prospective employees or visitors to physical facilities, walking around to find or overhear secrets, take pictures of documents or whiteboards, and insert malware-laden USB sticks into the backs of computers, giving outsiders a way into enterprise networks, Hunt says.

Educators and IT departments need to measure the success or failure of anti-social-engineering efforts, evaluate the curriculum continually, ask what is working and what isn’t, and figure out where to push harder, Bernard says.

“If the C-suite and the board accept the risks, and we’ve done our job as security professionals to define what a phishing attack could do to us, then so be it,” he says.

If an attacker seems friendly enough, and succeeds in giving all the right answers to all the right questions, what then should individuals do?

It may be best to develop a kind of spider sense for conversations that leave you uneasy, and to reconsider, mid-conversation, whether to continue if something does not feel right, security consultants say.

The best advice in such situations: Don’t be afraid to be firm and just say the information being asked for is too personal, or that you don’t know the questioner well enough to reveal it.

Even if the questioner has all the right credentials, don’t be afraid to call their purported employer to confirm the identity of the questioner before providing personal information, security experts say.

Another tricky situation can present itself when a junior employee of an organization witnesses a senior executive give out personal information to someone the junior employee suspects of having criminal intent. In such cases, IT departments have to be capable of receiving anonymous tips from those subordinates, who might otherwise have real doubts about stepping forward with their concerns.

So, despite all this advice, let’s suppose you’ve been socially engineered. Now what?

Again, culture can make a big difference, and company culture can compound the damage a social engineering attack does. Left to their own devices, employees may keep quiet out of simple embarrassment if they feel they have made a mistake that compromised their company’s security.

Employees should feel free to contact their enterprise IT help desk after such an incident. Enterprises must assure their employees that they know social engineering attacks are going to happen, and that employees are only human. A zero-tolerance attitude, by contrast, compounds the problem: the corporation risks not learning about such attacks early on.

Employees also need to know that reporting an incident will matter.

“I always question: If I report it, what is it going to actually do for me,” Bernard says. “Law enforcement needs the information to do their job. But will it prevent anyone else from being attacked?”

After a phishing attack, IT departments should monitor egress traffic, that is, traffic leaving their network, Hunt says. They should look for malware attempting to contact the criminal’s command-and-control server, wherever it lives, or for data that malware is exfiltrating from the enterprise.
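What that monitoring might look like in practice: a minimal sketch that scans outbound flow records for destinations on a command-and-control indicator list, or for unusually large uploads that could be exfiltration. The log format, field names and thresholds are all assumptions for illustration.

```python
# Minimal sketch of post-incident egress monitoring: scan outbound flow
# records for destinations on a known C2 indicator list, or for unusually
# large uploads that could be exfiltration. The log format, field names
# and thresholds are assumptions, not a real product's schema.
import csv

C2_INDICATORS = {"203.0.113.7", "198.51.100.42"}  # example indicator feed
EXFIL_BYTES = 500 * 1024 * 1024                    # flag uploads over 500 MB


def scan_egress(flow_log_path: str) -> None:
    # Expected columns: src_ip, dst_ip, bytes_out
    with open(flow_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dst_ip"] in C2_INDICATORS:
                print(f"ALERT possible C2 beacon: {row['src_ip']} -> {row['dst_ip']}")
            elif int(row["bytes_out"]) > EXFIL_BYTES:
                print(f"ALERT possible exfiltration: {row['src_ip']} "
                      f"sent {row['bytes_out']} bytes to {row['dst_ip']}")
```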

At the same time, management should let everyone in the company know that a phishing attack is underway, and not to open emails with certain subject lines, Hunt says.

“Phishing is still successful, and it’s going to be successful as long as there are emails to click,” Hunt says. “If you consider ransomware a type of social engineering attack, the number of attacks is definitely going to continue to grow.”

But to truly combat social engineering, individuals have an increasingly important role to play.

According to Christopher Burgess, author, retired security consultant and a 30-plus year operative with the Central Intelligence Agency, individuals should consider stronger password retrieval questions, and if they aren’t willing to use a password manager, they should keep two physical notebooks – one with login names, the other with passwords. “You change something from a technological security problem to a physical security problem,” he says.
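One conventional way to act on the retrieval-question advice, a technique not drawn from Burgess’s remarks but widely recommended, is to answer each security question with a random passphrase rather than a guessable fact, and record it wherever you keep passwords. A minimal sketch:

```python
# Minimal sketch of treating a password-retrieval answer as just another
# secret: generate a random, unguessable answer instead of a truthful one.
# The word list is a tiny illustrative sample, not a real dictionary.
import secrets

WORDS = ["copper", "lantern", "orbit", "mesa", "violet", "harbor", "quartz", "ember"]


def random_answer(n_words: int = 4) -> str:
    """A random, unguessable 'mother's maiden name'."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))


print(random_answer())  # e.g. "mesa-ember-copper-orbit"
```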

If users must reuse passwords, Burgess recommends never doing so for key accounts such as email and financial services. “And anything you don’t want to see on the front page of the Washington Post,” he adds.

“Whether the threat is from a competitor, criminal or a nation state, the best defense companies have is to be alert to anomalies in all facets of their normal life,” Burgess says.
