Successfully defending against socially engineered phishing campaigns means addressing both technological and human vulnerabilities. Doug Olenick reports.
When it comes to finding a scapegoat after a company falls victim to a spearphishing scam, pointing at the human in the room is typically neither unjustified nor unfair.
Unfortunately for the human race, this kneejerk response to the longtime and frequent security question – who’s to blame? – has been mostly correct because as a species we’re challenged when it comes to deciphering good from evil emails. Couple the basic human desire to be helpful, along with the increasingly powerful skills wielded by cybercriminals in their attempt to hack into an organization, and the outcome is predictable.
Socially engineered messages appeal to a very base human behavior and that is why it is such an effective strategy, says Patricia Wallace, a psychologist and former senior director of online programs and IT, Center for Talented Youth, at Johns Hopkins University.
“Social engineering causes people to drop their cognitive defenses by containing strong urgency messages,” she says, explaining that is why these messages often ask for help or touch on a topic that is quite personal to the recipient.
Whether it is an unsuspecting office worker at Stanford University’s payroll provider or someone at Snapchat, too many people just cannot help clicking on an email link, particularly one that has been carefully crafted using every social engineering tool in the box.
Vidur Apparao, CTO, Agari
Andy Feit, head of threat prevention marketing, Check Point Software Technologies
Michael Lamberg, CISO, OpenLink Financial
Shalabh Mohan, vice president of product and marketing, Area 1
Patricia Wallace, psychologist; former professor, Johns Hopkins University
Most measures designed to defend against socially engineered attacks rightly revolve around workforce education. The idea is to teach people to take a hard look at an email, not only before clicking on it, but prior to following any instructions it might contain.
However, teaching the average worker the dos and don’ts of cybersecurity should not be the only weapon in a company’s arsenal, as a growing number of technical solutions are coming on the scene. But starting with those individuals on the front line is the most logical – and difficult – place to begin building a corporate defensive perimeter.
“This is the largest issue from a security perspective because everyone on the planet can be duped,” says Michael Lamberg, CISO of OpenLink Financial, a Uniondale, N.Y.-based software and services business.
There is no doubt that socially engineered attacks work. A quick look back at the last few months shows a corporate landscape littered with victims hit with W-2 scams, ransomware and malware with almost all of them being enabled by a human making a mistake. These include major hospital chains, like MedStar Health, Hollywood Presbyterian Medical Center, Seagate, Sprouts Farmers Market and Snapchat, to name a few.
Verizon’s 2016 “Data Breach Investigations Report” revealed the power of a properly socially engineered phishing attack. The data, which was derived from sanctioned phishing tests that had eight million total results, showed that 30 percent of phishing messages were opened by the target with 12 percent moving on to click the malicious attachment or link. This figure is up from 2014 when only 23 percent opened the email with 11 percent clicking on the attachment.
Not only do many people click on these emails, but they do so quickly. Verizon found the median time for the first user of a phishing campaign to open the malicious email is one minute and 40 seconds, and the median time to the first click on the attachment was three minutes and 45 seconds.
Because of this basic human flaw, the overriding opinion is that defending against phishing and spearphishing campaigns by teaching employees not to click on what appears to be an official company email is as hard as getting the idea through their heads that using a USB drive found in the street is a bad idea, or even that betting on the New York Jets to win the Super Bowl is a mistake. Every time, just don’t do it.
This is why even people with a great deal of “street smarts” can fall victim to these scams.
“Everyone has a trigger,” Lamberg notes. “This is not a technology problem, but a people problem.” As a species, most of us are trusting, he says, adding that training can somewhat offset our innate desire to help others.
To reinforce this message and influence future behavior and, of course, thwart future attacks, OpenLink uses training methods that include phishing its own employees to accomplish several tasks, Lamberg says. One is building in a healthy dose of skepticism into each worker when it comes to dealing with cyber issues, while the other is to simply get the staffer to pause for a few seconds before they act on an email.
As a result, the company has reduced the number of successful attacks by five percent, he says.
“Training needs to be interactive,” says Wallace. “Immerse the workers in a [training] scenario where they receive a phishing attack.” But she is less certain that mock phishing attacks conducted by the company itself will generate the desired results.
However, she agrees a person might learn after being victimized by their own company, although there might be a side effect: “They also may become less trusting of their corporate environment.”
If the training fails, then additional steps may need to be taken to get through to workers who, for whatever reason, just cannot seem to get the hang of scrutinizing emails, and thus constantly open the door to cybercriminals by downloading malware or sending off valuable information. This can include taking the negative behavior into account when conducting performance evaluations, says Wallace.
“People have to learn to take this seriously,” Lamberg says. However, he notes that OpenLink has not incorporated any type of punishment for poor cyber hygiene, hoping to keep the atmosphere surrounding the problem positive.
Part of keeping an upbeat outlook is not feeling any shame in being victimized, says Andy Feit, head of threat prevention marketing at Check Point Software Technologies, a San Carlos, Calif.-based security vendor. “Hackers are doing a lot of work to get the emails correct.”
Simply because it is so hard to change human behavior, some IT security firms are looking for a technological approach, even though developing such a tool has often been derided as impossible. “I’m a contrarian,” says Vidur Apparao, CTO at Agari, a San Mateo, Calif.-based email security vendor. “I absolutely think we can stop them. It’s an indictment of our industry that the best methods are not technology based.”
Shalabh Mohan, vice president of product and marketing at Area 1, a Redwood City, Calif.-based firm that offers products to eliminate targeted, socially engineered cyberattacks, does not go so far as to say a technology-based solution will work, but he is fairly confident socially engineered phishing attacks can be stopped.
“I agree that humans are extremely gullible and that is why these attacks get through,” Mohan says. “It’s easy to keep changing the social hook, so instead we want to go out and stop the attack.”
In each case the company is not looking at the email’s payload, or how the email is worded, but at other extraneous factors.
In Agari’s case, the defense relies on deciding whether or not an email sent to a specific person is normal. If the previous 90 emails sent between two people at the same company always used the same server, why did the latest one come from somewhere else?
“We look at security in a different way,” Apparao says. “We want to build a platform that models email traffic. This way we can tell the good from the bad based on history.”
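The history-based approach Apparao describes can be illustrated with a minimal sketch. The class, field names, and the 10-message minimum-history threshold below are all assumptions for illustration, not Agari’s actual implementation; the idea is simply to flag a message whose originating server has never appeared in the sender’s prior traffic.

```python
from collections import defaultdict

class EmailHistoryModel:
    """Toy model of history-based email anomaly detection: track which
    originating servers each sender has used, and flag messages that
    arrive from a server never seen before for that sender."""

    def __init__(self, min_history=10):
        # sender -> {originating server -> count of messages seen}
        self.history = defaultdict(lambda: defaultdict(int))
        self.min_history = min_history  # assumed threshold before judging

    def record(self, sender, server):
        """Add one observed (sender, server) pair to the history."""
        self.history[sender][server] += 1

    def is_suspicious(self, sender, server):
        """True if we have enough history on this sender and the
        message's server has never been seen for them before."""
        seen = self.history[sender]
        if sum(seen.values()) < self.min_history:
            return False  # too little history to make a call
        return seen.get(server, 0) == 0

# 90 prior emails from the same server, then one from somewhere new.
model = EmailHistoryModel()
for _ in range(90):
    model.record("ceo@example.com", "mail.example.com")
print(model.is_suspicious("ceo@example.com", "mail.example.com"))  # False
print(model.is_suspicious("ceo@example.com", "203.0.113.7"))       # True
```

A production system would of course model far more signals than the server name alone, but the example shows how a historical baseline lets a filter “tell the good from the bad” without inspecting the message’s wording.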
Such a method could prove useful in defending against the swarm of W-2 phishing attacks that plagued the likes of Seagate, Snapchat and Sprouts Farmers Market this year. In each case a lower-level worker responded to what was thought to be a request from a supervisor asking for employee W-2 information.
Apparao says criminals sneakily use cloud services to create these fake emails, knowing that security software keys its detection on whether or not the originating domain is safe. If an email appears to come from an Amazon Web Services server, it is likely to be let through.
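The weakness Apparao points to can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual logic, and the `TRUSTED_INFRA` list is an assumption: a naive filter that trusts mail simply because it originates from reputable cloud infrastructure will pass a spoofed message relayed through that same infrastructure.

```python
# Hypothetical domain-reputation check: trusts any message whose
# originating host sits on well-known cloud infrastructure.
TRUSTED_INFRA = {"amazonaws.com", "googleusercontent.com"}

def naive_check(originating_domain):
    """Return True (allow) if the originating host belongs to
    'reputable' infrastructure -- regardless of who really sent it."""
    return any(
        originating_domain == d or originating_domain.endswith("." + d)
        for d in TRUSTED_INFRA
    )

# A spoofed W-2 request sent from a rented cloud instance sails through:
print(naive_check("ec2-203-0-113-7.compute-1.amazonaws.com"))  # True
# While an unknown host is blocked:
print(naive_check("sketchy-mail.example.net"))  # False
```

This is exactly why history-based or sender-identity checks are needed on top of raw domain reputation: the infrastructure being legitimate says nothing about the sender being who they claim to be.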
And a defense against business email compromise attacks is desperately needed. In a report issued earlier this year, the FBI reported that between October 2013 and February 2016 nearly 18,000 global businesses collectively lost $2.3 billion to business email compromise scams, whereby cybercriminals pose as company executives, attorneys or reputable vendors to trick employees into transferring corporate funds into fraudulent accounts.
While the general consensus says it will take a combination of technology and employee training to curb the impact of socially engineered attacks, there is also a downside to creating this situation. Employees, particularly younger workers, might slack off if they know there are layers of security software backing them up.
“What makes people susceptible is they have too much faith in the technology protecting them,” Wallace says, adding that younger people who have less experience in the workplace are more likely to fall victim to an attack, whereas older workers are just not as trusting of both the technology and the world around them.
Area 1’s Mohan says employees’ high level of interaction with social media sites – ranging from LinkedIn to Facebook and all those in between – has created a fertile feeding ground for hackers looking for personal information that can be used as bait in their spearphishing attacks, with business sites particularly in the crosshairs.
“We feel the criminals are using LinkedIn as their tool of choice to find information,” Apparao says.
Companies should also stress that whatever employees broadcast to the world on social media is equally visible to potential attackers, who can mine it for spearphishing bait.
SCAMS: Let’s go phishing
If one were to go by the sheer number of successful phishing attacks pulled off so far in 2016, it would seem as if all corporate employees had a huge “S” for sucker tattooed on their foreheads.
In March, a Seagate worker happily handed over the W-2 information for all 52,000 current and former employees to someone who did not work at the company. Just behind Seagate was Sprouts Farmers Market where 21,000 workers had their tax information stolen. Meanwhile, in April, Brunswick Corp., which owns the well-known Boston Whaler and Mercury Marine brands, was victimized to the tune of 13,000 worker W-2s.
Universities and colleges proved no smarter when it came to sussing out real from faux email requests. Solano and Tidewater community colleges were each hit, but were in good company as the University of Virginia gave up the W-2 info on 1,400 employees.
However, some criminals decided to cut out the middleman and simply dupe workers into sending cold, hard cash out of the company coffers and into their bank accounts. The largest such incident involved an unnamed company that sent $100 million to a crafty crook who convinced it to change direct-deposit account numbers used to pay for goods over to his own bank. Luckily, $75 million was recovered.
Toy maker Mattel fell for a similar scam, sending $3 million to an unknown entity, again recovered, and in late April, Pomeroy Investment Corp., of Troy, Mich., lost $495,000.