Traditionally, ethical hackers disclosed their findings for a nod and, perhaps, a bug bounty. With stakes only getting higher, might they be lured with big payouts from questionable sources? Greg Masters reports.

It seemed like an anomaly in August 2016 when news broke that a group of security researchers at MedSec, a Miami-based startup cybersecurity research firm focused on the health care industry, brought their findings of a security vulnerability in a medical device not to the manufacturer, but rather to an investment firm, Muddy Waters Capital.

The proposal was this: MedSec would share its findings – a software flaw in an implanted medical device system – in exchange for a share of the profits from short selling the stock of the manufacturer, St. Jude Medical, which was then in the midst of being acquired by Abbott Laboratories.

The MedSec research was eventually substantiated by the Food and Drug Administration (FDA), which issued an alert on Jan. 9 warning that St. Jude Medical's radio frequency (RF)-enabled implantable cardiac devices, along with the complementary Merlin@home Transmitter used to send data from the implanted devices to a cloud server for access by medical personnel, were open to hacking by malicious intruders who could send signals disrupting the devices' intended operations, putting patients at risk.

Yes, the acquisition went through: Abbott Laboratories paid $23.6 billion on Jan. 4 to acquire St. Jude Medical. And yes, as Muddy Waters CEO Carson Block said following the closing of the acquisition, without the notification St. Jude might never have mitigated the flaw.

Regardless of the outcome, the incident raised serious questions about the principles and ethics of those involved – and by extension, those who sniff out vulnerabilities. The traditional path for security researchers has long been to disclose their findings to the company involved and receive the firm's gratitude and a public acknowledgement of their efforts – a tantalizing motivation, as such recognition often led to invitations to speak at conferences. A financial reward as part of a "bug bounty" program could be on offer as well.

The underlying principle is that a researcher here is doing a good deed by alerting the developer to a vulnerability, thus providing an opportunity for the company to mitigate the flaw.

On the darker side, however, researchers who uncover a software vulnerability could choose to sell their findings on an underground market rife with shady dealers eager to exploit vulnerable code – a gold mine for profiteers who can leverage the flaw for a variety of attacks.

The case involving St. Jude can be said to fall somewhere in the middle. The MedSec researchers made a case that St. Jude had been warned of the flaw – several times – and failed to act. So, they partnered with the investment firm, according to MedSec CEO Justine Bone, "because they have a great history of holding large corporations accountable."

But many industry observers don’t buy the argument.

“The overblown and misleading disclosure of this ‘research’ was structured purely to maximize opportunistic financial gains,” Alex Rice, co-founder and CTO at HackerOne, a vulnerability coordination and bug bounty platform, told SC Media. He went so far as to say he hopes to see the SEC investigate this behavior as classic short-and-distort securities fraud. “The disclosure of vulnerabilities in any technology should place the safeguard of consumers first, not blatant personal greed.” 

The threat of hacks into medical devices – and pacemakers in particular – has long been feared. As far back as October 2013, former Vice President Dick Cheney revealed on the TV news show 60 Minutes that the wireless functionality of his heart implant had been disabled owing to concerns that hackers could assassinate him via a cyberattack on the device. As well, for several years running, a team of automotive cybersecurity researchers – Charlie Miller and Chris Valasek – demonstrated their ability to remotely enter the computer networks of automobiles, particularly the Jeep Cherokee, to alter settings that might interfere with the driving mechanisms. A demonstration at Black Hat in which the pair bypassed safeguards and sent malicious commands to car components forced Chrysler to recall nearly a million and a half of its Jeeps. For this pair of ethical hackers, their work led to positions at Uber's Advanced Technology Center in Pittsburgh.

The stakes, the temptations

The marketplace for ethical hackers is only growing as the interconnectivity of devices tethered to computer networks and the internet expands. The stakes – and temptations – for ethical hackers grow as well.

At the end of the day, a hacker is a hacker, says Chris Hinkley, lead ethical hacker at Armor, a Richardson, Texas-based cybersecurity firm. "The ethical aspect is purely philosophical and, I would argue, maybe even a buzzword to remove the bad connotation associated with the term 'hacker.'"

To remain ethical can be a cumbersome challenge, Hinkley says. “It boils down to being able to follow protocol by getting express permission for engagements, ‘coloring inside the lines’ (based on assessment scope), and relaying information/analysis in a secure and responsible manner,” he says.

Hinkley explains that the ethical hacking going on within his own firm makes not only the team more secure but its customers as well. "At some point, this will hopefully protect many other people, including myself, from an attack or hack that could have an impact on our lives."


Ultimately, having something taken by someone who doesn't own it or shouldn't have it is offensive to most people, Hinkley says. "Especially when it can impact your business, your financial well-being, your health or any number of other aspects of life. It is very satisfying to have the ability and resources to combat cybercriminals and keep data safe from malicious ends by identifying and closing doors that potential threat actors could exploit."

Vulnerability equals decision

Keeping people and corporations safe from intruders is also a motivating factor for Katie Moussouris, founder and CEO of Luta Security. Having started two vulnerability research programs – at Symantec and Microsoft – and co-edited the International Standards on vulnerability disclosure and handling, Moussouris says she is motivated by helping fellow hackers avoid prosecution for simply finding and trying to report vulnerabilities. She’s also in it to help organizations get more proficient at handling and fixing their inevitable security holes. “The world we live in is unavoidably connected and reliant upon the internet, and I would like to help make it as safe as possible for everyone,” says the bug bounty and vulnerability coordination and disclosure expert.

Moussouris believes that for a hacker, each vulnerability found represents a decision. "Do you sit on it in fear of prosecution for finding it? Do you try to sell it? Do you give it to the organization to fix it, and get nothing in return?"

Her work, she explains, both as a hacker and as a creator of vulnerability research programs and bug bounties, is designed to make it simpler for hackers and organizations to work together for defense. “This means that both sides should ideally get something out of the transaction of vulnerability reporting.”

Hackers shouldn’t be expected to do this work for nothing, she says, so organizations should consider incentives, whether that’s recognition or cash, ideally both.

In the St. Jude case, she agrees with Rice, calling it a classic "responsible" disclosure debate inside a safety debate, enclosed in a quality debate, wrapped with a dollar bill around it for flavor. "Who is responsible? The researchers who make the public aware of the security and safety risk, or the manufacturers who have created the vulnerable products?"

Working with hackers

Moussouris says her firm, Luta Security, is the first and only company that helps governments and organizations prepare for vulnerability coordination and disclosure.

Moussouris explains that she’s never seen a completely “good” or “bad” human, so to try to characterize hackers this way seems bizarre to her. “Do I think that most humans are fundamentally good and decent? Yes. Therefore, if we are to extrapolate to hackers, I believe that most hackers will do the ‘right thing’ when it comes to vulnerabilities.”

However, she argues, if governments and corporations make reporting security holes difficult or even risky – threatening legal action against anyone looking for those holes, without assuring hackers of safe legal harbor if they report them – then most people will naturally avoid risking their freedom to "do the right thing" and will sit on, rather than report, the issues they find.

“If we want more ‘white hat’ behavior from hackers in the form of turning over discoveries to get them fixed, then we need governments and companies to roll out the red carpet for hackers and welcome vulnerability reports,” she says. 

To expect and encourage ethical hackers to come forward, an organization needs to prepare to respond to vulnerability reports with grace and agility, Moussouris adds. "Each organization needs to assess their capabilities and goals before jumping into a vulnerability disclosure program or bug bounty program."

And that extends to the government. The Department of Defense, she points out, became the very first U.S. government arm to set up a permanent reporting mechanism and safe harbor for hackers to speak up, in a cyber "see something, say something" vulnerability reporting program. "More government branches and private companies would do well to follow suit."

Unfortunately, many forces are aligned to take advantage of innocent people from a cyber standpoint. From identity theft to financial theft, the public and businesses are vulnerable, Armor's Hinkley points out. "I take great pride in entering the fray to face these threats head on to help keep friends, neighbors and customers safe."

When asked whether he believes there are more white hat hackers out there than is generally perceived, Hinkley says, "Sure, there are more white hat, or ethical, hackers than is immediately obvious – quite a few more, in fact."

That said, there are also countless more black hat hackers out there than any reasonable person would think, he cautions. "It's far easier to operate outside the confines of the established process, procedure and red tape associated with ethical hacking."

For years, cybercrime stayed in the background for the public, Hinkley adds. "Most people were oblivious to what's going on in this realm of technology. Now, with high-profile headlines about hacks and the fact that the topic is no longer relegated to the tech community, the sense of urgency to keep data and people safe carries additional weight and a sense of obligation."