The marketplace for ethical hackers is only growing.

Traditionally, ethical hackers disclosed their findings for a nod and, perhaps, a bug bounty. With the stakes only getting higher, might they be lured by big payouts from questionable sources? Greg Masters reports.

It seemed like an anomaly in August 2016 when news broke that a group of security researchers at MedSec, a Miami-based cybersecurity research startup focused on the health care industry, brought their findings of a security vulnerability in a medical device not to the manufacturer, but to an investment firm, Muddy Waters Capital.

The proposal was this: They'd share their findings – a software flaw in an implanted medical device system – in exchange for a share of the profits from a short sale of stock in the manufacturer, St. Jude Medical, which was then in the midst of being acquired by Abbott Laboratories.

The MedSec research was eventually substantiated by the Food and Drug Administration (FDA), which issued an alert on Jan. 9 warning that vulnerabilities in radio frequency (RF)-enabled St. Jude Medical implantable cardiac devices, as well as in the complementary Merlin@home Transmitter used to send data from the implanted devices to a cloud server for access by medical personnel, left the devices open to malicious intruders who could send signals disrupting the devices' intended operations, putting patients at risk.

Yes, the acquisition went through: Abbott Laboratories paid $23.6 billion on Jan. 4 to acquire St. Jude Medical. And yes, as Muddy Waters CEO Carson Block said after the deal closed, without the notification St. Jude might never have mitigated the flaw.

Regardless of the outcome, the incident raised serious questions about the principles and ethics of those involved – and, by extension, of those who sniff out vulnerabilities. The traditional path for security researchers has long been to disclose their findings to the company involved and receive the firm's gratitude and a public acknowledgement of their efforts – a tantalizing motivation, as this recognition often leads to invitations to speak at conferences. A financial reward as part of a "bug bounty" program could be on offer as well.

The underlying principle is that a researcher here is doing a good deed by alerting the developer to a vulnerability, thus providing an opportunity for the company to mitigate the flaw.

On the darker side, however, researchers who uncover a software vulnerability could choose to sell their findings on an underground market rife with shady dealers eager to exploit the flaw – a gold mine for profiteers who can leverage it for a variety of attacks.

The case involving St. Jude can be said to fall somewhere in the middle. The MedSec researchers made the case that St. Jude had been warned of the flaw – several times – and failed to act. So, they partnered with the investment firm, according to MedSec CEO Justine Bone, "because they have a great history of holding large corporations accountable."

But many industry observers don't buy the argument.

"The overblown and misleading disclosure of this 'research' was structured purely to maximize opportunistic financial gains," Alex Rice, co-founder and CTO at HackerOne, a vulnerability coordination and bug bounty platform, told SC Media. He went so far as to say he hopes to see the SEC investigate this behavior as classic short-and-distort securities fraud. "The disclosure of vulnerabilities in any technology should place the safeguard of consumers first, not blatant personal greed." 

The threat of hacks into medical devices – and pacemakers in particular – has long been feared. As far back as October 2013, former Vice President Dick Cheney revealed on the TV news show 60 Minutes that the wireless functionality of his heart implant had been disabled over concerns that hackers could assassinate him via a cyberattack on the device. As well, for several years running, a team of automotive cybersecurity researchers – Charlie Miller and Chris Valasek – demonstrated their ability to remotely enter the computer networks of automobiles, most notably a Jeep Cherokee, to alter settings that could interfere with the driving mechanisms. A demonstration at Black Hat in which the pair bypassed safeguards and sent malicious commands to car components forced Chrysler to recall nearly a million and a half of its Jeeps. For this pair of ethical hackers, the work led to positions at Uber's Advanced Technology Center in Pittsburgh.

The stakes, the temptations

The marketplace for ethical hackers is only growing as ever more devices are tethered to computer networks and the internet. The stakes – and temptations – for ethical hackers grow as well.

At the end of the day, a hacker is a hacker, says Chris Hinkley, lead ethical hacker at Armor, a Richardson, Texas-based cybersecurity firm. "The ethical aspect is purely philosophical and, I would argue, maybe even a buzzword to remove the bad connotation associated with the term 'hacker.'"

Remaining ethical can be a cumbersome challenge, Hinkley says. "It boils down to being able to follow protocol by getting express permission for engagements, 'coloring inside the lines' (based on assessment scope), and relaying information/analysis in a secure and responsible manner," he says.

Hinkley adds that the ethical hacking going on within his own firm makes not only his team more secure but its customers as well. "At some point, this will hopefully protect many other people, including myself, from an attack or hack that could have an impact on our lives."