
A new ethics?: Moral compass

As hackers and cyberarmies shred network defenses, industry experts weigh the ethics of breach disclosure and information sharing. Lee Sustar reports.

Hammered by mega-breaches and constantly probed by would-be attackers, enterprises, government entities and other organizations are asking whether an ethical approach to cybersecurity requires sharing more information than the law requires. 

Many experts see ethical issues playing an increasingly important role as information security lapses not only spur stronger consumer protection laws and tighter regulation, but also put preeminent corporate brands at risk.

It's the reputational hit from a data breach that's the key driver of information security ethics, says Eric Burger, a research professor of computer science at Georgetown University. 

“Corporations act ethically because they have to,” says Burger, a veteran IT entrepreneur. “If they say they are, it is because they want to be in an ethical funds portfolio.” The average corporation is not founded on the basis of protecting data, he says. By contrast, the NGO where he serves as a board member has a commitment to ethics beyond the industry standard.

OUR EXPERTS

Eric Burger, research professor of computer science, Georgetown University
Gary Kibel, partner in the digital media, technology and privacy practice, Davis & Gilbert
Ben Knieff, senior analyst, Aite Group
Larry Ponemon, chairman, Ponemon Institute
Thomas Smedinghoff, attorney, Locke Lord 

According to attorney Gary Kibel, the ethics of breach disclosure and threat intelligence sharing has to be seen in light of three basic categories: state and federal disclosure laws, regulatory requirements and contractual obligations to business partners that may require or prevent disclosure.

“If no one is forcing you, or you have no obligation [to report a breach], you need to decide whether you want to do it yourself,” says Kibel, partner in the digital media, technology and privacy practice at the Davis & Gilbert law firm in New York.

“We will talk to clients about what makes sense for their business,” Kibel says. “But ultimately they have to decide whether they are going to disclose even if they are not required or prohibited from doing so.” 

Those ethical questions are inevitably entangled with divergent and often conflicting breach disclosure laws across 47 states in the U.S., says Thomas Smedinghoff, a Chicago-based attorney with Locke Lord. 

“In some states, you are required to disclose certain things about the breach,” he says. “But in Massachusetts, the law says you are prohibited from disclosing them. If you were just being ethical, you could violate the law.”

Federal legislation governing data breaches presents further challenges for information security ethicists seeking to balance collaboration with law enforcement against transparency to business partners and the public. The proposed Data Security and Breach Notification Act of 2015, for example, would mandate breach disclosure to consumers within 30 days “unless United States Secret Service or the Federal Bureau of Investigation determines that notification under this section would impede a criminal investigation or a national security activity.”

In this scenario, a company's efforts to do the right thing by consumers could be sidelined for weeks, months or even longer if national security agencies conclude that tracking corporate espionage or cyberattacks on critical infrastructure is more important than protecting the credit profiles of millions of people. 

That could present a problem for businesses that have positioned themselves as being vigilant about consumer data. “From a corporate social responsibility and a public relations perspective, doing the right thing – and being perceived that way – can have a lot of advantages,” says Smedinghoff. Nevertheless, a rush to notify can make remediation more difficult, he adds. “You want to do the right thing, but want to make sure [of] what you're dealing with before you do that.” 

Ben Knieff, a New York-based senior analyst at the Aite Group, makes a similar point. “Transparency is something we value in society but sometimes it is appropriate not to talk about something for at least a limited period of time,” he says. “Going public can substantially harm an ongoing investigation.”

That dilemma is more acute when it comes to disclosing cyberthreats through the sharing of intelligence, says Burger. “There is a huge financial disincentive for sharing,” he says, pointing to the Information Sharing and Analysis Centers (ISACs), established for critical infrastructure industries and since broadened to financial services and other sectors. If you are a nuclear power plant operator found to be unprepared for a threat that other industry players have already identified, “you will be fined,” he says. 

Information Sharing and Analysis Organizations (ISAOs), the government-initiated supplement to ISACs, are given a pass on antitrust enforcement under the Cybersecurity Information Sharing Act of 2015 (CISA). But the ethics of public disclosure of potential threats are less clear-cut. If a power generation company shares information about a potential cyberattack on the grid, what happens if one of the recipients of that information discloses it publicly, leading to a stock market selloff of the affected companies or panic over a possible blackout?

There are “huge ethical issues” around threat information sharing, says Larry Ponemon, chairman of the Ponemon Institute, the Michigan-based research organization. “Sometimes what is shared is junk; it's trying to put a competitor out of business. This does require some ethics.”

The need for quality threat intelligence sharing is growing. According to a 2015 Ponemon Institute study, 47 percent of respondents reported a security breach that compromised networks or enterprise systems. Some 65 percent of respondents stated that threat intelligence could have prevented, or at least mitigated, those attacks.

It boils down to a simple question, says Burger: Does a corporation have to do anything beyond what government and regulators require of it? 

The evolution of automobile safety may provide a guide for what is to come in the ethics of cybersecurity, Burger says. “In 1930, we didn't know how to build safe cars. In 1950, we had a better idea. In 1960, we knew how to make safe cars, but didn't want to.” 

Ultimately, Burger says, it was Ralph Nader's consumer advocacy that forced automakers to transform vehicle safety.

For now, however, the patchwork of laws in the U.S. has complicated cybersecurity ethics. Then there's the European Union's General Data Protection Regulation, adopted in 2016, which requires notification to the supervisory authority within 72 hours of a breach. But post-Brexit, in a non-E.U. Britain, the less stringent Data Protection Act may become the governing law on the question, unless access to the E.U.'s single market requires conformance. The UK's independent data protection authority, the Information Commissioner's Office, has yet to take a clear position on the issue.

Could an ethical approach help businesses find a way out of this global legal thicket? PwC consultant Stewart Room argued in 2014 that ethical considerations were a better guide to post-breach action than simply checking the boxes on legal obligations. Ethics, he wrote, “remind you of the bigger picture, helping you to do the right thing in a way that can withstand durable scrutiny.” Room concludes: “Ethics and breach handling go hand in hand.”

While consumer-facing companies may benefit from establishing an ethics of post-breach action, the ethics of threat intelligence sharing are decidedly more complicated, says Ponemon. “Retail banks are spending about $250 million on cybersecurity tools – more like half a billion,” he says. “They don't want to spend that money and allow others to benefit. There is an idea that threat intelligence sharing is not necessarily equitable.”

On the consumer side, a series of mega-breaches have pushed the ethics issue into the foreground, with those affected forced to consider whether they should continue to trust their personal data and credit information to hacker-hit retailers like Target or health insurance providers such as Anthem. At the same time, a potential shift in breach liability away from credit card issuers – witness Target's $39 million settlement with card-issuing banks in 2015 – puts pressure on retailers to go above and beyond the law in both the protection of data and the disclosure of threats and compromises.

Meanwhile, government agencies charged with protecting consumers are raising the bar, with both the venerable Federal Trade Commission and the newer Consumer Financial Protection Bureau pushing companies to become better stewards of private data or face fines and enforcement action. 

But even the best ethical practices around cybersecurity and breach disclosure aren't enough to meet all those requirements, according to Smedinghoff. “Being ethical does not necessarily mean you are compliant,” he says. “You may be ethically appropriate – but that may not be sufficient to satisfy the law.”
