In a hyper-competitive world, how do digital developers ensure cybersecurity and privacy without stifling progress? Bradley Barth investigates.
When note-taking service Evernote announced in late 2016 that select employees would be allowed to review users' note content to improve its machine-learning features, the backlash was swift. Even though the review would have been limited to specific employees and circumstances, many app users objected that they would have to opt out of the process to preserve their privacy. In light of the backlash, Evernote Corp. scrapped the new policy and offered an opt-in mechanism instead.
Such are the pitfalls encountered by digital developers, web services providers and Internet of Things manufacturers as they strive to stay competitive, cool and cutting-edge. Indeed, as a digital or cyber product grows in capability, so does its attack surface. And sometimes even the most well-intentioned efforts to innovate and improve the user experience are fraught with privacy and security landmines that can blow up in your face.
Consequently, innovative companies find themselves performing a high-wire act as they try to balance creativity with security. The trick is convincing corporate leadership that the two concepts aren’t mutually exclusive – but that’s easier said than done.
“There is no innate conflict between innovation and security,” said Bill Curtis, SVP and chief scientist at CAST Software, a software analytics and measurement firm. “The tradeoff often comes in speed-to-market versus adequate quality assurance.”
Executives usually make a mental trade-off between revenue lost per week of additional development and testing time versus the potential loss from security breaches, outages, etc., he says. “The problem is that they rarely understand the full extent of the damages if a security breach is extensive.”
In a 2016 survey conducted by the Economist Intelligence Unit of 1,100 senior executives, 45 percent of top corporate managers (i.e., CEOs, CFOs and COOs) asserted that cyberattacks and efforts to mitigate them impede their product launches, compared to only 20 percent of top IT security professionals (CIOs and CISOs) who felt this was true. Fifty-four percent of corporate management also said that measures to prevent cyberattacks absorb too much management time, while 45 percent said attention to these processes slows competitive response.
With the exception of companies operating within the cybersecurity space, most digital innovators “want to know that they have a product that people are interested in before they spend significant resources, money and time layering in security. That’s not to say they don’t have any security at the onset, but it’s the barest of all standards,” says Elad Yoran, CEO of Security Growth Partners, an investment firm specializing in critical innovative security solutions, and chairman of communication security company KoolSpan.
Consequently, many products are rushed to completion, fueled by pressure from investors, venture capitalists and corporate decision-makers who fear another company might swoop in, out-innovate them and steal market share. Compliance policies and legislation, including the European Union's upcoming General Data Protection Regulation (GDPR), can also create a heightened sense of urgency.
Larry Ponemon, chairman of the privacy, data protection and information security research group Ponemon Institute, laments that in most cases, it’s only after a breach or cyberattack that companies “suddenly find newfound religion around security and privacy.”
But by then it is probably too late. “Companies are faced with the pressures of time to market and cost constraints, yet fail to recognize the cost to add security in after a product ships,” says Craig Spiezle, founder and president of the Online Trust Alliance (OTA), an Internet Society Initiative, which supports digital security, ethical privacy practices and data stewardship.
Ponemon agreed: “Any time you build security after the fact it’s much more expensive than doing it…by design, when you start thinking about it early in the process.”
There is no shortage of real-life cautionary tales: Ponemon immediately thought of a medical device manufacturer whose product was recently found to be insecure. No patients were harmed, but the company “had to deal with some issues in terms of reputation, brand value, regulatory pressure in order to basically resolve the problem effectively.”
Ponemon did not name the company, but in October 2016, Rapid7 and Johnson & Johnson warned users of J&J's Animas OneTouch Ping insulin pump that hackers could hijack communications between the pump and its remote control to deliver unauthorized insulin doses. And in January 2017, the Food and Drug Administration issued an advisory that hackers could interfere with communications between St. Jude Medical's implantable cardiac devices and the corresponding Merlin@home Transmitter, allowing them to rapidly drain device batteries or administer inappropriate pacing or shocks.
Web-based services have also been known to launch prematurely, before proper security measures are in place. This was evidenced by the U.S. health insurance exchange website HealthCare.gov, which was riddled with security and privacy holes in its early days, recalls CAST's Curtis. In other cases, websites are rendered insecure by flawed, insufficiently tested updates, as Curtis believes was the case this year in Michigan, when a software update to a state computer system may have exposed data belonging to recipients of unemployment benefits.
“Arguably, any software-intensive system with well-known flaws – such as SQL injection or cross-site scripting – was released too early without adequate testing,” adds Curtis.
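The SQL-injection class of flaw Curtis names is straightforward to demonstrate. The sketch below, using Python's built-in sqlite3 module, contrasts a vulnerable query built by string formatting with a parameterized query; the table, user data and attack string are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # VULNERABLE: attacker-controlled input is spliced directly into the SQL text
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: the driver binds the value separately; input is never parsed as SQL
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # injection succeeds: the WHERE clause is always true
print(find_user_safe(payload))    # returns nothing: no user literally has that name
```

The fix costs one line at development time; discovering the same flaw in production costs incident response, disclosure and reputation.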
On the other hand, the threat of being left in the dust is also a legitimate concern. Ponemon, for instance, recalled an unnamed printer manufacturer whose security issues took so long to resolve that “a competitor beat them to the punch and they lost some market share as a result.”
Indeed, for some companies, there really is such a thing as too much security. Yoran noted that for certain consumer apps or games, excessive security or privacy policies can be “intrusive to the user experience and, in many cases, contradictory with other objectives of the developer” – perhaps short-circuiting opportunities for customization and targeted marketing.
So how do developers and IT professionals work together to employ responsible security practices, while not taking unnecessary precautions that slow innovation?
For starters, SecDevOps – the collaboration between IT security professionals and software developers – helps ensure that security and privacy protections are baked into the earliest stages of design and development.
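In practice, SecDevOps often takes the form of automated security gates in the build pipeline, so developers get findings before merge rather than after release. The sketch below is one hypothetical such gate; the package names, versions and advisory list are all invented for illustration.

```python
# A minimal, hypothetical "security gate" a SecDevOps pipeline might run
# before merging code: fail the build if any pinned dependency appears on
# a known-vulnerable list. All names and versions below are made up.
KNOWN_VULNERABLE = {("leftpadlib", "1.0.2"), ("oldcrypto", "0.9.1")}

def security_gate(pinned_deps):
    """Return the sorted subset of (name, version) pins with known advisories."""
    return sorted(set(pinned_deps) & KNOWN_VULNERABLE)

deps = [("requestslib", "2.3.0"), ("oldcrypto", "0.9.1")]
flagged = security_gate(deps)
if flagged:
    print(f"build blocked: {flagged}")  # the vulnerable pin is caught pre-merge
```

Real pipelines would use an actual advisory feed and scanner, but the principle is the same: the check runs on every change, not once per release.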
Yoran says that development and security teams should spend more time together “building relationships and cross-pollinating so that the developers learn to appreciate that the security folks are on their side. It’ll be a cultural shift for many development teams, but it will yield substantial long-term benefits.”
Rather than viewing security as a hindrance to progress, companies that practice some form of SecDevOps often see security as a key differentiator and an innovation in and of itself.
“It starts with the people who are the engineers or the creative people who are developing new apps and new devices; to have them actually think about security as part of their early-stage process during the development cycle,” says Ponemon, citing Apple as a prime example.
Secure development is especially important for IoT devices such as connected cars, home appliances and medical devices, all of which have the potential to cause serious physical harm.
“Every IoT device vulnerability discovered to date could have been averted had secure development practices been adhered to,” asserts OTA’s Spiezle, who noted that even after development is complete, companies must prepare and reserve budget for ongoing security maintenance and bug patching.
Conducting a thorough, realistic risk assessment of new applications and digital products can also help developers write more stable, secure code, while striking the right balance between security and innovation.
“We’ve seen applications that are over-engineered in terms of quality, and we’ve seen many that are under-engineered,” says Curtis. “The problem is that most executives don’t have a way to tell whether they are over or under in terms of quality, security and software risk. Having a measurement system, with benchmarking of software attributes, provides a level of insight to executives to make informed tradeoffs – both to avoid having too much quality and to avoid having quality lapses in critical application areas.”
Without this formal risk assessment process in place, executives and managers usually underestimate the damages, Curtis says.
“For security, the question becomes what data must be protected, how much of it is there, and what are the financial damages if the data is breached,” Curtis adds. “These answers are then compared to the cost of quality assurance and the lost revenue from additional time before going operational. There is also a financial effect from loss of reputation and trust that must be factored in to determine the point described as ‘good enough.’”
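The trade-off Curtis describes reduces to comparing an expected breach loss against the cost of the security work that would avert it. The arithmetic below is a toy sketch; every figure in it is assumed for illustration, not drawn from any real assessment.

```python
# Hypothetical figures for illustration only -- every number here is assumed.
breach_probability = 0.05        # estimated annual likelihood of a breach
breach_damages = 4_000_000       # estimated direct, regulatory and reputational cost
expected_breach_loss = breach_probability * breach_damages

extra_qa_cost = 120_000          # added quality-assurance spend
delay_revenue_loss = 50_000      # revenue lost to a later launch
cost_of_security = extra_qa_cost + delay_revenue_loss

# "Good enough" in Curtis's sense: the mitigation costs less than the risk it removes.
print(expected_breach_loss, cost_of_security)
print(expected_breach_loss > cost_of_security)  # True: the extra QA pays for itself
```

The point of a formal measurement system is to replace the guessed inputs above with benchmarked ones, so executives are comparing real numbers rather than intuitions.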
However, the benefits of risk assessment could be diminished if the organization conducting it lacks the corporate governance to apply its findings responsibly. To that end, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) promotes innovation and competitiveness through the advancement of standards and technology, including a cybersecurity framework that is currently undergoing significant revisions.
Katerina Megas, manager of NIST's Cybersecurity for IoT Program and a pilots program manager with the same organization's National Strategy for Trusted Identities in Cyberspace, says that although risk assessment techniques can prove useful, “these need to be [incorporated] into the corporate governance process, where humans can make informed decisions about the potential trade-offs to consider and accept any residual risk that cannot be remedied.”
A mature corporate governance program includes supporting policies and processes to ensure, evaluate and measure that the organization's activities continue to support its overall goals and objectives, she adds.
As long as companies adhere to their own self-imposed guidelines, corporate leadership is less likely to recklessly rush a product to market.
If companies can put a little less pressure on developers and give people the opportunity to innovate even on the security issues, that can be extraordinarily helpful, Ponemon explains. “So people who are a little slower getting their job done but are building a better security architecture are not going to be penalized.”
Ultimately, companies that find themselves in the center of a security firestorm may learn the hard way that the repercussions are far worse than if they had just remained patient with their developers.
“Quality assurance does not stifle innovation when properly performed,” says Curtis. “On the other hand, major disasters from poorly engineered systems can stifle innovation because resources are drawn away for repair and damages, and executives become risk averse.”
Four decades of data now demonstrate that if proper discipline is used during development, high quality software-intensive systems can be delivered in shorter timeframes and cost less to maintain, Curtis says.