
Compliance: Watch your step!

Avoiding the perilous pitfalls of compliance

It’s no secret that Fortune 1000 CISOs struggle with compliance, but the pitfalls that fuel the most fury aren’t typically fights with regulators (although those do come in a close second). No, the battle is often internal, such as fighting over jurisdiction for the California Consumer Privacy Act (CCPA), a set of privacy rules that has no direct security requirements, which is precisely what makes ownership of it contested.  

Or CISO angst might come from marketing, which threatens a wide range of privacy and security protections with activity- and ad-tracking software that can unintentionally capture all manner of PII and other sensitive data. Sometimes the battle is with third-party vendors that fail to deliver the security and privacy protections they promised.  

And periodically it’s CISOs fighting the other departments that hire those third parties, entrusting them with critical data without bothering to ask Security to perform due diligence. 

Privacy rules these days are getting most of the attention, whether it be CCPA, the European Union’s GDPR or one of the many proposed new privacy bills, including a handful actively being pursued now in the U.S. Congress. Should privacy be part of the CISO’s mandate?  

Even though some privacy rules have few specific security requirements, there’s an argument that they absolutely impact security in that they force the company to deliver extra protection to a subset of corporate data. That, in turn, forces different decisions from security analysts.  

If an enterprise charges a premium to access certain fudge recipes and an attack releases all of those recipes into the wild, there is no compliance exposure. It may deliver a revenue hit, prompting a likely series of nastygrams from the CFO and some LOB executives, but regulators likely won’t care.  

Or will they? If the breach highlighted security holes, which could just as easily have exposed sensitive PII data or payment card information but didn’t, compliance enforcers may still ask a lot of questions. 

Of greatest concern for compliance are sections of the new rules that get less attention, such as the fact that GDPR defines IP addresses—which U.S. companies have rarely treated as especially sensitive—as PII, forcing them to be treated in the same way as a customer’s Social Security number, purchase history or payment card number.  

Aanand Krishnan, CEO and founder of Tala Security, points to a different little-discussed part of CCPA: a requirement to allow consumers to authorize agents on their behalf.  

From the CCPA: “Establishing rules and procedures to further the purposes of Sections 1798.110 and 1798.115 and to facilitate a consumer’s or the consumer’s authorized agent’s ability to obtain information pursuant to Section 1798.130, with the goal of minimizing the administrative burden on consumers, taking into account available technology, security concerns, and the burden on the business, to govern a business’s determination that a request for information received from a consumer is a verifiable consumer request, including treating a request submitted through a password-protected account maintained by the consumer with the business while the consumer is logged into the account as a verifiable consumer request and providing a mechanism for a consumer who does not maintain an account with the business to request information through the business’s authentication of the consumer’s identity, within one year of passage of this title and as needed thereafter.” 

Krishnan argues that the intent of this provision might be to allow an attorney to represent a large group of consumers wanting to opt out of some provision. The problem is that the provision not only permits such representation, it also requires a specific authentication mechanism: a means of verifying the consumer’s request and of authenticating that the attorney represents a specific list of customers.  

And yet, few companies have created such a mechanism. And if they have, have they made it easy to find, both for their customers and an authorized agent? Is there a prominent reference on the homepage for both groups to find? Is it listed under an easy-to-find pulldown menu on every page and phrased in such a way to make it clear what it is offering? 

As for the authentication, many of the more sophisticated authentication methods—such as continuous authentication, using multiple means of biometrics (typing speed, errors per minute, pressure applied to keys, angle a phone is held, etc.)—are designed to authenticate one individual and then grant them access to whatever is accessible via their privilege level. Authenticating the agent is straightforward enough, but how to apply that approach to granting access to potentially hundreds of customer accounts? These are questions that California regulators are likely to ask. It’s not merely compliance, but creating mechanisms to permit future compliance. 
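One way to make that gap concrete is to model agent authorization explicitly: the agent’s own identity is verified once, but access is granted consumer by consumer rather than at a privilege level. A minimal sketch, with hypothetical class and field names not drawn from the CCPA text:

```python
from dataclasses import dataclass, field

@dataclass
class AuthorizedAgent:
    """An agent (e.g., an attorney) verified to act for specific consumers."""
    agent_id: str
    verified: bool = False
    # Per-consumer authorizations, not blanket access: each consumer
    # must individually appear here before the agent may act for them.
    consumer_ids: set = field(default_factory=set)

    def may_request_for(self, consumer_id: str) -> bool:
        # Both checks must pass: the agent's identity is verified,
        # and this specific consumer authorized this specific agent.
        return self.verified and consumer_id in self.consumer_ids

agent = AuthorizedAgent("atty-001", verified=True, consumer_ids={"c-1", "c-2"})
print(agent.may_request_for("c-1"))  # True
print(agent.may_request_for("c-9"))  # False
```

The design choice this sketch illustrates is that authenticating the agent and authorizing the agent for each account are separate records, which is exactly the mapping most identity systems built for one-user-one-account do not maintain.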

Another Krishnan concern involves site-tracking systems coming from Marketing, without the blessing of either the CISO or the CIO. He said that he worries that CISOs are “not paying [enough] attention to web sites.” Sometimes, Marketing will use traffic and activity analysis programs, such as freeware Google Analytics, to track information. This often exposes PII from users filling in online forms. “Marketing people put whatever they want on the website,” Krishnan said, adding that he was investigating one client site and found “80 third parties on their web site. Talk to your marketing team and see what they are doing on the website and you’re going to be scared.” 

Some suggestions from Krishnan to minimize the risk from these website invaders: Try using HTML iframe tags to protect sensitive content, and offer to host your own copy of Google Analytics (tell Google “Give me a copy and I’ll serve it directly,” Krishnan suggests) so that the data is still collected but doesn’t leave your site, potentially placating regulators. 
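Krishnan’s iframe suggestion can be sketched in markup. This is a hypothetical example with placeholder hostnames and paths: the sensitive form is served from a separate origin and embedded in the marketing-tagged page, so the browser’s same-origin policy keeps third-party scripts on the outer page from reading what the user types.

```html
<!-- Outer marketing page: tracking tags load here. Serving the
     analytics script from your own origin (a self-hosted copy)
     keeps the request from going out to a third-party CDN. -->
<script src="/js/analytics.js"></script>

<!-- The sensitive form is isolated on a separate origin; the
     sandbox attribute further restricts what the framed page can do. -->
<iframe src="https://forms.example.com/checkout"
        title="Checkout form"
        sandbox="allow-forms allow-scripts">
</iframe>
```

In practice the sandbox allowances depend on what the framed form needs; the primary barrier here is the separate origin, with the sandbox attribute as defense in depth.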

Not only can this data exfiltration be a problem for various privacy requirements, it can also be a terrific starting place for an attack. PCI recently issued an advisory about third-party services, warning that this move by marketing could also endanger PCI compliance.  

It said: “Third-party services and products should be reviewed to identify the impact on the organization’s PCI DSS scope. It is recommended that organizations prohibit external assets on pages that accept cardholder data, as it extends the cardholder data environment scope to any environment hosting those assets. Customer contact portal vendors are an example of third-party service providers that should be reviewed as part of the organization’s scope. Removing or disabling unnecessary plug-ins and services is also recommended. It is important to ensure that any third-party scripts that are present in other areas of the website cannot gain access to payment pages or other sensitive areas.” 
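One widely supported way to enforce that last recommendation in the browser is a Content-Security-Policy on payment pages. A hedged sketch, not a PCI-mandated configuration; the policy values are illustrative:

```html
<!-- In production this is normally sent as an HTTP response header
     on the payment page. Scripts may load only from the site's own
     origin, so a marketing tag added elsewhere cannot execute here. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self'">
```

A report-only deployment (header form) lets a team surface policy violations before enforcing the block, which is useful when nobody is sure how many third-party scripts are already on the site.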

Nick Baskett, a U.K.-based compliance consultant who serves various firms as acting data privacy officer (DPO), says that he sees more CISO compliance problems stemming from CISO communications snafus.  

“CISOs are still not great at communication. It’s hard for their bosses or peers to take their advice when they are not speaking the same language,” Baskett says. As one example, he talked about a CISO he was working with who refused to sign a DPIA (data protection impact assessment) and got into a chicken-and-egg endless argument with five project managers who just wanted to get the issue resolved. 

What started this battle was that HR had retained a payroll vendor without bothering to have Security approve it or run due diligence to make sure the firm had appropriate security mechanisms in place. It turned out that it did not, so the CISO’s rejection of the DPIA request was quite legitimate.  

The payroll vendor had argued to HR that “we have big customers so you should trust us,” an argument that won over HR, Baskett said. “What processes did they have in place? Who had access? How did they control it? [Once Security started testing] it all started to fall apart.” 

There was also a political dynamic to the payroll vendor, in that the employee who hired them was an inside officer on the board of directors. 

In short, when the board member approved, no one other than the CISO objected.  

So far, so good. The communication problem was that the project managers were not asking for the CISO to relent and sign the DPIA. They were merely asking what the payroll vendor needed to change to meet security requirements. And yet, the CISO just kept repeating that he wasn’t going to sign. Said the managers: “You’re not hearing us. You’re not listening. Why don’t you listen to us? We’re hoping to go live in three weeks. What do we need to do? Under what circumstances could you sign off?” according to Baskett.  

Eventually, someone broke the communication deadlock and the CISO proposed “a series of milestones that they need to make, including sending over live data under certain circumstances,” Baskett says.  

The company gave the payroll firm some test data and a separate secure environment in which to run tests. The payroll firm was also told to add whitelisting to its firewall. Eventually, the CISO approved the DPIA when his conditions were met, Baskett says. 

Another compliance concern that Baskett mentioned, one especially germane for U.S. companies dealing with the new privacy rules, is that some CISOs get overly focused on the business risk compared with the risk to data subjects, i.e., the customers. He pointed to how CISOs use risk registers.  

“I see these risk registers, with 30, 40 pages of Excel spreadsheet with red, red, red. The board doesn’t know what that means. You cannot assess the damage to the business without also assessing the damage to the individuals. (Individual risks) have to be dealt with on the risk register in a way that the board can understand. Example: Our web servers have a vulnerability that has this impact and this likelihood. Instead of listing a lot of risks, just list a few risks and give them the context,” Baskett says. 
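Baskett’s point about risk registers (fewer risks, with context the board can parse) translates into structure: every entry carries both business impact and data-subject impact plus a likelihood, so it can be ranked and summarized in one line each. A minimal sketch; the scoring scale and sample entries are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    business_impact: int  # 1 (minor) .. 5 (severe), illustrative scale
    subject_impact: int   # harm to data subjects (customers), same scale
    likelihood: int       # 1 (rare) .. 5 (near-certain)

    def score(self) -> int:
        # Weight customer harm equally with business harm, per the
        # argument that one cannot be assessed without the other.
        return (self.business_impact + self.subject_impact) * self.likelihood

register = [
    Risk("Web server vulnerability exposing account data", 4, 5, 3),
    Risk("Unvetted marketing script on checkout pages", 3, 4, 4),
    Risk("Stale third-party vendor access credentials", 2, 3, 2),
]

# Present the board a short, ranked list instead of pages of red cells.
for r in sorted(register, key=Risk.score, reverse=True):
    print(f"{r.score():>3}  {r.description}")
```

The point of the sketch is the shape, not the numbers: a handful of entries, each with enough context to support a board-level decision, rather than 40 pages of color-coded cells.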

He explains that some CISOs try to decide themselves what is worth the risk, but there comes a point when that is not the CISO’s call to make. Instead, he suggests, CISOs should go to the board, lay out options and the associated implications, and let the board determine its own risk appetite and risk tolerance. Then the CISO can take those marching orders and make appropriate decisions, Baskett says. 

Some argue that compliance rules tend to be either too prescriptive or too vague. “One of the things that drives me nuts is the generally very ambiguous wording that they put into this, such as ‘in a reasonable time,’” says Anthony Meholic, a former information security manager for JPMorgan Chase, former CISO for Republic Bank and the current chief security officer (CSO) for The Bancorp Bank headquartered in Delaware. Meholic says that Bancorp Bank is the U.S.’s second-largest gift card issuer, including the Visa gift card.  

“The legislation is being drafted by people who are not involved in that area,” Meholic says. “CCPA initially was atrocious. It was not written by anybody who was in the industry. The content was just a mess.” 

It’s hard for compliance rules to win in the specifics versus generic battle. Given the massive number of companies of different sizes and different verticals that are subject to compliance in so many countries, it’s almost impossible to craft specific rules that are appropriate and fair to all of them.  

The intent behind the vagueness (yes, rule writers, it’s a feature not a bug) is to give regulators in the field flexibility to look at what an enterprise has done and why they say they did it, and to try and make appropriate rulings as to whether the enterprise complied with the intent of the rule. 

That said, it’s hard for CISOs to know what they are supposed to do when the rules are so vague. Consider GDPR’s data-breach notification rule: “In the case of a personal data breach, the controller shall, without undue delay, and where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons.”  

It sounds as though it’s setting a specific timetable (72 hours) for notification, but it wants the clock to start running when the controller (which is the enterprise) becomes aware. When does Boeing or Walmart become aware of something? Is it when a security analyst first notices a log anomaly? When the CISO is first told of a possible breach? When the CISO becomes convinced it’s a real breach? When the CEO is told? When the CEO is convinced? Maybe it’s when the board is told or a majority of board members become convinced? The rule offers no practical hints. 

Other than phrasing, Meholic says, his team handles compliance issues without much pain. In their cloud environments (Amazon’s AWS and Microsoft Azure), for example, Meholic insists that the cloud providers issue them their own encryption key, one that is different from any other cloud tenant’s. And with AWS, Meholic negotiated the right to perform periodic pen testing, which his team tells Amazon about right before they begin. It’s not unannounced, but it’s announced with almost no warning. 

But Meholic says that he has concerns about where in the enterprise the main responsibility for dealing with different compliance rules sits. He says that he’d prefer for CCPA to sit within Compliance. “There’s this tug-of-war within the corporate hierarchy of where it should sit. In a perfect world, all compliance should be in the compliance group.” 

Another concern is that privacy enforcement requires a very different perspective than security enforcement. Brian Rizman, a partner at Edgile, a security and compliance consulting firm, argues that the two teams should each free up a specialist to work in the other group, sort of a privacy version of DevSecOps (where a security specialist is embedded into LOB units). “A privacy officer should be teaming with security,” Rizman says. 

Other experts point to more fundamental procedural issues. Shawn Fohs, a managing director with EY (formerly Ernst & Young) who leads the consulting firm’s privacy and cyber response practice, said that his compliance wish list for his clients starts with redoubling efforts to investigate any third parties they work with. Fohs also said that enterprises still need to focus on getting accurate global datamaps, which means getting a much better handle on shadow IT. 

As for the datamap challenge, Fohs says, “It’s certainly one of the most complex aspects. They still don’t know what they have and where they have it.”  

Another concern is the reasonableness aspects of compliance enforcement. If a rule requires something that is impossible for the business to deliver, Fohs said, the CISO must put the onus back on the compliance regulator to propose something viable, both from a cost and operational perspective.  

“For a regulator to enforce it, [regulators] have to be able to say how they expect it to be done.” 

Richard Bird, the chief customer information officer for Ping Identity, says that one of his biggest concerns about security compliance is that CISOs, for various reasons, are testing security while limiting themselves in ways that no cyberthief is ever limited.  

Hence, he’s questioning whether such testing delivers realistic results. 

“Technologists, with the best of intentions, pass data to the auditing organizations that has been ‘massaged’ or ‘cleaned up.’ Pen testers are limited to only those actions that have been predetermined, such as terminating the pen test immediately if they get anywhere near a production environment,” Bird says. “Attackers don’t recognize or honor these types of compliance-driven behaviors. No cyberthief runs to the list of accounts and users in a sample set to find an exploitable pathway. They tear through every part of the technology stack to find precisely what wasn’t included in the sample.” 

Bird suggests using a staging environment, but one that is much more comprehensive and realistic than is typical.  

“Create a fully-realized production environment and then beat the hell out of it,” he says, suggesting an approach that is similar to an engineering destructive testing environment as well as the fake Western frontier town scene from Blazing Saddles.  

When running enterprise security in 2020, a CISO must take inspiration from anywhere a CISO can.

Evan Schuman

I’m a Computerworld columnist and a cybersecurity writer for McKinsey. I’m also the former Editor-in-Chief of a retail IT media outlet called StorefrontBacktalk. Officially, I am the CEO of a content company called The Content Firm LLC.
