Eye in the sky
Yesterday morning the Federal Communications Commission (FCC) passed new—and controversial—rules regarding how internet service providers (ISPs) may use customers’ “sensitive” personal data. The privacy rules, proposed by FCC Chairman Tom Wheeler, took a first pass through the Commission back in March in the form of a vote (approved 3-2) that paved the way toward the strict guidelines that will put control over the use of personally identifiable information (PII) back into the hands of the data owners: the users themselves.
What makes this ruling so groundbreaking—and controversial—is that the definition of “sensitive PII” goes beyond the Federal Trade Commission’s (FTC’s) list of name, email address, health information, financial data, Social Security numbers, and credit card information. The FCC rules also say that any information captured about users’ precise geo-location, web browsing history, app usage, TV viewing history, or content of communications (including text messages, emails, and call records) is off limits when it comes to sharing with or selling to third parties unless the user specifically chooses to opt in.

As part of the opt-in process, service providers must now clearly explain what data are being collected during the course of device usage, why the data are collected, how the collected data could be used (if consent is given), and with what types of companies PII could be shared. If the user does not provide consent, the ISP is out of luck insofar as monetization of sensitive data goes. Non-sensitive information, including email address and service tier information, remains “in play” for service providers unless the consumer opts out; ISPs can continue to use customer data themselves for things like reviewing account information and suggesting new or alternative products and/or services.
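The default-permission logic above—sensitive data requires an explicit opt-in, non-sensitive data is usable unless the customer opts out—can be sketched in a few lines of code. This is purely illustrative: the category names and the `may_share` function are hypothetical, not taken from the FCC order.

```python
# Minimal sketch of the opt-in/opt-out distinction described above.
# Category names and this API are illustrative assumptions, not FCC text.

SENSITIVE = {  # sharing/selling requires explicit opt-in
    "precise_geolocation",
    "web_browsing_history",
    "app_usage",
    "tv_viewing_history",
    "communications_content",
    "health_information",
    "financial_data",
    "ssn",
}

NON_SENSITIVE = {  # usable unless the customer opts out
    "email_address",
    "service_tier",
}

def may_share(category: str, opted_in: set, opted_out: set) -> bool:
    """Return True if the ISP may share this data category with third parties."""
    if category in SENSITIVE:
        return category in opted_in       # default: no sharing
    if category in NON_SENSITIVE:
        return category not in opted_out  # default: sharing allowed
    return False                          # unknown categories: be conservative

print(may_share("web_browsing_history", opted_in=set(), opted_out=set()))  # False
print(may_share("email_address", opted_in=set(), opted_out=set()))         # True
```

Note the asymmetry in defaults: absent any action by the customer, a sensitive category is locked while a non-sensitive one is open—which is exactly the change ISPs are objecting to.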
I am the maker of rules
The FCC ruling is a huge hit to ISPs, which are trying to compete for revenue against the likes of social media and search engine giants that are subject to the less rigorous FTC rules. In an age when every piece of data companies collect is monetized to the fullest extent possible, ISPs are crying foul.
Indeed, the new rules put service providers at a disadvantage as compared to media and online contemporaries when it comes to advertising and marketing dollars, and representatives from broadband providers, including AT&T and Verizon, have come out against the ruling and even hinted at possible legal action to stop the rules from taking effect.
Money makes the world go ‘round, as we all know, and while the new rules deliver a blow to ISPs, they are a big win for consumers’ privacy rights, as well as a helping hand for security awareness. One of the main problems with information security is the fallibility of humans. This is not to say security gets a “pass” on vulnerabilities, system misconfigurations, or ignored or mishandled alerts, nor should users be continually blamed when they’re caught out by a tricky phishing campaign. The fact is, though, that the people who run and use tools and services—whether they’re security practitioners or end users—are only human, and mistakes will be made.
Dealing with fools
For many years, information security teams have been running awareness campaigns to help end users become more conscious of and conscientious about security best practices. These campaigns have had varying levels of success (depending on whom you ask), but part of the reason awareness programs have not been “very” or “extremely” successful is that, for the most part, end users are not required to participate in security. Yes, we ask users to change network/device/application passwords regularly, but doing so has become a rote action, not something the end user perceives as valuable or as an improvement over not doing so. When it comes to the big breaches that affect consumers’ credit cards, other financial information, or even PII, most of the fallout is handled by a third party; responsibility has rarely been actively in the hands of consumers.
I can cheat you blind
To illustrate a successful communications campaign, take a look at a section of the Massachusetts 2016 election ballot. In Massachusetts, the ballot questions on which citizens may cast a vote include increasing the number of charter schools, legalizing the use of recreational marijuana, expanding gaming, and preventing cruelty to farm animals. The legalese behind these questions is exactly what one would expect: long and somewhat convoluted. Many voters won’t read the questions in full, so Massachusetts simplified them—because an accurate vote is important to the state.
If ISPs follow a similar format, it will be one small step for man, one giant leap for security-kind. For example:
“’Opt In’ means your name, email, contact information, geo-location, web browsing history, app usage, TV viewing history, and content of communications may be sold to or shared with partners or third parties. If you do not opt in, the above information cannot be shared with or sold to partners or third parties.”
And I don’t need to see anymore to know that
Because it’s not in ISPs’ best interests to make messaging clear and uncomplicated, it’s unlikely they’ll choose this route unless mandated by the FCC. The hope, however, is that the more actively data owners are forced to take responsibility for their own privacy, the more seriously they’ll consider privacy and security matters. Security could use a bigger “army” to fight infosec battles; let’s hope this is another move closer to making that happen.