Take me to the river
As the results of the Anthem breach investigation make their rounds, the security industry is reminded once again that phishing is a highly effective attack method. Barriers to entry are low, and once an attacker invests the time and effort to create a convincing and effective phishing email, that same phish can be easily used again and again until a single user falls victim, opening doors for the attacker to waltz right through.
Industry blogs, articles, and media offer plenty of conventional wisdom about implementing end user security awareness and training as a method for preventing phishing attacks. The fact of the matter is, though, that many of today’s malicious emails are carefully crafted—some professionally designed—and include personal references that make it difficult for even a keen eye to detect a phony message. Security and awareness programs should absolutely continue, but because identification is tough and users are only human, programs must primarily focus on controls that can be implemented to stop or mitigate damage once the attacker has successfully phished a victim.
I don’t know why I love her like I do
To decrease the number of bogus emails reaching the user in the first place, organizations should use spam and web filters. While not 100% effective, filters can wipe out some of the more obvious threats, stopping them from landing in inboxes where busy users might accidentally interact, thereby spreading malware or providing additional information necessary for an attacker to inflict further harm. It’s also worthwhile to consider implementing a sandbox where emails can be inspected and/or scrubbed prior to delivery.
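To make the filtering idea concrete, here is a minimal, hypothetical sketch of a rule-based scorer that quarantines suspicious messages before they reach an inbox. The patterns and threshold are illustrative assumptions; production filters rely on far richer signals (sender reputation, SPF/DKIM results, trained classifiers).

```python
import re

# Hypothetical phishing indicators -- illustrative only. Real filters
# combine many more signals than simple keyword patterns.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent.*action required",
    r"click (here|below) immediately",
]

def phishing_score(subject: str, body: str) -> int:
    """Count how many suspicious patterns appear in the message."""
    text = f"{subject}\n{body}".lower()
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, text))

def should_quarantine(subject: str, body: str, threshold: int = 2) -> bool:
    """Hold the message for inspection instead of delivering it when the
    score meets the threshold, keeping it out of busy users' inboxes."""
    return phishing_score(subject, body) >= threshold
```

A message like "URGENT: action required — please verify your account" would trip two patterns and be quarantined, while ordinary mail passes through untouched.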
All the changes that you put me through
The goal of many phishing scams is to extract credentials from victims. Once valid credentials are obtained, the attacker can creep around in company systems, often undetected for long periods, and access all sorts of proprietary and sensitive information. It’s a pretty good deal if you think about it: compromise one user, obtain the keys to the kingdom.
Gaining unfettered access shouldn’t be so easy, however. Network administrators, in particular, are notorious for having too much access to too many parts of the network, including all of the data stored on or passing through it. Admins do, indeed, require access to more data, systems, and applications than the average user, so to keep highly sensitive data secure in light of this fact, security teams should ensure that data does not all reside in one central location, nor should the different pieces be accessible with the same credentials. Think of it this way: different locations, different access codes.
Organizations should segment data and require privileged users to use separate logins for each area. In other words, an admin should not be able to use the same credentials to access both the company’s customer database and the company’s financial information. Is that more of a pain? Sure it is. Does it put a major roadblock in front of attackers? Absolutely. Particularly sensitive data can also be kept on air-gapped networks, further reducing the possibility that if one system is compromised, everything else will be too. Raise the stakes on the attacker by erecting additional barriers at every turn.
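The "different locations, different access codes" principle can be sketched as a per-segment credential namespace, so that a credential stolen for one segment is worthless against another. The segment and account names below are hypothetical:

```python
# Illustrative sketch: each data segment keeps its own credential set.
# A valid login for the customer database does NOT work against the
# financial system, even for the same administrator.
SEGMENT_CREDENTIALS = {
    "customer_db": {"admin_cust": "hash_of_password_1"},
    "financials":  {"admin_fin":  "hash_of_password_2"},
}

def can_access(segment: str, user: str, password_hash: str) -> bool:
    """Grant access only if the credential is valid *for this segment*;
    credentials valid elsewhere are rejected outright."""
    segment_users = SEGMENT_CREDENTIALS.get(segment, {})
    return segment_users.get(user) == password_hash
```

An attacker who phishes `admin_cust` can reach the customer database but is stopped cold at the financials, rather than walking off with the keys to the kingdom.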
For regular users, the principle of least privilege should always apply, though in practice many companies’ IT and security policies and procedures are laxer than is warranted. Few people in an organization actually require access to as much information as they are granted. Security teams should regularly audit user groups and ensure users are authorized only for what they need to perform their job functions. In addition, automated account de-provisioning can be implemented so that permissions are adjusted promptly when an employee leaves the company or moves to a different department.
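An entitlement audit of the kind described above boils down to comparing what each user holds against what their role requires and flagging the excess. A minimal sketch, with hypothetical role-to-permission mappings:

```python
# Illustrative role requirements -- in a real audit these would come
# from an identity management system, not a hard-coded dict.
ROLE_REQUIRED = {
    "accountant": {"financials:read"},
    "support":    {"customer_db:read"},
}

def excess_permissions(role: str, granted: set) -> set:
    """Return the permissions a user holds beyond what the role needs --
    candidates for revocation in a least-privilege audit."""
    return granted - ROLE_REQUIRED.get(role, set())

def deprovision(granted: set) -> set:
    """Automated de-provisioning on departure: revoke everything."""
    return set()
```

A support rep who has somehow accumulated `financials:read` would surface immediately in the audit, and a departing employee’s grant set drops to empty rather than lingering as an orphaned account.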
Take my money, my cigarettes
Many times attackers will send “urgent” requests to their targets, stating that the user “must provide information immediately.” Very often the “information” requested is the username and password to a particular account. To remove password disclosure from the equation, companies should consider using a password manager. Doing so means that users won’t know (and thus won’t be able to give up) individual site/application/system passwords. Layering on two-factor authentication (2FA) means that even if an attacker manages to co-opt the master password to the password manager, he will still be denied access unless he has also identified and gained access to the second factor. When a password manager isn’t implemented, 2FA becomes an even more critical control.
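To show what that second-factor check actually involves, here is the standard time-based one-time password (TOTP) algorithm from RFC 6238, written against the Python standard library. This is a sketch for illustration; real deployments should use a vetted library and handle clock drift across adjacent time windows.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password:
    HMAC-SHA1 over the current 30-second time counter, dynamically
    truncated to a short numeric code."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_second_factor(secret_b32: str, submitted: str, now: int = None) -> bool:
    """Deny access unless the submitted code matches the current window --
    a phished master password alone is not enough."""
    now = int(time.time()) if now is None else now
    return hmac.compare_digest(totp(secret_b32, now), submitted)
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a phished password without the matching one-time code gets him nowhere.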
Needless to say, more companies also need to regularly use strong encryption for sensitive data assets. Encrypted data is less valuable to adversaries, as decrypting it would require considerable extra effort. Because some organizations will continue to store private information in the clear, let them be the low-hanging fruit, not you.
I haven’t seen the worst of it yet
Stacking security controls is the best way to ensure that if one of your users is compromised, your company won’t become the next Anthem (or Sony or Amazon or Dropbox or…). It’s nearly impossible to prevent phishing; criminals have become too good, and users cannot be the primary defense against adversaries. The security tool arsenal is vast, and practitioners need to be making better use of spam and web filters, sandboxing, password management, and encryption. They also need to carefully monitor and control access, especially to sensitive systems and data, segment sensitive data across divergent systems, and apply the principle of least privilege.
This list of tools and techniques isn’t exhaustive, but it’s a step in the right direction. Out-tricking the tricksters is hard, but it’s nothing enterprise security pros can’t do with a little extra concerted effort.