When you ask an information security professional in IT what he or she has been doing to protect end users from malware, you might get a laundry list that starts with anti-virus, firewalls, IPS, and gateways. If you then repeat that list back, you have to wonder how, given the maze of obstacles between users and their desktops, users can actually do their jobs. You'd be forgiven for thinking that the intent was to lock in the users rather than lock out the bad guys.
Most security solutions are monolithic. They exist to simplify policy enforcement, not to enable users. It seems obtuse to assume a static rule system could cover all possible user activities, yet this is the status quo. The one-size-fits-all approach ignores the fact that security needs are as varied as the users themselves.
There are many hierarchically deployed monolithic security tools out there:

- Tools that protect networks tell the user: “Alert! Policy violation: You can't visit this website.”
- Tools that protect endpoints tell the user: “Alert! Policy violation: You can't install this application.”
- Tools that protect applications tell the user: “Alert! Policy violation: You can't push this button.”
- Tools that protect data tell the user: “Alert! Policy violation: You can't send this file.”
These tools don't encourage productivity, they hinder it. The user is in a perpetually disabled state. There is a fundamental disconnect in information security today: architecture design does not factor in user experience. Users don't think of their daily work lives as workflows that traverse the OSI stack, but rather as tasks that need to get accomplished. Perhaps it's time for administrators to focus on the “U” in user rather than the “I” in IT.
According to Forrester, 84 percent of U.S. adults now use the web daily. What subset of that use happens at work? There is no list of bad IPs or sites out there that will protect your infrastructure from a targeted attack. It will come at users from a fresh IP, or a domain that has never attacked anyone before. Putting a gateway or a proxy in the user path will likely end up filtering out more good IPs than bad.
Enforcing absolute rules based on software signatures or behavioral heuristics numbs users into clicking whatever it takes to make the warning messages go away. Desktop virtualization puts the security onus on the user, who must keep track of what is allowed or functional on each desktop (e.g., this is the desktop where web browsing is allowed; this other one is where SAP can be accessed).
Targeted malware exploits vulnerabilities even in white-listed applications that IT has “secured” and “locked down.” Beyond that, when a user opens a browser to get to salesforce.com, the browser is not the task; it is the means of accomplishing the task. Putting the browser in a sandbox doesn't protect the user when someone hijacks the browser session. The reality is that every URL is a cloudy blob of applications. Should one of them be malicious, the browser, sandboxed or not, won't protect one application from another, and it won't protect the desktop should the malicious application break free of the box.
If you think your data-loss-prevention (DLP) policy is going to stop users from sending documents that clients or managers want to see, you are sadly mistaken. WikiLeaks is a great example of DLP gone awry. The Department of Defense surely used technologies even more sophisticated than enterprise-class DLP, but they clearly did nothing to stop the data from leaving, because they didn't take the user's behavior into account.
It's time to balance the scales between protection and productivity. It's time for an anthropological approach that protects users based upon enablement, not disablement. The exit from the rabbit hole has been through the mirror all along; it just required looking at the user occupying the body of the IT administrator looking back.