Firewalls: No Simple Solution to Network Security but an Essential Element Nonetheless

By Katherine Teitler

Firewalls are a mainstay of security technology, an essential element no organization can afford to be caught without. Lately, though, given the prevalence of application-based attacks, some security practitioners have been expressing frustration with the limitations of firewalls, wishing (presumably) that they were a one-stop shop for protecting the network and all data travelling through it, at every layer.

As is destined to be the case in a technology-driven industry, security product companies emerge daily to take up the slack where firewalls fall short. Though it’s easy to be enticed by the bold claims and stated next-gen capabilities of new product offerings, firewalls show no signs of sunsetting anytime soon, largely because their efficacy is indisputable…for what they were designed to do (they won’t wash your dirty dishes or change a tire on your car, either).

Infosec Insider spoke with Marcus Ranum, an early inventor of firewall technology and current CSO at Tenable Security, Inc., to learn his take on the market and clarify what’s really needed to improve defenses against today’s threats.

Briefly explain the original purpose of firewalls.

We used to say "firewalls separate 'us' from 'them' for any given value of 'us' and 'them'." I think that's actually a pretty good description of their purpose, and it's still the purpose that they serve. Another description is that a firewall enforces a trust boundary. In other words, you have two networks that are trusted differently, and you want to keep them separate so you can continue to believe that they are different. In that model, without a firewall, it's all one great big network—which it clearly isn't, because some people want to think of this machine as "mine" and that data as "separate"—without trust boundaries, it's all everyone's network and everyone's data.
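
To make the trust-boundary idea concrete, here is a minimal sketch in Python (illustrative only, not any particular product's rule language): a default-deny check for traffic crossing from the untrusted side into the trusted one. The address range, allowed-port list, and permit() function are assumptions invented for the example.

```python
from ipaddress import ip_address, ip_network

# The trusted "us" network; everything outside it is "them".
# The range and the allowed-port set are illustrative assumptions.
US = ip_network("10.0.0.0/8")
ALLOWED_INBOUND_PORTS = {443}  # e.g., only the public web server is reachable

def permit(src: str, dst: str, dst_port: int) -> bool:
    """Default-deny for traffic crossing the trust boundary into 'us'."""
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    if src_ip in US and dst_ip in US:
        return True                                # never crosses the boundary
    if dst_ip in US:
        return dst_port in ALLOWED_INBOUND_PORTS   # inbound across the boundary
    return True                                    # outbound is allowed in this sketch

print(permit("203.0.113.7", "10.1.2.3", 443))   # True: permitted inbound service
print(permit("203.0.113.7", "10.1.2.3", 3389))  # False: stopped at the boundary
```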

Firewalls alone can’t protect today’s organizations; what is the best approach for managing and inspecting traffic once inside the perimeter?

Firewalls alone can't protect today's organizations because today's organizations want to do all sorts of crazy dangerous things in complete safety. That, of course, is not possible—so we blame the firewall for being unable to help. For example, organizations want to be able to download code from uncontrolled remote organizations and run it on local desktops behind their firewalls, then they're surprised when they find their machines have been infected with malware. What they don't realize is that, as bad as the browsers and malware are, the situation would be 100 times worse without the firewalls there.

The layered approach has always been to have a firewall, good system log analysis, and endpoint security; to use a browser that isn't full of holes; and to make sure that servers accepting incoming traffic from the internet are capable of withstanding a concerted attack. If your organization is offering a critical service, you also need to think about resumption plans, about how you're going to resist denial of service attacks, and (of course) about ageing, backing up, restoring, and deleting your data.

Computer security has become a huge multi-disciplinary field because we have moved past the point where we can expect simple solutions to our problems. But don't blame the firewall; that's like blaming the barn door for the horse's escape. We—users and administrators—chose to do dangerous things and we're paying the price for that convenience.

What is microsegmentation, and how is it an advancement in firewall technology?

Microsegmentation is the "cloud computing" idea pushed down to services, where a bunch of automated load-managing and access control tools make sure the traffic gets to the right place and (we hope) only the right place, with the right bandwidth and redundancy. Back in the mid-90s I used to teach a thing I called "service-oriented security design"—basically, it's the same idea: you treat each application service as a separate connectivity and security problem, and design for that.
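
As a rough sketch of that per-service mindset (hypothetical, and not any vendor's microsegmentation policy format), the idea reduces to a default-deny table that states, for each service, exactly which peers may reach it; the service names below are invented for illustration.

```python
# Illustrative per-service policy table in the spirit of "service-oriented
# security design": each service is its own connectivity and security problem,
# with an explicit list of which peers may reach it.
SERVICE_POLICY = {
    "web-frontend":  {"internet"},                              # publicly reachable
    "order-api":     {"web-frontend"},                          # only the frontend may call it
    "billing-db":    {"order-api"},                             # only the order API may call it
    "log-collector": {"web-frontend", "order-api", "billing-db"},
}

def may_connect(caller: str, service: str) -> bool:
    """Default-deny: a connection is allowed only if the policy names the caller."""
    return caller in SERVICE_POLICY.get(service, set())

assert may_connect("order-api", "billing-db")       # permitted hop
assert not may_connect("internet", "billing-db")    # blocked: no direct path to the database
```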

I don't consider microsegmentation an advancement in firewall technology; it's really just a way of distributing the work of a basic firewall across a whole network of systems (which is something you could do with good design and traditional firewalls even back in the 90s).

There is one major change that may happen as a result of microsegmentation: for microsegmentation and service-level security to work, you have to know where your services ARE so that the traffic can be directed to the right place. Organizations that have services scattered all over the place, rather than in data centers, will probably find it difficult to realize the full advantages of the next generation of networking and traffic shaping.

What are the limitations of microsegmentation?

If you think of firewalls as primarily network-to-network trust boundaries, you're omitting the application layer, which is where most of the current vulnerability action is taking place. In fact, I'd argue that one of the reasons software vulnerability is such a big deal is because firewalls basically solved the network-layer problem, so the game shifted to the application layer. The open question for me is what microsegmentation will be able to offer for application-layer problems, such as server-side denial of service, buggy application servers, buggy web apps, buggy components, etc. If you're using state-of-the-art, ultra-fast liquid networking to get outside traffic into a web application that has a SQL injection vulnerability in it, none of that fancy stuff will help in the slightest—it'll just direct the attack traffic to you faster.
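
A contrived sketch using Python's standard sqlite3 module (the table, data, and queries are invented for illustration) shows why: the injectable query leaks data no matter how the traffic was routed, while the parameterized version fixes the flaw where it actually lives, at the application layer.

```python
import sqlite3

# Made-up table and data for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_vulnerable(name: str):
    # Injectable: building SQL by string concatenation lets input rewrite the query.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver keeps user data out of the SQL grammar.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_vulnerable("x' OR '1'='1"))  # leaks every row
print(lookup_safe("x' OR '1'='1"))        # returns nothing
```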

Microsegmentation is a technology that will mostly appeal to organizations offering services to the internet. It won't do anything to help with internal security against malware or desktop vulnerabilities, unless organizations sub-segment their networks internally. That capability has always been possible with existing firewalls—and it's always been a good idea—but most organizations don't want to pay the costs to design internally segmented networks. Of course, this is all part of a continuum—you either invest in security and architecture up front, or you spend the money later on incident response and malware remediation. The internet has never been a “free lunch.”

As an inventor, where do you think security tooling needs to go to truly make organizations more secure?

We need to invent a technology that makes executives less likely to fall for ill-conceived ideas. When I look at most of the security problems we have today, they’re a result of executives reading some product glossy and saying to the business, "let's do this!" without considering any security implications. Or they thought about security only because the product glossy said "secure," and they believed the marketing hype. At this point, because of all the well-publicized breaches, nobody should fall for sales and marketing claims anymore, but they still do. If we could fix that, it'd sure help; most of the security problems we're dealing with are knock-on effects of bad technology strategy decisions made over a decade ago.
