
Web 2.0: A “Perfect Storm”?

Roger Thornton, Founder/CTO, Fortify Software --
Web 2.0 technologies are spawning explosive growth in client-side processing (Ajax/Flex), in the distribution of executable content (JSON), and in the mixing of code from multiple sources (mashups).

These represent architectural decisions in applications and their underlying frameworks that were made in order to improve user experience and application functionality. However, if we are not careful, these design decisions will also lead to an explosion in vulnerabilities that can be exploited both on the client and the server.

One of the major underpinnings of “Web 2.0” is the introduction of rich client interfaces based on Ajax or Adobe’s Flex platform. These technologies can greatly enhance the web user experience, transforming it from simple web forms into the direct manipulation of a rich set of UI controls typically found only in desktop software today.

This requires that more code, in the form of JavaScript, execute on the client. This programming model also introduces lightweight distributed-computing mechanisms, most notably JavaScript Object Notation (JSON), which facilitates the use of JavaScript as the primary means of communication between client and server. Instead of transporting only HTML and XML, we will now be transporting far more executable content.
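To make that concrete, here is a minimal sketch of how a JSON response becomes executable content on the client. The endpoint and field names are hypothetical; the point is the contrast between evaluating the response as code and parsing it strictly as data:

```typescript
// Minimal sketch: two ways a client can consume a JSON response.
// The "/api/profile" endpoint and its fields are hypothetical.

async function loadProfile(): Promise<void> {
  const response = await fetch("/api/profile");
  const body = await response.text();

  // Dangerous: eval() executes the response as JavaScript. If the
  // payload arrives as '{"name":"x"}; stealCookies()', the extra
  // code runs. This is what makes the channel "executable content."
  // const profile = eval("(" + body + ")");

  // Safer: JSON.parse() accepts only JSON data and throws on
  // anything that is not a literal value, object, or array.
  const profile = JSON.parse(body) as { name: string };
  console.log(profile.name);
}
```

Early Ajax applications routinely used the eval() pattern, which is exactly why the distinction between data and code matters here.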

Historically, whenever we have depended on more software outside our control on the client, or on executable content shared between programs, we have seen an increase in vulnerabilities. Now here comes the next giant trend, and this one is the perfect storm.

Not only are we going to push code onto the client and pass around scripting code, we are also going to mash up all of this code and content from multiple servers on a single client. Andrew Jaquith of Yankee Group put it best in the title of his October 2007 research report, “The Web 2.0 Security Train Wreck.”

Web 2.0 applications and frameworks encourage developers to put more code on the client, ideally to enhance client-side usability. But this will lead many developers to mistakenly put business logic and other critical code into the client without understanding the resulting security implications.

We call this class of problem a Trust Boundary Violation. It happens when we place code that requires a trusted execution environment into a location that is potentially under the control of our adversary. These problems were extremely common when JavaScript first made its way into web development. Back then, developers would put input validation code in client-side JavaScript to avoid a round-trip to the server when a user entered erroneous data. That was fine when the erroneous input was accidental; when it was malicious, however, JavaScript running in the attacker’s own browser would not foil the attack. The attacker would simply disable JavaScript and send the malicious input to an unsuspecting server program, one likely to be vulnerable because it assumed the client-side checks had been made.
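The following sketch illustrates the point. The "quantity" field and the regular expression are hypothetical; what matters is that the same check appears twice, and only one of the two copies is a security control:

```typescript
// Minimal sketch of a trust boundary violation and its fix.
// The "quantity" field is a hypothetical form input.

// Client side (browser): this check only saves honest users a
// round-trip. An attacker can disable JavaScript or craft the
// HTTP request directly, so it provides no security whatsoever.
function clientSideCheck(quantity: string): boolean {
  return /^[0-9]{1,4}$/.test(quantity);
}

// Server side: the validation must be repeated here, inside the
// trust boundary, because this is the only copy of the check the
// attacker cannot bypass.
function handleOrder(rawQuantity: string): number {
  if (!/^[0-9]{1,4}$/.test(rawQuantity)) {
    throw new Error("invalid quantity"); // reject; never trust the client
  }
  return parseInt(rawQuantity, 10);
}

console.log(handleOrder("3"));              // 3
// handleOrder("3; DROP TABLE orders")      // throws instead of trusting
```

The client-side copy is a usability feature; the server-side copy is the actual control.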

More code on the client is fine if that code is all eye candy to enhance the user experience. It is definitely not okay to put validation out there, and it is absolutely not okay to put security controls out there.

While Web 2.0 will create a wave of vulnerable systems, that doesn’t necessarily mean there will be new types of vulnerabilities: many of these problems are a rehash of the same old stuff that has simply found a new home. There is going to be a cross-site scripting (XSS) explosion.

We may call them XSS problems, or give them fancier names like JavaScript Hijacking, but it is fundamentally the same stuff. Careless handling of executable content is the underlying issue behind every variant of cross-site scripting (and SQL injection, for that matter). Any design that calls for two programs passing executable content across trust boundaries will have to be carefully implemented (and used) to avoid otherwise inevitable security issues. That will be the case forever; the next big thing that does this will be a security problem too if we do not learn this lesson and design accordingly.
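A minimal sketch of that “careless handling” in the XSS case follows. The escaping function shown is a common idiom rather than any particular library’s API, and the injected payload is hypothetical:

```typescript
// Minimal sketch: reflecting user input with and without encoding.

// Escape the five characters with special meaning in HTML so the
// browser renders the input as text instead of executing it.
// The order matters: "&" must be escaped first.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const userInput = "<script>stealSession()</script>";

// Vulnerable: the browser executes the injected script.
const unsafeHtml = `<p>Hello, ${userInput}</p>`;

// Safe: the same input is rendered as inert text.
const safeHtml = `<p>Hello, ${escapeHtml(userInput)}</p>`;

console.log(unsafeHtml);
console.log(safeHtml);
```

In both XSS and SQL injection, the mistake is identical: data crossing a trust boundary is handed to an interpreter as code instead of being treated as data.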

We must become better at recognizing these problems in the abstract if we are ever going to build things right the first time. Building things wrong, then waiting for the security community to find the mistakes (while the criminals exploit them), and then reworking everything is a major waste of development capacity and an unnecessary risk for businesses that increasingly depend on these systems.

What do we need to do to prepare for the Web 2.0 Train Wreck?

To borrow a couple of clichés: this train has already left the station, and there is no stuffing the genie back in the bottle.

Your company is going to deploy lots of Web 2.0 technology, and it will put your business at risk. What you can do is make sure that your security team is working closely with your software development teams (internal and third-party). Stay on top of vulnerabilities and exploits as they become public, and be sure you have a quick-response process set up to mitigate and repair any of your software applications that have Web 2.0 vulnerabilities.

At the same time, we can all work on making sure software developers and system designers understand fundamental security concepts, so that Web 3.0 can deliver on the astonishing functionality it will surely promise without putting our systems and data at such risk.
