Yuval Ben-Itzhak
Web 2.0 has become a popular term over the past 12 to 18 months to describe the second generation of community-based internet services. Before Web 2.0, website owners drove traffic to their sites by creating content aimed at attracting large numbers of visitors.

In the Web 2.0 world, the web serves as an online platform for people to create, collaborate and share their own content – which may be blogs, wikis, videos or photos.

The idea is to make this platform as user-friendly and accessible as possible, so that people will visit often to post and view content. Popular social networking sites, such as MySpace.com, or video sharing sites, such as YouTube, are prime examples of Web 2.0.

While Web 2.0 offers many advantages in terms of enriching the internet, improving the user experience and creating web-based communities, it also opens the door to new propagation methods for malicious code.

Web 2.0 security vulnerabilities

Since Web 2.0 platforms enable anyone to upload content, these sites are easily susceptible to hackers wishing to upload malicious content. Once the malicious content has been uploaded, innocent visitors to these sites can be infected, and the site owners could potentially be held responsible for damages incurred. From a technical standpoint as well, Web 2.0 sites are more prone to attack, since they have more interactions with the browser and require running complex JavaScript code on user machines. What makes matters worse is that the vast majority of these sites (e.g., Wikipedia, MySpace, Flickr) are considered "trusted" by URL filtering and categorization products, and as such will probably not be blocked despite the fact that they might contain malicious code.
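
To see why obfuscated content slips past signature-based scanners, consider the following minimal sketch. The "payload" here is a harmless call to document.write, and the signature is a hypothetical pattern a scanner might use; the point is only that the same call, assembled at runtime from character codes, no longer matches the static signature.

```javascript
// Hypothetical illustration: a naive signature matcher misses
// trivially obfuscated script. The payload is harmless and is
// never executed -- both scripts are just inspected as strings.
const signature = /document\.write/;

const plainScript = 'document.write("hello");';

// The same call, with "document" assembled from character codes
// so the literal string never appears in the page source:
const obfuscatedScript =
  'var s = String.fromCharCode(100,111,99,117,109,101,110,116) + ".write";' +
  'eval(s + \'("hello")\');';

console.log(signature.test(plainScript));      // true  -- caught
console.log(signature.test(obfuscatedScript)); // false -- missed
```

Real obfuscation is far more elaborate (packing, encryption, server-side polymorphism), but the asymmetry is the same: a static pattern cannot match code that only exists after it runs.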

Most enterprises do not normally block users from visiting Web 2.0 sites, which could become an IT security risk. Web 2.0 sites harboring malicious code raise a plethora of issues for enterprises: internal and external security; legal liability (direct, indirect and consequential); and regulatory compliance issues.

In April 2007, Finjan's Malicious Code Research Center discovered the use of a Web 2.0 platform for malicious purposes on a known U.S.-based website offering art directory services. The malicious code on this site was obfuscated to enable it to bypass anti-virus solutions. It exploited various browser vulnerabilities and used AJAX technology to download and execute a potentially malicious trojan from a remote server. Simply by visiting this page, without taking any action, the visitor's machine was infected.

Another highly publicized example involved an online banner advertisement that ran on MySpace.com and exploited a Windows vulnerability to infect more than a million users with spyware. Internet Explorer users who visited a web page containing this ad, and whose browser was not equipped with the latest Windows Media File patch, were most likely infected. Their machines would silently download a trojan program that installed adware, bombarding users with pop-up ads and tracking their web usage. Despite the fact that the WMF vulnerability was patched in January 2006, by targeting a high-traffic website the hackers were still able to achieve mass infections.

Asynchronous JavaScript and XML (AJAX) comprises a set of web technologies that are combined to enable web browsers to refresh content (e.g., stock quotes) in real time without requiring pages to reload or refresh. As these requests for content are hidden from the user's view, AJAX provides for a delay-free user experience and enables rich web services. Well-known sites such as Google Maps, Yahoo and MySpace already employ AJAX tools in a number of ways.
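
The AJAX pattern described above can be sketched in a few lines. In a browser, the real XMLHttpRequest object would carry the request; a stand-in is defined here (with a hypothetical quote endpoint and canned response) so the sketch is self-contained outside a browser.

```javascript
// Stand-in for the browser's XMLHttpRequest, so the pattern can be
// demonstrated without a browser or a network. In real AJAX code,
// `new XMLHttpRequest()` replaces `new StubXHR()`.
function StubXHR() {
  this.open = function (method, url) { this.url = url; };
  this.send = function () {
    // Pretend the server answered with a stock quote.
    this.responseText = '{"symbol":"ACME","price":42.5}';
    this.readyState = 4; // request complete
    this.onreadystatechange();
  };
}

// Fetch fresh content in the background and hand it to a callback.
// Only the quote element would be updated; the page never reloads.
function refreshQuote(xhr, update) {
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      update(JSON.parse(xhr.responseText));
    }
  };
  xhr.open('GET', '/quotes?symbol=ACME'); // hypothetical endpoint
  xhr.send();
}

let latest = null;
refreshQuote(new StubXHR(), function (quote) { latest = quote; });
console.log(latest.price); // 42.5
```

Note that the request and response are invisible to the user: nothing in the visible page signals that the browser has contacted a server, which is precisely the property the next sections show can be abused.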

In internet jargon, the "hidden web" commonly refers to the vast majority of the web that is not indexed by search engines that crawl sites via links. Examples of the hidden web include the many forms and applications (i.e., web services) into which the user must enter and submit parameters in order to get a dynamic result.

In the security context, Finjan researchers have discovered that AJAX can query back-end web services automatically, or, in other words, "query the hidden web." This provides an opening for hackers to create "invisible" attacks using AJAX queries, since the code is never revealed on the site and can even be encrypted in transit using SSL. URL filtering solutions will most likely be unaware that a given site is malicious, because they do not know which parameters will activate the malicious AJAX script.
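
The point can be illustrated with a hypothetical back-end handler whose response depends on a submitted parameter. A crawler or URL-classification service that fetches the page only ever sees the benign branch; the other branch is reached solely by a request carrying the right parameter value. All names, the trigger value and the "payload" string below are invented and harmless.

```javascript
// Hypothetical parameter-dependent handler: the interesting branch
// is invisible to anyone who does not already know the trigger value.
function handleRequest(params) {
  if (params.key === 'xyzzy') {            // attacker-known trigger
    return '<script>/* payload */</script>'; // placeholder, not real code
  }
  return '<p>Nothing to see here.</p>';    // what a crawler would index
}

console.log(handleRequest({}));               // benign page
console.log(handleRequest({ key: 'xyzzy' })); // hidden response
```

Because an AJAX call can submit such parameters silently in the background, classifying the site by its crawlable pages tells you nothing about what a particular parameterized request will return.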

In order to protect users from malicious AJAX queries, enterprises require security solutions that are capable of analyzing each web request or reply "on the fly." Real-time code analysis of web content, performed on the gateway between the browser and web servers, is one effective method for doing this. Since it analyzes each and every piece of content, regardless of its original source, this technology ensures that malicious content will not enter the network even if its origin is a highly trusted site. Thus, web pages from MySpace.com or Yahoo.com are analyzed in exactly the same way as pages from smaller or recently created websites. Moreover, understanding what the code intends to do before it executes adds a crucial layer of defense against such attacks.
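
As a minimal sketch of that idea, the toy inspector below applies the same checks to every response body and deliberately ignores the source domain, so a trusted site gets no free pass. The two patterns are illustrative stand-ins, not real detection rules; production real-time analysis involves far deeper parsing of what the code intends to do.

```javascript
// Toy gateway-style inspector. The patterns are illustrative only:
// one flags the classic decode-and-run idiom, the other flags long
// character-code payloads of the kind obfuscators emit.
const suspiciousPatterns = [
  /eval\s*\(\s*unescape/,
  /String\.fromCharCode\([^)]{80,}/
];

function inspect(domain, body) {
  // The source domain is deliberately unused: every page is judged
  // by its content, whether it comes from MySpace.com or anywhere else.
  const hit = suspiciousPatterns.some(function (p) { return p.test(body); });
  return hit ? 'block' : 'allow';
}

console.log(inspect('myspace.com', 'eval(unescape("%61%6c"))')); // 'block'
console.log(inspect('unknown.example', '<p>hello</p>'));         // 'allow'
```

The design choice worth noting is that trust is attached to the content, not the domain, which is exactly what distinguishes this approach from URL filtering.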

Protecting your IT systems and reducing legal liability

According to IDC, "Web 2.0, community-driven tools will move center-stage in the enterprise" as wikis, blogs and the like become an integral part of enterprises' product development, marketing and customer service processes (IDC Predictions 2007). Research conducted by NetBenefit in the United Kingdom in May 2007 found that 60 per cent of users are actively using Web 2.0 technologies in the form of blogs, AJAX-enabled websites and mash-ups. As the benefits of Web 2.0 technology are rolled out to both business and consumer users of the internet, IT security risks will escalate.

So what can enterprises do to protect themselves from Web 2.0 threats?

To protect against today's highly sophisticated web-borne threats, including Web 2.0/AJAX exploits, obfuscated code and other dynamic threats, enterprises should adopt a multi-layered approach, typically involving both proactive (e.g. real-time inspection) and reactive (e.g., signature-based) IT security technologies. The use of multiple IT security solutions must become a standard approach for any organization seeking to protect its internet-connected assets.

To achieve this objective, IT managers should consider installing an appliance at the internet gateway that performs real-time code inspection of traffic flowing into and out of the corporate network. High-performance, high-availability appliances capable of monitoring web traffic and acting swiftly to block anything suspicious are paramount.

The evolution of the internet has had a profound effect on the way businesses and individuals work and communicate. While Web 2.0 and AJAX have greatly enhanced the user experience and added important business functionality, they also introduce opportunities for hackers to invisibly inject and propagate malicious code.

Reactive signature-based solutions were not designed to detect these types of dynamic malicious web scenarios, and are therefore not enough on their own to protect against modern hacking methods. The prevailing assumption that an anti-virus or URL filtering lab can get its hands on each and every piece of malicious code and create a signature is no longer valid on today's web.

On the other hand, real-time security solutions that are able to analyze web content on the fly and determine whether it is legitimate, regardless of its source, are critical for stopping these threats. This differentiates real-time code inspection technology from URL filtering solutions or reputation services, which usually mark well-known websites as trusted automatically, despite the fact that hackers can upload malicious code to personal pages or ads on those domains.

- Yuval Ben-Itzhak is CTO of Finjan