Time Magazine recently bestowed its prestigious "Person of the Year" honor on "You," recognizing the growing social importance of community and collaboration on the web. YouTube, MySpace, Wikipedia, Bebo and hundreds of other websites that rely on user-contributed content, broadly referred to as "Web 2.0," have officially become mainstream.
While the explosion in the popularity of Web 2.0 sites has changed the way we communicate and use the web, it has also created an irresistible target for malware authors. As more and more users go online to take advantage of Web 2.0 applications — like social-networking sites, blogs, and wikis — malware authors are right behind them, opening up yet another front in the constant cat-and-mouse game between security defenses and hackers.
Early Web 2.0-focused threats emerged in earnest in 2005
In October 2005, one creative MySpace user unleashed the Samy worm, a cross-site scripting worm that allowed him to add one million users to his "friends" list. While the damage was limited, the implications of the Samy worm were huge.
Samy opened the security community's eyes to the potential for abuse of AJAX and Web 2.0 applications. Cross-site scripting worms can insert malicious code into dynamically generated web pages and allow an attacker to change user settings, access account information, poison cookies with malicious code, expose SSL connections and access restricted sites.
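The core flaw behind worms like Samy is that user-contributed content is written back into pages without proper output encoding, so markup submitted by one user executes in every other visitor's browser. The sketch below illustrates the idea in miniature; the function names are hypothetical and the payload is a harmless stand-in, not the actual Samy code:

```python
import html

def render_profile_unsafe(bio: str) -> str:
    # Vulnerable: the user's bio is interpolated into the page verbatim,
    # so a <script> payload submitted as profile text runs in the browser
    # of everyone who views the profile.
    return f"<div class='bio'>{bio}</div>"

def render_profile_safe(bio: str) -> str:
    # Escaping turns markup characters into inert HTML entities,
    # so the same payload is displayed as text instead of executed.
    return f"<div class='bio'>{html.escape(bio)}</div>"

payload = "<script>/* self-propagating worm code would go here */</script>"
print(render_profile_unsafe(payload))  # script tag survives intact
print(render_profile_safe(payload))    # rendered as harmless text
```

In the unsafe version the script tag reaches the page intact; in the safe version it is neutralized. Samy exploited exactly this kind of gap in MySpace's profile filtering.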
Keep in mind that Web 2.0 sites aren't just for consumers. More and more businesses are pushing applications to the web.
In 2006, Web 2.0 threats started to occur more frequently and on a larger scale.
In mid-July 2006, an online banner advertisement (DeckOutYourDeck.com) on MySpace.com used the Windows Metafile (WMF) flaw to infect more than one million users with spyware when they merely browsed the site with unpatched versions of Windows. Later that month, a worm was discovered on the site that embedded JavaScript into user profiles. The profiles redirected users to a site claiming the U.S. government was behind the Sept. 11, 2001 attacks.
In August 2006, the ScanSafe Threat Center found that up to one in every 600 social-networking pages hosted malware. Three months later, an entry on the German edition of Wikipedia was rewritten to include false information about a supposedly new version of the infamous Blaster worm, along with a link to a supposed "fix." In reality, the link pointed to malware designed to infect Windows PCs.
And in December 2006, a QuickTime exploit was used on MySpace to spread malware via video. The virus eventually forced MySpace to remove infected profiles.
But why has Web 2.0 become a new threat vector for malware authors and criminals?
Web 2.0 sites are, by definition, more open than traditional sites. The hundreds of thousands of users contributing content to Web 2.0 sites make it easy for malware authors to hide and insert malware on dynamically generated Web 2.0 pages.
Moreover, because a site is well known, trust by association is created where none should exist. For example, a book review posted by a user on Amazon.com is probably viewed by most users as legitimate content on a trusted, brand-name site.
Many Web 2.0 sites also have enormous user bases, making them very attractive targets. In August, for example, MySpace reported that it had surpassed 100 million accounts, and it claims to attract new registrations at a rate of 230,000 per day.
A word to the wise: "policy-based solutions" will not protect you from Web 2.0 threats
So how do you protect your network from this new generation of web-based threats? The short answer: don't rely on outdated solutions.
When web pages were relatively static and had a centralized content owner, software companies with URL filtering technology relied heavily on web crawlers to categorize sites. Now they are attempting to use that technology to look for malicious content.
However, by simply doing the math, you'll see that this filter-centric approach cannot keep up with the flood of new Web 2.0 content.
According to Netcraft, there are 107 million active websites. Even if your non-real-time solution crawls 80 million websites each day, a rate many vendors claim, it still leaves you exposed to potential threats on the remaining 27 million sites, roughly a quarter of all existing sites.
But let's put that 20 percent aside. To check 80 million sites daily for malware, a solution would have to crawl 926 websites each second. Assuming that each website has only three URLs, an almost absurdly conservative estimate, a solution would have to crawl 2,778 URLs each second, 24 x 7. Even then, each page gets crawled just once per day. So malware posted on a page later in the day isn't identified for at least another 24 hours.
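The arithmetic above can be verified in a few lines (the 80 million sites per day and three URLs per site are the article's own assumptions):

```python
# Back-of-the-envelope check on the crawl-rate figures.
SITES_PER_DAY = 80_000_000
SECONDS_PER_DAY = 24 * 60 * 60        # 86,400 seconds
URLS_PER_SITE = 3                     # deliberately conservative estimate

sites_per_second = SITES_PER_DAY / SECONDS_PER_DAY
urls_per_second = sites_per_second * URLS_PER_SITE

print(round(sites_per_second))  # ~926 sites per second
print(round(urls_per_second))   # ~2,778 URLs per second
```

And even sustaining that rate around the clock buys only one visit per page per day, which is the crux of the argument: anything posted after the daily crawl goes undetected until the next pass.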
Scarier still is that these figures do not include the millions of pages on high profile Web 2.0 sites — like the six million Wikipedia pages and the over 100 million pages on MySpace — the content of which is perpetually changing.
Real-time scanning and profiling is essential
Web 2.0 user-contributed content means that the content on countless URLs is constantly changing. Static web filtering solutions that rely on periodically updated URL databases and honeypots simply cannot keep up. To keep pace with the dynamic nature of Web 2.0 sites, a web security solution must scan and profile each URL in real time, every time it is requested. A simple database lookup is not enough.
In addition, web security solutions that rely heavily on anti-virus signatures will be slow to react to zero-day threats that leverage Web 2.0 sites to propagate, leaving many users vulnerable until a signature is made available. Of the six billion web requests ScanSafe processes each month, on average between 10 and 15 percent are threats for which there is no existing signature or patch.
As with all security, multi-layered protection is imperative. To effectively protect against Web 2.0 threats, a solution should use an array of analysis techniques, including heuristics, behavioral analysis, anti-virus signatures and network intelligence that can fuel real-time analysis of URLs.
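A minimal sketch of what such layered, request-time analysis might look like is below. All function names, markers and thresholds are hypothetical illustrations of the principle, not ScanSafe's actual implementation:

```python
def signature_match(content: bytes, signatures: set) -> bool:
    # Layer 1: fast lookup against known-bad byte patterns.
    return any(sig in content for sig in signatures)

def heuristic_score(content: bytes) -> float:
    # Layer 2: a crude heuristic, here just the density of
    # script-like markers often seen in injected payloads.
    markers = (b"<script", b"eval(", b"unescape(", b"document.write")
    hits = sum(content.count(m) for m in markers)
    return min(1.0, hits / 10)

def scan_request(url: str, content: bytes, signatures: set,
                 threshold: float = 0.5) -> str:
    # Every request is analyzed at fetch time, rather than looked up
    # in a periodically refreshed URL database.
    if signature_match(content, signatures):
        return "block: known signature"
    if heuristic_score(content) >= threshold:
        return "block: heuristic"
    return "allow"

sigs = {b"KNOWN-MALWARE-PATTERN"}
print(scan_request("http://example.com/page", b"<p>harmless</p>", sigs))
```

The point of layering is that the heuristic layer can flag a never-before-seen payload that no signature yet matches, which is precisely the zero-day gap described above.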
You wouldn't feel very safe if the only security check used in airport screening was matching your name to a periodically updated central register of suspect individuals. However, you would — and probably do — feel more secure when airports use real-time, multi-layered screening: checking every single passenger each time they travel by passing them through scanning machines, and applying expert methods for identifying suspicious characteristics in addition to maintaining a "no fly" list.
The same is true for delivering protection from Web 2.0 threats — although unlike tedious airport security lines, in the web world real-time scanning should be undetectable and painless to the user, allowing them to surf while keeping them fully protected from threats.
About the author
Dan Nadir is vice president of product strategy for security vendor ScanSafe and is based in San Mateo, Calif. ScanSafe is a leading global provider of web security-as-a-service, ensuring a safe, productive internet environment for businesses. For more information, visit www.scansafe.com.