
Lock the front door

Is anything really safe anymore? The hamburger you ate for lunch was made with USDA-certified beef that was recalled this morning. The FDA-approved pain pill you took after you ate lunch may destroy your liver. And the “trusted” website that informed you that your lunch and your medicine may kill you just planted some code on your PC that may enable a criminal to steal your life savings.

Hacking has metastasized into a global criminal enterprise deploying sophisticated and well-schooled technical resources, and the prime targets of the emerging “cybermob” are servers hosting popular sites and critical business services for millions of end-users.

The SANS Institute recently ranked the placement of exploit code on trusted websites at the top of its 2008 list of cybermenaces. Attacks on trusted sites have evolved from posting one or two exploits on a site to deploying scripts that cycle through multiple exploits, or packaged modules that can effectively disguise their payloads, SANS reported.

“Placing better attack tools on trusted sites is giving attackers a huge advantage over the unwary public,” SANS noted.

A “firestorm” of XSS and SQL attacks
First and foremost, criminals targeting host servers want to steal sensitive and lucrative data from businesses and their customers, but they also are quite pleased to infiltrate and fraudulently manipulate business processes – or to simply harness the vast processing power of corporate servers to amplify the impact of malicious programs or build botnet armies of conscripted user PCs.

“Are servers a good target for data? You bet,” says Roger Thornton, founder and CTO of Fortify Software. “If I've got one server at a corporation and I've got all of the business partners and customers who connect to it, it's a launching point to attack them.”

The primary modes of attack on servers – cross-site scripting (XSS) and SQL injection – have been around for several years, with new variants, such as cross-site request forgery attacks, materializing every few months.
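
For readers unfamiliar with the mechanics, the short Python sketch below (a simplified illustration, not code taken from any attack described in this article) shows why SQL injection works: when user input is spliced into the text of a query, an attacker can rewrite the query itself, whereas a parameterized query treats the same input strictly as data.

```python
import sqlite3

# Simplified, self-contained illustration of SQL injection and its standard fix.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

user_input = "' OR '1'='1"  # classic injection payload supplied by an attacker

# Vulnerable: the payload becomes part of the SQL text, the WHERE clause
# matches every row, and the whole table leaks.
vulnerable_sql = "SELECT name, card FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable_sql).fetchall())   # leaks alice's card number

# Safer: the driver binds the value as data, never as SQL, so the payload
# matches nothing.
print(conn.execute("SELECT name, card FROM users WHERE name = ?",
                   (user_input,)).fetchall())    # returns []
```

Real applications would apply the same principle through whatever parameter-binding mechanism their database driver or framework provides.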

What is changing dramatically – and not for the better – is the scope of overall vulnerability. As businesses increasingly integrate their fully automated processes with their web infrastructure and service their end-users with a kaleidoscope of web applications, they are exposing their operations to a tidal wave of security challenges.

“The entire business has moved to the application layer – that's where the money is. So if you control a mission-critical web server running a critical application, you can control a substantial portion of a company's business. That's why servers are being targeted by criminals,” says Core Security CTO Ivan Arce.

“The crown jewels are now [located in] the business apps,” Fortify's Thornton notes.

Each week brings reports of hundreds of vulnerabilities in commercially available web applications, which are being exploited by increasingly sophisticated criminal enterprises targeting valuable personal and financial data. Vulnerabilities also are proliferating in custom-built web applications, although attacks on these are harder to quantify because they are not tracked by the public vulnerability databases.

Imperva CTO Amichai Shulman notes that while approximately half of the large data losses reported last year resulted from theft of laptops – often in situations in which it was not known whether anyone actually took advantage of the lost information – the other half were due to actual attacks on servers.

“These were not potential compromises. The attackers were deliberately going into a website, deliberately sending a specially crafted request, and then they took information outside of the database in order to reuse it,” he said. “We are seeing resurgent attacks that have demonstrated the power of a cross-site scripting attack to compromise internal networks through vulnerabilities.”

According to Fortify's Thornton, the business IT community is in the midst of “a firestorm” of lethal XSS and SQL attacks that make the plague of buffer-overflow attacks of the ’90s seem quaint by comparison. He added, ominously, that the problem is about to get a lot worse.

Web 2.0 enterprises are rapidly deploying applications that depend on executable content, offering a feast of new entry points for cybergangs trying to plant malicious code on host servers. “Web 2.0 is going to fuel an explosive growth in these types of attacks,” Thornton said.

Building a better wall
The basic tools needed to erect an effective defense against server-side attacks – security-minded coding, firewalls, intrusive scanning, log auditing and encryption – are readily available and constantly being upgraded to meet the evolving threat environment.

What seems to be lacking, security experts told SC Magazine, is a healthy dose of common sense in the deployment of these tools and in how and where sensitive data is exposed.

“Tools are important, but people have to know how to use them,” says Randy Abrams, director of technical education at ESET, which produces anti-virus products. “It's sort of like putting on a seatbelt and not knowing how to operate the brakes. You are still going to crash.”

Abrams is a proponent of the heuristic deployment of security tools, based on behavioral patterns – methods he says increase the likelihood of anticipating attacks instead of reacting to them. As an example, he cited the notorious Storm worm trojan, a shape-shifting chameleon that its creators have used to build a million-strong botnet army of zombie computers.

“The code for Storm is being changed dynamically virtually every minute. So you can't use a static signature – you're always going to be behind the game. Heuristic techniques must be proactively deployed to detect all of the new iterations of the worm. You have to know what an unsuitable behavior is and block it,” Abrams explains.

Heuristics-configured tools can collect a variety of data-packed logfiles from servers and firewalls, examine them and take an educated guess at what to flag for the system administrator, he says.
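
As a rough sketch of that approach (not a description of any vendor's product), the Python fragment below scores clients in a made-up access log on behavioral signals rather than exact signatures; the log format, scoring rules and threshold are invented purely for illustration.

```python
import re
from collections import Counter

# Toy behavior-based log triage: score each client on suspicious request
# patterns and flag outliers for the administrator, rather than matching
# known attack signatures.
SUSPICIOUS = re.compile(r"(union[\s+]+select|<script|\.\./|xp_cmdshell)", re.I)

sample_log = [
    "10.0.0.5 GET /index.html 200",
    "10.0.0.9 GET /search?q=union+select+card+from+users 500",
    "10.0.0.9 GET /search?q=<script>alert(1)</script> 200",
    "10.0.0.9 GET /admin/../../etc/passwd 404",
    "10.0.0.5 GET /about.html 200",
]

scores = Counter()
for line in sample_log:
    ip, method, path, status = line.split()
    if SUSPICIOUS.search(path):
        scores[ip] += 2      # request looks like deliberate probing
    if status in ("404", "500"):
        scores[ip] += 1      # errors often accompany blind probing

for ip, score in scores.items():
    if score >= 3:           # arbitrary threshold for this sketch
        print(f"flag {ip} for administrator review (score {score})")
```

A production tool would correlate far richer data across many sources, but the posture is the one Abrams describes: recognize unsuitable behavior and act on it, rather than wait for a known signature.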

According to Core Security's Arce, it is critical to penetration-test servers in their primary operational mode, as well as in the development cycle.

“They need to be tested after they go live,” he said. “You are going to miss things in the earlier stages. You will have your server interact in an operational mode with things you didn't anticipate, and that may expose it to threats and attacks. You need to know what happens if they are exploited.”

Common sense also needs to be applied to what is being placed in a secure server environment and to how servers are being used, experts said. For example, some companies are dressing up their internal networks with user-friendly features that actually are increasing their vulnerability.

“Even on the server side, we are seeing add-ons, like dashboards that don't monitor critical applications, but look really cool. They are putting on ActiveX controls to make the user experience look better, but what they are really doing is adding an attack surface with known exploitable code,” ESET's Abrams said.

Even worse, he adds, some administrators are carelessly using dedicated servers for non-essential tasks. “There are administrators that will use a payroll system web server to do really stupid things, like checking their email out on the web,” he said.  

Opinions vary regarding which elements of the server security toolbox are most critical. Predictably, firewall vendors will argue that security-minded coding and scanning are not enough, and those offering scanners will counter that firewalls are flimsy without aggressive and continuous scanning.

However, firewall vendor Imperva's Shulman candidly concedes that security tools are in a perpetual race to keep up with the bad guys.

“It's always going to be a cat-and-mouse game, and the protections must keep pace with the vulnerabilities,” he said. “You would think that a decade after the introduction of the network firewall, we would have no network-level attacks. Well, you still see them coming in. There are new techniques, new vulnerabilities and network firewalls must keep improving. The same thing will happen with web-application firewalls.”

Needed: A new definition of “trust”

There seems to be general agreement among security experts that the concept of a “trusted” website, while perhaps not completely outmoded, needs to be leavened with a more realistic expectation by end-users of what this means in today's high-threat environment.

“Somebody who has accessed a server for any purpose – say for a bank transaction – should act according to the degree of trust that they have in that web server,” Arce says. “I don't trust a network application any greater than I trust any random public site. I assume that both of them could be compromised. Whenever I go to a public or private site, I assume that it is malicious.”

“There has never been 100 percent trust in the world of security, ever,” he adds. “If I have an alarm in my house, people are going to find a way to break in regardless of the type of alarm that I put in. It's knowing to put the right amount of security in the right place, knowing to assess your assets, knowing where your most sensitive information and your most sensitive applications are, and focusing your resources on the places that are most susceptible to attack.”

Fortify's Thornton said a redefinition of trust also should be applied within the infrastructure, to delineate the “trust boundary.”

“The trust boundary is our server. We are not trusting any code outside of our environment,” he says. “So we have to completely distrust all of the information that is coming from another machine before we do anything with it. We have to look at it and make sure that it's okay, not simply assume it's okay because it's one of our customers. It could be attacking us.”
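
As a minimal illustration of that trust boundary (the field names and validation rules below are hypothetical, not drawn from Fortify), data arriving from outside can be checked against an explicit allow-list of what the application expects before it reaches any business logic:

```python
import re

# Hypothetical allow-list: each inbound field must match a known-good pattern.
ALLOWED = {
    "account_id": re.compile(r"\d{1,12}"),                  # digits only
    "email":      re.compile(r"[\w.+-]+@[\w.-]+\.\w{2,}"),  # rough email shape
    "amount":     re.compile(r"\d+(\.\d{1,2})?"),           # positive decimal
}

def validate(request: dict) -> dict:
    """Return only fields that pass validation; reject everything else."""
    clean = {}
    for field, pattern in ALLOWED.items():
        value = str(request.get(field, ""))
        if not pattern.fullmatch(value):
            raise ValueError(f"rejected untrusted input in field {field!r}")
        clean[field] = value
    return clean

# A request from "one of our customers" is still treated as potentially hostile.
try:
    validate({"account_id": "42; DROP TABLE users",
              "email": "a@b.com", "amount": "9.99"})
except ValueError as err:
    print(err)   # rejected untrusted input in field 'account_id'
```

The particular rules matter less than the posture: nothing that crosses the boundary is trusted until it has been checked.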


[Sidebar]

Shape of things: What is to come?

The growing threat to servers hosting trusted websites was dramatically underlined earlier this year by a mass SQL injection attack that, at one point, compromised more than 70,000 sites and infected visitors' PCs with a variety of exploits.

The mass attack was mounted via SQL injection against websites backed by Microsoft's SQL Server database product, specifically by leveraging queries against system tables that applications do not commonly access.

According to researchers at the SANS Institute's Internet Storm Center (ISC), the SQL attack hit sites in the .edu and .gov domains. SANS Research Director Alan Paller told SC Magazine in January that almost all of the sites were trusted websites.

An interesting aspect of this attack was the fact that at least one of the exploits had already been patched, according to Grisoft Chief Research Officer Roger Thompson.

“What this means is that they went to the trouble of preparing a good website exploit and a good mass-hack, but then used a mouldy old client exploit,” Thompson says in a blog post. “It's almost a dichotomy.”

According to Thompson, the attack quickly spread to more than 70,000 domains within a few days after the domain hosting the malware for it was registered on December 28.

He also noted in the blog that most of the website operators for the affected trusted sites “were pretty sophisticated in terms of security smarts,” and acted quickly to stifle the threat. In less than a week, a majority of the attacked sites had been cleaned, he says. — Jack Rogers

