
Prime pickings: Application security

Applications provide the juicy data that organizations must protect, says Marcus Prendergast, CSO of ITG. Dan Kaplan reports.

In today's digitally connected world, where most companies' competitive advantage is based largely on how well they interact with and serve customers over the internet, Investment Technology Group (ITG) is an anomaly.

The New York-based brokerage and financial markets technology firm has an attractive-enough internet presence, but the site isn't as highly programmable or as littered with forms, fields and interfaces as one is used to finding in cyberspace. On the contrary, the highly regulated ITG purposely maintains a limited web footprint, choosing to conduct the brunt of its business behind the corporate firewall.

“We're very much disconnected from the internet,” says Marcus Prendergast, the company's global head of security since 2010. “We don't expose anything unless it's necessary to expose it. It's not as though we have to use the internet to communicate.”

As a result, the 1,100-employee company is largely able to avoid a major risk that other organizations simply cannot: cyber intrusions designed to pierce web applications – front-line attacks, like SQL injection and cross-site scripting, that can lead to a jackpot of customer data. Web application exploitation has arguably become hackers' preferred attack vector, and is believed responsible for many of the headline-grabbing breaches of the past two years, including major personal information heists at Sony and LinkedIn.
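
How such an injection works can be shown in a few lines. The sketch below is purely illustrative – it is not code from any of the breached companies – and assumes a Python application using the standard-library sqlite3 module; it contrasts a query assembled by splicing user input into the SQL text with a parameterized one.

    # Illustrative only: how a SQL injection flaw arises, and how
    # parameterized queries close it. Uses Python's built-in sqlite3.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])

    user_input = "1 OR 1=1"  # attacker-controlled value from a web form

    # Vulnerable: the input is spliced into the SQL text, so the injected
    # "OR 1=1" clause returns every row instead of one.
    rows = conn.execute(f"SELECT * FROM customers WHERE id = {user_input}").fetchall()
    print("concatenated query returned", len(rows), "rows")   # 2

    # Safer: a parameterized query treats the input as a single value, so
    # the injected SQL never executes (here it simply matches nothing).
    rows = conn.execute("SELECT * FROM customers WHERE id = ?", (user_input,)).fetchall()
    print("parameterized query returned", len(rows), "rows")  # 0

The principle – never splice untrusted input into query text – applies regardless of language or database.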

But applications still play a vital role in ITG's business model. It's just that Prendergast is less concerned about the public-facing ones and much more interested in the security of the roughly 55 backend legacy programs that handle stock orders and provide confidential data to ITG's 700 customers. He says about three percent of all equity trading volume in the United States is conducted via these applications and systems.

Clients – which include 38 of the top 50 global institutional investors, according to Prendergast – typically connect to the applications from their terminals over fixed connections, which does an excellent job of mitigating the external attack threat. As such, Prendergast is more concerned with trusted insiders exceeding their privilege levels, whether on purpose or by mistake. A major part of ITG's security posture involves obtaining real-time context and insight – via products from Vigilant and ArcSight – into not only who is accessing a certain application, but specifically what data they can reach. That visibility, combined with instantaneous notifications if authorizations are changed, has allowed ITG to gain actionable intelligence on a broad level, instead of relying on alerts from individual applications.

“Once inside [the network], our SIEM monitors users' behavior, connections and all access attempts,” Prendergast explains. “Our applications don't contain any identifying information on our clients. We secure that outside the systems that process the data, and there is nothing saleable. So even if you could see order information inside these systems, you are only seeing it associated with an ID number. There is no way to tie the client back to the order. For some clients, we don't even use their real name in the system that securely maps client IDs. They are only identified by a codename – even verbally when discussed by staff offline.”
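
A rough sketch of the kind of separation Prendergast describes might look like the following, with every name and structure hypothetical: order records carry only an opaque token, and the token-to-client mapping lives in a separate, access-controlled store that the trading systems never consult.

    # Hypothetical illustration of keeping client identities outside the
    # systems that process orders. The "directory" stands in for a separate,
    # locked-down mapping store (here just a dict for illustration).
    import secrets

    client_directory = {}   # token -> client label, kept outside order processing

    def register_client(label: str) -> str:
        """Issue an opaque token for a client; only the mapping store knows who it is."""
        token = secrets.token_hex(8)
        client_directory[token] = label   # the label may itself be a codename
        return token

    def record_order(client_token: str, symbol: str, qty: int) -> dict:
        """The order-processing side sees only the token, never the client."""
        return {"client": client_token, "symbol": symbol, "qty": qty}

    token = register_client("PROJECT COBALT")   # codename, not a real name
    print(record_order(token, "XYZ", 500))      # nothing here identifies the client

Even an attacker who dumped every order record would find nothing in it that identifies the client.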

Whether they are internal or outward facing, applications usually sit in front of a gold mine of data. Chris Wysopal, CTO of Veracode, a Burlington, Mass.-based application security testing vendor, says many organizations have placed the majority of their focus on shoring up customer-facing applications because of the more than 45 state data breach notification laws, which require entities to alert customers in the event their personal information is exposed.

But those same laws typically don't apply to inward-pointing applications because they traditionally house intellectual property – data that is no less critical to the organization, but which does not require a company to come clean if hackers make off with it. But now as the threats grow in sophistication and complexity – and more businesses recognize that compromise is an inevitability – there is increased concentration on the interior.

“Internal apps have been largely ignored up until the last couple of years,” says Wysopal. “The reasons were this fallacy that, ‘We have a perimeter and I don't really care about the insider threat as much.' But with the rise in APT [advanced persistent threats] there's a realization that the attackers are going to get inside through some other way.”

Internal applications need particular tending because they often were built many years ago, without security in mind, he says. Consequently, organizations built up a certain amount of “security debt,” and now they are racing to validate the adequacy of applications whose code was rarely, if ever, tested for deficiencies.

The problem is that while these applications cannot be reached directly through a traditional web-based attack, they are susceptible to the same types of exploits if an attacker is able to work their way into the corporate network through some other means, Wysopal explains. One such approach is to send an employee a legitimate-looking email that actually contains a malicious attachment, which installs malware on the victim's computer to enable entry and possible privilege escalation.

Prendergast is all too familiar with this type of tactic, and ITG has taken measures to deter it. “We actually hire a firm to execute social engineering attempts against staff to ensure they are always aware of the threats and don't fall for them,” he says. “The desktops and laptops our employees use have been disconnected from direct web access for some time – even when they are not on our network – and they go through a really advanced next-generation web filter which blocks advanced threats that may come in via email or the web.”

Assessing the exposure

A big challenge that many organizations face when it comes to managing applications is understanding their inventory. Ed Adams, CEO of Security Innovation, a Wilmington, Mass.-based provider of software development and training services, says his firm recently conducted an investigation of a company that believed it was running 200 internal applications; the actual number was closer to 300.

Miscalculations like this are common for web-facing applications as well, says Nicholas Percoco, senior vice president and head of SpiderLabs at Chicago-based security and compliance firm Trustwave. He sometimes sees customers who oversee websites that were set up for them by marketing companies more than a decade ago and haven't been updated since. They've been, more or less, forgotten about – yet remain sitting ducks.

“Large organizations may have 50, 60, 100 brands under them – many different brands for different markets,” Percoco says. “When you start looking at it at that level, the list starts to become very long. We've seen compromises where there have been web applications that were no longer being used that are sitting in the same data center, in the same segment, as systems that were currently being used. Now [successful attackers] have a foothold in the environment behind the firewall, in the data center, and they go exploring from there.”

Hackers don't have to try hard to infiltrate vulnerable websites, Percoco says. Much of their work has become automated over the years. They simply scan for open ports, like 80 and 443, and then check to see if there are any known vulnerabilities that can be exploited.
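
The first step of such an automated sweep takes only a few lines of standard-library Python, as the hypothetical sketch below shows; the follow-on step – fingerprinting the server software and matching it against known vulnerabilities – is what turns a noisy scan into a breach, and is omitted here.

    # Rough sketch of the first step of an automated sweep: check whether
    # the common web ports answer on a host. Real attack tooling goes much
    # further (banner grabbing, version fingerprinting, known-flaw lookups).
    import socket

    def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        host = "example.com"   # placeholder; only probe systems you are authorized to test
        for port in (80, 443):
            state = "open" if port_open(host, port) else "closed/filtered"
            print(f"{host}:{port} is {state}")

Defenders can run the same kind of check against their own address ranges to spot the forgotten, still-exposed applications Percoco describes.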

And they usually like what they find. Depending on the study, anywhere from 70 to 90 percent of all websites contain flaws that could lead to data exposure. A recent WhiteHat Security study of 7,000 sites found that the average site suffers from 79 serious security bugs – down from 230 in 2010 and more than 1,000 five years ago. Despite the drop, attacks are happening with increasing frequency: a July study from cloud provider FireHost found that SQL injection attacks rose 69 percent in the second quarter of this year compared with the first, and security vendor Imperva estimates that the average application is attacked 274 times per year.

Website vulnerabilities “can appear as fast as they're fixed,” Jeremiah Grossman, founder and CTO of WhiteHat Security, recently tweeted.

Experts point to a number of reasons why application security still struggles, one of which is disproportionate spending. According to a March Ponemon Institute/Security Innovation study of 567 security practitioners, 63 percent of respondents said that application security receives 20 percent or less of the total IT security budget.

“The bottom line is most organizations don't care about security, especially about application security, until they get breached,” says Security Innovation's Adams. “That's the pathetic, but factual, state of the market.”

The discipline also suffers from a dearth of process, with 64 percent of respondents to the Ponemon study saying they lack a development lifecycle that governs building security into applications – and the design process is where many of the bugs are introduced. Percoco says application protection often slips through the cracks, with the security team focused on perimeter defense and the application development team concerned with maintaining frameworks and servers. Accountability is lacking.

Still others cite outsourcing as a major impediment. Many applications were created by third parties, so the organization running the program doesn't actually own the code – though it is still responsible for it. “If it's sitting on my network, I still need to worry about the security of that application,” Wysopal says.

Tightening the belt

At ITG, Prendergast says that education is the most important component of application security. That especially includes the company's 100 developers, located in 15 offices around the world. 

Once a year, staff members from Gotham Digital Science, a penetration testing company, fly to the various developer sites to meet with ITG personnel about the vulnerabilities the firm discovered in the company's code, including its public web portal, known as Transaction Cost Analysis. “When I first came here, the attendance [for such sessions] was not stellar, I can tell you that,” Prendergast recalls. But, steadily, engagement has increased, largely because the presenters use actual examples of ITG code shortfalls, “not these hypothetical OWASP-y examples.”

He's referring to the Open Web Application Security Project, a nonprofit best known for publishing its Top 10 list of the most critical web application security risks. Yet Prendergast believes those types of general lessons often fall on deaf ears.

“If you make them search through an 80-slide PowerPoint, you're going to lose them,” he says of his staff. 

Despite all of its problems, many would agree that application security is becoming more of a priority, albeit gradually. Even mandates, like the Payment Card Industry Data Security Standard, now contain a provision requiring organizations that process credit card transactions to either conduct source code reviews of their internet-facing applications or install a web application firewall in front of them.

There's no one-size-fits-all solution, says Trustwave's Percoco. For example, one can't rely solely on code review because sites typically are releasing new bits and bytes all the time. “If you're an organization that puts out 12 releases a year, and you only have someone review that code once a year, there are 11 chances that bad code is going to be pushed out that year,” he says. The opportunity to introduce vulnerabilities is far greater at heavily trafficked, user-driven properties, like Facebook and Etsy, which push new code multiple times daily.

Back at ITG, with its limited web presence, Prendergast doesn't have to concern himself with that threat vector as much as some of his peers do. But he knows full well that as the traditional perimeter-based model of security becomes null and void, even internal applications face significant risk. Often, such attacks begin with a mistake by an end-user that invites the adversary through the front door. That's why employees, not just developers, require conditioning.

“The most important thing we have, our differentiator, is that every single one of our staff is trained on a regular basis to recognize threats, including social engineering attacks, and to report anything suspicious to security,” he says. 

The internal threat: What has changed

The old model for protecting internal applications was to establish strong perimeter security (e.g., firewalls) and allow full trust on the enterprise intranet. That model has become invalid over the last few decades due to a variety of factors:

  • The presence of malware that can now be delivered via USB memory stick, email or a malicious website.
  • Multi-hop attacks that jump from web applications or internet connections to intranet systems.
  • Regulatory compliance requirements that permit only certain roles to access or modify sensitive information.
  • Both mobile and cloud applications that are heavily exposed and need to be defended from attack.
  • The fact that many systems in intranets and even firewalled/locked room data centers have some form of direct or indirect internet access, providing attack surfaces which are insufficiently defended and vulnerable.

Source: HP
