Threat information is coming from a variety of sources, reports Angela Moscaritolo.

As the new vice president and chief security officer of the organization responsible for ensuring the reliability of North America’s bulk power system, Mark Weatherford has some concrete cybersecurity goals.

As one of his primary objectives, Weatherford, who has been working at the North American Electric Reliability Corp. (NERC) for almost four months, says he intends to build out a threat and vulnerability management program to serve as a resource for the thousands of companies that have a part to play in delivering power.

Such an effort is needed, says Weatherford, because resources are not spread equally across the electric grid. Many bulk power system owners, operators and users lack the time, money or personnel to develop their own cyberthreat intelligence capabilities. At the same time, the sector is interconnected, and all members must work together to keep the electrical grid in balance.

Weatherford believes that NERC – an organization whose mission is to help maintain and improve the reliability of the bulk power system – has a responsibility to ensure all members of the electrical grid have access to important information about threats and vulnerabilities. The program Weatherford intends to build out is still in its infancy, but he says it will leverage relationships with computer emergency response teams (CERTs) and other organizations to gather intelligence about threats facing the grid. This information will be disseminated through security alerts and reports, he says.

Sources of intel

The goal of any cyberthreat intelligence initiative should be to collect, analyze and share data to detect and respond to an existing compromise or prevent a future threat, says Adrien de Beaupré, an incident handler at the SANS Internet Storm Center, an all-volunteer cyberthreat and internet monitoring service.

Beyond gathering data, he adds, businesses must tailor a response for short- and long-term planning. “I personally don’t think organizations spend enough time in this area, but some spend a very significant amount of time and money,” says de Beaupré.

These days, organizations obtain intelligence about threats and vulnerabilities from a variety of sources, says André DiMino, co-founder and director of the Shadowserver Foundation, a nonprofit web intelligence gathering group.

As a starting point, internal data, such as logs from spam filters, intrusion detection systems and firewalls, can and should serve as a wealth of intelligence. “There’s a tremendous amount of data in your network that if you could collect and use it, it could help you make better risk-based decisions and secure the enterprise,” says Rich Baich, principal at consulting and risk management firm Deloitte & Touche.

But internal data only provides part of the story. It must be correlated with external threat information to paint the full picture of network activity.

Commercial cyberthreat intelligence tools, services and subscriptions are widely available, or organizations can choose to develop their own intelligence-gathering capabilities in-house. Developing those capabilities internally can be fairly expensive, however, and purchasing a commercial solution is often not cheap either, says de Beaupré, who is also a senior IT security specialist at security consultancy EWA-Canada.

Fortunately, there are a number of organizations that provide free intelligence about threats and vulnerabilities, such as the Shadowserver Foundation, the SANS Internet Storm Center and Team Cymru. Also, certain regional and industry-specific groups exist to share cyberthreat intelligence, such as the Electrical Sector-Information Sharing and Analysis Center (ES-ISAC), which facilitates communication between members of the electricity sector, the federal government and other critical infrastructures. NERC’s Weatherford says the ES-ISAC provides analyses and warnings about security incidents and threats and serves as a place for members of the sector to share information.

In addition, there are a number of other ISACs for those in specialized markets, including financial services, state and local governments, health, public transit, real estate, research and education, water and the supply chain. In his former post as CISO of the state of California, Weatherford says the Multi-State ISAC (MS-ISAC) served as an important resource. In emergency situations, members from all 50 states participating in the ISAC can get on a conference call within a few hours to discuss urgent cybersecurity issues, he says.

Aside from the many organized sources of cyber intelligence, many experts say that informal information-sharing can be fruitful. Personal relationships are important because threat intelligence is often casually shared among members of the information security community.

“Those relationships are, quite frankly, some of the most strategic things a CSO could have,” Weatherford says.

Developing such personal ties takes time, but they can be especially helpful because very few organizations have the resources to devote to a dedicated intelligence-gathering initiative, he adds.

“I have never had the luxury of having staff devote their day to gathering intelligence-type information on security threats and vulnerabilities,” he says. “That’s why those relationships are so important.”

Weatherford recounts one recent incident when an informal warning from a friend in the security community proved particularly helpful. The incident happened earlier this year, while Weatherford was still in California. He got to work early in the morning and noticed an email from a friend located across the country. In it, the friend said he had heard chatter that an anti-virus company had released a bad signature file that corrupted some systems to which it was applied.

It turned out that the flawed update caused computers to become stuck in an endless cycle of reboots, crippling PCs in organizations around the world. By receiving this information early, Weatherford was able to put out a statewide alert and make people aware of the problem.

“We were able to avoid some trouble and it was all because of a personal relationship I had with a colleague in New Jersey,” he says.

Collaboration and information-sharing among members of the security community have increased in recent years, Shadowserver’s DiMino says. The Conficker worm, which was first discovered in 2008 and infected millions of PCs around the world, fostered unprecedented cooperation among researchers and competing security companies. The effort was successful at neutralizing the threat, and Conficker has been cited as a lesson to the computer security industry that working together and sharing observations is useful.

Relevance is key

With so many potential sources of information, IT security pros must ensure the information they obtain is relevant, accurate, timely and actionable, SANS’ de Beaupré says. Ensuring that threat data is useful to a particular organization requires disciplined knowledge of that organization’s IT architecture. If, for example, an organization receives intelligence about a new Microsoft vulnerability, but does not make use of Microsoft platforms, the intelligence is useless, de Beaupré says.
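The relevance test de Beaupré describes can be sketched as a simple filter of incoming advisories against an asset inventory. The advisory fields, inventory contents and identifiers below are illustrative assumptions, not a real feed format:

```python
# Illustrative sketch: keep only advisories that match platforms the
# organization actually runs. All data here is hypothetical.

asset_inventory = {"linux", "cisco ios", "postgresql"}  # platforms in use

advisories = [
    {"id": "ADV-001", "platform": "microsoft windows", "severity": "high"},
    {"id": "ADV-002", "platform": "linux", "severity": "medium"},
]

# An advisory is relevant only if its platform appears in the inventory
relevant = [a for a in advisories if a["platform"] in asset_inventory]

print([a["id"] for a in relevant])  # → ['ADV-002']
```

In practice the inventory would come from a configuration management database rather than a hard-coded set, but the principle is the same: intelligence about platforms an organization does not run can be discarded before anyone spends analyst time on it.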

Five years ago, there were a handful of firms in the world that provided commercial cyberthreat intelligence information, and now it seems that “every mom-and-pop shop” has a stake in the market, de Beaupré says. The commercialization of the cyberthreat intelligence space is evidence of how important it is, he says. But because there are so many vendors in the field, maturity levels and quality of service vary among providers. And, even when using a paid service, there is still a certain amount of analysis that must be done manually, de Beaupré says.

Organizations should correlate internal data with threat information obtained from outside sources, DiMino says. For instance, an organization’s firewall logs may indicate that a high number of brute-force SSH (Secure Shell) login attempts are coming from a set of IP addresses. By correlating that information with data obtained from an external cyberthreat intelligence source, it may be possible to determine that a specific botnet operating out of a certain location is responsible for the attacks. Or, by correlating other activity seen on an enterprise network with external data, it may be possible to discover that computers are infected with a data-stealing trojan or are part of a distributed denial-of-service botnet, DiMino says.

“The ability to correlate data helps them get a better picture of what’s going on in their network,” he says.
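The SSH brute-force scenario DiMino describes can be sketched as a join between firewall log entries and an external feed of known-bad addresses. The log format, feed contents, threshold and botnet name below are illustrative assumptions:

```python
# Illustrative sketch of correlating internal firewall logs with an
# external threat-intelligence feed. All IPs and names are hypothetical.

from collections import Counter

# Simplified firewall log: (source_ip, destination_port) per connection
firewall_log = [
    ("203.0.113.7", 22), ("203.0.113.7", 22), ("203.0.113.7", 22),
    ("198.51.100.4", 22), ("198.51.100.4", 22),
    ("192.0.2.10", 443),
]

# Hypothetical external feed mapping known-bad IPs to botnet names
threat_feed = {
    "203.0.113.7": "example-ssh-botnet",
    "198.51.100.4": "example-ssh-botnet",
}

BRUTE_FORCE_THRESHOLD = 3  # repeated SSH attempts from one source

# Count SSH (port 22) connection attempts per source address
ssh_attempts = Counter(ip for ip, port in firewall_log if port == 22)

# Flag sources that both exceed the threshold and appear in the feed
for ip, count in ssh_attempts.items():
    if count >= BRUTE_FORCE_THRESHOLD and ip in threat_feed:
        print(f"{ip}: {count} SSH attempts, attributed to {threat_feed[ip]}")
```

Neither signal alone tells the whole story: the firewall logs show volume but not attribution, and the feed shows attribution but not whether those addresses are actually probing this network. The correlation is what turns raw data into an actionable picture.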

From both tactical and strategic perspectives, IT security professionals will likely lack the time to change their security programs to respond to a particular threat, Weatherford says. But information about threats will cause organizations to evolve their security programs in the long term.

In the past, for example, most organizations were focused on securing their perimeters with a defense-in-depth strategy. Now, with the advent of smartphones and remote access, there is no perimeter, Weatherford adds. As a result of this change, security programs have evolved and many now place a greater emphasis on safeguarding data.

Within the critical infrastructure sector, Weatherford says one particular threat – Stuxnet – will undoubtedly drive changes. Stuxnet, a worm designed to target industrial control systems software manufactured by Siemens, has been called one of the most important malware cases of the last 10 years. It is a game changer, experts say, because it was an act of cyberwarfare with a specific purpose: to cause real physical harm. Because of Stuxnet, other control systems vendors are retooling their internal processes to mitigate similar threats that may come down the pipeline, he says.

Weatherford was confronted with Stuxnet during his first week on the job at NERC. “As we watched what was going on, I realized this Stuxnet thing was a big deal and not run of the mill,” he says.

In response, he pulled together a team of experts from government and industry to research the threat and draft an advisory containing concrete steps to mitigate the vulnerabilities associated with the worm. The effort was a prime example of the relationships necessary to gather important threat information and disseminate timely, actionable recommendations to the community at large.

For Weatherford, the incident also brings up another important point about cyberthreat intelligence. “We have to be careful of spamming our own people with too much information,” he says. “If I send out one or two advisories every day, people will become immune. You have to be discreet and tactful about when and how you do that. A Stuxnet doesn’t happen every day – that’s when you ring the bells and alarms if you are judicious.”

[sidebar]

Threat intel

Internal sources

  • People
  • Trouble tickets and incident databases
  • Intrusion detection systems
  • Intrusion prevention systems
  • System logs
  • Application logs
  • Firewall logs
  • Router logs
  • Anti-malware logs
  • Sniffers
  • SIEM
  • Log collectors

Free sources

  • Personal relationships
  • SANS Internet Storm Center
  • The Shadowserver Foundation
  • Team Cymru
  • SRI Malware Threat Center
  • Malware domains
  • CERT
  • US-CERT
  • Twitter
  • Anti-virus vendor blogs

Source: Adrien de Beaupré, SANS Internet Storm Center