Surely by now, organizations should have erected the strongest barriers against hackers. But, as Illena Armstrong and others point out, many holes remain.
From social engineering to session hijacking, hackers have a bevy of methods for gaining unauthorized access to systems worldwide. But what most cybercriminals actually exploit to break into networks are the enterprise-wide blunders continually made by organizations of all sizes.
Now, in an era when hacking tools and how-to advice are conveniently served up straight from the internet to any interested party's desk, almost anyone can become a malicious hacker, says George Kurtz, CEO of Foundstone. The "esoteric knowledge of a few has been made available to the masses," he notes.
And, unfortunately, much of that knowledge boils down to scanning and sniffing techniques that let cyberattackers search out an organization's unpatched vulnerabilities and misconfigurations - the corporate slip-ups behind about 99 percent of successful intrusions, he says.
Because of the far-reaching availability of tools and techniques, the days of starting from scratch and writing code for attacks are long over, adds Yona Hollander, vice president of security research and head of Entercept's Ricochet Team.
"Tools like security scanners, fuzzers - tools that send many combinations of packets looking for vulnerable servers, password crackers [and] sniffers are packaged and readily available via chat rooms, shareware, etc.," he notes.
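To make the class of tool Hollander describes concrete, the core of a mutation fuzzer - a tool that churns out corrupted variants of a valid request in the hope of crashing a vulnerable server - fits in a few lines. This is a minimal, hypothetical sketch of the mutation step only, not any particular packaged tool:

```python
import random

def mutate(data, n_flips=4, seed=None):
    """Return a copy of `data` with a few randomly chosen bytes replaced.

    A real fuzzer would feed thousands of such variants to a target
    server and watch for crashes; this shows only the mutation step.
    """
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

# Generate a handful of corrupted variants of a template request.
template = b"GET /index.html HTTP/1.0\r\n\r\n"
variants = [mutate(template, seed=i) for i in range(5)]
```

The point of the sketch is how little effort is involved: the "combinations of packets" Hollander mentions are just cheap, automated perturbations of known-good input.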
Even with the benefits provided by many security solutions today, hackers still find ways to gain access because system holes abound, he contends. Stack overflows, improperly configured systems and unpatched networks are open invitations to expert and not-so-expert hackers alike.
"A common thread among [these] holes is that they are due to user neglect or error, or an inability to keep up with the latest patches and fixes," he adds. "This is a common problem that only escalates as hackers come up with more sophisticated attack methods."
But, tackling the influx of misconfigurations is a massive and continuous undertaking that often proves costly and time-consuming, and requires a great deal of expertise, notes Foundstone's Kurtz. To compound these issues, many organizations come at the problem of rampant system holes and flailing security measures all wrong. They too frequently fail to develop or follow even the most skeletal of plans to manage system vulnerabilities and security as a whole.
"I think what you have right now is a lot of resources kind of scrambling around figuring out what's the latest vulnerability and threat and does it apply to me," says Kurtz. "The first thing [companies] need to do is put an effective vulnerability management program in place, that is able to manage both the assessment and the remediation process. The finding of the issues is half the equation - the other half is getting them fixed."
Kurtz suggests that such a program include elements that identify what the vulnerabilities and threats are and how they apply to the particular company. It should also note who's accountable for classifying the corporation's valuable assets and executing patches and security mechanisms across the enterprise to protect those assets. Additionally, such a program should enable the company to measure ongoing remediation efforts, patching of holes, and the entire process of making overall corporate security better and better.
"If you can't measure it, you can't manage security. Just think about the basics. Take care of the low-hanging fruit first," he says. "Don't worry about all kinds of different security strategies. Have the right policy in place - from the policy level to the standards guidelines, controls, vulnerability management, the whole bit - and then take care of the 20 percent that's going to represent 80 percent of the risk. That's the low-hanging fruit - the exposures, the vulnerabilities you have that [will] knock out really 99 percent of the intrusions."
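Kurtz's 80/20 advice can be made concrete with a simple prioritization pass: score each finding by risk, then fix the smallest set of exposures that accounts for the bulk of the total. The findings and scores below are hypothetical, a sketch of the idea rather than a prescribed method:

```python
def top_risk(findings, coverage=0.8):
    """Return the smallest set of findings whose combined risk score
    covers at least `coverage` of the total risk."""
    ranked = sorted(findings, key=lambda f: f["risk"], reverse=True)
    total = sum(f["risk"] for f in findings)
    selected, running = [], 0.0
    for f in ranked:
        if running >= coverage * total:
            break
        selected.append(f)
        running += f["risk"]
    return selected

findings = [
    {"id": "unpatched-IIS", "risk": 50},
    {"id": "default-password", "risk": 30},
    {"id": "open-share", "risk": 10},
    {"id": "banner-leak", "risk": 6},
    {"id": "icmp-timestamp", "risk": 4},
]
urgent = top_risk(findings)  # the "low-hanging fruit" to fix first
```

On this sample data, two of the five findings cover 80 percent of the total risk - which is exactly the measurable, manageable slice Kurtz says to remediate first.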
But, he warns that organizations must accept that they will never be risk and exposure free. "Take credit card fraud. The banks realize they will never have zero credit card fraud," he says. "What they need to do is get it down to an acceptable level, know that's the cost of doing business, and move on. That's what companies need to focus on."
Who can you trust? Risks within the development domain
Stuart King narrates a true story that happened some months ago to an application service provider (ASP) supplying online expense management solutions.
The first indication of an incident was when the helpdesk was inundated with calls from clients unable to log into their accounts. The continuity site kicked in, and an investigation was launched.
The finger of suspicion
A combing of web logs turned up suspicious activity involving the account of a senior company employee. The employee in question was a board director and above suspicion. Through this account, other users' accounts had been interfered with, but this did not explain the denial-of-service attack.
A further inspection discovered that a particular file had been accessed through the logged-on account - its name the same as a legitimate file's, but with an .asp extension instead of .html. The difference matters: an .asp (Active Server Page) file can contain code that executes on the web server against resources such as databases, while an .html file can only contain code for the client browser, i.e. the website user.
The legitimate .html file contained business graphics and user help text. The other file contained a text field allowing SQL queries to be entered and executed against the production database without authentication, making use of the application connection string stored within the global.asa file, a file common to IIS web applications. This was now a case of deliberate sabotage.
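To make the hole concrete: the rogue page amounted to a handler that took a raw SQL string from a form field and ran it with the application's own database credentials, with no authentication check. A minimal sketch of that pattern follows, with Python's sqlite3 standing in for the production database; all names here are hypothetical:

```python
import sqlite3

# Stand-in for the connection string kept in global.asa:
# the application's own, fully privileged database connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('director', 's3cret')")

def rogue_page(sql_from_form):
    """What the malicious .asp file effectively did: execute whatever
    the visitor typed, with no authentication and full app privileges."""
    return conn.execute(sql_from_form).fetchall()

# An attacker can read credentials for every application user...
leaked = rogue_page("SELECT username, password FROM users")
# ...or tamper with accounts outright.
rogue_page("UPDATE users SET password = 'owned'")
```

Nothing in this pattern needs to break authentication or encryption; once the page is sitting on the production server, the application's own trusted connection does all the work for the attacker.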
The question was how the file had been migrated to the production web servers. Only the test manager had permission to deploy new files from the source code database, after the code had been reviewed, tested and signed off, and only the test manager's machine had the necessary access to the production domain.
It turned out to be a classic case of social engineering. The guilty party, an ex-employee who had left three weeks earlier, had gained access to the test manager's PC and account through a verbal request to use the CD writer installed on it - her apparent intention being to copy some code for archive purposes.
Seizing the moment
The test manager saw no reason to refuse and left his desk unsupervised. She seized the opportunity and copied the file across the network from a public shared folder on her own PC. At the same time (so we assumed), she queried the database for the username and password combinations of application users. This former employee had resigned after becoming disillusioned with her work assignments; there was no reason to suspect her of being a menace, although she had not been shy about airing her grievances.
The above scenario demonstrates once again that effective internal security requires senior management buy-in, proper policies and procedures, and training for users.
Stuart King, CISSP, is a U.K.-based security consultant.