Glenn Greenwald is not just a reporter of facts. He is a tireless advocate for what the facts reveal and the implications for American and global citizens.
The revelations he published a year ago in The Guardian – unveiling the extent of the NSA's collection of U.S. citizens' and foreign leaders' communications in all digital forms – were a bombshell that reverberated throughout the world. Greenwald's new book, No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State (Metropolitan Books), details the NSA's operations, as well as the personal drama of contact, and then partnership, with Edward Snowden, the former NSA contractor. The two ultimately disseminate the documents and tell the story of what those documents reveal about an agency overstepping its authority.
The first part of No Place to Hide reads like a thriller, narrating the high tension and frantic arrangements in the months leading up to a week-long meeting in a Hong Kong hotel room between Snowden and the author, an investigative journalist for The Guardian. Joining the two were a second journalist, Ewen MacAskill, who'd been at the paper for 20 years, and Laura Poitras, a documentary filmmaker. What follows is the now familiar tale of the conversion of a young Snowden, who joins the military believing he is serving his country, into an operative who ultimately believes he is acting as a patriot in service to the people.
After his military service, Snowden begins his intelligence career as a security guard and works his way up through the CIA, the NSA and defense contractors to become an expert in cyber security. But troubled by the extent of the NSA's surveillance activities and questioning their legality, he eventually reaches a saturation point and decides to bring the agency's agenda to light. It's the nation that's gone rogue, Greenwald asserts, not the messenger.
In this book (and throughout his career), Greenwald relentlessly reminds us of the principles on which this nation was founded. His passionate advocacy aims to retrain our minds that these ideals – codified in the Constitution – matter. His argument, lucidly laid out, is a call to reason. It is, as well, a fearless affront to authority, especially an authority squandering its agenda on a scheme more dastardly than that of any Bond villain. Greenwald has little patience for the NSA's PR machine asserting its pretext, namely the fight against terrorism, and chews right through its veneer to expose the deception.
He spends the second half of the book building a convincing argument of what all the documents portend: namely, that surveillance of this magnitude is an offense to this nation's guiding principles of liberty and the guarantees of privacy. These activities, he says, are a bloated abuse of power. They don't just steal the American conscience – distracting resources while self-perpetuating a ravenous bureaucratic hydra – but squander the American agenda into a cesspool of mismanaged, and ultimately inefficient and counter-productive, programs. The problem, as he sees it, is:
"…that there are far too many power factions with a vested interest in the fear of terrorism: the government, seeking justification for its actions; the surveillance and weapons industries, drowning in public funding; and the permanent power factions in Washington, committed to setting their priorities without real challenge."
But, Greenwald's probing raises important questions that may not be so easily resolved. Is it too late for anyone to care? Have we become so immune to the abuse of authority that this reasoned explication will fall on deaf ears?
Greenwald, a former constitutional lawyer, lays out his exegesis as if to convince a jury, presenting arguments to counter every possible opponent. For those who argue that their own good behavior, for example, makes them immune from government intrusion, he counters: "...the true measure of a society's freedom is how it treats its dissidents and other marginalized groups, not how it treats its good loyalists."
And, his campaign not only targets the government, but the mainstream media which, he says, is often barely more than a mouthpiece for the elite. He cites numerous examples, particularly the practice of cowering at the possibility of legal problems, necessitating running items past government clearinghouses for approval.
Greenwald saves his loudest challenge for the current administration, which, he points out, has ramped up prosecution of whistleblowers and equated "adversarial investigative journalism with a crime."
This book is particularly satisfying because it unfolds like an algebra equation, positing a problem and then laying out a reasoned solution. The author's intent becomes quite clear early on: to persuade the reader that confronting entrenched systems is vital to the democratic process, and that surveillance stifles the creativity of citizens – all in order to keep a ruling elite in power.
Ultimately, No Place to Hide, like all Greenwald's writing, is a call for critical thinking. It advocates for principles and ideals that even the most liberal among us have let fade from our lives amidst our own personal struggles.
This article is an opinion piece and does not reflect the views or position of SC Magazine.
Today marks my final day at SC Magazine after more than 7-1/2 years.
Beginning next week, I will be taking on the role of manager of online content at the Chicago-based information security company Trustwave, a newly created position for which I'm very excited.
I leave SC with a heavy heart.
My first day was Jan. 16, 2006, when I joined as a reporter. Here's the first-ever story I wrote. Surprise, surprise: It was about a data breach.
My final day is today, Sept. 3, 2013, and here is my final story. As if it were scripted, the piece is about the state of data breach lawsuits.
Seems like the perfect bookends.
In between, I wrote thousands of articles, ranging from breaking news to blog posts to 3,000-word covers; recorded hundreds of videos, podcasts and webcasts; and probably conducted tens of thousands of interviews. I've watched SC in the U.S. dynamically grow and further entrench itself as the go-to IT security trade publication for professionals. And all along, the monthly print magazine stayed strong, even as the climate for news consumption and ad dollars hastily moved online.
When I think about the expanse of time I have dedicated to this publication, it takes my breath away. Spending one fifth of my life here truly is a testament to how much I've believed in this title and how wonderful and talented my co-workers have been. So thanks to them and thanks to all of my sources.
But most importantly, thank you, the reader. I hope I've helped to inform, educate and maybe even entertain you over the past many years.
Happily, I'll be remaining in this wonderfully vibrant industry, albeit in a different role, but one in which I plan to produce important content for you nonetheless.
Thanks for the memories. And I'm looking forward to creating new ones.
You can find me on Twitter: @DanKaps.
Twice during his appearance on Monday morning's 'Today' show on NBC, host Carson Daly turned to security researcher Chris Valasek and said: "I'm glad you guys are on our side."
It may have been the most important sentence anyone uttered during the five-minute segment.
Valasek and Charlie Miller (or Dr. Miller, as Valasek referred to the legendary iOS white-hat hacker) were on the program's set to demonstrate how they can compromise the internal computing system of a test Ford Escape to manipulate the car's speedometer and control its steering wheel. The pair will formally present the research on Friday morning at the annual DefCon gathering in Las Vegas.
And while the NBC segment certainly underscored for the mainstream how vulnerable to digital attack network-connected automobiles are – the goal, of course, is to get car manufacturers to take security more seriously – there was another positive consequence.
Anytime white-hat hackers (those dedicated to finding vulnerabilities before the bad guys do) can appear on national TV and be framed in a positive light, never mind be praised by the host – Valasek even looks and dresses like Daly – it will go a long way toward improving a public perception of security researchers that remains in serious need of nursing.
"I think people fear the unknown," Trey Ford, general manager of Black Hat, the hacking conference that will precede DefCon this week, told me recently. "There's this spooky factor. There's a certain taint these guys are smeared with. You're fighting a moniker and a fear of the unknown."
So it's no surprise the public has sat idly by over the last two years as federal prosecutors prepared overzealous hacking cases under the comically outdated Computer Fraud and Abuse Act (CFAA) against researchers like (now-deceased) Aaron Swartz and (now-jailed) Andrew "Weev" Auernheimer.
And it's also no surprise that some readers' comments on the death last week of Barnaby Jack, 35, who was set to deliver a talk on hacking pacemakers at Black Hat, were laced with a mixture of confusion, ignorance and hate.
Jack is no different than Valasek or Miller. He could have just as easily been on the 'Today' show set. Anyone speaking at Black Hat or DefCon could have too. Cars just happen to be cool.
But know this, people of the world: Security researchers usually have full-time jobs where they get paid to tinker with software, hardware and services. They may proudly consider themselves hackers. They may tend to have big egos and get flashy about their discoveries. And they may occasionally demand to be paid big bucks for finding bugs.
But the end result of their work is almost always consumer advocacy, aka your best interests.
Be happy they're around. Be happy they're motivated much more by good than by greed. Be happy they often catch stuff before the bad guys do. They are your personal watchdogs.
And while, as Errata Security's Robert Graham argues, it may never be possible to transform the term "hacker" into a meaning that is positively embraced by the public, there is no denying that a hang with Carson Daly helps.
In November, months before Edward Snowden would become a household name, President Obama issued a memorandum to the heads of federal agencies, spelling out new guidance for deterring the security threat of insiders.
Predictably, the commander-in-chief positioned the memo, which followed his formation of an Insider Threat Task Force a year earlier in the wake of WikiLeaks, as a means by which classified information and national security could be protected.
The memo defines the insider threat as "potential espionage, violent acts against the government or the nation, and unauthorized disclosure of classified information." The announcement drew relatively little news coverage, but it promulgated some basic new requirements:
- Technology.
- Education.
- Privacy protection.
Standing alone, these "minimum standards" sound similar to the protocols that would be listed as part of any robust insider threat program. Still, the guidance was met with a fair amount of skepticism from the civil liberties community, which worried that it failed to draw any distinction for whistleblowing.
Now, a new report from McClatchy Newspapers, which examined government documents surrounding the program, has confirmed those apprehensions. The Obama administration's initiative is much more expansive than previously understood.
The documents McClatchy analyzed show that the program encourages employees to be on the constant lookout for suspect behavior exhibited by their colleagues, and it can impose very severe penalties for failing to speak up. In addition, the program is broadly defined, meaning agencies can implement it as they see fit, which could open the door for significant abuse.
Some agencies have taken this "latitude" as justification to equate whistleblowing with malicious behavior as egregious as aiding the enemy, a conflation that could have a chilling effect on a worker who seeks to report, either through the recommended channels or otherwise, unethical or possibly illegal conduct in the workplace.
Not only could the program significantly discourage whistleblowing, but because it relies on employees to profile, be inherently skeptical of one another and possibly file dubious claims, camaraderie and morale undoubtedly will suffer.
And once you've been fingered as a violator, you might be out the door.
Information exposed by Edward Snowden and, before him, Bradley Manning underscores the need for organizations to further bolster their insider threat strategies. They must be built with the understanding that the portrait of the malicious insider has changed. He or she may not necessarily be someone operating out of self-interest – like a worker wanting to steal a customer list to start a competing company or a disgruntled employee wishing revenge on a superior – but may actually be a "conscientious objector," someone who is motivated by morals and ethics and the betterment of others.
The Obama program reportedly offers "greater protection for whistleblowers who use the proper internal channels to report official waste, fraud and abuse," but would you feel comfortable signaling wrongdoing in an environment where snitching on a co-worker is considered good form? Nobody wants to work in a climate of paranoia, distrust, intimidation and fear. And that's why this initiative seems to be less about collaring the true malcontents and more about going after the men and women who potentially could embarrass the government. Remember, President Obama's administration has prosecuted more government officials for releasing secret material than all other administrations combined.
Then again, if you're as slick as Snowden, who reportedly joined Booz Allen Hamilton for the sole purpose of exposing surveillance documents – something security expert Jeffrey Carr on Tuesday called the "targeted insider threat" – nothing may be good enough to prevent it. Carr suggests implementing better "background investigations and post-hire monitoring for network access anomalies" to combat this prospect.
Sounds more effective than turning Jane from Accounting into a clinical psychologist.
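Carr's suggested "post-hire monitoring for network access anomalies" can be sketched in a few lines. Here is a minimal, illustrative baseline check, assuming per-user daily access counts are already being collected; all names, numbers and the three-sigma threshold are hypothetical, not any agency's actual method:

```python
from statistics import mean, stdev

def flag_access_anomalies(daily_counts, threshold_sigmas=3.0):
    """Flag users whose most recent daily access count deviates
    sharply from their own historical baseline.

    daily_counts: dict mapping user -> list of daily access counts,
    most recent day last. Purely illustrative.
    """
    flagged = []
    for user, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            # Flat baseline: treat any change as anomalous
            if latest != mu:
                flagged.append(user)
        elif abs(latest - mu) > threshold_sigmas * sigma:
            flagged.append(user)
    return flagged

if __name__ == "__main__":
    activity = {
        "alice": [10, 12, 11, 9, 10, 11],   # steady usage
        "bob":   [10, 11, 9, 10, 12, 480],  # sudden bulk access
    }
    print(flag_access_anomalies(activity))  # ['bob']
```

The point of the sketch is that this kind of check watches behavior on the network, where the evidence actually lives, rather than asking co-workers to diagnose each other.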
In April 2009, Gen. Keith Alexander, director of the National Security Agency, took the stage at the annual RSA Conference in San Francisco for a keynote address. He told the crowd of thousands: "The NSA does not want to run cyber security for the government."
Instead, he said, the job of protecting U.S. infrastructure is a shared responsibility, falling into the hands of government agencies such as the Department of Homeland Security, as well as private sector companies and colleges and universities. “The government is here to protect the country from adversaries,” Alexander explained. “The NSA can offer technology assistance to team members. That's our role.”
Alexander wasn't lying, but he wasn't exactly telling the truth either, as leaks from former NSA contractor and whistleblower Edward Snowden have revealed. The NSA never wanted to be in the cyber defense game, but it very much was gearing up, as we now know, for offensive digital missions.
Two months after that RSA address, the U.S. Cyber Command was formed, described as a new armed collaborative for protecting Department of Defense networks. Not long after, Alexander was tapped to head up the command, while still leading the NSA. Fast forward to this past January, and the DoD announced plans to grow the command, which is closely tied to the NSA, nearly fivefold over the next few years, from around 900 to about 4,000 military and civilian personnel.
The talent boost will go toward safeguarding infrastructure deemed critical to the country's security, such as the power grid, but also toward executing offensive missions, according to a Washington Post report. Citing an unnamed U.S. official, the article, however, said there were restrictions in place so that the "military would act only in cases in which there was a threat of an attack that could really hurt."
That likely was the justification behind the "Olympic Games" program, responsible for the creation of the Stuxnet worm, which came to light in the summer of 2010 and which targeted Iranian nuclear systems. But does it hold water for the recent revelations by Snowden that the NSA is stepping up offensive cyber actions across the world?
Snowden leaked documents to the U.S. version of The Guardian newspaper that revealed that President Obama has ordered senior security and intelligence officials to "draw up a list of potential overseas targets for U.S. cyber attacks" that "can offer unique and unconventional capabilities to advance U.S. national objectives around the world with little or no warning to the adversary or target and with potential effects ranging from subtle to severely damaging."
But in an interview last week with Hong Kong's South China Morning Post, Snowden presented much more damning evidence of the extent of these targets and attacks. The 29-year-old told the paper that the United States already has conducted at least 61,000 hacking operations globally, including against hundreds of targets in Hong Kong and mainland China, among them private businesses and a university that routes internet traffic for Hong Kong.
According to the paper, Snowden wanted to showcase “the hypocrisy of the U.S. government when it claims that it does not target civilian infrastructure, unlike its adversaries."
In a live online chat, he told The Guardian on Monday: "I did not reveal any U.S. operations against legitimate military targets. I pointed out where the NSA has hacked civilian infrastructure such as universities, hospitals, and private businesses because it is dangerous. These nakedly, aggressively criminal acts are wrong no matter the target. Not only that, when NSA makes a technical mistake during an exploitation operation, critical systems crash. Congress hasn't declared war on the countries – the majority of them are our allies – but without asking for public permission, NSA is running network operations against them that affect millions of innocent people."
If this is true – that the United States is spearheading widespread online assaults of civilian targets, likely in an attempt to mine for sensitive information – it is a far cry from cases in which there's a threat of an attack that could "really hurt" the country.
One can liken these engagements to the nation's ever-expanding drone war, which allegedly targets suspected terrorist targets, but often results in the deaths of innocent civilians. War journalist Jeremy Scahill, who has conducted gripping, on-the-ground reporting in some of these secret war zones like Yemen and Somalia, worries that these attacks could lead to blowback, as the families of victims will be incited to take up arms against America.
While espionage and sabotage conducted through the digital sphere won't lead to bloodshed – at least not yet – news of these U.S. attacks is troubling. At the very least, the U.S. government runs the risk of losing all credibility in its efforts to discourage and prevent Chinese hackers from infiltrating American businesses and stealing hundreds of terabytes of data, as security company Mandiant documented earlier this year.
Should the nation continue to engage in this type of cyber behavior, in secret and protected from meaningful national debate and a full understanding of the legal framework behind it, serious unintended consequences could arise, ones that may make us weaker, rather than stronger, in cyber space. There have been initial attempts to define this emerging landscape, and U.S. ally Israel is also taking steps, but the effort is nowhere near where it needs to be.
In short, caution, not aggression, should be the default setting for U.S. foreign cyber policy.
Earlier this week, The Associated Press stunningly revealed that the U.S. Department of Justice secretly obtained "records for more than 20 separate telephone lines assigned to the AP and its journalists" covering "a full two-month period in early 2012." Presumably the feds were interested in finding out who had leaked to the AP information about a foiled al-Qaeda plot in Yemen, and Attorney General Eric Holder justified the snooping in the name of national security, an argument that, as the days pass, is growing increasingly dubious.
Four days later, four members of the provocative, ostentatious and now-defunct LulzSec hacktivist clan were sentenced in London after a two-day hearing. The LulzSec members – who were responsible for leaving a trail of mayhem and embarrassment in the wake of their mid-2011 attacks against organizations like the CIA, Arizona Department of Public Safety, Sony and 20th Century Fox – received relatively light sentences, ranging from 20 to 32 months, with two of the defendants only expected to serve half of that time, one sentenced to a youth facility and the other expected not to see the inside of a jail cell at all, assuming he stays clear of trouble.
The AP probe and LulzSec punishments may not, on the surface, appear connected. But they are, especially when one considers the case of Jeremy Hammond, the accused Anonymous and LulzSec-linked hacktivist who is charged with looting the computer systems belonging to the Arizona Department of Public Safety (allegedly done to protest tough immigration laws) and at HBGary Federal and global intelligence firm Stratfor (allegedly done to expose the inner workings of the so-called intelligence industrial complex). The Stratfor hack resulted in millions of emails being unearthed and, according to Rolling Stone, "focused worldwide attention on the murky world of private intelligence after Anonymous provided the firm's emails to WikiLeaks, which has been posting them ever since."
But unlike his counterparts in the U.K., Hammond has been handled far more aggressively here in the United States, having been held in prison in New York without bail since last March, often in solitary confinement, while being denied visitors as he awaits trial. A judge has determined him a flight risk.
It's clear that the United States wants to make an example of Hammond, and by throwing the proverbial book at him and essentially declaring him an enemy of the state, even before he stands trial, federal prosecutors are fully aware that this will discourage other people from engaging in similar acts that seek to expose government or corporate corruption and impropriety. The same applies to what the DoJ has done to the AP. In many ways, Hammond really is no different than the person or persons who tipped off the news agency about the counter-terrorism operation in Yemen. Or any whistleblower or press leaker for that matter. Hence, they all face similar treatment.
Although President Obama pledged to maintain the "most transparent administration" in history, his actions have proven otherwise. The events of this week are further proof that the U.S. government – and the corporations for which it looks out – is more interested than ever in preserving its cloak of national security secrecy. And if that means instituting press or source intimidation, or waging aggressive prosecutions against activists...well, you might want to get used to it.
When President Obama first addressed the nation in the hours following the Boston bombings last week, he admirably reserved judgment as to the motivation of the heinous attacks. "We still do not know who did this or why," he said. "And people shouldn't jump to conclusions before we have all the facts."
But as the president exited the podium without taking any questions, in a telling moment of what soon would come, the media that was present shouted: "Mr. President, was this terrorism?"
In his next address, the following day, everything was different. Obama referenced the attacks as terrorism, still without any evidence that an "unlawful use of force or violence...in furtherance of political or social objectives" (as the FBI defines terrorism) had actually occurred. The mainstream media, if it hadn't already, updated its menacing-looking graphics and headlines to reflect what it seemingly wanted to hear since the blasts went off near the finish line of the Boston Marathon. The parade of national security "experts" and counter-terror talking heads could officially commence.
But while we may now know who likely committed this cowardly act, we still aren't that much closer to understanding why. And we still shouldn't jump to conclusions before we have all the facts.
Unfortunately, we already have.
And that's exactly why the term "terrorism" can be so reckless. Since it first was used to describe the bombings, it singlehandedly permitted the city of Boston to experience an unprecedented lockdown, transformed overnight into what briefly resembled martial law, where residents were forced from their homes at gunpoint to ensure that one of the suspects was not holed up there. And most residents dutifully submitted to this, without any reservations that the actions resembled an authoritarian state. The scene in Boston looked like something we are used to seeing in a faraway place. But in an instant, it was normalized in a neighborhood minutes from Fenway Park, all because the authorities were hunting for terrorists. Terrorists, you see. Not run-of-the-mill murderers.
And the worst could be yet to come if the bombings are used to usher in a further breakdown of Americans' privacy or a backlash against Muslims.
What has transpired in Boston and the surrounding areas is exactly the reason why the term terrorism is so dangerous. As Guardian columnist Glenn Greenwald has noted, the word is meaningless, but, at the same time, it justifies everything. And what separates the actions in Boston from that of the Newtown school shootings, the Aurora movie theater massacre, or the sniper spree in Washington, D.C.? The only constant, it seems, is that terrorism almost exclusively is used to characterize people of Muslim descent carrying out violence against Americans.
And just as a supposed act of physical terrorism has prompted hysteria on the streets of Boston, we must recognize that a similar consequence is just as plausible in the cyber realm. Already, members of Congress have used the blasts as a call to pass the controversial Cyber Intelligence Sharing and Protection Act (CISPA). Rep. Mike McCaul, R-Texas, went as far as to conflate the explosions in Boston with the possibility of "digital bombs" being dropped on U.S. critical infrastructure.
Along these same lines, one should also recognize the increasing use of the word "cyber terrorism," itself a risky label to apply to malicious acts conducted online. While the definition is up for debate, the U.S. State Department has defined cyber terrorism as "the premeditated, politically motivated attack against information, computer systems, computer programs, and data which result in violence against noncombatant targets by sub-national groups or clandestine agents."
Has the U.S. ever been a victim of this? That's unlikely. Yet we have seen the designation applied on a number of occasions for far less calamitous acts. As an example, it recently was invoked in the wake of a series of distributed denial-of-service attacks launched against U.S. banking websites, presumably conducted because the alleged perpetrators were seeking the removal of a YouTube video that they found offensive to Muslims.
But there is no reliable evidence to suggest that the attacks launched by a group calling itself Martyr Izz ad-Din al-Qassam Cyber Fighters were terrorism related. No proof exists that the group is connected to any terror organizations. And the DDoS attacks didn't have any serious impact on banking infrastructure, i.e., no Americans lost access to their money, never mind any lives being harmed.
That didn't stop Obama-appointee Debbie Matz, chairwoman of the National Credit Union Administration and a former member of the president's economic team, from sending a letter in February to credit unions advising them to implement DDoS mitigation strategies given an “increasing frequency of cyber terror (emphasis mine) attacks on depository institutions.”
Distinction and nuance here are critical. Cyber terrorism is not espionage or online financial fraud. And it's certainly not firing packets of traffic at a server so a website gets knocked offline.
As Peter Singer, director of the Center for 21st Century Security and Intelligence, and a senior fellow in the Foreign Policy program at Brookings, wrote late last year: "About 31,300. That is roughly the number of magazine and journal articles written so far that discuss the phenomenon of cyber terrorism. Zero. That is the number of people who have been hurt or killed by cyber terrorism at the time this went to press."
That's why it is textbook demagoguery when those in power freely and indiscriminately wield a term as culturally and politically significant as terrorism. At its best, this rhetoric will drive up levels of fear. At its worst, it will be used as justification to pass overly restrictive and invasive laws that govern use of the internet, while permitting increased surveillance and the seizure of personal information. Remember, if what we saw in the suburbs of Boston last Friday is any indication, actions you never thought imaginable could one day happen in cyber space too.
By no means an exhaustive list, but here's an assortment of convicted cyber criminals over the last three years who have received less prison time than Andrew Auernheimer, also known as "Weev."
The security researcher and self-proclaimed internet troll earned 41 months behind bars Monday for his role in using a script to retrieve data on roughly 120,000 Apple iPad users from a public web server.
- Romanian hacker Cezar Butu, who pleaded guilty to compromising the credit card processing systems of Subway restaurants in 2011, was sentenced to 21 months in prison.
- A Chicago woman with roots in Nigeria was sentenced to 30 months in prison for playing a key role in extracting cash from the bank accounts of individuals whose prepaid payroll information was stolen in a massive 2008 breach. Sonya Martin, 45, was part of a gang that evaded encryption on the network of Atlanta-based RBS WorldPay's U.S. payment processing division to compromise prepaid payroll debit cards, prosecutors have said.
- Two men each were sentenced to 36 months in prison for withdrawing tens of thousands of dollars from ATMs with credit card information that was stolen from craft-store retail chain Michaels Stores. In March, Eduard Arakelyan, 21, and Arman Vardanyan, 23, pleaded guilty to one count each of conspiracy to commit bank fraud, bank fraud and aggravated identity theft.
- A former bank executive was sentenced to 33 months in prison for committing 84 fraudulent wire transfers that deposited $673,000 of UBS Securities funds into his personal accounts. Shawn Reilly, 34, of Congers, N.Y., also received three years of supervised release. In addition, Reilly, who served as settlement group director at UBS from November 2007 to January 2010, was ordered to pay back the money he stole when he tricked his team into making "false journal entries" and authorizing bogus transfers, believing they were for legitimate customers. On Sept. 6, he pleaded guilty to one count of bank fraud.
- A Kansas City man was sentenced to two years in prison after he was found guilty in September of creating a virus and amassing a 100,000-node botnet to launch DDoS attacks against a number of websites, including Rolling Stone and Radar. Bruce Raisley, 48, launched the attacks against sites that published articles detailing an incident in which he agreed to leave his wife for a "woman" whom he met on the internet, according to prosecutors.
- A former IT head in Virginia, upset about being fired, was sentenced to two years and three months in prison for hacking into his former employer's website and deleting approximately 1,000 files. Darnell Albert-El, 53, of Richmond, Va., pleaded guilty in June to one count of intentionally damaging a protected computer without authorization, according to federal prosecutors.
- A former senior database administrator at a Houston-based electric provider, who was fired three months before he hacked into the corporate network to steal personal data belonging to 150,000 customers, was sentenced to a year in prison. Steven Kim, 40, was fired from his job at Gexa Electricity in January 2008. Three months later, he broke into the energy company's database to download files containing customer data such as names, Social Security and driver's license numbers, billing addresses and birth dates.
Auernheimer decided to fight the charges rather than plead guilty, unlike his co-conspirator, Daniel Spitler. Had he admitted guilt, he might have received a lighter sentence. But it's worth noting that Auernheimer never intended to profit from the information he exposed, aside from the exposure that the "hack" would earn him. He also never published the information. Rather, he said he sought to embarrass AT&T for having poor security.
Many fellow security enthusiasts worry that the zealous prosecutions of Auernheimer and others under the Computer Fraud and Abuse Act (CFAA) are telling of a system that leverages a draconian law to criminalize research and dissent.
Rep. Zoe Lofgren, D-Calif., has issued a draft proposal for "Aaron's Law," which would revise the CFAA. In January, Lofgren took to Reddit to announce her plans to reform the law so that people like Aaron Swartz, the computer programmer and freedom-of-information activist who committed suicide in January, are not punishable by decades in prison.
Lofgren's first version of the bill would "exclude certain violations of agreements or contractual obligations, relating to internet service," a provision of the existing statute under which Swartz was charged. She sought feedback from the internet community, including cyber security professionals, and came back in February with an updated proposal. "This revised draft also makes clear that changing one's MAC or IP address is not in itself a violation of the CFAA or wire fraud statute. In addition, this draft limits the scope of CFAA by defining 'access without authorization' as the circumvention of technological access barriers," Lofgren wrote.
(Auernheimer did not circumvent any "technological access barriers.")
Lofgren has told SCMagazine.com she wants to ensure the law is reformed in such a way that it doesn't legitimize certain attacks.
"My thought is that we should make changes to the statute so that if someone did something like Aaron, they would not be facing a 35-year prison sentence," she said. "On the other hand, there are in fact cyber criminals. I am not of the view that cyber crime is nonexistent."
Since President Obama took office, his administration has waged an unprecedented war on whistleblowers, invoking the Espionage Act to prosecute more people under the law than all previous presidents combined. One of those defendants is Bradley Manning, the U.S. Army intelligence analyst accused of leaking tens of thousands of diplomatic cables to WikiLeaks. Since his arrest, Manning has spent nearly 1,000 days (that milestone date comes this Saturday) in prison without being tried, a significant amount of that time under deplorable conditions at the Quantico Marine base in Virginia. Meanwhile, Julian Assange, the founder of WikiLeaks, remains hunkered down at the Ecuadorian embassy in London, fearful that if he leaves to face sex crime questioning in Sweden he will be extradited to the United States to also face espionage charges.
The vigorous prosecution of six Americans accused of providing information to the media because they believed, on principle, that it belonged in the public domain is just one facet of Obama's excessive campaign to maintain secrecy and silence dissent. The U.S. Department of Justice has been just as zealous in its handling of cases involving activist hackers, or "hacktivists," accused of infiltrating corporate and government information systems to extract data – not to profit from it, but to expose reprehensible corporate behavior and systemic wrongdoing, or simply to embarrass the powerful. Among those facing long prison sentences are Jeremy Hammond, Barrett Brown and Andrew Auernheimer. That list would have included Aaron Swartz, had he not hanged himself last month in his Brooklyn apartment, a few months before he was to be sentenced – the victim, his girlfriend and family believe, of an unwavering prosecution that sought decades of imprisonment.
Whistleblowing organizations like WikiLeaks and accused hacktivists like Hammond are not foreign spies lusting to plunder intellectual property from U.S. corporations and government agencies in order to profit and gain a competitive advantage, or for that matter, steal F-35 jet fighter plans or turn off the lights for millions. But that may have been the impression you walked away with after reading a White House report released on Wednesday, not-so-aptly titled "Administration Strategy on Mitigating the Theft of U.S. Trade Secrets."
Many observers had expected the report to focus solely on the very legitimate and alarming threat posed by foreign-based corporate cyber spies, especially on the heels of a series of high-profile hacks involving companies like The New York Times, and a fascinating in-depth look from forensic firm Mandiant into the operations of a Chinese military unit believed responsible for hacking 141 organizations, primarily based in the United States.
While the White House report, which pledged to increase law enforcement activity and diplomatic measures to stymie the threat, did devote significant space to concerns over nation-state-led data collection and the economic impact these alleged Chinese- and Russian-based hacks impose on U.S. businesses, it also contained two passages that referenced WikiLeaks and the now-defunct hacktivist group LulzSec as threats of an equal magnitude.
Incongruous as it may be to include them in the report, it should come as no shock that the Obama administration is using any opportunity it can to label hacktivists and organizations like WikiLeaks as just as criminally liable as a highly skilled, well-funded unit of Chinese military hackers that reportedly has been able to clandestinely compromise the networks of 141 organizations in the United States – including some of the largest and seemingly most secure businesses in the world – to smuggle out documents, design plans and source code. (The White House even calls WikiLeaks a "hacktivist" group, although there is no evidence the organization has ever itself conducted hacking operations to obtain its incredible scoops.)
By lumping these disparate groups, with their varying motivations, into one pool of perfidious internet miscreants and promising accelerated law enforcement efforts, the White House enables the president to appear tough on Chinese and Russian espionage and sabotage that might hurt the U.S. economy, while also sending a clear message that individuals who feel morally compelled to expose corporate or government impropriety will be treated just as harshly. Put another way, this further normalizes and institutionalizes the exceptionally harsh treatment of online dissent. Such dissent may be a crime, but the report deems it essentially no different from the actions of a Chinese military unit that is digitally infiltrating, and possibly sabotaging, hundreds of U.S. companies, including critical infrastructure operators, from a 12-story office building in Shanghai.
When President Obama was merely candidate Obama in 2008, one of his keystone campaign pledges was to purge secrecy from government decision-making. In fact, he vowed to make his presidency the "most transparent" administration in history.
Drones are a prime example. The United States currently is fighting several covert wars in which countries such as Pakistan and Yemen regularly are bombarded by robotic aircraft that have killed thousands of people (including three U.S. citizens), among them civilians. The government, meanwhile, is employing a bizarre guilt-by-association logic to make the attacks appear more precise and successful than they actually are. Details of the drone program have never been presented to the public, never mind discussed or debated by Congress, leading some lawmakers to question whether drones are even a legal mechanism of war.
Thus it was no surprise that when the U.S. government, in conjunction with Israel, created a sophisticated computer worm known as Stuxnet, designed to attack Iran's nuclear enrichment facilities, it did so under the cloak of extreme secrecy. The common argument in defense of this type of warfare is that by enlisting Stuxnet (and later the Flame virus), the United States was able to avoid mobilizing actual troops to accomplish its national security objectives. What often goes unmentioned, however, is that by unleashing one of the most sophisticated pieces of malware ever written on a nation against which no war had been declared, the United States may have set the stage for a cyber future that many people won't like.
"We have all kinds of cyber weapons that have already been used by America and its allies," Scott Borg, director of the U.S. Cyber Consequences Unit, a nonprofit that researches the impact of America's actions in cyber space, recently told me. "I compare it to entering the nuclear game without Hiroshima. We've got people using cyber weapons without thinking anything through but the tactical gain."
Aside from the perfectly reasonable argument that American aggression, whether dealt by drones in the air or malicious code over computer networks, actually exacerbates anti-American sentiment and potentially incites violence against the United States that might never have occurred otherwise, the legal justifications for Stuxnet and Flame are still unknown. Yet many countries eventually may look to the two pieces of malware as examples to follow. This could quickly escalate conflict in a domain that allows anyone to strike from anywhere, often anonymously. In short, all hell could break loose.
As Steve Coll wrote last June for The New Yorker: "Common sense argues for caution, especially by the President of the United States. It also argues for strong defenses, and the pursuit of global laws and norms to contain the military use of these technologies before they cause chaos and destruction. During the nineteen-fifties, a shocking number of American generals believed that a nuclear war could be won. 'Olympic Games' [codename for the Stuxnet operation] suggests a comparably self-aggrandizing strain among our new class of digital fighters. Here the comparison to the early nuclear era does seem apt. As a citizen, will it once again seem tempting to buy land, guns, gold, and bottled water?"
But instead of openly discussing the legal defenses and potential ramifications for this new era of battle, not to mention whether the attacked nation is sanctioned to respond back, the Obama administration actually has taken the opposite route, choosing to aggressively hunt down the officials who leaked the story to The New York Times that Stuxnet was a U.S. creation.
Prosecuting whistleblowers has been a common order of business during Obama's time in the White House. And according to a Washington Post story on Saturday, the FBI and U.S. Attorney General Eric Holder are stopping at nothing to unearth those involved in the Stuxnet leak.
"The FBI and prosecutors have interviewed several current and former senior government officials in connection with the disclosures, sometimes confronting them with evidence of contact with journalists," according to the Post. "Investigators, they said, have conducted extensive analysis of the email accounts and phone records of current and former government officials in a search for links to journalists."
I originally believed that the leakers of this story would not be sought because the disclosure was a calculated move on Obama's part to appear tough on perceived enemies, especially in the lead-up to the presidential election. But that does not appear to be the case.
A healthy democracy demands debate and openness, or at the very least acknowledgement, of our government's actions, which, after all, are committed in the name of the American people. What it does not necessitate is a relentless assault against revelatory journalism and the persecution of the very people who seek to divulge those truths.
Last month, I wrote about a series of overzealous prosecutions that are being waged against individuals accused of stealing information from IT systems. The defendants in these cases may have violated various laws, but their stated end goal was never to enrich themselves with money, but the world with data that they believed belonged in the public domain. Nevertheless, they are now facing dubious and excessive charges, the victims of a powerful government and corporate state that appears scared to death over losing their stranglehold on secrecy and profit.
I should have, but I didn't, include Aaron Swartz, who, prior to his suicide on Friday, was accused by federal prosecutors of using his access to MIT's network to steal millions of academic papers so they could be distributed for free. He faced 35 years behind bars. Swartz's family and girlfriend believe that the prospect of spending decades in prison is what pushed him over the edge.
We live in an age now, as former congressional staffer-turned-political writer and activist Matthew Stoller notes, when prodigies like Swartz are not embraced, but scorned, by the establishment, specifically, in this case, the U.S. attorney's office in Boston. It's a fundamental problem with no easy fix. And maybe no fix at all, depending on how deep of a hole we've dug ourselves.
I never met Swartz. Like many people, I have learned more about the 26-year-old over the past two days than I had known prior. There have been a number of wonderfully poignant and equally thoughtful reflection posts that have been written on Swartz, including this and this.
I urge you to read those with the hope that the public can help spur and erect some mechanisms so this never happens again. A good start would be updating a comically outdated federal anti-hacking law that gives truculent prosecutors the means by which to "bully" defendants and force them to face decades in prison for alleged crimes that expert witnesses prefer to describe as "inconsiderate" actions.
The cozy relationship between national security reporting and the United States government was back on full display Wednesday with a story from The New York Times, headlined "Bank hacking was the work of Iranians, officials say."
The article, citing unnamed government sources, claims that a recent spike in distributed denial-of-service attacks against top U.S. banks, in which access to their websites occasionally has been disrupted dating back to the fall, is the work of the Iranian government.
But the article seems nothing more than a product of the Pentagon's massive public relations apparatus – and of its confidence that reporters granted special access for "scoops" will happily return the favor by dutifully printing the story, even if that story turns out to be propaganda manufactured to stoke fear.
Because it could be. We have no way of knowing. Six paragraphs in, the writers, Nicole Perlroth and Quentin Hardy, finally acknowledge a rather important fact: they were shown no proof that what the officials were telling them was true. Yet they accept it as fact. Tape, transcribe, send to editor. Quoting an anonymous U.S. official who fails to provide even the slightest bit of technical evidence for what he or she is claiming is dangerous. It's not journalism, it's PR, and it runs counter to the job of a journalist, which is to hold the powerful accountable for their actions.
Imagine if the presenters at the Black Hat conference each took the stage, only to close out PowerPoint and ask everyone in the audience to simply trust that they have found a gaping vulnerability in DNS.
And when talking about Iran, it's even more important that we demand accountability, considering these types of allegations can encourage a nation to support a war in which it shouldn't engage. But, it appears no lessons have been learned from 10 years ago when the media joined President Bush and most lawmakers in lock step to pronounce that Iraq had to be invaded because it housed weapons of mass destruction. And we all know how that turned out.
Read this NYT piece accusing Iran of attacking US banks. It's a straight-up USG press release. nyti.ms/UAAPzf — Barry Eisler (@barryeisler) January 9, 2013
The NYT story also only briefly touches on why Iran, if it actually is behind the attacks, may be inclined to carry them out. Not until the 17th paragraph is the reader introduced to Stuxnet, arguably the most sophisticated piece of malware ever built. It is now widely believed that Stuxnet, designed to destroy Iran's nuclear centrifuges, was the work of the United States and Israel, which in the process potentially set a dangerous precedent.
That was most definitely a hack. Temporarily disrupting access to bank websites – while no doubt a crime under U.S. law, a significant financial imposition for the victim institutions and a great inconvenience for customers – is not.
So I'd probably have changed the headline of this story, as well. Hope it got a lot of page views, though.
Prosecutors around the country are sending a clear message to hackers and activists who want to use their computers to promote a political ideology: We plan to throw the book at you.
The latest example is a fresh, 12-count indictment that was returned last week against Barrett Brown, the sometimes-spokesman of prominent hacktivist group Anonymous, who in October was charged with threatening an FBI agent in a YouTube video. The new indictment charges the Dallas man with sharing an easily obtainable URL that linked back to a "dump" of credit card information allegedly stolen in the attack on Stratfor, a global intelligence firm whose clients largely consist of U.S. government agencies and Fortune 500 companies. What is important to note, however, is that Brown is not charged with committing the attack or profiting from it.
According to the indictment, "By transferring and posting the hyperlink, Brown caused the data to be made available to other persons online, without the knowledge and authorization of Stratfor and the cardholders."
It's a dubious charge and one which should be cause for concern for journalists everywhere who report on incidents of hacking and include similar links in their stories.
But, the charge also is not something that is particularly surprising. More and more, prosecutors are using all available means, aided by an antiquated federal anti-hacking law, in an attempt to discourage internet-based dissent and deliver stiff penalties to those who seek to agitate the powerful.
For instance, longtime political activist Jeremy Hammond, an Illinois man accused of helping to orchestrate the Stratfor hack, faces life in prison, and has been denied bail and added to the terrorist watch list. By comparison, as InformationWeek's Matthew Schwartz astutely points out, a Russian man who stole $3 million by hacking into the bank accounts of U.S. mom-and-pop businesses recently pleaded guilty – and received a two-year sentence.
There's the harsh, borderline torturous, detainment of Army Pfc. and whistleblower Bradley Manning, who next year finally will stand trial on charges that he "aided the enemy" when he allegedly handed over sensitive – but quite revealing – U.S. diplomatic cables to WikiLeaks. He could be sentenced to death, though prosecutors have said they will not pursue that punishment.
And let's not forget the so-called "PayPal 14," a group of mostly 20-somethings who are staring at 15 years in prison over felony accusations that they downloaded a tool that allowed them to temporarily disrupt access to PayPal, after the site suspended access to WikiLeaks' donation account at the behest of U.S. lawmakers and the State Department. A defense attorney in the case contends that the purported acts of his clients amount to nothing more than the digital version of a lunch counter sit-in.
All of these cases share a theme: Often under the guise of national security, a notably heavy-handed prosecutorial effort is underway to stem the flow of information from government agencies and powerful corporations. Increasingly, it's become clear that the government regards these criminal cases as pivotal to seizing back control, so it can continue to operate in secret when it wants to, immune from public scrutiny, without risking embarrassment or revelations of wrongdoing. If would-be hacktivists see that people like Brown, Manning, Hammond and the "PayPal 14" are being treated like enemies of the state, they may be less likely to commit similar acts.
Make no mistake, deterrence is a critical element of law enforcement. But we are a country built on the rule of law. As such, it is incumbent upon all citizens, regardless of one's political views, to ensure that we all get an equal shot at justice.
No matter which side of the ongoing Israel-Gaza conflict you stand on -- or maybe you stand on no side at all -- it is critical to view all media reports coming out of the region with a healthy dose of skepticism, even when they are published by seemingly trusted sources like CNN or The New York Times.
Today, The Times, in its Bits blog section, printed a story titled "Cyber Attacks from Iran and Gaza on Israel More Threatening than Anonymous's Efforts."
The story, quoting Israel's finance minister, focuses on how the country has been able, for the most part, to successfully deflect attacks from the Anonymous hacktivist collective, aside from a large number of defacements and general website slowdowns that have resulted.
However, the report contends, the Jewish state should be much more concerned with the increasingly sophisticated attacks coming from Iran and Gaza. Specifically, the article, citing the CTO of an Israel-based security firm, references Mahdi, an espionage trojan disclosed in July which "appears to have originated in Iran" and has been used to spy on computers in Israel. The story also mentions a remote access trojan (RAT) whose command-and-control hub is reportedly now based in Palestine.
At no point in the nearly 800-word piece was there any mention of what may have provoked such attacks.
In case you don't remember, Stuxnet -- which predates both Mahdi and this RAT -- is a massively sophisticated computer worm that was first spread in 2009 by the United States and Israel as part of an operation dubbed “Operation Olympic Games.” And while its exact impact -- or what future malware it has inspired -- is the source of some debate, Stuxnet “temporarily took out nearly 1,000 of the 5,000 centrifuges Iran had spinning at the time to purify uranium."
Today's Times story isn't the first time mention of Stuxnet has magically disappeared from a discussion of the threat posed by Iran or Hamas. Last month, U.S. Defense Secretary Leon Panetta delivered a rousing speech to business leaders in New York, warning them that America is at a “pre-9/11 moment” and that countries, such as Iran, are developing menacing capabilities in cyber space. Let's be clear: Stuxnet is considered the world's first weapon in the new era of online warfare. And it was fired not by Iran, or Gaza, but by Israel and the United States.
As Misha Glenny, a journalist and visiting Columbia University professor, wrote over the summer in a New York Times op-ed:
The United States has long been a commendable leader in combating the spread of malicious computer code, known as malware, that pranksters, criminals, intelligence services and terrorist organizations have been using to further their own ends. But by introducing such pernicious viruses as Stuxnet and Flame, America has severely undermined its moral and political credibility.
As digital offenses and defenses continue to work their way into foreign policy decisions, it will be crucial to avoid declaring any one country a victim, while simultaneously ignoring its aggression -- a dangerous notion known as exceptionalism. Any help the mainstream media can provide to prevent this from happening will be welcomed.
Consider all that has developed this year regarding cyber threats on a global scale: from a barrage of world-class malware targeting the Middle East, to increasingly professionalized financial fraud emanating from Eastern Europe, to continued worries over Chinese espionage, to growing and laudable international cooperation to track down online crooks and bust operations.
But, there was barely a mention of any of it during Monday night's third and final presidential debate. According to the transcript, only on two occasions did the candidates acknowledge digital concerns.
First, in response to a question about budget spending, particularly funding for the military, President Obama makes a brief mention: "We need to be thinking about cyber security." Second, when asked what the greatest threat is facing the United States, Gov. Romney replied, not surprisingly: "a nuclear Iran." But he also addressed China by saying: "They're stealing our intellectual property, our patents, our designs, our technology, hacking into our computers, counterfeiting our goods." And that was that.
I can't say I'm completely surprised. After all, arguably the United States' most critical foreign policy decision -- its troubling use of drones, without judicial oversight, often to blow villages in Afghanistan, Pakistan, Yemen and elsewhere to smithereens -- got only one question. (If you missed their answers: Both candidates have a massive love affair with drones.)
As well, cyber traditionally has been given short shrift in these types of high-ratings moments, given the public's propensity to relate to more traditional, recognizable foreign policy matters.
But considering Iran and Israel each got mentioned dozens of times by both candidates, it would have seemed natural for threats like Stuxnet and Flame to come up, especially when Romney was challenging Obama on his toughness on Iran. Remember, Stuxnet and Flame are almost assuredly U.S.-Israel creations, and both seek to either destroy computer systems or gather intel, particularly regarding Iran's nuclear capabilities.
Obama was more than happy to bring up the "crippling sanctions" he has imposed against Iran. Yet, he couldn't even muster a measly cyber-Pearl Harbor reference.
I'm guessing the reason neither candidate took the bait is because the United States doesn't want to publicly admit that it is orchestrating sophisticated cyber attacks against enemy nations. Because if it did, this could blow the government's dirty little secret: that the United States is not only a victim in cyber space, but also an aggressor, especially when it comes to offensive missions.
Essentially, it boils down to: "Just because we're doing something doesn't mean you can too."
Hear it from Defense Secretary Leon Panetta.
Earlier this month, he spoke to business leaders in New York, where, according to an Associated Press recap, he pronounced that "the cyber threat from Iran has grown" and declared that the Pentagon is prepared to take action if America is threatened by a computer-based assault. Panetta cited attacks against Gulf energy companies Saudi Aramco and RasGas, believed to have been carried out by the Iranian government, but he never mentioned that U.S.-led cyber strikes against Iran already have happened.
But there was none of this talk on Monday night.
So in a year in which the floodgates opened thanks to Flame, Duqu and others, and comprehensive cyber security legislation almost passed in Congress, where it remains a key issue, one thing remains clear: Cyber still isn't ready to hang with the big kids.
Last week I stumbled across a story on CNN Money, titled "Major banks hit with biggest cyber attacks in history."
After reading the headline, I decided it was probably best to forgo reading the rest of the article -- and instead just duck for cover. A few days later, when I deemed it safe enough to emerge from my hideout and return online, I assumed that the worst-case scenario we had been told to fear -- a cyber Pearl Harbor (or is it cyber 9/11? I always forget) -- had finally played out.
Our economic system had collapsed. Money was worthless. The end was near.
What's that, you say?
Oh. It was just a DDoS attack. Never mind. I feel stupid.
W-w-wait a minute! How can a flood of bogus traffic that knocks a few, albeit major, banking websites offline for several hours be considered the "biggest cyber attack" in history? No networks or servers were breached, no sensitive or valuable data was stolen, no lives were put in danger.
It wasn't just CNN that slammed down hard on the hype gas pedal. Last week's attacks drew similar stories across many supposedly reputable media outlets. One unnamed U.S. official called the DDoS attacks among the "worst-case scenarios envisioned by the National Security Agency." (This official does know the difference between a DDoS and a hack, right?)
So, big deal, the media went bonkers on a story. Happens all the time, right?
The math book your child brings home from school might suffocate her while she sleeps. Story at 11.
Well, it's poor timing right now because we're at a pivotal point in the cyber world. After a year that saw repeated attempts at internet security legislation, President Obama now is reportedly prepping a 19-page executive order that would be similar to the White House-backed Cybersecurity Act of 2012, which was blocked in the Senate in August.
As such, news accounts that purport to chronicle the largest cyber attacks in history could provide the ammo for our leaders to make misguided decisions that erode our digital freedoms. It also doesn't help matters when the chairman of the Senate Committee on Homeland Security and Governmental Affairs, warmonger extraordinaire Sen. Joe Lieberman, claims -- without showing any proof -- that last week's attacks were orchestrated by the Iranian government. And the media eats it up!
But even beyond that, let's remember two things about Lieberman: (a) he wants cyber security legislation passed (he co-sponsored the Cybersecurity Act), so it makes sense he'd play the Iran card on this one, and (b) he has been itching for a fight with Iran since at least 2009.
The media shouldn't lend Lieberman the free space to plug his cyber-demagoguery without at least mentioning his views on the country.
So let's get our act together, everyone. Hey, there's still time: the DHS-sponsored National Cyber Security Awareness Month just started!
But then again, if the Homeland Security secretary doesn't even use email, what hope is there for us anyway?
The debate around the sale of vulnerabilities and exploits is again playing out within the security community, and this time it comes with a new twist.
It's really an old debate, one which heated up in 2009 when a group of well-known researchers announced their "No More Free Bugs" intention to the crowd at the annual CanSecWest hacker show in Vancouver.
At the time, Dino Dai Zovi, Alex Sotirov and Charlie Miller, annoyed that vulnerability hunters weren't being properly compensated for their discoveries, reacted, in true capitalistic spirit, by telling the world that they just want to get paid.
But since then, the conversation has taken on a much different tone. Remember, back in 2009, the scale of advanced persistent threats and spy viruses wasn't yet realized. There was no Stuxnet, no Flame, no Gauss. But as nation-states, prominent among them the United States, began using cyber weaponry and engaging in a modern-day arms race, governments now pay a pretty penny for zero-day exploits: attacks that target vulnerabilities for which no patch, and thus no defense, yet exists. In other words, today's researchers are selling exploits to people who presumably want to use them, not fix them.
It's necessary to underscore the immensity of this fundamental shift. Researchers are increasingly incentivized to find vulnerabilities and create exploits that governments can use to launch attacks, and correspondingly less incentivized to report those same vulnerabilities to the affected vendors for patching, even as bug bounty programs become more prominent.
And what it has created is a new breed of researcher who is also part mercenary -- someone who can earn hundreds of thousands of dollars by selling discoveries to the highest government bidder. Best known of this group is France-based Vupen Security, which won a series of hacking contests at this year's CanSecWest event but chose not to enter the competitions where it would have to reveal the details of its exploits, opting instead to save those treasures for government agencies, better known as its deep-pocketed customers.
As Andy Greenberg of Forbes reported in March, Vupen's business model is a risky endeavor.
It's this mindset that has prompted concern from the Electronic Frontier Foundation (EFF), an internet civil liberties group, which argued in a March blog post that the researchers and government buyers involved in these deals are both responsible for making the internet less safe.
Some coders, believing the group was implying that government regulations were necessary to oversee exploit sales, felt attacked by the EFF (check out the thread here), which regularly advocates on behalf of security researchers.
As a result, the debate over exploit sales has now morphed beyond money and into a conversation around personal freedom and libertarianism.
Some researchers consider any attempt to regulate the exploit trade to be an attack on the free market. They believe they have a right to sell their research to any viable buyer – even if that's another government. And anything that prevents them from doing that is an unfair infringement on their basic rights.
David Maynor, founder and CTO of Errata Security, a vulnerability services company, is the most recent person to run with this argument.
Maynor's remarks echo Goldman Sachs CEO Lloyd Blankfein's famous claim that he and his firm were doing "God's work."
Let's continue with the Wall Street theme for a moment and compare it with the exploit market. The 2008 financial collapse -- from which the country hasn't come close to recovering -- underscored an extreme and desperate need for regulations. But those regulations have barely come, and the ones that have are token gestures at best.
Like Wall Street honchos, some exploit developers are wholeheartedly opposed to the government meddling in their business affairs. But just like Wall Street, they're more than happy to accept taxpayer money. That strikes me as hypocritical, but it also may create a market imbalance.
The government shouldn't be buying 0day in secret as it upsets the market with public money. It's basically welfare for already rich people.— Jacob Appelbaum (@ioerror) August 15, 2012
Some researchers, even ones who have admitted to selling exploits to governments for a handsome sum, suggest that the pricing signals that Appelbaum speaks of must change.
But what makes the trade of zero-days perhaps even more shadowy is that there is virtually no transparency around the process. At least the American public knew how much moolah it had to cough up to ensure that the banks were, indeed, too big to fail.
The fact researchers sell exploits to the government is bad for everyone, but is predictable given the dynamics of the vulnerability market.— Charlie Miller (@0xcharlie) August 14, 2012
@jcran As an initial matter, I'd like to see mandatory reporting of sales (buyer,seller,$). Obviously, not with details of the actual vuln.— Christopher Soghoian (@csoghoian) August 15, 2012
The irony of the situation is that regulations around exploit sales would force the government, itself among the biggest buyers, to stay in check too, not just the sellers.
More to come from this saga, and I don't claim to have all the answers. Exploit hunters certainly have a right to profit from their discoveries, but I just hope transparency wins out. Because when we're talking about governments buying high-powered, offensive cyber weaponry that could -- and apparently easily -- fall into the wrong hands or result in collateral damage, we're probably better off knowing about it.
Information sharing, at its core, is among the most effective ways to fight cyber crime. Plainly put, the saboteurs do it, so why shouldn't the very organizations those adversaries seek to attack? Learning the details of a successful or attempted intrusion, such as the tactics used and who was behind it, can go a long way toward helping a peer avoid a similar fate.
There have been many successful law enforcement- and industry-led efforts, such as the Financial Services Information Sharing and Analysis Center (FS-ISAC), to promote this type of collaboration among the good guys. But now, it seems, Congress wants to codify the sharing of data through the Cyber Intelligence Sharing and Protection Act (CISPA), which is due for a full House vote on Friday. Sounds great, right? Not really. The proposal vastly overreaches, at the expense of Americans' coveted freedoms and civil liberties.
Make no mistake, CISPA is not SOPA, the anti-piracy bill that was squashed earlier this year amid an unprecedented outcry from critics, including some of the most well-known web giants, such as Reddit and Wikipedia, which went dark for a day to protest the measure.
But CISPA is a very dangerous proposal in its own right. You see, when the sharing of threat intelligence data becomes the sharing of people's personal data with our three-letter agencies (without judicial oversight), serious problems come into play, and a murky-language-filled bill meant to help secure cyber space becomes a vehicle for expansive and excessive surveillance of the open internet as we know it. As CNET's Declan McCullagh explains:
What sparked the privacy worries [about CISPA] -- including opposition from the Electronic Frontier Foundation, the American Library Association, the ACLU, and the Republican Liberty Caucus -- is the section of CISPA that says "notwithstanding any other provision of law," companies may share information "with any other entity, including the federal government."
By including the word "notwithstanding," CISPA's drafters intended to make their legislation trump all existing federal and state civil and criminal laws. It would render irrelevant wiretap laws, web companies' privacy policies, educational record laws, medical privacy laws, and more. (It's so broad that the non-partisan Congressional Research Service once warned (PDF) that using the term in legislation may "have unforeseen consequences for both existing and future laws.")
CISPA strikes me as another example -- cough, NDAA, cough -- of powers meant to stop real criminals being turned back around on the people. Often, the justification for passing these laws amounts to nothing more than instilling fear of an unknown enemy, who, in the case of cyber, is some shadowy figure one line of code away from knocking out the lights from Boston to Bakersfield. For a sense of how high the fear mongering can reach, just read this U.S. House Committee on Homeland Security press release, issued Tuesday.
Cyber threats are very real. Not so much the "cataclysmic" events that are designed to ruin "our way of life," as Rep. Peter King of New York would have you believe, but more likely the silent killers, like the commercially available exploit kits customized to steal bank login data, or the more stealthy espionage malware created to pillage trade secrets.
The intentions of legislation like CISPA -- and this perhaps is giving our lawmakers too much credit -- seem to be in the right place. Admittedly, threat information sharing is sometimes riddled with difficulties, including concerns over competition and legal complexities. Making the process more seamless is commendable.
But surely this can be achieved without eroding the civil liberties and constitutional rights of Americans.
Last weekend, I headed from Brooklyn to Manhattan with my girlfriend so she could get her iPhone fixed. Our destination was the Apple store, a hip and stylish three-story building in the Meatpacking District.
Surprising as this may sound, it was my first time ever at an Apple store. Within a few minutes, I became fairly convinced that nobody ever comes here to buy anything; it's merely a hangout, much in the same way the popular nightclubs in the vicinity are.
As expected, the Apple fanboys and girls were out in full force on this Sunday afternoon, so the place had its usual air of elitism to it -- at least that's the way my insecure, Windows- and Android-using self perceived the surroundings. I gotta admit, though, I've kinda gotten over my grudge toward Apple. That's because every time I've played with one of their gadgets, I've really enjoyed it, even though the only device I own from the House That Jobs Built is a busted iPod that I will toss out one of these days.
Still, as a security journalist, I have a tough time being great friends with Apple. And that was only compounded when I was making small talk with the "Genius Bar" dude who was troubleshooting the girlfriend's phone. I asked him if he thought Macs needed anti-virus protection. He, without hesitation, responded no.
Cut to a few days later, and Apple is facing possibly the largest malware outbreak in its history, with news that the dangerous Flashback trojan has infected some 650,000 Macs, many of them located in the United States.
In my mind, Apple -- the richest company in the world, remember -- has failed on two levels here. For starters, it was abysmally late in pushing its own update for Java for Mac OS X, even though in mid-February, Oracle, which owns Java, fixed the vulnerability that is allowing Flashback to spread.
You see, Apple insists on releasing its own patches for third-party products. And Flashback is known for disabling built-in Mac OS X defenses, so whatever security Apple already had in place wasn't going to help.
The second problem is security communications. Over the last several years, I can count on my fingers the number of times a PR person from Apple responded to a query from me. Maybe SC Magazine isn't big enough of a name when considering the publications that fawn over Apple's products, services, (and stock price), but is that really an excuse? Or maybe Apple just likes to stay true to its "security code of silence."
But one would think that, in the case of a malware outbreak, Apple might prefer to get ahead of the story by providing, at the very least, some user guidance. After all, viruses on Macs are likely a new concept for most Apple users, so they may actually need some help dealing with them.
In the end, I guess not much changes in three years.
Maybe Flashback will give Apple the wake-up call it needs. Only time will tell, of course. Don't forget, Macs still account for only a fraction of the world's operating systems.
In the meantime, I wonder: if I head back to the Apple store tonight, will that air of elitism seem a little less dense?
If Bay Area Rapid Transit (BART) knew that its decision to temporarily cut mobile service at four of its stations would result in naked photos of its communications director appearing online, it might have kept the service up and running for commuters.
And if handbag-maker Coach knew that its support of the very controversial Stop Online Piracy Act (SOPA) would result in a group called UGNazi hijacking its DNS records to divert traffic elsewhere, maybe it would have kept its focus on satchels and clutches.
Sony, Coach and BART are just three names on a laundry list of recent "hacktivist" victims -- one that has been growing steadily over the last 12 months. As social movements such as Occupy Wall Street take hold on the streets to protest corporate and government wrongdoing, groups such as Anonymous seem to be patrolling the cyber skies, bent on exposing and embarrassing their targets.
Within the security industry, much has been made of the new risk that hacktivism poses to organizations. So while organizations work to better equip themselves with the people, processes and technology to defend against this threat – all great measures, certainly – they may also want to consider an additional, and perhaps far simpler, tactic: conversation.
Hugh Thompson, the program committee chairman of the RSA Conference and an adjunct computer science professor at Columbia University in New York, thinks it makes sense for companies to, at the very least, weigh the consequences of their business decisions and practices as they face this new hacking phenomenon.
Last week, I chatted with Thompson about hacktivism, and he told me that organizations must adjust their security model to become more adaptable and nimble in the face of today's attacks. That means accepting that failure will happen and becoming more agile and competent in responding, all within the context of risk.
But decision-makers may also want to consider who they're going to tick off when they decide to do something, he said.
The corporations and government agencies targeted by the likes of Anonymous and LulzSec wield tremendous power, so it's hard to believe they would ever publicly cower to online activist attacks, which often fall into the illegal category, I should add.
But they might become more proactive in their corporate strategy, at least. After all, in Sony's case, the company was ultimately hit more than a dozen times, millions of users were impacted, its leaders publicly apologized, and it certainly suffered reputational harm, particularly when the PlayStation Network was offline for weeks. Even when it knew the attacks were coming, Sony couldn't stop them. It still can't. "Maybe if it was today, [Sony] would have decided the other way," Thompson told me, referencing the Hotz lawsuit.
"The scope of security has to expand," he added. "The company really is in this ecosystem. Security is a huge function of targeting, as opposed to what you have done to defend your organization."
In other words, if you're not a target, you're probably in much better shape. That's not to say anyone should ever be forced to walk on eggshells – capitalism has been dealt its fair share of blows lately, but it remains the foundation of our economic system. And some choices an organization makes just aren't going to be loved by everyone (or Anonymous). That's a fact of life.
But if having these boardroom conversations means an organization like Monsanto, for example, which was hacked last year by Anonymous, will become a more compassionate, principled and ethical player in our world than it currently is, I'm all for the shift in corporate mindset that may result from the threat of hacktivism.
Color me skeptical for now. The power elite are a difficult bunch to win over.
A bulletin released this week from the U.S. Department of Homeland Security, which implies that the hacktivist group Anonymous may be interested in crippling critical infrastructure (think electric grids and oil-and-gas refineries), strikes me more as a move to discredit and undermine the collective than a warning of any actual danger.
Earlier this week, as expected, plenty of press picked up the story, obediently reporting the news despite the scant evidence and lack of on-the-record government sources. (Which is how most government news is dispensed for public consumption, by the way. I've been guilty of this many times myself.)
In my eyes, this seems to be another step by U.S. officials, without exactly coming out and saying it, to label Anonymous as a cyber terrorist organization, bent on indiscriminate destruction of digital property and infrastructure.
And I don't think that's fair.
"The information available on Anonymous suggests they currently have a limited ability to conduct attacks targeting [industrial control systems]," the bulletin read. "However, experienced and skilled members of Anonymous in hacking could be able to develop capabilities to gain access and trespass on control systems very quickly."
Certainly, I won't defend any of the alleged actions Anonymous has taken that are illegal. Organizations have a right to keep their property out of the hands of hackers, and Anonymous, if its claims are to be believed, has broken the law on a number of occasions in the past.
But, I also don't believe it's fair to characterize it as a group dedicated to sabotaging the very resources, such as oil-and-gas pipelines or water and sewage treatment plants, that Americans rely on to survive.
If anything, given its dedicated support of the Occupy Wall Street movement, it seems Anonymous cares much more about the average person than you might be made to believe – certainly more than some of our lawmakers have shown, who on most occasions seem more subservient to lobbyists and corporate donors than to their own constituents.
In its bulletin, DHS produces, as evidence, two examples of "Anonymous' interest in control systems." One is the group's launch this summer of "Operation Green Rights presents: Project Tarmageddon," which opposes the development of the Alberta oil sands on environmental grounds. Anonymous named oil producers Exxon Mobil, ConocoPhillips, Canadian Oil Sands Ltd. and Imperial Oil, along with oil financier the Royal Bank of Scotland, as targets.
The other is -- are you ready? -- a tweet from a "known Anonymous member" that included the results of recon he or she did on a directory tree of Siemens software.
Exactly who *isn't* probing SCADA systems these days? It certainly was a very hot session topic at the recent Black Hat conference in Las Vegas, and has caught the eye of researchers so much that the government has set up a clearinghouse for control system vulnerabilities.
Which reminds me: I'm waiting for DHS to publish a warning based on a potential real critical infrastructure issue that popped up just yesterday -- evidence that the Stuxnet authors are back with new malware. I'm sure the bulletin will arrive any minute now.
So why would the government want to paint Anonymous in this way? Well, that's pretty simple to answer. The group has made no qualms about its distrust of the powerful and elite, and has taken steps to expose corruption through hacks and to silence its enemies through distributed denial-of-service attacks.
Thus it's in the government's best interest to stamp the group as some purposeless band of radicals, much in the same way the Department of Justice has gone after whistleblower outlets like WikiLeaks, which published a trove of documents cataloging a number of atrocities, including the deaths of innocent Iraqi civilians and the torture of detainees at the hands of U.S. and allied forces.
Yes, Anonymous is amorphous and leaderless, with splinter elements, and there is no conclusive way to know what exactly its goals are. But some of the more reliable Anon Twitter accounts that I follow for news about the group don't seem to be mentioning anything about hacking these days, never mind infiltrating industrial control systems. In fact, the group seems to be devoting a good chunk of its energy to the Occupy Wall Street protests, which have spread to scores of cities in this country and around the world.
Remember all those breaches we read about in spring and summer? Well, ever since OWS began, it's like they all stopped in the name of a bigger cause.
I think a tweet on Tuesday from Anonymous was pretty telling of where its motivations currently lie.
Here was the group's apparent response to the DHS bulletin: "Anonymous should issue a warning to the public against the DHS, FBI, etc. related to gov't efforts to subvert freedoms in the USA."
Of course, I'm not here to deride the DHS, either. I think issuing alerts such as these can have a benefit, especially when they come with advice.
"Asset owners and operators of critical infrastructure control systems are encouraged to engage in addressing the security needs of their control system assets," the bulletin concluded.
I think that's something we can all agree on. But in the case of Anonymous taking down critical infrastructure, I don't think we should "expect" them there.
UPDATE: I was interviewed about this story Friday on RT's "The Alyona Show." Video here: http://www.youtube.com/watch?v=KWy1MtOiQT8
Apparently, my call 18 months ago for more transparency and openness around security incidents largely has fallen on deaf ears.
At the time, I was writing to protest the firing of Bob Maley, the former CISO of the state of Pennsylvania, who received a pink slip after revealing details – too many, apparently, in the eyes of his bosses – about a compromise that affected a government agency in the Keystone State. I wrote:
In 2010, remaining mum, or too close to the vest, about incidents benefits nobody. Every organization in the country is being probed on a daily basis. Vulnerabilities are going to be there. Hacks are going to happen. Data is going to be exposed. The criminals are going to be one step ahead. Let's move on from this prevailing wisdom that any one organization is immune from attack.
But there's been little to no advancement on this front, at least from what I've seen and heard. If anything, we've taken steps backward.
Case in point: Harvard University. The university announced this week, in a brief statement, that its website was defaced by "sophisticated" attackers. Then it went into defense mode, in essence saying there was nothing it could do to stop the adversaries.
"Recent months have seen a rise in frequency and sophistication of these attacks, with hacking groups increasingly on the offensive and targeting news media, government and education websites," a Harvard statement said.
A university spokesman declined to offer details as to what made the attack or attackers sophisticated.
I can't claim to know the specifics, but I don't normally associate "sophisticated" with a site defacement, do you? (To put this incident into some context, it doesn't appear as if any data was stolen, and who wastes a zero-day vulnerability to scrawl some threats on Harvard's home page?)
I have to believe that Harvard, instead of accepting blame for lacking security measures that should have prevented such a seemingly simple attack, leaned on recent headlines to save face.
Harvard's decision to basically say, "There was nothing we could do. Sorry. Maybe next time," is not a particularly shortsighted PR move. After all, most people wouldn't know the difference between the skill level required to perform a defacement and that needed to pull off the real deal.
But this PR tactic certainly has lasting ramifications for the security of the internet. Not only did Harvard not release any specifics about the attack – the bad guys share information, why can't we? – but it also attempted to exonerate itself by citing "sophistication."
Nothing will ever improve if organizations keep doing this every time they are breached. Security will continue to suffer, and lawmakers, who are just as susceptible to accepting myths of unstoppable attacks as any non-IT savvy citizen is, may overreact, changing the internet as we know it.
Ultimately, though, I think I'd be satisfied if a CISO who experiences a breach came forward and simply said: "We messed up. We'll do better next time."
Apologies can go a long way, you know.
Each winter, when the Ponemon Institute releases its annual "Cost of a Data Breach" study, we are reminded of the financial and reputational damage that a data-leakage incident can deal a victim brand.
This year's study found that breaches cost organizations $7.2 million on average in 2010. Business-related costs, such as customer loss and decreases in employee productivity, account for the largest proportion of total breach expenses. Other cost areas result from detection or discovery of the breach, notification and response activities to help victims.
Yet despite this, many of the companies that have experienced massive breaches in recent years (think: TJX, Heartland Payment Systems, Epsilon and Sony) seem none the worse for wear. Sure, stock prices may have taken a brief hit, or losses may have piled up due to certain factors, like paying for identity protection for customers. But by and large, big-name organizations that have had, in some cases, tens of millions of credit card numbers compromised have stuck around and even flourished. This video on The CMO Site, while short on statistics beyond a couple of anecdotes, makes a relatively compelling argument that breaches cause no lasting damage to brands.
Perhaps credit is due to the sheer size of these companies, which are financially healthy enough to overcome breach-related fees or the loss of a percentage of their customer base (Ponemon has pointed out that post-breach churn rates hover near 4 percent). Or maybe customers have become increasingly desensitized to hacks. They receive so many notification letters in the mail; how can they possibly take their business elsewhere when, chances are, the alternative will be compromised at some point too?
Are breaches simply a part of doing business?
Not so fast. Just when you thought a brand would bend, but not break, in the wake of a breach, look no further than DigiNotar, the Dutch certificate authority that went bust a mere three weeks after admitting its systems were infiltrated to issue counterfeit SSL credentials.
Of course, DigiNotar is different from, say, a traditional retailer. Not to mention it was in the business of security. But a company is a company. And the minute people stop trusting you – quite literally in DigiNotar's case – doom is on the horizon.
So let this case be a wake-up call that information security must be valued as a business-enabler. And if it's forgotten about, it could be a business-ender.
- Conspiracy theories are running rampant after Riley Hassell and Shane Macaulay, two researchers with Privateer Labs, didn't show up for their planned (and highly anticipated) 10 a.m. Thursday talk at Black Hat: "Hacking Androids for Profit."
The presentation promised to reveal "new threats to Android apps and discuss known and unknown weaknesses in the Android OS and Android Market," according to the Black Hat program guide. Audience members sat and waited for several minutes, as the person scheduled to introduce the researchers asked if anyone knew a way to contact them.
While some speculated that the pair may have had too much to drink the night before – Black Hat is known for its rowdy parties – conference spokeswoman Nico Sell wasn't letting on. She did say the pulled presentation was not related to any legal threat, as has been the case with canceled talks before.
"It happens," she said of talks whose speakers simply fail to show. "DEFCON (Black Hat's sister show), more."
The security industry's version of the Oscars, the offbeat Pwnie Awards, were announced Tuesday night.
Awards were handed out in categories ranging from "Best Client-Side Bug" to "Most Innovative Research" to "Lifetime Achievement."
Sony received all five of the nominations in the "Most Epic Fail" category. Lulz.
Find the list of winners here.
- Black Hat representatives expected more than 6,000 people at the 15th annual installment, which would be up from last year, though official tallies were not available.
Introducing the show on Wednesday morning, conference founder Jeff Moss said this year's attendee pool covered a swath of nations around the world, with the United States, Canada, the U.K. and Sweden leading the pack.
Moss said he hopes audience members will use what they learn from the presentations to press business leaders to collaborate more closely with the security teams at their organizations, especially as we enter a new era in which compromise should be assumed.
"But if you only call us after the house is on fire, you have very few options," he said.
Moss underscored the need for events like Black Hat, one of the rare forums for the good guys to openly discuss the reality of the modern-day threat landscape.
"They're one of the very few people who are talking about what's going on," Moss said, adding that vendors often have limited insight into the motives of the attackers.
- With Black Hat winding down, attention now turns to the less formal, even more unpredictable, DEFCON event, held for the first time this year at the Rio hotel.
SCMagazineUS.com reported on Monday that the National Security Agency will be on hand to recruit hackers at the $150-cash-only event.
But there's at least one person who argues that attendees should stay far away from the men in suits.
DEFCON is known for allowing attendees to remain anonymous. Event organizers don't even ask registrants for a name.
So it's no surprise that two of the security industry's most nameless (and bitter) rivals are supposedly on hand.
We've known for some time that one of the key tools in the cybercriminals' arsenal is social engineering, namely the ability to make their scams look legitimate by capitalizing on the trust users have in well-known brands.
It's known as "brandjacking," and it's been happening for years in phishing attacks, where high-profile companies like Bank of America and PayPal are routinely used as bait to either siphon personal information from unsuspecting individuals or to drive them to malware-serving websites.
We've also seen it in rogue anti-virus campaigns, where criminals leverage reputable brands, such as Microsoft, to trick users into paying for and installing a fake product that does nothing more than make them $49.95 poorer.
They say imitation is the sincerest form of flattery. So, in that regard, the companies whose brands are hijacked should give themselves a pat on the back for being established and dependable names. But they should also be concerned, as being associated with any criminal undertaking can damage one's reputation.
And that is exactly the boat SC Magazine finds itself in right now. Thanks to the always-shrewd detective work of Gary Warner, director of research in computer forensics at the University of Alabama at Birmingham, we've learned that our well-respected brand is being used as part of a new, largely undetectable rogue AV scam. (Scroll down for the image).
Apparently, the crooks are trying to peddle their fake anti-virus program with the added "selling point" that it was a 2011 SC Magazine Awards finalist. Such a claim is, of course, patently untrue, and it's nothing more than a ploy to increase the hoax's legitimacy.
But it's still a bit unnerving.
"We knew IT buyers around the world look at SC Awards as barometers of the best in today's security, but we were a little surprised to find the bad guys using it to try to trick people," said Illena Armstrong, SC Magazine's editor-in-chief.
But the reality is, hackers will stop at nothing to spread their wares, as we've seen with recent Facebook cons taking advantage of such tragic events as the Oslo terrorist attacks.
The best lesson is to "think before you click," as this particular rogue AV scam was kicked off when users clicked a malicious attachment claiming to come from MasterCard.
Our job at SC Magazine has always been to provide you with the facts.
So, with that in mind, here is a list of the *real* SC Magazine Awards 2011 U.S. finalists. And (shameless plug), if you wish to get information on the 2012 installment and submit your entry, please visit here.
Stay safe out there.
-Dan Kaplan, executive editor
As if 2011 hasn't been interesting enough, given the sheer number of data breaches (CNET has posted a nifty chart), the next several days promise to yield even more stolen records, at least according to the latest dispatch from the hacker group LulzSec.
The collective, which has been all the talk of the security industry over the past several weeks since it launched its attack on PBS, announced later Sunday that it is hooking up with the Anonymous group, best known for its attacks on HBGary Federal, to launch "Operation Anti-Security."
The mission is to expose government and corporate corruption by way of stealing and leaking classified data.
"Together, we can defend ourselves so that our privacy is not overrun by profiteering gluttons," Lulz Security wrote. "Your hat can be white, gray or black. Your skin or race are not important. If you're aware of the corruption, expose it now, in the name of Anti-Security."
The call to arms is a testament to how unpredictable LulzSec has been. Just a few days ago, it was leaking the usernames and passwords of subscribers to pornographic websites, asking its followers on Twitter to call a phone number to suggest a candidate to DDoS, and using its call center to flood the World of Warcraft support line. All for, as the group said, the lulz.
The fact that LulzSec is allying with the more established Anonymous gang, and asking for any outsiders to join in for a more principled cause, could be an indication that the group is losing some steam – especially in light of a series of alleged outings last week and over the weekend.
No matter their identities, and even if the LulzSec group was all apprehended by authorities tomorrow, one can't deny that they have changed the landscape. Members have infiltrated a number of high-profile websites, including those of Sony, the CIA and the U.S. Senate, with apparently stunning ease.
The question on some people's minds is: What impact do these "hacktivist" groups have on infosec as a whole?
There are two scenarios that may play out, as I see it.
1). Anonymous, LulzSec and whichever groups follow -- and we know there will be others -- significantly help to secure cyberspace by catapulting data breaches into the mainstream and forcing all organizations to assess their security stance.
Tales of LulzSec conquests have escaped the traditional trade press ceiling and have found their way into the mainstream media with regularity. Surely, the budget decision-makers at various firms have seen the headlines and are well aware that they could be next.
Of course, containing these hackers is not easy. While the infiltrators, for the most part, appear to be using relatively simple means of gaining access (i.e., no customized malware), organizations are struggling to respond.
Ideally, what would result is a new way of thinking about cyber defense.
Jeffrey Carr, founder and CEO of Taia Global, which specializes in cybersecurity countermeasures for corporate executives and government officials, wrote an interesting blog post Sunday in which he challenged organizations to think like an attacker. Among his suggestions:
- Uncertainty and randomness favor the adversary, therefore defenders must implement components of randomness and uncertainty as part of a network defense strategy.
- Since it isn't possible to anticipate every type of attack, the defender must become a competitor to the adversary and continually attack his own system "in the hopes of finding heretofore undiscovered attacks" before the adversary does.
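Carr's first point, building randomness into the defense, is the idea behind what's often called a moving-target defense. As a minimal sketch (my illustration, not anything Carr prescribes), a service could derive its listening port from a shared secret and the current time window, so legitimate clients can always find it while an attacker's reconnaissance goes stale every hour:

```python
import hashlib
import hmac
import time

def rotating_port(secret, window=3600, base=20000, span=20000, now=None):
    """Derive the service port for the current time window.

    Defenders and legitimate clients share `secret`; anyone scanning the
    network sees the listening port move every `window` seconds, adding
    the randomness and uncertainty Carr describes. Toy example only --
    names and parameters here are assumptions for illustration.
    """
    t = int((time.time() if now is None else now) // window)
    digest = hmac.new(secret, t.to_bytes(8, "big"), hashlib.sha256).digest()
    # Map the first 4 digest bytes into the port range [base, base + span).
    return base + int.from_bytes(digest[:4], "big") % span

# Calls inside the same time window agree on the port;
# the next window yields a fresh, unpredictable one.
p1 = rotating_port(b"shared-secret", now=1_000_000)
p2 = rotating_port(b"shared-secret", now=1_000_100)  # same hour window -> same port
```

Nobody would claim port-hopping alone stops a determined intruder, but it captures the principle: make the defender's configuration a moving target rather than a fixed map the adversary can study at leisure.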
2). The second scenario that might play out is the government overreacting to the actions of LulzSec, with lawmakers enacting stiff legislation that considerably limits the openness and freedom of the internet. Such a prospect was warned about in a paper written earlier this year by researchers at George Mason University.
Two other academics, Ronald Deibert and Rafal Rohozinski of the Munk School of Global Affairs at the University of Toronto, also addressed this possibility during a video I shot with them last week at SC Congress Canada. (We start talking about it at approximately the 3:45 mark).
LulzSec is certainly baiting the government to go this route, with its CIA and Senate infiltrations and its latest rallying cry. And we might already be seeing the first signs of this overreaction.
I should also mention that the possibility exists that LulzSec is not who we think they are, but are instead, say, a government-hired band of digital assassins. Hey, the conspiracy theories are out there. And at the rate this year is going, nothing would surprise me.
In a perfect world, the legacy of 2011 and LulzSec will be that the web remained open and free, governments and corporations were held accountable when they did wrong, all organizations recognized that resilient security (and proper responses in light of a breach) are merely table stakes for doing business, and hackers who victimized the innocent were brought to justice. A guy can dream, right?
There's an old adage in sports that defense wins championships.
When I hear this phrase, I often think back to the 2001 Super Bowl. From what I can remember of that night -- I was a senior at Syracuse University at the time, and the $1.50 Labatt Blues were definitely flowing, so cut me some slack if I'm a little fuzzy on the details -- I'm fairly certain the relentless D of the Baltimore Ravens made mincemeat of the New York Giants.
It was a fairly boring game, and the Ravens were a fairly boring team all season long, but because they bent, yet rarely broke, while in defense of their end zone, they were the ones hoisting the Lombardi Trophy, not the hometown Boys in Blue.
I reference this memory not to reveal how drunk I was for Super Bowl XXXV -- or how cheap the drinks were -- but because I think the outcome of that game applies to the information security industry, now more than ever before.
We've seen at least four major security companies -- HBGary, RSA, Comodo and Barracuda Networks -- fall to attack this year. And, outside of our industry, experts concede that most, if not all, of the Fortune 100 likely have lost intellectual property to hackers.
Are we now ready to accept that some of today's malware is too sophisticated to detect, and vulnerable entryways within organizations are too prevalent to completely plug?
It's inevitable. The bad guys are going to get in. Actually, never mind, they are here already. Might as well offer them a Labatt Blue because, like it or not, they are crashing the party. They've got their varsity jackets on, and they're eyeing the person you're interested in. So what is there to do?
I've written before about the bane of compliance and its negative effect on the advancement of innovative security solutions. But I think the problem runs deeper than that. And a partial blame may lie with the culture we've created.
Thanks to heavily attended and widely publicized events, such as Black Hat, we have come to think of security researchers like rock stars – bestowing seemingly unending praise on them each time they discover a gaping vulnerability that can lead to devastating attacks.
That is in no way to cast aspersions on white-hat researchers. No doubt, their discoveries have led to more awareness of the weaknesses in the systems, platforms and underlying infrastructure on which we rely daily. And they expend countless hours doing the work they do.
But the problem is that there is a gaping imbalance between offensive and defensive research, and it needs closing. That has never seemed more evident than right now.
Marc Maiffret, the CTO of eEye Digital Security, raised this concern to me in a recent conversation. Maiffret knows a thing or two about being on the offensive side – he discovered many of the earliest Microsoft vulnerabilities back when he was barely old enough to drive – but over the years he has had an awakening, of sorts.
He said he grew tired of it. "I kind of got sick of it in a way, it got repetitive and I don't know if it's helping people," Maiffret told me. The information security industry of today is much like the military industry, he said, where "it's all about who is creating the better and coolest missile." (Think HBGary Federal.) Many of our industry's smartest minds are looking for the next way to break into a computer, rather than using their "talent and brainpower" to learn "how do we actually stop these things?" he said.
But we don't have to take this lying down. The security industry can – no, must – do a better job of creating defensive remedies that will limit the scope of the damage that "advanced persistent threats (APT)" cause and make the efforts of adversaries way more challenging than they would like.
Maybe that means security vendors providing more information about how exactly their products work. Maybe victim end-users need to do a better job of communicating what methods they used to repel an APT. Or maybe solution creators need to drop the groupthink and think outside the box to create more innovative products.
Or perhaps that means making defense more glamorous and sexy.
"I've always wanted to do [a conference] that is the complete opposite of what you see with Black Hat," Maiffret told me.
Most security companies, I like to believe, are noble and ethical enterprises. Yes, they make good money off the fact that the online world is a dark, scary place, but they also provide an invaluable service: protecting innocent individuals and organizations from the dangers that lurk in the shadows.
But when the hacker group Anonymous recently leaked a stolen slide deck that revealed how three security and intelligence firms (Palantir Technologies, HBGary Federal and Berico Technologies) planned to silence WikiLeaks, and its proponents, including a journalist, in the name of a possible lucrative contract from Bank of America, I was deeply offended and insulted.
(Palantir's CEO has since apologized).
My emotions did not stem solely from the fact that the presentation's section on "potential proactive tactics" alluded to conducting illegal activities in order to bring down WikiLeaks, though that certainly raised my ire – especially when I consider that federal law enforcement only seems interested in going after cybercriminals engaged in pro-WikiLeaks conduct, never the other way around.
But a major source of my frustration was that the presentation suggested targeting arguably one of the world's most truth-telling, talented and cogent investigative journalists: Glenn Greenwald of Salon. Yes, he is one of my favorites, but that is not the reason I am here defending him.
In a blog post today, Greenwald did an admirable job of not just acknowledging the absurdity of the three companies' proposal but also defining how this situation ultimately reflects the class war being waged by society's most powerful and rich in order to get, you guessed it, more powerful and rich.
Greenwald is offended. Journalism is his livelihood. He is in the business of free speech. I am too.
So when security firms seemingly bent on winning a big contract essentially offer up their firepower, likely at the encouragement of the nation's biggest bank, to infringe on one of the most inherent rights of all Americans, we've got big problems.
After speaking last night with a journalist who is covering the anti-government protests in Egypt, MSNBC's Rachel Maddow joked that she had been tempted to stop everything during the interview to tweet what the reporter had been telling her.
That impulse, she said, was an indication of how critical Twitter and social media channels are to the unrest taking place halfway around the world. In many cases, news of what has been happening in Egypt has been disseminated to a global audience by users in other countries who found ways to reach protesters in Egypt and gather the facts.
This roundabout way of delivering the news was a result of the oppressive government in Egypt blocking internet access (which was later restored). The shutdown was a tactic the Mubarak administration had hoped would quell dissent.
Clearly it failed for a number of reasons. And for those in Egypt who were still able to digitally communicate with the outside world despite the plug-pull, we can thank really smart and awesome hackers.
This Orwellian-style information repression seems so far removed from our country – one that has led the world in its creation of social media tools – that it is easy to pass it off as something that would never really affect Americans. But before we count the blessings of living in an unyielding democracy, where people (and the flow of information) are free, it is important to revisit a controversial bill in Congress that is certain to be revived this year.
Yes, I am speaking of the Protecting Cyberspace as a National Asset Act, introduced last summer by Sens. Susan Collins and Joseph Lieberman – yes, the same Connecticut lawmaker who encouraged certain web stalwarts, such as Amazon, to stop doing business with WikiLeaks, even though that site is no different from The New York Times or any other newspaper, for that matter.
I've taken issue with this proposed measure in the past and believe that the events in Egypt, combined with apparent plans for the bill's resurrection sometime this year, warrant another mention. While supporters insist that the legislation wouldn't stifle free speech and only would enable the president to cut off certain parts of the internet in the unlikely event of America's critical infrastructure coming under siege, we must wonder how far the U.S. government would go to hush dissenting speech if an Egypt-like incident occurred within our borders.
It is hard to fathom such a scenario, but I encourage each and every one of you to not only look at the positives of such a bill – safeguarding our most precious resources – but also the potential ramifications of unrestrained and unchecked presidential powers.
Really, though, the differences between a thesis paper published by University of Cambridge computer science student Omar Choudary, which highlights a dangerous security flaw in a system designed to reduce credit card fraud, and the hundreds of cables (and it is just hundreds) so far released by whistleblower website WikiLeaks seem to end there.
On one side is a faction that believes that information that exposes poor practices, whether it is by government or by a powerful lobby such as the banking industry, is meant to be free. On the other side is a faction that hates being embarrassed and will pull out all the stops to save face.
Really, it's that simple.
So I was pleased to read today that Cambridge professor Ross Anderson is staunchly defending the student's decision to publish the academic research, despite pressure to censor the paper from a powerful lobbying group, the U.K. Cards Association, which represents that nation's largest banks.
In a response letter to the association, Anderson wrote:
You seem to think that we might censor a student's thesis, which is lawful and already in the public domain, simply because a powerful interest finds it inconvenient. This shows a deep misconception of what universities are and how we work. Cambridge is the University of Erasmus, of Newton, and of Darwin; censoring writings that offend the powerful is offensive to our deepest values. Thus even though the decision to put the thesis online was Omar's, we have no choice but to back him.
I wish I could say the same about organizations such as Amazon, PayPal and others that have faced political pressures from the U.S. government to cut ties with WikiLeaks and which justified their ultimate acquiescence by citing "terms of service" violations.
But back to Cambridge. I've written in the past about the good that transparency and openness can do for the security industry. By making issues known publicly, researchers are able to hold the feet of those responsible to the fire, forcing them to get better at what they do.
Now, certainly, there is a responsible way to go about disclosure. We've extensively covered this debate this year, and I do agree that if a researcher discovers a security vulnerability, they have the responsibility to notify the vendor in question, giving them reasonable time to fix the issue. Then, they should be free to publish their findings.
In the case of the chip-and-PIN flaw known as "no-PIN," it appears the vulnerability was already known and little was done about it. All the student did was expand on the scope of the problem, months after it initially was disclosed, and offer recommendations for patching it.
Anderson concluded: "You complain that our work may undermine public confidence in the payments system. What will support public confidence in the payments system is evidence that the banks are frank and honest in admitting its weaknesses when they are exposed, and diligent in effecting the necessary remedies. Your letter shows that, instead, your member banks do their lamentable best to deprecate the work of those outside their cosy club, and indeed to censor it."
I admire Anderson, but I worry not everyone will have the courage he has to stand up to those more powerful.
And if this story, and the WikiLeaks saga, are any indication, corporate and government interests are slowly but surely chipping away at academic and journalistic freedoms, which really are foundational concepts to a true democracy.
I'm sorry to hear that federal prosecutors, in a desire to get WikiLeaks founder Julian Assange to the United States to face charges for his role in the exposure of classified diplomatic cables, are turning to the Computer Fraud and Abuse Act for help.
Prosecutors, according to reports, are trying to determine whether Assange had any connection with Bradley Manning, the Army soldier who exceeded his privileges to exfiltrate some 250,000 secret records out of a State Department database and into the hands of WikiLeaks.
If they can establish such an association, prosecutors may be able to charge Assange with conspiracy.
I don't know, sounds like a stretch to me.
Our nation's anti-hacking law should be reserved for actual hackers, not those individuals who received leaked documents from whistleblowers and then passed them on to others, much like newspapers have done in the past.
But Attorney General Eric Holder appears committed to throwing significant resources at the case. Sigh.
Before we handcuff Assange as a hacker, I'd like to find the real computer fraudsters and abusers, you know, like the orchestrators of the data-stealing Zeus trojan, which is literally bringing some small American business owners to their knees.
Or what about the criminals who recently stole email lists belonging to 105 companies, including Walgreens and McDonald's, in an apparent attempt to launch spam and spear phishing attacks?
I can go on and on about cold-hearted digital vandals and identity thieves who have ruined a lot of lives.
As for Julian Assange, we can debate all day whether to call him a journalist. I have my opinions, as I'm sure you have yours. Let's just say, I like openness, truth and transparency.
But let's agree on one thing, in his role as the founder of WikiLeaks, Assange is no hacker.
If you think there is nothing personal to gain for public officials who use words like "cyberterrorism" and "Digital Pearl Harbor," think again.
This came to mind recently with all the hoopla surrounding TSA's new screening procedures. (As an aside: I unequivocally oppose them, partly because of the concerns over privacy and health, but mostly because I think they are a complete waste of time. Airport security has truly turned into theater, to borrow a phrase from Bruce Schneier. I can say with a fair amount of confidence that TSA has never, nor ever will, stop a terrorist. Terrorists are too smart. Like a good hacker wishing to break into a corporate network, if terrorists want to get on a plane and kill innocent Americans, they will find a way).
What irks me most, though, about this TSA controversy is the fact that federal officials, such as the agency's head John Pistole, capitalize on fear as their lone justification, really, for enacting tougher screening procedures.
And apparently it pays off in the end, as this Huffington Post article from Tuesday describes, citing people like former Department of Homeland Security Secretary Michael Chertoff, who has parlayed his career in government into private ventures, including companies that make airport screening equipment, that net him good money when Americans are worried they may get blown up by a terrorist.
Which brings me to cyberterrorism.
A devastating digital attack on our nation's most critical infrastructure, such as the power grid or financial systems, is obviously possible. If there were any doubt before Stuxnet, it should now be quite apparent that sophisticated, well-funded and precise malware writers have the capability of doing serious damage to the things we rely on most for our daily existence.
Will it happen tomorrow? Doubtful. Will it happen someday? Probably, though who can be certain?
Regardless, as important as it is to accurately relay these stories to the audience, we in the media (even a magazine such as SC, which is dedicated to reporting on this stuff) must be sure to vet our sources' motives – particularly those who have transitioned from a role as a public servant into one in the private sector, where they now stand to profit off of others' fears.
At the very least, we should be making those interviews more transparent than we do.
Grope-free Thanksgiving travels, everyone.
Pardon me for being a little suspicious of the so-called Lieberman-Collins-Carper cybersecurity bill.
In late August, GovInfoSecurity.com reported that the Senate is considering attaching the legislation, known as the Protecting Cyberspace as a National Asset Act, as a rider to a sure-to-be-passed bill, such as the National Defense Authorization Act.
But this doesn't seem like the kind of legislation that should get the rush job.
According to OpenCongress.org, the proposed Lieberman-Collins-Carper measure:
Creates the Office of Cyberspace Policy and National Center for Cybersecurity and Communications to set standards and coordinate cybersecurity efforts within the government. Gives the NCCC broad powers over "critical infrastructure" in the case of a "national cyber emergency" (as declared by the President).
That last sentence is the sticking point. Since the proposal was announced, much debate has centered on this so-called "kill switch" authority that would be granted to the government. Some have argued that such a provision would deal a major blow to American democracy and could prove an example of unrestrained presidential power.
In August, Adam Cohen of Time opined:
It is not hard to see why everyone is so worried. Imagine a president misusing this particular power: If the people are rising up against an unpopular administration, the president could cool things down by shutting off a large swath of the internet. He could target certain geographical regions ("We've heard enough from New York and California for a while"). Or he could single out particular websites.
Others, such as the SANS Institute's Alan Paller, have argued that the bill is sorely needed, considering government and critical infrastructure systems are probed by enemy hackers with stunning regularity, not to mention the proposal does many more things than simply grant emergency internet shutdown power. Besides, he argues, that particular stipulation is nothing new at all.
As Cohen explains:
The president already has broad power under the Communications Act of 1934 to shut down wire communications, which includes the internet, if he determines that there is a "state or threat of war." When [co-sponsor Maine Republican Sen. Susan] Collins says that the bill would limit the president's power, she means it would impose more restrictions on when he could shut down parts of the internet than the 1934 act does.
True enough. But critics of the bill point out that it expands the president's power over the internet in a key respect: the 1934 law only applies when there is war or a threat of war, while the new law would allow the president to act even when there is not a war or a threat of war. "All I can say is it gives him power to act where he wouldn't necessarily have the power to act" under existing law, says Lee Tien, a lawyer with the Electronic Frontier Foundation.
But I do want to note the juxtaposition of this bill with whistleblower website WikiLeaks releasing 91,000 reports concerning the war in Afghanistan. Since that disclosure, the government has moved to investigate and possibly prosecute WikiLeaks' founder Julian Assange over the release.
Makes you think: What if the government decided to block traffic coming from WikiLeaks' servers in Sweden?
And it should also be noted that two of the sponsors of the bill are the same duo behind the proposed Whistleblower Protection Enhancement Act of 2009, which, according to critics, repeals whistleblower rights for FBI agents.
In a March 10 letter to Lieberman and Collins, members of the National Whistleblowers Center wrote that the "current version of S. 372 will set whistleblower protections back 30 years for hundreds of thousands of federal employees. It will become almost impossible for employees in various "national security" related agencies to obtain protection against retaliation if they disclose contractor fraud, waste and misuse of federal monies, mismanagement and threats to the public health and safety."
(The bill hasn't had much movement this year).
It is hard to imagine that the Lieberman-Collins-Carper bill would turn the United States into a communist state. That would be a tough act to get away with in a nation that prides itself on internet freedom.
Secretary of State Hillary Clinton said so this summer.
But in my five-plus years covering this industry, I have never seen such a rush to push through cybersecurity legislation. Sure, the threat of foreign attackers is far worse than it was when I started, but this seems a little, well, sneaky.
All I ask is for transparency in government. Like you promised, President Obama.
I recently chatted with Randi Levin, CTO of the city of Los Angeles, for a cover story I'm writing about cloud computing and the security ramifications of the technology.
When I asked Levin about her critics, those who wonder whether the nation's second-largest city is setting itself up for failure by outsourcing its sensitive email information to a third-party (Google), she didn't flinch. (Or at least I don't think she did. We were on the phone, after all).
That is because Levin is a pragmatist, which is really the only way to be when you are paddling through one of the worst budget crises in LA history. In essence, she'd love to have a staff of skilled security personnel — who wouldn't? — but that just can't happen. So why not turn to others for help?
"We're in a union environment, so that makes it much more difficult," she explains. "You have to come through the union structure. And yes, we can't pay as much as the private sector."
Also, there is a skill-set issue, Levin says. To achieve certain certification and training levels, employees would have to go through classes, something the city can't afford to pay for.
Is there any interest in hiring and/or educating IT security workers?
"It's not something we're discussing right now," she told me. "We're all in survival mode. The discussion is how do we get through this period of time."
Los Angeles' budget crisis aside, this appears to be a common theme, especially at the federal level of government.
Clearly, if lawmakers and military commanders are going to continually warn about the imminence of a cyber war, we want our government to include a skilled IT security workforce. But that's not happening, at least not right now.
There are a number of reasons why there is such an apparent gap in cyber expertise at the federal level.
From an Aug. 2 blog post on the Baltimore Sun's website:
A report on preparing for the nation's cyber security needs last year by Booz Allen Hamilton, a consulting firm, found that federal scholarship programs designed to fill government openings were producing only 120 graduates a year with cyber security education — while the need was closer to 1,000 a year across several federal agencies.
Challenges abound in building a cyber security workforce, particularly for the federal government's defense and intelligence agencies and private contractors that work with them. Part of the difficulty isn't simply finding people with the right technical abilities, but making sure they can also qualify for a security clearance.
And the limited workforce means that government agencies and the private sector must compete. McCullough said defense agencies often can't match salaries paid by corporations and contractors, but they can provide workers tremendous real-life experiences and involvement in critical missions.
But with demand for human capital outpacing supply even at the private-sector level, the government has its work cut out for it. Maybe government needs to take some tips from Wall Street. I mean, tons of my friends were just dying to get into a finance job after college. And recruitment is once again growing. Not even a financial collapse could slow this industry down!
As for cybersecurity and government, initiatives are underway. They include pushes for scholarships for college students who agree to take jobs in government after they graduate, boot camps and efforts by the U.S. Office of Personnel Management and National Initiative for Cybersecurity Education to define cybersecurity roles across federal agencies.
Other areas to correct: Make the recruitment process more seamless. Make the hiring process faster. And clear up any confusion around certifications.
Higher salaries wouldn't hurt either. (Cough, Wall Street, cough.)
I have an idea. How about plucking a few bucks from those bottomless defense budgets?
The second and final day of Black Hat is upon us, but with all the robust content the show is producing, it feels for many like the conference has been running much longer.
- Not as long, perhaps, as the line in the hallway to acquire a badge for DEFCON, the sister conference that kicks off this weekend.
And it is no ordinary conference badge. Over the last five years, DEFCON has become famous for its skillfully designed electronic badges. This year's version is the brainchild of Joe Grand, owner of Grand Idea Studio and host of Discovery's "Prototype This!" Grand is one of the world's most famous hardware hackers.
The badge may not look impressive to people who have become enamored by flashy web software. But to the hardware geeks, this is the creme de la creme.
The badge is an aluminum circuit board with laser engraving. It includes a 128-by-32 display screen designed by Kent Displays. The display requires no power to keep the screen image on.
The badge even has a social networking aspect to it: Users can push a few buttons on the back of the badge (basically a circuit board) to display icons of their interests, such as beer bottles and floppy disks.
"It's the whole community thing," Grand told reporters today. "They want to share one piece of data with everyone else."
- Security firm SecureWorks unveiled new research, the culmination of a three-month-long investigation into the workings of a cunning Russian check-counterfeiting gang.
Essentially, the cybercrooks installed the Zeus and Gozi trojans onto victims' machines, which gave them control over the computers. They used the infected PCs to gain access to check image archiving services. They also cracked into job websites to deliver messages to unsuspecting individuals, who were recruited as money mules to cash checks on behalf of the racket. Nearly 3,000 job seekers responded, and they cashed counterfeit checks worth in excess of $9 million.
Sounds like a standard Russian mob cyber scam, right? Not quite.
What made the operation so original was the crooks' use of VPN tunnels, which let them make it appear as if the botnet was not operating. From the report:
Although it is very common for trojans (especially ones designed to aid in financial fraud) to employ proxy server capability, this is the first time that the CTU has seen the use of VPN technology in such software. However, by employing the very simple VPN functionality built right in to Windows, the criminal bypasses the need to develop complex systems, and can simply route his/her malicious traffic over the VPN. If done correctly, this gives the criminal three primary benefits:
1. The VPN traffic can be encrypted, defeating signature-based network IPS/IDS devices that might detect the malicious transfer of data
2. A VPN can give the criminal the ability to connect-back into the protected computer, and even use the infected system as a route to other systems on the protected network
3. The criminal could route all traffic from the bots to the botnet controller over the VPN, and deny connections to the VPN controller from all sources but the VPN exit IP address. In doing so, the criminal could make it appear to the world that the botnet controller is offline, while still serving commands to and stealing data from the infected systems under its control
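That third benefit is the clever part, and the underlying logic is nothing more than a source-IP allowlist on the controller. Here is a minimal sketch in Python; the addresses and command string are hypothetical (TEST-NET documentation ranges, not real infrastructure), and this is my illustration of the logic the report describes, not code from the investigation:

```python
# Illustrative sketch of benefit No. 3: the controller answers only the
# VPN exit address, so to the rest of the world it appears to be offline.

ALLOWED_EXIT_IP = "203.0.113.7"  # hypothetical VPN exit address (TEST-NET)

def controller_response(peer_ip):
    """Return a command for the allowlisted VPN exit; ignore everyone else."""
    if peer_ip != ALLOWED_EXIT_IP:
        return None  # connection silently dropped: controller looks dead
    return "UPDATE_CONFIG"

# A researcher or scanner probing the controller directly gets nothing...
assert controller_response("198.51.100.20") is None
# ...while bot traffic routed over the VPN still receives commands.
assert controller_response("203.0.113.7") == "UPDATE_CONFIG"
```

In other words, the "offline" controller is a simple filtering trick, which is exactly why it fooled observers watching from outside the VPN.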
- The Black Hat crowd seemed to enjoy this morning's keynote quite a bit more than yesterday's less content-rich presentation from Jane Lute, deputy secretary of the U.S. Department of Homeland Security.
Today's keynote came from Ret. Gen. Michael Hayden, a former director of the CIA and deputy director of national intelligence, who spent his talk defining cyberwar and discussing which rules apply to it.
Cyberspace, like the air, land, sea and outer space, is also a military domain, he said.
But unlike the physical domains, a number of questions about cyberspace remain unresolved, such as what constitutes an attack or a cyberwar.
“We are thinking a lot about it [cyberspace], but not very clearly,” Hayden said. “We throw the term 'cyberwar' at everything unpleasant.”
Additionally, one unique aspect sets cyberspace apart from other military domains, he said.
“God made the other four, you made the last one,” Hayden said. “God did a better job.”
While the physical world has mountains and other terrain that aid the military in its defense operations, the landscape of cyberspace only provides advantages to attackers, not those seeking to defend it. Fixing this problem, Hayden said, requires altering the architecture of cyberspace.
“You are going to build rivers and hills into the web,” he said. “You are going to create geography that is going to help the defense.”
Here are some interesting tidbits coming out of the first day of the world's biggest hacker conference, taking place in Las Vegas. Consider it a running log, of sorts.
- Adobe announced this morning that it will begin sharing vulnerability details through the Microsoft Active Protections Program (MAPP).
The initiative, announced in August 2008, originally was devised so Microsoft could share flaw information with approved software security providers prior to its monthly fixes being released. Now Adobe will be able to do the same with MAPP's 65 members.
"By receiving vulnerability information prior to the public release of a security update, MAPP partners get an early start over exploit code writers, enabling them to offer protection to customers in a timely manner," Adobe's Brad Arkin said in a blog post.
- RFID researcher Chris Paget showed how he created equipment that allowed him to read an EPC Gen 2 RFID tag at 217 feet, believed to be a world record.
In his talk, Paget described how he replaced antennae and established a fixed frequency on the transmitter to increase range and power - all while staying in compliance with Federal Communications Commission ham radio laws.
He predicts that under the right testing conditions he could read a tag at 1,000 feet. There are ways to abuse the technology, though, Paget said. He said RFID tags should not be placed in identifying documents, and retail stores should disable the tags (which are increasingly replacing bar codes) upon customer checkout.
Best way to destroy an RFID tag? Place it in a microwave for three seconds. "Five seconds, and it will probably catch fire," Paget said.
- Judging from reaction on Twitter, the keynote from Jane Lute, deputy secretary of the U.S. Department of Homeland Security, didn't seem to go over too well with the jeans-wearing, free-speech-loving Black Hat audience.
She described how government can help secure cyberspace, partially through DHS initiatives.
The most exciting part of the discussion came when an audience member asked Lute why people should trust DHS to secure the internet without slowing down "commerce and knowledge," especially when considering how much criticism the Transportation Security Administration has absorbed since it was founded.
Lute said DHS wants to serve as the "portal" for debate on how to strike this balance.
"Since the vulnerability was first publicized, we've made several attempts to contact Craig Heffner, the researcher, and get more detail," Ulevitch wrote in a blog post. "We've phoned. We've emailed. We've contacted reporters who've spoken to the researcher and had their help connecting to the researcher. I've even Facebook messaged his coworkers. I haven't had a single reply."
Ulevitch said OpenDNS is a free service that helps resolve "many problems system administrators and security pros face." He said the company would keep the details of the vulnerability private; its only goal is to protect users.
Heffner could not be reached for comment by SCMagazineUS.com.
While Microsoft would never go on the record and admit it, surely the software giant's ego was bruised when a report emerged last week that Google planned to phase out its internal use of Windows, apparently out of security concerns resulting from the coordinated Chinese-led attacks it suffered.
But as most anyone within the security community will tell you, Google's rationale seemed misplaced, especially when talking about smart, sophisticated, targeted hackers who need just one weak entry point (read: a naive user who likes to click on untrusted links) to start plundering intellectual property.
But fine, Google decided to abandon Windows. It still has to hurt, regardless of the reasons.
So when an information security engineer named Tavis Ormandy, who claimed he was acting independently, went public Thursday with exploit code for a Windows Help Center vulnerability five days after reporting it to Microsoft, one can't blame Redmond for dragging Ormandy's employer into the mix.
Because his employer just so happens to be Google, Microsoft's bitter rival.
Microsoft's Mike Reavey, who directs the company's Security Response Center, posted a blog describing the vulnerability, and he used some interesting wording at one point. See if you can catch it:
One of the main reasons we and many others across the industry advocate for responsible disclosure is that the software vendor who wrote the code is in the best position to fully understand the root cause. While this was a good find by the Google researcher, it turns out that the analysis is incomplete and the actual workaround Google suggested is easily circumvented. In some cases, more time is required for a comprehensive update that cannot be bypassed, and does not cause quality problems.
Notice how Reavey didn't say "the actual workaround Ormandy suggested" but instead implied that Google, as a company, was responsible for this disclosure. Sounds like fightin' words to me.
A Google spokesman reportedly denied the company's involvement and stated that Ormandy's work was independent.
Some security bloggers, such as Alan Shimel, weren't buying it.
You can tell me that Ormandy did this without Google's knowledge and consent. If that is so, they should fire him tomorrow. If it is not true, shame, shame, shame on Google.
I don't think it's fair for Microsoft to officially imply that Google was totally aware of this whole thing, but I also don't think it's fair for Ormandy to alert Microsoft about the vulnerability — as if he was prepared to act in a so-called responsible way — only to change his mind five days later and go full disclosure.
I think he'd be better served if he picked a side and stuck with it.
In Ormandy's defense, though, it sounds like he feels sorta bad: "I believe in [full disclosure], but making enemies of people I truly respect may not have been my smartest decision ever," he tweeted Thursday.
This mess also brings to light the continual challenge researchers face when they receive their paychecks from software companies that make products that have holes. After all, Google surely wouldn't want a researcher from Microsoft to discover a vulnerability in Gmail, only to go public with the exploit a few days after reporting it.
Maybe the guy and gal researchers and consultants who stay independent are on to something.
Few would dispute that BP has been less than forthcoming with information related to the oil spill in the Gulf of Mexico.
The company has pinned the blame on the oil rig owner. Scientists have publicly disputed BP's projections of exactly how much oil is shooting from the underwater geyser each day. Reporters and photographers have repeatedly been blocked from visiting the crude-fouled beaches; some have even been threatened with arrest. Even the petroleum giant's CEO is doing his best "under embargo" impression.
BP's image is such an open target that a wry social media enthusiast has created a fake Twitter account claiming to be the company's official public relations account. Check it out here. It's HI-LAR-IOUS.
One of my favorites: "The ocean looks just a bit slimmer today. Dressing it in black really did the trick! #bpcares"
The account has amassed some 60,000 followers (and growing), eons more than the real BP Twitter account. Pretty telling of how ticked off people are at BP's response to what is now confirmed as the worst oil spill in U.S. history, one that may forever change the Gulf region's ecosystem.
But there is an information security connection here, because after all, a breach is a breach.
Let's pretend for a second that instead of tens of thousands of barrels of oil spewing in the gulf, it was tens of thousands of credit card numbers. Ears perking up? You see, public relations plays an important role in any major company incident, whether we are talking about a broken riser pipe buried deep beneath the Gulf of Mexico or a vulnerable web server.
This is what Steve Collins, the security sector lead at Text 100 Public Relations, had to say about the topic:
If you're still questioning the importance of effective breach communications, consider the reality of living in a 24-hour news cycle these days. Bad news travels fast, and with the emergence of social media, the chances of keeping a lid on such news are pretty slim. An employee's blog or tweet, or an overheard conversation at the grocery store, could let the cat out of the bag, unwittingly or not. And the more time that lapses while you're scrambling to determine how to communicate the breach, the greater the risk that news of your breach will be broken in terms you can't control, with serious implications for your brand and your ability to remain competitive.
In the case of BP, of course, it is pretty difficult to hide oil-drenched birds washing up onshore. But you get the idea. Transparency is the name of the game. Customers, plain and simple, will turn their backs on you if you let them down and fail to properly convey what happened. Client retention and brand reputation will suffer.
Some folks, like Bob Carr, the CEO of Heartland Payment Systems, which lost an estimated 130 million credit card records, get this. In fact, as I was typing this post, a PR rep for Carr left me a message, asking to set aside some time to meet with Carr when he visits New York City in a couple of weeks. Yes, Carr wants to promote the company's new encryption solution that it will begin marketing to the merchants for whom it processes transactions. But, knowing Carr, I bet you he won't shy away from answering questions about the breach either.
Oil spills are going to happen. Data breaches are going to happen. But you don't have to suffer any more than necessary.
Act quickly. Be contrite. Greet the media with open arms. Tell it like it is. Americans are more forgiving than most people give them credit for.
Keep this in mind, if for no other reason than it would stink to be the butt of a viral Twitter joke.
When I typed "How do I" into Google today, the first auto response to show was "How do I delete my Facebook account?"
"Whaaat?" was my first reaction. After all, this is the most popular website in the world. Why would anyone want to leave it?
In fact, just today a friend joined the site for the first time. Apparently he was being incessantly poked (no pun intended) and prodded to sign up by his peers, most of whom made fun of him for still relying on things like phone calls, emails and even, gasp, face-to-face communication to interact with others. He finally gave in. He told me he held off for so long out of "principle" and ultimately caved due to "loneliness." (We'll examine his personal demons in a later blog post.)
If this guy could join, someone who was so adamantly against the concept for so long, perhaps it's time to finally admit that Facebook controls the world.
But wait, you're telling me there is now a mass push to exit Facebook. I don't believe it. But it's true. The "how do I?" test doesn't lie. (Well, No. 5 is "How do I love thee." Not even sure what that means).
The fact is, though, that in recent months, Facebook has found itself mired in a deepening sinkhole around privacy. The crisis reached a peak a few weeks ago when the site announced its "Instant Personalization" and "social plug-in" features, which automatically opt users in to sharing data with some third-party websites in an effort to make their total web experience a more sociable one.
Privacy advocates are calling for founder Mark Zuckerberg's head - and these recently unearthed instant messenger exchanges from six years ago haven't helped the cause. Sophos' Graham Cluley, never shy about calling out Facebook for its privacy and security shortfalls, is hosting a poll asking users if they'll quit the Book. (Many say they will). And now there's a grass-roots internet effort forming that is asking users to avoid signing into Facebook for an entire day on June 6. It better be sunny out that day.
I'm not sold that Facebook is going to lose many members because of this whole debate, but if I were keeping score, I'd have Facebook down a couple of runs right now, if from nothing else than a bruised ego.
Of course, the ultimate goal of all of these new features is so Facebook can "expand revenue streams." It wants to make money, and who can really blame it? Wouldn't you want to be well compensated too if you were responsible for creating one of the biggest sensations of modern times?
Now, has Facebook been less transparent and explanatory than it should be when it makes these, and other, privacy changes to the website? Of course.
I agree with what Slate's Farhad Manjoo says:
I'm also wondering: Can Facebook have a customer service phone number to call if you have a problem? Can it do a better job of preventing things like spam, phishing and malware? Can it better build secure coding into its platform to prevent vulnerabilities like this one?
Any sort of privacy outcry (and potential revenue hit) will only work to make Facebook stronger on all those fronts.
Facebook certainly has the money to invest in improvements. Even though the service is free, it's not like Zuckerberg needs to stand on any street corner with a cardboard sign.
But it should surprise nobody that when a website with an estimated 500 million members makes some change, it is going to ruffle the feathers of a good number of users (and gain the attention of the media).
Remember the numerous layout redesigns that have occurred over the past couple of years? Judging from the status updates of my friends, it was like Facebook had just axed their grandmother to death. They hated them. But I bet that if you asked someone to recall what the previous design looked like, they couldn't.
People don't like change. Plain and simple. But is Facebook nothing more than a data warehouse out to compromise your identity? Doubtful.
Let's appreciate Facebook for all it has done for us and what it will do for us in the future. Remember, we originally joined because we kind of, sort of like sharing our private things - photos, interests, happenings - with others.
Now's not the time to suddenly turn our backs on a site still finding its way.
We're just getting started.
My friend doesn't seem to be too upset. In fact, he just posted a status, not an hour ago: "I finally joined facebook, so please be gentle on me. this is a brave new world to me."
Remember to check your privacy settings, but don't you dare leave us.
The update plugs three holes in Java. Presumably the Java Web Start fix addresses the flaw in question, which involves the Java Deployment Toolkit browser plug-in failing to properly validate parameters, according to a Secunia advisory issued Monday. This can allow attackers to execute a JAR (Java Archive) file "on a network share in a privileged context."
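For readers wondering what "failing to properly validate parameters" looks like in practice, here is a hypothetical sketch of the kind of allowlist check the plug-in evidently skipped before handing a URL to Java Web Start. The host name and helper function are my own illustrations, not Sun's actual code:

```python
# Hypothetical sketch: validate an untrusted launch URL before acting on it.
# Rejecting non-HTTP(S) schemes blocks file:// URLs and UNC network-share
# paths like the ones Secunia describes.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"java.example.com"}  # hypothetical trusted download host

def safe_to_launch(jnlp_url):
    parsed = urlparse(jnlp_url)
    if parsed.scheme not in ("http", "https"):
        return False  # file:// and \\share\... paths never get this far
    return parsed.hostname in TRUSTED_HOSTS

assert safe_to_launch("https://java.example.com/app.jnlp")
assert not safe_to_launch(r"\\evil-share\payload.jar")  # network share blocked
assert not safe_to_launch("file://evil/payload.jar")
```

The point is not that Sun's fix works this way, only that the missing step was this cheap: a scheme and host check before execution.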
In fact, the flaw has been leveraged in active attacks beginning this week.
However, I can't confirm the update closes the vulnerability because Oracle, which owns Sun, won't get back to me. And in its update advisory, it does not credit anyone with the flaw find.
Matter of fact, the company has made no mention of the bug at all since it was announced. One of the researchers who discovered the flaw said the company told him it didn't consider the issue a big enough deal to warrant an out-of-cycle fix.
It appears Oracle has changed its mind. Today's update, especially considering it was distributed out of cycle, certainly looks like the patch.
But, through some casual Twitter browsing today, I've seen contradictory tweets from researchers on whether this is actually the update for the vulnerability. (The Ormandy the second tweet refers to is Tavis Ormandy, the Google researcher who went Full Disclosure with the bug last Friday).
The "against": http://twitter.com/vlna/status/12230959161
So which one is it? I don't know.
I must admit, it's very disconcerting that a software vendor would not publicly make any statements regarding a security issue that has gotten widespread coverage, both in established media outlets and across social networking channels.
There are customers to worry about...right, Oracle?
The information security industry took a step back this week with news that the CISO of the state of Pennsylvania, Bob Maley, lost his job, likely over remarks he made during a panel discussion last week at the RSA Conference.
In an industry where information sharing is widely agreed upon as one of the paramount ways to combat the world's cybercriminal element, it is truly upsetting to see a security pro lose his job over doing just that.
Although a spokesman for the Pennsylvania governor wouldn't admit it, that is exactly what appears to have caused Maley's departure from a role he held for five years.
On a panel at the RSA show last week, on which he was joined by three other state CISOs, Maley offered details into a recent intrusion affecting the state's Department of Transportation website. He didn't get too specific, but it was specific enough to surely prove instructional to the scores of conference attendees in the audience.
He described, according to a report on govinfosecurity.com, how the owner of a driving school in Philadelphia used a Russian-based proxy to hide his identity as he exploited a vulnerability so that he could schedule his students for driving exams. (The wait list to take the test usually runs up to six weeks).
Maley, an SC Magazine CSO of the Year finalist, has always been a candid, shoot-from-the-hip kind of guy. I learned this from our conversation last summer when I interviewed the former cop for a cover story on data breach response. For the story, he recounted a number of breaches that have affected the state, rarely holding back details.
I'm assuming that this particular incident touched a nerve with state officials because the hacking was relatively recent, and there was still an investigation underway.
But even so, I find the firing to be counterproductive to what the security community is attempting to accomplish. The key to winning the battle against sophisticated hackers is details and anecdotes, exactly what Maley appears to have been providing. Speaking generally just doesn't cut it, not in this industry. And especially not at the world's premier gathering of information security professionals — one of the few times in the year when practitioners get together to swap stories on life in the trenches.
It's a shame, too. We were only just applauding Google for its transparency over the China attacks. Many had lauded the internet giant for coming clean about being the victim of a massive intrusion.
We seemed to be turning a corner...and then this.
In 2010, remaining mum, or too close to the vest, about incidents benefits nobody. Every organization in the country is being probed on a daily basis. Vulnerabilities are going to be there. Hacks are going to happen. Data is going to be exposed. The criminals are going to be one step ahead. Let's move on from this prevailing wisdom that any one organization is immune from attack.
Once we do that, and only then, can we take back the internet.
One of the great unintended consequences of my job, having covered the IT security space for nearly four years, is an inability to accurately gauge how aware mainstream America is of cyber-risks.
I am so immersed in the topic, covering stories on a daily basis and writing about the vast array of vulnerabilities and breaches, legislation and lawsuits, phishing and spam, arrests and prosecutions, that I often forget infosec is not your typical cocktail party material.
But while I am certain that most of my friends and family aren't aware of even a small percentage of the digital threats out there today, I do believe that they are catching on to the problem, bit by bit.
The tipping point is still not here -- just last night, for example, I was borrowing a friend's laptop and noticed it didn't have active AV protection. She didn't seem too pressed to fix the problem.
Part of the blame for this apathy could be sheer risk/reward. Why accept security advice when the rational economic move is to ignore it, as a Microsoft researcher recently argued? Not to mention, attacks are more targeted these days (meaning fewer people ever notice the threats out there), and banks are pretty good at reimbursing you if you do happen to fall victim to financial fraud.
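That rational-ignorance argument is, at bottom, an expected-value comparison. A back-of-envelope sketch, with purely hypothetical numbers (the probabilities, dollar figures and hourly rate are mine, not the researcher's):

```python
# Toy model of the risk/reward argument: if following security advice costs
# more than the expected loss it prevents, ignoring it is the "rational" move.

def expected_loss(p_incident, loss_if_hit, reimbursed_fraction):
    """Annual expected out-of-pocket loss for the user."""
    return p_incident * loss_if_hit * (1 - reimbursed_fraction)

# Hypothetical user: 1% annual fraud risk, a $2,000 hit, bank reimburses 95%,
# while diligently following advice costs ~20 hours a year valued at $25/hr.
loss = expected_loss(0.01, 2000, 0.95)  # roughly $1 a year
effort = 20 * 25                        # $500 of time
assert loss < effort                    # the advice costs more than it saves
```

Plug in your own numbers; the imbalance usually survives, which is the researcher's point about why apathy persists.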
Still, each year, cognizance grows.
So, with that said, here is SC Magazine's token summation of 2010 threat predictions, compiled through the dozens of emails we received from the Nostradamuses of the IT security community.
- Social networking threats: Experts seem to be in across-the-board agreement that cybercrooks are going to increasingly target these new media platforms to push their wares. Also, organizations will have to worry that their end-users will leak sensitive information. I mean, this makes sense. And it's been happening already. After all, where else can you find 350 million people chilling out on a website?
- Windows 7: Well, that whole Vista thing didn't go over so well, but all signs seem to be pointing to much higher adoption of the next iteration of the Microsoft OS. So that means cybercrooks will begin targeting this platform.
- New platforms: No surprises here. Take your pick. Mobile devices, though, seem the likeliest candidate -- yet some experts seem unconvinced. Still, one has to believe that once people are actively using these smartphones to make transactions, the bad guys will be riding right along.
- Apple: I'll believe that the Mac OS has become a viable target when the PR folks in Cupertino start returning my phone calls. Next...
- Peer-to-peer malware/data leakage: This seems more plausible, and we saw some examples of it this year. But, with increased organizational awareness to the dangers of file-sharing networks, and a focus around this on Capitol Hill, one is foolish to expect an epidemic.
- HTML5/IPv6: Updated web language and increased address space have some believing that these new technologies are going to be abused. But adoption may not come in 2010. I'm sure this will be on the list next year, as well.
The news, though, is not all doom-and-gloom. One interesting prediction from McAfee suggests that the threat of rogue anti-virus will actually drop now that "the fake anti-virus market has...been saturated and the profits for cybercriminals have fallen."
With all this said, I wish you all a Happy New Year, and look forward to talking about cybercrime over a cocktail with you in 2010. Or 2011.
I find it hard to believe that Citigroup's media relations department would so adamantly deny the occurrence of a breach if it wasn't being completely genuine.
Because that is what they have done today in light of a report in The Wall Street Journal that the partially government-owned financial services firm was the victim of a hack that stole tens of millions of dollars.
When I read this story, there wasn't much meat, and I was pretty skeptical. I got even more skeptical when the FBI wouldn't comment on the story at all — not even to say that it was investigating.
So I did some searching around the blogosphere and saw that many others were equally suspicious of the story.
And then I remembered a story we wrote not too long ago, when the FBI said it was actively investigating a huge number of Automated Clearing House (ACH) fraud cases in which cybercriminals got a hold of mostly small- and mid-size corporate bank accounts to transfer large sums of money out. Attempted losses, the FBI said, have reached more than $100 million.
This type of fraud, made possible by the data-stealing Zeus, or Zbot trojan, is arguably the biggest information security news story of the year.
So here's the FBI saying Citi, one of the world's biggest banks, has lost tens of millions of dollars due to a breach.
Well, I wouldn't call ACH fraud a breach — it's more of an issue of a customer getting hacked than any bank — but I could see how something like this could get lost in translation.
So there you have it. This is nothing new.
Call it a scoop that wasn't.
Then again, maybe this was, in fact, a well-orchestrated Russian Business Network hack, and nobody is talking because the presidential administration wants to protect one of the financial services industry's most prized assets from any additional pounding.
Can you say data breach bailout?
Happy Holidays everyone.
Time and time again, we've seen information security regulations and guidelines delayed due to the burden they might impose on small businesses.
For example, state officials, on multiple occasions, have pushed back enforcement of the Massachusetts data security regulations due to small business complaints, and most recently, the Federal Trade Commission announced it would postpone enforcement of the Red Flags Rules until next summer.
The economy is partially to blame, and it is a decent justification. After all, many small- and mid-size businesses are having enough trouble simply surviving the worst recession in a half-century, never mind needing to concern themselves with additional costs.
But then come astounding alerts from the FBI that hackers have this year seriously turned their attention to smaller organizations as part of their slick, moneymaking operations. Bigger businesses may have the resources to better deal with the problem, and cybercrooks know this. So they now seem to be focusing more on the weakest link. And why not? Raiding the bank accounts of 10 mom-and-pop shops is likely just as valuable as compromising one big business. And probably much easier.
In today's threat landscape, it is incomprehensible for any size organization to consider implementing tougher security controls an unnecessary burden.
I've had discussions with experts about this. And they've told me that securing an organization does not require a great deal of investment. In fact, the basics -- updated anti-virus, patched machines, a comprehensive security policy, employee training, some web and email filtering -- should be enough to keep the bad guys out. The sad part is, many firms simply aren't doing the most fundamental stuff.
There is another side to this coin. Regulators must stiffen their enforcement agendas. Enough submitting to the concerns of business owners. It's 2009. There is no more slack that can be given. The losses are simply too large to bear any longer.
Thanksgiving is a holiday during which to cherish what we have. But the organized cybercriminal groups that always seem to be one step ahead of everyone else want to take all of that away, one phishing email or compromised PC at a time.
It's time the smaller firms fight back.
Joe Simitian, a Democratic state senator from California, is still scratching his head, some two weeks after Gov. Arnold Schwarzenegger vetoed SB-20, an update to the landmark 2003 Golden State breach notification bill, known as SB-1386.
They say that imitation is the highest form of flattery. Well, some 45 states have more or less copied California's pioneering move. And there was no reason to believe that a similar scenario wouldn't have played out again had the Governator signed SB-20 into law.
But, alas, it was not to be. The new legislation would have required that breach notification letters going to California residents also contain specifics around the data-loss incident, including the type of personal information exposed, a description of the incident, and advice on steps to take to protect oneself from identity theft. The law also would have mandated that organizations that suffer a breach affecting 500 or more people submit a copy of the alert letter to the state attorney general's office.
“It was one of the most surprising vetoes I've gotten in nine years in the legislature,” Simitian told ApparelNews.net. “There were no amendments from the business community. There was no cost to the state.”
But Schwarzenegger, known for his large army of business allies, argued that the additional information that corporations would have been required to provide would have proved an additional burden to them, while not really helping consumers.
Simitian isn't the only one reacting with displeasure. From the Consumer Federation of California:
Governor Schwarzenegger's final verdict on a host of critical consumer protection bills this past weekend left consumer advocates disappointed. Of the 14 bills identified by the Consumer Federation of California (CFC) as most important, in only six instances did the governor take the side of the consumer.
While acknowledging that the governor signed several consumer protection laws, Richard Holober, executive director of the Consumer Federation of California stated: “We are disappointed that the governor sided with big business interests and against consumers on the majority of bills that reached his desk. The governor turned a deaf ear to California consumers on key food safety, automobile insurance and financial privacy proposals."
I also must respectfully disagree with the governor. How does he know the additional details won't help consumers? With data breaches becoming so regular, I would think consumers now demand more details, if for no other reason than to discern between incidents.
And I'm not so sure that I can empathize with businesses. While the law may require organizations to do some additional work, I would argue that it is work that should be done anyway. After all, businesses must learn from their mistakes. Isn't the best way to do that by understanding the entire scope of an incident?
Simitian is pledging that, pardon the metaphor, he'll be back with this bill in next year's session.
And at least not all of Schwarzenegger's legislative decisions are bad ones.
The security of online banking is being tested like it's never been tested before. A number of recent incidents have made the news in which mostly small businesses have lost tens of thousands of dollars to overseas cybercrooks.
Hats off to The Washington Post's Brian Krebs for breaking most of these stories and getting the victims on the phone to discuss exactly what happened.
As Krebs describes, many of the scenarios play out in similar fashion. A targeted, socially engineered email arrives at a business or other organization, such as a school district. A gullible employee opens it and installs a pernicious, difficult-to-detect trojan, such as Zeus or Clampi, which sits quietly on the infected desktop until that employee visits the company's online banking site. At this point, the malware lifts the username and password and sends them back to the attacker, who quickly wires money out of the victim's account to a "money mule" -- and the rest is pretty much history.
What makes these attacks interesting is that apparently such technologies as tokens are not helping much. The attackers have created a slick scheme so that when the user visits the bank site, he or she is greeted with a fake login screen. Not sensing the page is a fake, the victim will give up his or her username and password (and one-time token or other second-factor, if applicable). The crooks will capture these details in real time and enter them into the real bank page, allowing them to transfer cash before the victim can even bat an eyelid.
It sounds as if it is time for end-users and banks to shift some of their existing habits.
They may want to consider out-of-band authentication -- meaning get that second factor off the computer that the hijacker already has compromised. Technologies such as PhoneFactor's phone-based, tokenless authentication system may answer the call for additional security, no pun intended.
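The out-of-band idea can be sketched in a few lines. This is a hypothetical illustration, not PhoneFactor's actual product: the one-time code travels over a second channel (a stand-in for an automated phone call here), so a trojan on the compromised PC never sees it in advance.

```python
import hmac
import secrets

def issue_code() -> str:
    """Generate a short one-time code to be read out over the phone."""
    return f"{secrets.randbelow(1_000_000):06d}"

def deliver_via_phone(number: str, code: str) -> None:
    # Stand-in for the out-of-band channel (e.g., an automated voice call).
    print(f"Calling {number}: your login code is {code}")

def verify(expected: str, entered: str) -> bool:
    # Constant-time comparison avoids leaking the code via timing.
    return hmac.compare_digest(expected, entered)

code = issue_code()
deliver_via_phone("555-0100", code)
```

Even if the fake login page captures this code, it is single-use and short-lived, which narrows the attacker's window considerably.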
Banks, meanwhile, should look into additional fraud detection capabilities. I recently got briefed by ArcSight, which has launched a new security information and event management solution specifically for financial institutions.
And, it might be wise to revisit such ideas as single-site browsers, in which the user can log in to his or her bank only through a web browser that sits as an application on the desktop. You can navigate all you want within one particular site -- say, Bank of America -- but you won't be able to get anywhere else.
Clearly, better front- and back-end controls are needed.
But as Krebs writes, perhaps banks don't need to care.
Businesses do not enjoy the same legal protections when banking online as consumers do. Consumers typically have up to 60 days from the receipt of a monthly statement to dispute any unauthorized charges.
In contrast, companies that bank online are regulated under the Uniform Commercial Code, which holds that commercial banking customers have roughly two business days to spot and dispute unauthorized activity if they want to hold out any hope of recovering unauthorized transfers from their accounts.
Banks may simply assume the risk, betting that the business customer won't immediately spot the fraudulent transaction, thus buying the banks time and saving them the cost of recouping losses.
Of course, it all goes back to end-user awareness. Trojans don't magically appear on victim machines. Organizations need to do a better job of patching for client-side vulnerabilities -- they're nowhere close, right now -- and in training employees to not open (or act on) emails that look suspicious.
More to come, surely, with this story.
As we gear up for the 20th anniversary edition of SC Magazine, set to drop in November, I've been forced to get pretty nostalgic about the security industry.
Considering I joined the staff here in January 2006 -- and the extent of my IT security knowledge prior to that was the Melissa worm -- I don't have a lot of memories from which to draw.
In fact, I still can't believe that SC is turning 20. I would have loved to see that inaugural 1989 edition. Hopefully, we still have it lying around somewhere, but considering the publication took shape in the UK, under different ownership, I'm not so sure that gem will ever be found.
But as our staff brainstorms ideas for this momentous occasion, we, of course, plan to look at how the threat landscape has changed. Clearly, quite a bit. Compliance demands, the rise of the CSO, botnets, targeted malware.
I don't need to rehash how professional and sophisticated the cybercriminal underground has gotten compared to just a few years ago.
Yet, there's also so much that remains the same. And I think it's important to show that.
Spam immediately comes to mind. But so does the biggest security story of the last couple of weeks: the Twitter distributed denial-of-service attacks. DDoS attacks have been happening for years -- an assault on the Department of Justice website in 1996 was how former OMB director Karen Evans got her first taste of cybercrime.
It was funny seeing some of the more mainstream outlets last week write the obligatory sidebar about what a DDoS is. They could've just as easily pulled from the archives. Not much was different about this attack -- other than the target. (If anything, let this be a wake-up call to some of these social networking sites that security must be a priority).
So, in the end, not much has changed within this space. Maybe that's why security pros can get pretty frustrated with their jobs -- they're always fighting the same fires. And now with more boxes to check than ever before.
I am just back from two weeks on jury duty. The hours were good, lunch in Chinatown was a treat, and I was heartened by the legal process. However, as someone in the security field, one element of the experience stood out for me.
While the security guards screening everyone entering the municipal building were friendlier than those at airports, the procedure was the same. We had to send our bags through an X-ray machine while we walked through a metal detector. So, they've got the physical security part covered.
However, once inside the building, security concerns seem to have been abandoned. Virtual security, that is.
A few times the court officer who shuffled us around requested that cell phones be turned off when we entered the court room. Makes sense. But, while that prevented interruptions from incoming calls, it didn't stop my fellow citizens from taking the devices out to make use of their 3G and Wi-Fi connections and web and text communication options.
I was surprised to witness the use of laptops and smart phones, even during the voir dire process. My fellow jurors were permitted to text away even as lawyers were questioning the jury pool. The iPhones and BlackBerries came out even from the jury box during breaks in the trial presentations.
I'm not saying my fellow jurors were revealing details of the proceedings. Likely, they were scanning headlines and checking in with the office and with loved ones. But talk about the insider threat.
Was the integrity of the judicial process breached? Who knows. Perhaps I'm being overly cautious. But, obviously there's some call for a ruling here. On a higher profile case, I can imagine tweets being fed to media outlets, or details being shared for whatever reason.
Ban cell phones and laptops from the courtroom? Let's start, at least, by monitoring their use.
Ever since the economy went down the toilet, and President Obama took office, I've been doing a lot of thinking about infrastructure -- and how our country stinks at it compared to other parts of the world, namely Europe.
Our roads and bridges are cracking at the seams, our trains go too slow, our lights don't always stay on....I could go on and on in addressing the deficiencies.
Perhaps the reason for this is that we've poured too much money into the Iraq war -- what did that exactly solve, again?
Or maybe it's because Wall Street lured our best and brightest with promises of big paychecks, even heftier bonuses and an extravagant lifestyle. Instead of coming up with a cure for cancer or designing a superior air traffic control system, these grads took trading jobs with Goldman and Merrill and Bank of America.
That's at least what Tom Friedman suggested in this New York Times Op-Ed piece from late last year. In it, he argues that America needs a "makeover," and fast, if it is to thrive in the 21st century:
To top it off, we’ve fallen into a trend of diverting and rewarding the best of our collective I.Q. to people doing financial engineering rather than real engineering. These rocket scientists and engineers were designing complex financial instruments to make money out of money — rather than designing cars, phones, computers, teaching tools, Internet programs and medical equipment that could improve the lives and productivity of millions.
Which brings me to security. Specifically, payment security, and why we need an infrastructure overhaul.
The Payment Card Industry Data Security Standard (PCI DSS) does a baseline job of requiring that merchants get better at securing cardholder data. But breaches -- monster breaches, actually -- still happen on a regular basis, and many people are having their data fraudulently used by cybercrooks.
In the end, as Gartner analyst Avivah Litan told me today in a conversation, merchants aren't -- and will never be -- in the business of security. That's why to truly push back the sophisticated cybercriminal element, the payment system must be "fundamentally upgraded," Litan said.
I agree. Technologies such as Chip and PIN, tokenization and end-to-end encryption are ways to take much of the burden out of the hands of merchants -- who, let's agree, aren't exactly the best data gatekeepers. Fraud would go down.
Chip and PIN, specifically, involves embedding cards with a customized chip that is authenticated when the customer enters a PIN. In the UK, it has resulted in a dramatic decline in fraud rates for card-present transactions.
Bob Carr, CEO of Heartland Payment Systems, which suffered the worst reported data breach of all time, is trying to do something similar. He said PCI is too human-intensive, so why not incorporate a technology across the payment chain that would mask the data at its source? His idea is end-to-end encryption.
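The "mask the data at its source" idea can be illustrated with tokenization, one of the technologies mentioned above. This is a hedged sketch only -- the key name and token format are invented for illustration, and a real deployment would keep the key in the processor's hardware, never at the merchant:

```python
import hmac
import hashlib

# Hypothetical processor-held secret; merchants never see it.
SECRET_KEY = b"processor-held-secret"

def tokenize(pan: str) -> str:
    """Replace the card number (PAN) with an opaque token at capture time."""
    digest = hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()
    # Keep the last four digits for receipts; everything else is opaque.
    return f"tok_{digest[:16]}_{pan[-4:]}"

token = tokenize("4111111111111111")
```

Downstream merchant systems store only the token, so a breach of those systems yields nothing a fraudster can spend.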
Of course, there's cost. But merchants have to now accept the fact that security is part of their business objective. It's not going away.
(And just think, maybe a whiz kid who would've, before the economy tanked, opted for a hedge fund job will be the one who designs a way to affordably overhaul the payment infrastructure).
*You won't want to miss our September cover story, where we'll look at exactly what happened at Heartland, whether the PCI certification process needs a revamping and what companies need to do beyond PCI.
News this week that Juniper Networks had pulled Barnaby Jack's planned Black Hat presentation and demo on ATM software vulnerabilities was met with dismay by the security community.
Is anyone else tired of this already? It seems not a year passes without a researcher being threatened with a lawsuit over plans to expose flaws in a particular technology. (This one probably struck most people harder than others because Jack actually planned to wheel an ATM on stage and make it spew out twenties).
I know that if the craps table had been mean to me the night before -- everyone else always seems to have the luck -- I would've been running for the cash and worried about getting quotes later.
All kidding aside, I just wish the courts would sort out this "responsible disclosure" debate already so we wouldn't have these same issues year after year. Wouldn't it be easier if, say, there were a Nevada law that required researchers to give affected vendors X number of days' notice before presenting flaw findings? And if the vendor didn't have the problem fixed by then, it's game on?
Because, as it stands now, it sounds as if companies such as Juniper, where Jack works, immediately cave to any semblance of resistance from the affected technology manufacturer.
ISS, IOActive, they've all done it in recent years.
Researcher Alexander Sotirov suggests that this epidemic of nixed presentations likely can be blamed on researchers' overly sensitive employers. He tweeted on Tuesday:
Barnaby should quit Juniper and join me in being an independent consultant. The corporate environment stifles interesting security research.
For me, the right answer is telling these software and hardware makers to build their products securely from the start, so smart researchers like Jack can't figure out a way to exploit them.
At the minimum, vendors should get their act together to issue a patch in time for the researcher to present his or her findings. That's the least they can do for someone who likely saved them a fortune before the bad guys figured out the security hole.
News late last week that Jeff Moss was appointed as one of 16 fresh faces to the U.S. Department of Homeland Security Advisory Council didn't quite draw the same amount of attention as President Obama's cybersecurity speech did a few days earlier.
But it should have.
You see, Jeff Moss is a hacker. He still is widely known by his online alias Dark Tangent.
A hacker being named to a government advisory role? It can't be.
Look how far we've come.
To put this in some perspective, the HSAC is chaired by a judge and a senator. Its member list is undeniably blue blooded, riddled with titles such as CEO, president, partner, governor, trustee, mayor.
Moss is a refreshing addition.
Granted, Moss is no longer on the side of the fence that could land him in jail. Actually, that's why he gave up the trade after high school. But as the founder of the Black Hat and DEFCON conferences -- arguably the biggest hacker events during the year -- he clearly still considers himself very much a part of the security research community, which quite often blurs the line between the lawful and the questionable.
With that said, Moss' representation on the council serves as an eye-opening moment for the federal government. I liken it to placing a former mobster on an anti-racketeering board. Moss is very smart; he can offer perspective that few others can.
Our nation's leaders finally understand that fighting cybercrime requires the cooperation of everybody -- even somebody who formerly hacked phone systems so he could make free international calls.
Moss will be able to draw from his rich experience as a hacker and call on his many interactions with both the good guys and, I'm sure, the bad guys.
Of course, that's not to say that Moss can't also lend some perspective as a business leader. He did start Black Hat and DEFCON from scratch, successfully selling the former to CMP Media in 2005. Moss also has held roles at Ernst & Young and Secure Computing -- so he surely knows a thing or two about wearing a tie to the board room.
Apparently, the DHS isn't only looking to the private sector for advisory help. The Pentagon also is leveraging America's IT security gene pool to recruit "hacker soldiers," who will help the government prepare for the next generation of war. The kind that isn't fought on the deserts of Iraq or Afghanistan.
I see these developments as two great positives.
Experience ultimately can save our nation's cyberinfrastructure. No more political posturing.
First it was Microsoft, then Oracle, then Cisco, and now Adobe.
The San Jose, Calif.-based maker of the ubiquitous Acrobat and Reader software is the latest vendor to announce a strategy for dealing with vulnerabilities. Adobe said this week that it plans to release quarterly fixes, joining a number of other high-profile players that have decided to make their security patches available on a scheduled basis to make life easier for everyone.
In addition, Adobe said it will begin placing increased efforts on hardening its code (to prevent vulnerabilities wherever possible) and distributing pertinent information to security professionals (if a flaw can't be avoided).
This undertaking by Adobe was critical, considering the company was getting some serious bad press within the blogosphere after it took a while to patch a critical zero-day early this year. Some experts -- and rightfully so -- asked why organizations have decided to make Reader their de facto standard, when other, seemingly more secure (or at least less targeted) PDF viewers exist.
Adobe recognized the possibility of losing market share over this - and responded.
While we're on the subject of major software makers, when is Apple going to get its act together? My own issues aside -- Apple is notoriously poor at responding to press calls -- the Cupertino computing giant must start being more transparent with its security efforts.
As it stands now, Apple gives little information about issues affecting its Mac OS X platform, and users typically are caught off guard when patches are released. This has incensed a number of very smart security researchers. It even prompted one, Landon Fuller, to publish this week an (albeit benign) proof-of-concept for a Sun Java bug that was fixed months earlier but still was present in the version of Java that ships with Mac OS X. Fuller, a former Apple engineer himself, said the only way to get Apple to act is by demonstrating a flaw's severity.
Apple, we know your box is not nearly as targeted as Windows. Maybe it's because of more secure code. Maybe it's because you have a smaller market share. Heck, maybe it's because a lot of hackers like the iPhone and feel bad trying to intrude on your IP.
But, even so, even if one person in the world uses your platform, it's your duty to be as responsive about security issues as you possibly can be.
And right now, you're failing at it. (And not returning my phone calls to boot).
If there was one buzzword during the recent RSA Conference that permeated across the session halls at the Moscone Center (and likely even reached the bar at the W), it was information sharing.
The concept is pretty simple, really. For a discipline as young but profoundly complicated as information security to succeed, communication is key. Because, in the end, information systems all touch each other and, really, we're all in this together.
(Insert image of IT admins sitting around a campfire singing "Kumbaya.")
Of course, getting people into a room and talking about breaches they've had or threats they've seen is inherently complex because of things like fear of punishment, competition, and classified documents.
But mostly everyone has recognized that information sharing is an absolute must if America is to keep up with the sophistication of hackers, some of whom are state-sponsored and thereby threatening the very foundation of the country as a whole.
Perhaps in no other industry is protecting the networks as fundamentally important to the nation's day-to-day living as in the electric grid. But as we know, this sector is far from immune to the wrath of cybercriminals, as SCADA control systems are now being built on top of traditional operating systems, such as Windows or Linux, and contain IP-based components.
In other words, the networks tasked with keeping the lights on are susceptible to the same types of attacks that can impact an average business.
One organization has been quietly meeting over the last several years to make sure these critical systems stay protected. Now, it's ready to let everyone know about its work.
The Energy Sector Security Consortium, or EnergySec, is made up of about 75 of the power sector's 1,800 asset owners - but now they are trying to "scale" out and reach a wide audience. (Scalability: another RSA buzzword, by the way).
The goal of the organization is to, you guessed it, share information. But here's why they may succeed at it.
According to Chris Jager, the group's chairman, and Seth Bromberger, its director, the energy sector doesn't compete - therefore, asset owners are more likely to collaborate. And with no fear of sanctions, they may be more willing to volunteer the type of information that could prevent another power company from suffering the same type of attack.
Jager says the energy industry has had a tough time responding to today's security threats because many of the publicized events have been based on unnamed sources and classified information. EnergySec, however, wants its members to feel comfortable detailing the specifics of a breach so that the group can better arm all of its members with information.
"We're not interested in putting any names in lights," he says. "It's more like these types of incidents have occurred and this is how you should mitigate your exposure."
And EnergySec can also be of value to the government, providing it with real-time and historical data that it can use to "validate or nullify some of the assertions they're making concerning threats and vulnerability," Jager says.
If there is any industry that needs some cold, hard facts about hacker attempts, it's energy. And it sounds as if EnergySec is going to help sound the alarm on an increasingly worrying situation.
"There are people poking at these networks," Jager says. "That's real."
I just got finished reading a lengthy article about Facebook in New York Magazine - easily my favorite magazine in the whole world, well, aside from SC Magazine - and, like I figured, it failed to touch on any of the information security risks of the popular social-networking site.
That's not to say the story overlooked the privacy ramifications of the site. In fact, much of the article revolved around the inarguable fact that Mark Zuckerberg and his cronies are amassing huge amounts of data on you - you gotta be on Facebook, right? - and tens of millions of your friends all over the world (even if they promise to protect it while you're here and get rid of it if you decide to leave).
But I'm not here to debate this point, although it seems as if Facebook is making a good faith effort to satiate privacy advocates. The problem with Facebook, and other burgeoning social networking sites like Twitter, is that we get all caught up in this data privacy issue and never talk much about the insecurity of web applications - and how that can be a really bad thing.
We saw it over the weekend, up close and personal, when an attention-seeking teenager from Brooklyn (aren't they all, really?) devised a cross-site scripting worm that was able to cut across Twitter and infect -- albeit benignly -- a vast number of profiles.
But what if this attack had been profit-driven? What if the worm had spread links to a malicious website? What if the code had asked users to divulge personal information?
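The core defense against this class of worm is unglamorous: escape user-supplied content before rendering it. A minimal sketch (the function and payload names are invented for illustration, not Twitter's actual code):

```python
import html

def render_profile(display_name: str) -> str:
    """Render a user's display name inside a page fragment, safely escaped."""
    return f"<div class='profile'>{html.escape(display_name)}</div>"

# A worm-style payload arrives as inert text, not executable script.
payload = "<script>spreadWorm()</script>"
print(render_profile(payload))
```

Because the injected tag is emitted as `&lt;script&gt;...`, the browser displays it rather than executing it, and a self-propagating XSS worm never gets its foothold.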
Sites such as Facebook and Twitter have a lot on their minds, mainly figuring out how to monetize their insane popularity. (It's harder than it seems; nobody wants to pay for anything on the internet.)
But amid their revenue-generating boardroom meetings, they must stop for at least a few minutes to show users their commitment to code security and recognize their place as pioneers in the web's revolution. Pretty soon, everyone is going to be doing something at least somewhat similar to Facebook and Twitter.
As a blog post on the Gnucitizen think tank said soon after the Twitter attack:
There is no merit in discussing how this has been done and for what purposes but this incident is yet another proof that the attack landscape is rapidly changing and moving towards web enabled infrastructures and the client-side. Soon or later almost every website will be equipped with social capabilities (google’s own opensocial and friendconnect platforms) and than simple persistent XSS attacks will turn into quite nasty problems.
John Pescatore of Gartner was a tad more terse in his "Twelve Word Tuesday" blog post:
Malware just taught Twitter the lesson Microsoft learned in 2001: security matters.
We're looking up to you Facebook, Twitter, MySpace, etc. Please don't let us down.
*** The SC Magazine editorial team will be out in San Francisco next week for the annual SC Magazine U.S. Awards Gala and, of course, the RSA Conference. A quick scan of the conference agenda reveals some potentially meaty sessions. I've noticed many are going to be hitting on either cloud computing security, organized crime or government. I think that'll end up being the theme of the show.
Follow us on Twitter (SCMagazine) and please frequently visit the website (www.scmagazineus.com) for updated news, blogs, videos, etc.
Rest your livers! We'll see you out there.
Well, as most rational-minded people predicted, April 1 came and went with barely a whimper (as far as we know) from our pal Conficker.
I have mixed feelings about this worm.
The positive side is that, because of mainstream news coverage such as the 60 Minutes segment last Sunday, Conficker's presence undoubtedly raised awareness of the dangers of internet threats. In the 3 1/2 years that I have been writing for SC Magazine, this is the first time that my family has called me with a computer security question. (My mom called Tuesday morning, my older brother the night before. Both were convinced, as Lesley Stahl may or may not have wanted them to believe, that the sky was falling).
The negative side is that threats like this, much like media-hyped worms of the past, are the only things that get the average end-user to pay any attention to security at all. They may assume that the only times they need to be careful are on these "D-Days," when in fact they are much more likely to have their identity stolen on an idle Tuesday in November.
These days, in the cybercrime world, it's all about the money. More so, though, it's all about flying under the radar and not raising suspicion. That's why if Conficker ever causes a big problem, it'll be when nobody is expecting it.
That's why people should be more concerned about well-groomed social engineering attacks trying to get them to enter their credit card information, buy some fake anti-virus, or click on some sketchy attachment.
Just yesterday, Microsoft announced a dangerous, zero-day PowerPoint vulnerability that is being actively exploited.
Funny, my mom or brother never called me to ask about it. But I bet you they wouldn't think twice about clicking.
I have a pretty good feeling that on April 1, the joke will be on us.
"Us" being the media, which has flocked to news that on Wednesday, Conficker's code is programmed to contact some 50,000 websites for more instruction -- which conceivably could give the millions of compromised machines the power to do almost anything. The major news outlets are fully on board with this story, because, after all, who doesn't love to report on a doomsday scenario?
(SC Magazine is planning its "What will happen?!?!" expose next week).
The possibilities are real, of course, if the botmaster really got serious about what is under his (her?) control. A massive DDoS attack could be launched. A mega spam campaign could be unleashed. Historic amounts of confidential data could be hijacked.
Or, perhaps, searchable and sellable data -- as one researcher told The New York Times:
What if Conficker is intended to give the computer underworld the ability to search for data on all the infected computers around the globe and then sell the answers?
While I'm one to typically fall for the hype -- or at least Armageddon prognostications -- this one I'm not buying. I'm going with the prediction of SophosLabs Global Director Mark Harris who told me today that he thinks next Wednesday brings nothing more than infected machines getting an updated version of the worm.
That's what I'm betting on.
Then again, I'm not really the best gambler. I had UCLA going to the Final Four. Maybe we should ask this guy what he thinks.
Was the campaign for Sen. Norm Coleman, R-Minn., serious when it tried to throw around a bunch of fancy security technology jargon and emotion-provoking adjectives in the wake of its data breach revelation?
Based on a statement from the campaign and the senator himself (who reportedly used words like "chilling" and "frightening" to describe the attacks), you might think the campaign was the target of some sophisticated hacker attack. And, I guess that's believable, considering Coleman is locked in a nasty legal battle with Al Franken over who won November's election.
(Franken is all but assured the seat, once the mess is sorted out).
But this data-loss incident was anything but "chilling" or "frightening," nor was it the result of a breached firewall or any other complicated compromise, as the campaign suggested in a statement. Instead, an IT consultant randomly stumbled upon a spreadsheet -- sitting publicly available on the web -- containing Coleman donors' credit card records.
From the Minneapolis Star-Tribune:
One of the first to discover the exposed database was Adria Richards, a Minneapolis freelance technical consultant. Richards checked the Coleman site on the night of Jan. 28 after getting reports that heavy traffic had crashed it; less than two minutes of poking with her browser put her into the database, she said. "A third-grader could have done it," she said.
Third-graders don't know how to breach firewalls, but they certainly know how to type a URL into an address bar and find a document that shouldn't be publicly viewable on the web.
Shame on you, Coleman campaign for trying to spin this like some big-bad hacker infiltrated your database.
And while we're at it, the campaign should also be sorry for not alerting the victims sooner.
Maybe they were doing a recount, hoping the number wasn't really 4,700.
The SC Magazine team was not in Washington, D.C. for the Black Hat show, but we certainly didn't want the great research revelations and other talks that came out of the hacker conference to go uncovered.
Here are five (abbreviated) highlights, in no particular order, that we put together based on news reports of the event:
- Dan Kaminsky - The researcher who made all the news at last year's Black Hat Vegas show over the big DNS flaw he discovered (by accident) stumped for the first time for DNSSEC, an Internet Engineering Task Force set of specifications that secures communication between DNS name servers and clients. Kaminsky had never spoken favorably about the implementation, which he said is riddled with challenges, until now. He said we have to find a way to make DNSSEC deployments - now a requirement for all federal agencies - easier.
- Michael Sutton - The vice president of research at online web startup Zscaler showed how Google Gears, a browser plugin that allows web apps to work offline, when used on a site vulnerable to cross-site scripting, can be exploited by hackers to steal sensitive, locally stored data. He described the attack scenario (better than I certainly can) on his company blog.
- Nguyen Minh Duc - The researcher at a Vietnam-based security firm demonstrated how hackers can fool the facial-recognition technologies of Lenovo, Toshiba and Asus, allowing them access to computers. The vulnerability exists because the solutions can't tell a real face from a digitally mastered one.
- Paul Kurtz - The current executive director of SAFECode and a member of the Obama transition team delivered a keynote that warned audience members that the government has a poor disaster recovery plan in place in case of a major cyberattack. Likening the situation to Hurricane Katrina, Kurtz said no agencies are prepared to take an immediate lead role. To respond to a massive assault, the United States should consider militarizing cyberspace, he said.
- "Moxie Marlinspike" - The researcher detailed the use of a "SSLstrip" app that enables the launch of a man-in-the-middle attack that will bring users who try to access an "https" version of a website to the unencrypted "http" version. The only way users could tell anything is up is if they look in the browser, but few would notice the URL switched to "http."
The latest trend in cybercrime appears to be trying to crack into the websites belonging to companies that are in the business of stopping cybercrime.
Two weekends ago, a Romanian hacker going by the handle Unu first blogged about using a SQL injection attack to gain access to Kaspersky Lab's U.S. support website. Then, he chronicled a successful infiltration of F-Secure and BitDefender.
In none of the cases was any sensitive data exposed. It's difficult to say whether the hacker stopped short because he merely was trying to demonstrate the insecurity of these sites -- or because he simply was not sophisticated enough.
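For readers unfamiliar with the technique, the vulnerable pattern is a query built by pasting user input straight into SQL. A minimal sketch (hypothetical table and input, not taken from any of the affected sites) contrasts it with the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: user input is spliced directly into the SQL string,
# so the injected OR clause makes the WHERE match every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: a bound parameter is treated as a literal value, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(leaked)  # [('s3cret',)] -- the injection dumps the row
print(safe)    # [] -- the odd "name" simply matches nothing
```

Parameterized queries are the standard defense precisely because they make the database engine, not string concatenation, responsible for separating code from data.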
Either way, his point was well taken. Because of the amount of code used to build today's flashy and information-filled websites, pages are going to be insecure. And while Kaspersky, for good reason, expressed shame and disappointment over the hack, situations like this are going to happen.
After all, if a determined hacker wants to find a way in, chances are, he will.
I was speaking recently to the owner of a security consulting firm who said he was absolutely sure that, sooner rather than later, hackers were going to compromise his site. Just to prove they could do it. He could run the latest and greatest to stop them, but an attack was inevitable.
So how does he sleep at night, knowing the phone might ring at 3 a.m. (sorry, Hillary), telling him that his site was illegally accessed?
By doing the most important thing one can do: Mitigating the threat by limiting the amount of sensitive data that resides in database servers serving public-facing websites.
This should be a best practice that not only applies to SQL databases but across enterprise networks. If you don't need it, don't keep it.
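What that might look like in practice (field names here are purely illustrative): strip each record down to what the public-facing application actually needs before it ever reaches the web-tier database.

```python
# Sketch of "if you don't need it, don't keep it": the web-facing
# database gets a minimized copy of each record, not the full one.
# Field names are hypothetical, not from any real system.

PUBLIC_FIELDS = {"username", "email"}

def minimize(record: dict) -> dict:
    """Keep only the fields the public site needs; a card number, if
    present, is reduced to its last four digits for display."""
    slim = {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
    if "card_number" in record:
        slim["card_last4"] = record["card_number"][-4:]
    return slim

full = {
    "username": "jdoe",
    "email": "jdoe@example.com",
    "ssn": "123-45-6789",
    "card_number": "4111111111111111",
}
print(minimize(full))
# {'username': 'jdoe', 'email': 'jdoe@example.com', 'card_last4': '1111'}
```

If the web tier is breached, the attacker gets email addresses and digit stubs, not Socials and full card numbers.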
The worst-case scenario, my source told me, was that the thieves would get some email addresses.
Sounds a lot better to me than names, Socials and credit card numbers.
When Google flagged the entire World Wide Web as malicious for about an hour on Saturday morning EST, I was fast asleep, nursing away the effects of the previous night's revelry.
But to the millions of users worldwide who were awake and who solely rely on Google to tell them exactly which sites they should be visiting on that crazy thing we call the internet, it must have come as a real scare.
I'm surprised all the screams of horror didn't wake me up. (Perhaps they weren't screaming at all and instead visited Dogpile.com and just pretended like it was 1999.)
Google later chalked the incident up to human error: the URL "/" was mistakenly checked in as a value to the blacklist file, and since "/" matches every URL, that seemingly simple oversight caused each and every search result to include the message: "This site may harm your computer."
Typically, that message is one of the very nice security features that Google offers its users, allowing them to avoid known dangerous sites that populate the search giant's URL blacklist.
But on this morning, even the most benign of sites was labeled as pure evil.
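Assuming, as Google's explanation suggests, that blacklist entries were matched as URL prefixes, the failure mode is easy to reproduce with a toy checker (a simplified model, not Google's actual matcher): every URL path begins with "/", so that single bad entry flags the entire web.

```python
from urllib.parse import urlparse

def harmful(url: str, blacklist: set) -> bool:
    """Toy model of a prefix blacklist applied to the URL path:
    a URL is flagged if its path starts with any listed entry."""
    path = urlparse(url).path or "/"
    return any(path.startswith(entry) for entry in blacklist)

ok_list = {"/badware/", "/exploit.html"}
assert not harmful("http://example.com/index.html", ok_list)

# One mistaken entry of "/" and every path on the web matches:
broken = ok_list | {"/"}
assert harmful("http://example.com/index.html", broken)
assert harmful("http://totally-benign.org/", broken)
```

A single character in a data file, multiplied by prefix matching, becomes a verdict against every site on the internet.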
Now, we'll forgive Google. The company has revolutionized the act of scouring the internet and, for that, it deserves a free pass. The problem took about an hour to fix and probably would have been resolved much sooner had it not occurred during the wee hours of the Mountain View, Calif. weekend.
But the tale is cautionary, especially as we learn more about the rumored (and greatly anticipated) GDrive, a controversial online storage system that Google may unveil this year. The premise is that all users would need to access their files is the internet because Google would be storing all the data "in the cloud" for them. Can you say, good night hard drive?
Saturday morning's incident makes me worry about relying too heavily on any one provider for all my computing needs. It also underscores the need to have a Dogpile-like backup plan in place.
After all, if Google can label the entire internet as bad, who says it can't lose my data for an hour or two, or longer?
If it does, hopefully I'll be sleeping. But when I wake up, I'll be reassured to know that I also had saved my data (or at least most of it) somewhere else.
The Payment Card Industry Data Security Standard (PCI DSS) took a severe blow this week when leading payment processor Heartland Payment Systems announced it had been breached.
That's because the Princeton, N.J. firm was certified as PCI DSS compliant, according to Visa. (That status is now, not surprisingly, "under review.")
But whoever these intruders were, they got away with potentially tens of millions of credit and debit card numbers being processed by Heartland -- and they were able to do it without causing a stir.
Many experts this week are surmising that the cybercrooks took advantage of a vector that PCI doesn't address: data traversing private networks. In the case of Heartland, it appears the vandals were able to insert data-sniffing trojans on unencrypted private lines, which enabled them to siphon the credit card numbers in real time.
The PCI council, charged with administering the standard, will argue that other controls required under the guidelines can prevent this type of attack. But perhaps it's time to revisit the need to require encryption of data on all networks, both public and private.
Meanwhile, Mike Rothman, a former analyst, argues that the council might want to also give a closer look to the monitoring requirements, which, in his opinion, aren't strict enough:
If you are not monitoring configuration, asset, performance, and flow data in addition to logs, you are exposed.
Rothman and others are becoming increasingly critical of PCI because Heartland marks the second high-profile breach in less than a year in which a PCI-compliant company suffered a massive hack. Supermarket chain Hannaford was the other.
The state of Massachusetts, in a report that reviewed the number of breaches that affected state residents in recent months, questioned the effectiveness of mandates such as PCI.
Hannaford had been certified as PCI compliant in 2007 and in February 2008, at the very time, we are told, that the malware interception was taking place! While reasonably up-to-date malware protection might not have been effective against the new and sophisticated malware used in the Hannaford case, encryption of that data would probably have rendered its interception harmless.
And now for the zinger in the report:
The Hannaford incident suggests that the Payment Card Industry Data Security Standards are not an effective standard in light of the need for encryption.
Harsh, for sure. But perhaps not too out of line. Clearly, PCI presents comprehensive and prescriptive guidelines that have been instrumental in forcing companies not in the business of protecting data -- retailers, processors, etc. -- to think about the need to safeguard this stuff. But perhaps it's time for a more robust overhaul.
Or -- and this is more likely -- maybe it's time for organizations to truly grasp the concept that compliance does not equal security. It's a common refrain sung by vendors and analysts alike, but it's true. Compliance is merely a snapshot in time. So if Heartland was deemed compliant last April, as it was, the company could've been way out of compliance by the time the hackers got in. Or maybe even as soon as the next day.
The real worry is that, given the sophistication of the criminal community, 2009 is going to bring a lot of Heartlands.
And if records are made to be broken, TJX has no shot of keeping its title of largest reported data breach.
When President-elect Barack Obama is sworn in today as the 44th commander-in-chief, will his BlackBerry be bolted to his belt, as we have become so accustomed to seeing?
Well - maybe that's an unfair question, considering the formality of the inauguration proceedings. But right after he pledges to preserve, protect and defend the Constitution, so help him God, will he reach for that addictive device and immediately start firing off emails?
From Barack Obama (12:26 p.m.) Hey did you just check me out on TV?
Perhaps. And perhaps he'll be allowed to do so, as he recently told CNN.
There has been much debate over whether Obama would have to relinquish control of his "third child" come Inauguration Day, mainly because of security concerns.
And there certainly are real concerns, as we have expounded upon in the pages of SC Magazine. We learned, though, that the devices contain some robust encryption and malicious software- and spam-fighting capabilities.
Still, Obama's fight surely got a little more complicated last week when BlackBerry maker Research In Motion issued a security advisory for two enterprise server vulnerabilities that could lead to remote code execution.
Obama is the first president to truly embrace the power of the web. As Arianna Huffington of The Huffington Post argued at a conference in California not long ago, without the web as a vehicle to raise money and get the word out on his campaign, he never would be taking that oath of office today.
He also is the first president to make cybersecurity a real priority.
So if this former senator is going to preach change on the steps of the Capitol today, we must allow for that change to happen. And that means giving him access to his BlackBerry.
And in a way, wouldn't a mandate forcing Obama to give up the device be a slap in the face to the security industry, the very industry we write about day in and day out? The bad guys always are going to be out there - that doesn't mean we have to apply draconian measures to stop them. After 9/11, did we close our borders?
Then again, the president is the president. So if 2008 Obama isn't allowed to keep his prized toy, he's going to have to adjust.
He apparently beat back his cigarette addiction. But believe it or not, quitting technology can be an entirely different story.
There's a lot of bad news circulating these days around cyberthreats - and I'll spare you the somber recount, especially on a Friday.
So, instead, let's focus on where some noticeable improvements have been made.
This week, the Identity Theft Resource Center put out its 2008 breach report, which showed that data-loss incidents soared by 47 percent last year.
I know what you're thinking: I was supposed to give you some good news. Well, here it is. The government/military sector suffered 110 breaches -- categorized as either insider theft, hacking, data in motion, accidental exposure or subcontractor related. That represented 16.8 percent of the total.
In 2007, this vertical was responsible for 24.6 percent of all breaches. A year prior, it was 30 percent.
All told, the percentage of breaches that the government/military sector suffered in 2008 was down 44 percent from 2006.
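To be clear about the arithmetic: the 44 percent figure is the relative drop in the sector's share of breaches, from 30 percent of all incidents in 2006 to 16.8 percent in 2008.

```python
# Relative decline in the government/military share of breaches:
share_2006 = 30.0   # percent of all breaches in 2006
share_2008 = 16.8   # percent of all breaches in 2008

decline = (share_2006 - share_2008) / share_2006 * 100
print(round(decline))  # 44
```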
If you recall, 2006 was a particularly embarrassing year for government agencies and military branches. It seemed as if every week, I was writing about another lost laptop or exposed sensitive data. Of course, everything paled in comparison to the monster laptop breach that affected the Department of Veterans Affairs.
But once the hurricane went out to sea, the picture turned rosier. The federal Office of Management and Budget (OMB) was a big reason for the turnaround.
In a June 2006 memo, OMB ordered agencies to encrypt all sensitive data, in addition to requiring the implementation of two-factor authentication for remote users. Also, agencies must use the National Institute of Standards and Technology (NIST) security checklist as a baseline for their security practices.
In 2007, OMB issued a 22-page memo that directed federal agencies to, among other things, create a breach notification plan for the timely reporting and notification of data-loss incidents.
Feds also have been told to eliminate the unnecessary use of Social Security numbers.
And just a few months ago, the military announced it is banning the use of USB thumb drives.
This is not to mention the countless security education programs that surely took effect across government and military, amid the rash of data breaches.
So let's give credit where credit is due. And let's hope it keeps up. After all, there is no greater custodian of citizen data than the federal government.
One of the bigger announcements coming out of Macworld Expo in San Francisco today is a new pricing structure for iTunes: Beginning today, songs on iTunes will be offered in three tiers: 69 cents, 99 cents and $1.29. After years of holding firmly to a 99 cent price point – to the consternation of many record companies, which wanted a bigger share of the royalty pie – Apple has re-negotiated the terms of its licensing deals. It won’t be long before the Beatles catalog becomes available via download.
But for our purposes, the security angle comes in Apple's announcement that it is banishing DRM (digital rights management) from its music library. Apple is saying the new offerings, without DRM, provide “higher-quality 256 kbps AAC encoding for audio quality virtually indistinguishable from the original recordings.”
DRM, you may remember, was the culprit in one of the first rootkit cases to make its way into mainstream media. In 2005, Sony installed DRM protection on several of its CDs to prevent consumers from making multiple copies of the digital music files. But there was a more nefarious component. Two software technologies embedded on the discs - SunnComm's MediaMax and First4Internet's Extended Copy Protection – enabled Sony to gather information on customers listening to these CDs, and the software installed hidden files on users' computers that opened consumers to attacks from third parties.
Even though the CD packages were clearly marked with a DRM warning, consumers knew little about the technology, and so unwittingly infected their PCs.
Beyond the technical intrusion, the issue raised a number of ethical and privacy concerns. And the uproar, amplified by an angered online community, took Sony to task. The company's initial dismissive response only increased the volatility of the situation and managed to bring the issue into the spotlight. It was a PR nightmare for Sony, which eventually relented, attempted to fix the problem, settled court cases (with compensation for some victims), and ultimately, as had several other record labels, stopped using DRM on its CDs for the American market.
So, while it’s been a year since any major record label has placed DRM on its CDs, Apple’s announcement today – that music offered on iTunes will be DRM free – hopefully puts the issue to rest.
In his speech, Philip W. Schiller, Apple’s senior vice president of worldwide product marketing, filling in for Steve Jobs, said that Apple will offer eight million songs without DRM and add the store’s remaining two million songs by the end of the quarter.
It's been more than five years since California's pioneering SB-1386 -- which requires companies that lose customers' personal information to notify them -- took effect. Since then, about 45 states have followed suit.
But still no federal law. (To find out why, perhaps it would be wise to ask those five hold-out states why they haven't approved similar legislation).
It's not that Congress hasn't tried. Over the past few years, a number of bills have circulated the two houses. But none have found their way to the president.
When President-elect Obama takes office, there surely will be renewed optimism that such a law could get the green light. After all, the Illinois senator seems more interested in cybersecurity than President Bush - and he's receiving detailed guidance from the Commission on Cybersecurity for the 44th President.
But, corporations and consumer-rights advocates will continue to wrangle over what the threshold should be to report. And, remember, Congress will be busy. There's that whole worst-economic-climate-in-80-years thing to deal with.
I'm thinking we're going to have to wait until 2010. Of course, another TJX just may fast-track a federal data security bill right to the Oval Office.
One thing is for sure, though: Creating a nationwide law will standardize and, as a result, simplify the reporting process for companies that experience a breach. And as we all know, it's not "if" but "when" you'll be drafting that "We lost your Social Security number" letter to consumers.
I have a surefire way to gauge the state of the economy: Count how many holiday cards I receive in my office mailbox.
Two years ago, plenty. Last year, a whole lot. This year, not so much.
Most of the cards I receive here at the offices of Haymarket Media in New York come from PR agencies with whom I deal on pretty much a daily basis. This year, a majority are opting to send their warm wishes (A.K.A. - keep writing stories about our clients) to my inbox.
It's gotta be the economy. Why shell out 42 cents (and the cost of paper) to send a letter when you can do it for free over the internet?
But with all this Christmas goodwill comes a real risk: Some of these e-greeting cards are actually fakes, containing an embedded trojan or a link to a malicious site.
Now, that's not to say the rogue cards are coming from my PR contacts (although I was kind of - shall we say? - short with a few of them over the course of the year).
But there are lots of others out there looking to take advantage of our instinct to open a card. This is a threat worth paying attention to. And, as email security firm Commtouch will tell you, these socially engineered cards are looking more and more realistic.
Kind of makes me yearn for the good ol' days of greeting cards I could touch. But then, there's that whole recycling thing to worry about.
Each and every day, we write about the latest IT security news - and often our connection to the story ends right after we hit "Publish" in our CMS.
However, this week, the SC Magazine editorial team - as well as the hundreds of other employees of our publishing parent, Haymarket Media - are witnessing firsthand how potentially serious cyberthreats can be.
That's because so far this week, we have received two separate emails from IT, one warning about a virus outbreak believed to be emanating from Facebook and MySpace, the other about the wicked Internet Explorer zero-day.
As a result, IT has recommended users browse the web on Firefox only until Microsoft issues a patch. (Considering the extent of this exploit, the fix might come before next month's regularly scheduled security update).
OK, no big deal, I use Firefox anyway because I find it's more stable on my work PC.
But it was the other email that really hit home. IT has blocked access to Facebook and MySpace until our London offices contain the problem.
If you just heard a scream, it was me.
Now, one would think that because I write about this stuff, I might be more understanding of the defense strategies that must be applied to remediate malware occurrences. After all, I knew exactly what IT was referring to in those emails.
But nope, I'm in serious withdrawal. Need my Facebook. (To bosses reading this: I only log onto Facebook while eating lunch. I swear).
Oh, well. IT has assured me that access to the popular social-networking sites should be returned to the good graces of our whitelist in short order.
And I always have my web-enabled cell phone if the urge gets really overwhelming.
This had to tick off a lot of people: I read this week that convicted New Zealand bot herder Owen Thor Walker, 19, did not receive any jail time for his lead role in a major botnet operation that involved at least eight Americans.
Instead, a judge gave him a fine, despite Walker admitting to running a botnet that compromised upward of a million computers. (By comparison, Robert Alan Soloway, who was charged in a similar FBI investigation, received a 47-month prison sentence).
Authorities in New Zealand defended the judge's decision by saying:
"The worst thing that society could have done was put him in jail, where his mind would have been corrupted," Maarten Kleintjes, head of e-crime at the New Zealand Police, said during an interview on New Zealand's 60 Minutes show, according to an IDG News Service story.
While that may have been true, this type of mentality absolutely diminishes what law enforcement across the world is trying to do to stem the pervasiveness of botnets.
If cybercriminals know they'll get off the hook because they are too smart to go to jail, then -- I'll just take a wild stab at this one -- they're going to keep doing it until they get caught.
Now, by all accounts, Walker may be far more gifted than most crooks associated with botnets. And, according to the story, he's currently working on the right side of the law, with a software company.
But still, this certainly sends the wrong message and only works to deter what is needed: A cooperative effort among back-end providers, ISPs, enterprises, law enforcement and end-users to eliminate bots and all they're capable of, namely spam, DDoS attacks and information stealing.
If you do the crime, expect to do the time. Even if that means trading in your laptop for prison garb at the door.
** What is up with Apple's flip-flop on its support note that recommended Mac users install anti-virus software?
First, Cupertino says users should deploy AV, then the company removes the note, calling it "old and inaccurate."
My money is on this: Lots of media outlets picked up the story of Apple quietly encouraging users to install AV. That surprised the computing giant, which didn't want potential customers to start thinking that Macs weren't as safe as they have been made out to be.
So Apple, sensing a possible impact on its computer sales, decided the best way out of the problem was to remove the document and pretend like it was never there to begin with.
But with the sales of Macs rising and more malware writers taking notice, Apple will have to do something other than roll over and play dead the next time the conversation of AV comes up.
Something, soon, will have to give. Communication will be key.
*** All of us here at SC Magazine are counting down the minutes - literally, just check out the home page - until our inaugural, two-day SC World Congress kicks off next week at the Javits Convention Center in New York.
So far, the response has been great. Since this is our first event of this kind, there is certainly an air of anxiousness and tension, but considering our strong speaker list, we are confident the show will be a huge success.
It promises to be quite the event, with the goal of providing attendees with as much practical advice as they can carry out of the conference center doors.
If you can't join us, please follow along with the latest news, photos and videos at SCMagazineUS.com.
With a new presidential administration about to take office, many are hopeful that the “change” promised on the campaign trail will begin to take effect sooner rather than later.
When it comes to industry regulations and the variety of data breach laws on the books, some look to President-elect Obama and express confidence that he can garner the momentum to help bring some needed order to the disparate edicts, which regulate everything from patient health care records to financial data to retail customers’ credit card information.
The Obama platform has offered specific remedies to help the government and private industry become more efficient, including more automated data collection. But, some warn that it will likely take time for any meaningful legislation to make its way through Congress.
“With the current budget, it may or may not happen,” one vendor of compliance tools told SC yesterday. “In the early part of the administration, a reform bill is not likely to come out early,” he said.
But, as the stock market rally the past two days may show, the reaction to Obama’s competency in putting together an economic team portends positive results for future initiatives.
Even though he may be forbidden – for state security reasons – to use his BlackBerry, it’s comforting to know that the person in charge has an acute awareness of technology. We can pretty well assume he will be a champion and strong advocate for procedures affecting the transmission of data.
As well, President Obama is likely to show more concern than the previous administration for the affairs of the nation’s citizens, meaning that he will likely work to protect consumers from data fraud and enact stronger punishments for those responsible for data breaches.
In the January issue of SC Magazine, our reporter Angela Moscaritolo speaks with several experts on how an Obama presidency will affect the IT security field, referencing Obama's speech at Purdue University, where he pointed out that our country's system of information networks is the backbone of our economy.
We will also examine a brand new data breach law in Massachusetts, said to be the strictest in the nation. Will this become a model for federal legislation? Please check back; it's an ever-evolving stage.
The web, you see, is connectionless at bottom. I’m not referring to protocols, for those of you technically bent.
What I mean, in a non-engineering way, is that in the old days (say about the time of Alexander Graham Bell), to have your device connect to another person’s, you had to physically hook wires to it, generally by way of young women sitting at a wall of jack fields. That, by the way, led to a prediction that eventually we would run out of people to sit in central offices and shove plugs into jacks.
That notion evolved – I’m skipping forward rapidly – to massive computers in central offices doing the plug shoving (at least virtually). That era was called the circuit-switched era (I just coined an era!).
Then, of course, we entered the era of packet switching (skipping even more). In this era, the destination device is connected (virtually) not by wires and plugs, but by way of little packets that contain destination addresses. All these little packets find their own way to their destination. They are trusted to get there safely and without modification.
Which leads to my latest theory (file this under Harebrained, Latest): Packet switching causes the security problems inherent with the internet.
I know, I know -- nothing is that simple. But when you have a system that can be used to intercept, modify, or connive readily, you will find people who intercept, modify and connive. If you can anonymously change, or spoof, a few packets instead of running drugs, heisting banks, or doping horses, crime will pay.
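The spoofing point rests on a structural fact worth making concrete: in the basic packet model, the source address is just a field the sender fills in, and nothing in the model verifies it. A toy sketch (not a real network stack, obviously):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Bare-bones model of a datagram. In the basic packet-switched
    design, nothing authenticates the src field -- the receiver simply
    trusts whatever the sender wrote there."""
    src: str
    dst: str
    payload: str

# An honest packet and a forged one are indistinguishable on arrival:
honest = Packet(src="192.0.2.10", dst="192.0.2.99", payload="hello")
forged = Packet(src="192.0.2.10", dst="192.0.2.99", payload="hello")
# 'forged' could have been stamped out by anyone, anywhere; from the
# packet alone, the destination host has no way to tell the two apart.
assert honest == forged
```

Everything from source-address spoofing to man-in-the-middle modification flows from that one missing check.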
When the internet first started to actually work, it worked because the people building it trusted one another. That is, when you sent your personal information, Social Security number, bank account numbers, and children’s ages, the guy at the other end just figured it was test data, or that you were terribly confused, or both. They typically did not use the info to open bogus credit cards, drain your bank account, or kidnap your kids.
How things change!
Maybe a circuit-switched network was no safer, and there may be no causal link between an open, trusted model of networking and cybercrime, but it would likely be safer to run transactions on the Graham Bell, “Watson, come here” model.
Of course, it would be inefficient, expensive, and very near impossible to maintain. And life would be dull without what the internet has evolved to.
But the idea of talking to someone and otherwise exchanging information without worrying about devastating financial loss lurking behind every link is blissful.
When that universe opens up, let me know.
In today's sophisticated threat landscape, innovation is a critical component to an effective defense strategy.
That innovation typically comes to bear at the tiny technology companies, whose goal, in most cases, is to create that next big thing, so the firm can go public or get acquired.
But with the economy in ruins, investors are growing increasingly wary of taking chances with their money. As a result, the funding needed to support startups - in our case, those focused on IT security - is drying up ever so quickly.
According to the Arizona Republic, venture capitalists nationally invested $7.1 billion in 907 deals this year compared to $7.8 billion in 981 deals last year.
So it was certainly good news to hear this week of plans by the University of Texas at San Antonio to launch an incubator inside its Institute for Cyber Security.
It works sort of like a hospital incubator might for a premature baby - IT security firms that face challenges preventing them from launching on their own can turn to the incubator to "fast track their product development efforts and expedite time to capital, market and profitability."
In return, participants must agree to "significant collaboration" with university staff.
While the incubator only stands to help a few companies at a time, hopefully it will encourage other universities to embark on similar missions. For more information, visit here.
Spam filters, junk mail folders and honeypots across the globe got a much-needed respite this week after a Northern California-based web hosting firm - McColo - was taken offline by a pair of its upstream internet service providers.
Few people have ever heard of McColo, but apparently this small Silicon Valley tech company was providing connectivity to countless groups of shady cybercrooks. It's doubtful McColo was in on the scam, but when it was shut down, security pros saw an estimated two-thirds to 75 percent drop in the amount of spam circulating around the world.
Practically every major security company noticed the stunning decline and made mention of it in research posts and blogs. But practically everyone also agreed that this likely was only a flash-in-the-pan-type victory against the spread of unwanted (and often malicious) messages.
Some experts have predicted the amount of spam would soon begin creeping back upward, with numbers returning to normal levels by the holidays, just in time for the traditional influx of fake e-greeting cards and the like.
While botnet herders will quickly find a new host to which they can connect their command-and-control centers, this news shows that companies that provide access to these crooks, especially if they are based in America, won't be tolerated.
Many companies such as McColo and Atrivo/Intercage - which was rendered a similar fate earlier this year - will play dumb as to the types of operations they are supporting.
But the fact is, going after these enablers who are turning a blind eye to the motives of their customers seems to be the most effective solution anyone has come up with yet to stop the spread of junk mail.
There is plenty of reason for caution, though: As long as there is money to be made, criminals will find a way. So maybe Bill Gates' prognostication will never come true.
There’s nothing new about heading to the polls and picking a president, but citizens have a new source today for obtaining the results: the internet.
In addition to the mainstream online news sources, hundreds of citizen journalists on hundreds of different personal websites will be blogging, crunching numbers and analyzing results, making predictions and providing commentary. And much of this journalism and opinion will be of expert caliber, as many of these new pundits are, in the noble tradition of democracy, committed to sharing their views with the populace. And blogging makes it easy.
Irregularities at the polling place? You can be sure these dedicated watchdogs will be reporting on it. While they may not have access to the big players, these investigators will be keeping a close eye on every conceivable angle related to the election process – from the size of the crowds to the effectiveness of the polling procedures. They will doggedly interview any disgruntled voter coming out of a polling place upset because of some procedural glitch. Nonstop coverage will detail not only all the news that’s fit to print, but also the color commentary missing from the premier editions.
Our special election report on e-voting security concerns, by our ace reporter Angela Moscaritolo, investigates some of the conflicts that may be in store for some voters: the possibility of votes not being counted and of security vulnerabilities in e-voting machines. For example, the article explains:
Touch-screen machines have come under fire. Numerous studies have shown that it would be easy to introduce malicious software to these machines, potentially allowing rogue insiders or malicious outsiders to sway an election.
While stories like this may or may not break through into mainstream media, independent bloggers will pounce at the opportunity to right a wrong, and it’s more likely we’ll see ancillary coverage digging deep into the mysteries and the inadequately explained.
Giving voice to the marginalized. A venue for the disenfranchised presenting the average citizen’s experience. This is the province of the internet. And you don’t have to wait for the evening edition.
For up-to-the-nanosecond election results and coverage, the Huffington Post, for example, calls attention to dozens of sites to which internet users can tune in, each cornering a niche, a particular area of expertise and/or speculation.
Lately, it seems everything's (and everyone's) been going rogue.
You might be most familiar with claims by an aide of Sen. John McCain that GOP vice presidential candidate Sarah Palin is going rogue and instead concentrating on her own run for president in 2012.
But, when faithful readers of SC Magazine hear the word "rogue" - especially of late - they likely immediately think of rogue anti-virus software, the au courant way to steal money off unsuspecting victims.
It seems many of the recent malicious payloads are fake pop-up warnings alerting users their computer is infected with viruses. To fix the "problem," they must pay - usually $40 or so - to purchase the attacker's rogue AV solution.
Except it fixes nothing.
Cybercrooks appear to be dropping traditional keylogging and phishing attacks in favor of preying on the fear factor. After all, fear is in the air.
The way they figure, why not have the victim send money directly to them instead of going through the often challenging process of stealing it from them?
Makes sense to me. So until users catch on to this growing trend, the criminals are going to keep doing it.
Protect yourself by protecting yourself. If you know you've got the latest real anti-virus product running, then you can safely ignore any pop-ups telling you otherwise.
(BTW, we're going to host a podcast Monday with researcher Joe Stewart of SecureWorks on this very topic, so please be sure to listen starting next week).
With that said, it's getting near 5 p.m. EST on Friday. Almost time for me to go Rogue.
Actually, that's go to Rogue - this publishing company's favorite watering hole on 6th Avenue between 25th and 26th streets in New York.
Talk to you next week. And remember to vote!
It wasn't too long ago that Microsoft bore constant criticism for its lack of transparency regarding security vulnerabilities and subsequent fixes.
One can no longer objectively accuse the software giant of similar evasiveness.
Nowhere has this change in approach been more evident than Thursday's unexpected out-of-cycle patch for a Windows Server service vulnerability. Immediately following the issuance of the fix, Microsoft staff wrote posts on not one, not two, not three, but four different Microsoft blogs. You can find them here.
That's not to mention the webcasts -- Microsoft added two on Friday because of popular demand -- where end-users could hear specifics about the major flaw.
Certainly this was an urgent matter that companies across the globe needed to be aware of and act on quickly to prevent the possibility of a major internet worm a la Nimda, Code Red and Blaster.
And Microsoft realized that corporations would have a lot of questions - why did Microsoft rush this fix? How did this one get past the secure code team? Which Windows versions are most affected? What do the active attacks look like - and the software giant did its best to provide answers.
The company should be commended, especially on the heels of the first-ever round of Patch Tuesday bulletins that included an Exploitability Index, by which users can measure the likelihood of the vulnerability in question being exploited.
Needless to say, Thursday's out-of-cycle fix aimed to correct a gaping hole that could have been consistently exploited.
And thanks to Microsoft's candor, not only are businesses patching before anything gets out of hand, but they are patching with an understanding of what they're patching and why.
And information is power, after all.
H4ck3rs Are People Too is a recently released documentary that gives an enlightening and comical glimpse into the hacker community. Far from the stereotype of the cybercriminal launching attacks from the dark shadows of a basement, the film proves that hackers are fun, passionate, beer-drinking, normal people.
The film dispels the notion that all hackers are out to steal your credit card info and replaces it with the reality that many hackers are IT security professionals, computer analysts and researchers. The message that comes through, for me, is that a lot of hackers are just normal people trying to break things to make them better.
The film was edited and directed by Ashley Schwartau, a 23-year-old University of Central Florida digital media student. The daughter of Winn Schwartau, CEO of The Security Awareness Company, she has been going to hacker conventions since the age of 16. The documentary was shot at a recent Defcon conference, where Schwartau interviewed some prominent names in the IT security community.
In a few hours at a press conference in California, Apple is expected to announce two new MacBook laptops priced at around $1,200 and $1,500. Considering the downturn, let’s call it, in the economy, its strategy of offering more affordable laptops seems to be particularly well-timed.
Rumor sites are touting pumped up functionality – faster processing speed, faster wireless connectivity, better screen resolution, longer battery life – all the improvements one would expect with a new product line.
Our concern, however, is the security angle. As the price point of laptops continues to drop and they become more easily procurable by larger segments of the marketplace, their function broadens as well. No longer are they simply a hard drive on which road warriors can keep their accounts up to date. Laptops are quickly evolving into mobile devices. Anyone with one of these tools can flip it open and easily connect to a wireless network to send email or check their stocks or Facebook page. Witness the scene at any Starbucks or park.
In the old days, a year or so ago, laptops were generally checked out of the office, presumably with some security oversight. Nowadays, as they become more of a consumer buy, laptops are functioning in much the same manner as a smart phone or PDA. They’re not quite down to the size of a Dick Tracy wrist phone, but are certainly more ubiquitous.
Apple is not immune to vulnerabilities. In fact, just last week, in its latest software update, Apple fixed a security vulnerability that could have led to cross-site request forgery. Sophos recently released a whitepaper offering 10 steps to better protect Macs from data theft.
But, while the Apple OS has been less of a target for malware writers than Microsoft’s Windows, that luxury may be waning. The popularity of the iPhone, and now the introduction of near-$1,000 laptops, while benefiting Apple shareholders by increasing the Cupertino, Calif.-based company’s slice of the computer pie, is certain to invite assaults by virus writers, spear phishers, trojan spreaders and all the other ne’er-do-wells who feed off the success of others.
Fox News, in an exclusive, says yes.
Citing some unnamed sources, Fox reported Friday that the World Bank, which provides financial assistance to developing countries, has had some 40 servers compromised and an unknown amount of personal data stolen.
The bank, however, denies this, saying no sensitive information has been hijacked and that most businesses suffer attempted hacks, so this is nothing out of the ordinary.
I think the truth lies somewhere in the middle. It sounds as if attackers may have been targeting the venerable organization in much more sustained ways than your average business might see. But it also is likely that no major breach has occurred.
We'll have to see what comes of this.
But a general takeaway: Monitor your network for suspicious activity. Whenever we hear about a mega breach, the attackers, it seems, were able to go about their business without disturbing a soul.
When I wrote this week about the breach at the University of Indianapolis, in which the personal data of some 11,000 students, faculty and staff was potentially compromised by hackers, I couldn't help but think about that SNL Weekend Update skit called "Really?!"
It's a hilarious segment where Amy Poehler and Seth Meyers make fun of famous people for lacking common sense.
Well in the case of this breach, I was just shaking my head when I read a quote from University President Beverley Pitts:
Our investigation leaves no doubt that this was a professional job from outside, and it was well beyond our control.
Really, Beverley!?! Beyond your control?
OK, first of all, the University of Indianapolis should be lauded for no longer using Social Security numbers as identifiers, something the federal government is currently evaluating itself. (It appears, in this case, the hackers lifted old credentials that were still floating around in some database).
And yes, colleges face bigger IT security challenges than a lot of verticals, due to their open environments, limited budgets and sometimes inexperienced staff.
But - to say it was beyond your control, in 2008, considering all the awareness and all the headlines and all the security solutions, is just plain senseless.
Maybe it was a poor choice of words, Beverley. But if you get breached, admit that there was a shortfall somewhere in your baseline and then immediately work on rectifying it so that it never happens again.
Don't proclaim helplessness.
The launch today of Android, Google’s new cell phone OS, has elicited the usual hoopla.
The system, in partnership with T-Mobile’s G1 cell phone, may prove to be, despite some lukewarm reviews, a worthy competitor to Apple’s iPhone. While many of its features are similar, offering the now standard Wi-Fi and Bluetooth, the prime selling point is the OS’s underlying Linux-based open source mobile platform.
The company is touting how this will allow its app store, called the Android Marketplace, to be completely open – the implication being that it will be easier for developers to create and distribute their applications for the device without the policing Apple provides with its app store.
Critics are already pointing out how this lack of security oversight could lead to viruses and malware being dropped into coding as easily as adding salt to a recipe.
In a piece today, NY Times tech and gadget guru David Pogue responds to those accusations, saying, “[Google] will remove apps that contain malware, copyright infringement, pornography, etc…”
But we have to wonder. Last year, Google got things rolling by offering $10 million in prizes to developers. Recently announced winners included Wertago, a social networking app that lets users hook up with their friends; and cab4me, which enables users to summon a taxi with one click.
Certainly, the first wave of apps will prove useful and fun for the ever-burgeoning techno set. However, the next wave of apps is sure to take advantage of the popularity of the new smart phone technology to launch insidious malware attacks.
Gene Munster, an analyst at Piper Jaffray, predicts that Google’s take from mobile search revenue will reach about $2 billion by 2012. So the stakes are high.
After word spread that a hacker leaked the contents of vice presidential candidate Sarah Palin's Yahoo email account by knowing a couple of pieces of background information about the Alaska governor, I could hear the collective mouse-click of panicked web mail users, from Wasilla to Worcester.
If it was that easy for someone who'd never met Palin to break into her email account, what did that mean for the millions of users of Google Talk, Yahoo, MSN Hotmail, AOL, etc., whose identities could be just as easily impersonated?
Here's what went through my mind:
"What does my account require to retrieve a forgotten password? What's my 'secret' question? Darn it, everyone knows who my childhood best friend was....Why did I pick that as my question?"
Well you get the idea. But this is a real risk for so many people who rely on personal emails to transfer back and forth a lot of critical information about their lives.
Seriously, I doubt I was the only one who, after hearing about the Palin incident, had flashbacks of that crazy ex who knew a lot about you and wouldn't mind using that knowledge to excavate your email account in hopes of confirming her wild suspicions of where you really were that night when you swore you were working on an all-night project at work...I digress.
But I'm a curious guy, so I decided to try it out myself. With the permission of my twin brother, I tried to access his Gmail account.
So I entered his username and clicked on the "I cannot access my account" link, then the "I forgot my password" link. What I learned was that my brother set up his account so the proper password would be sent to his AOL account.
Hmmm. Well, I'll try there then. So I go to AOL.com, enter his username and some annoying CAPTCHA, and then it asks me: What is your favorite movie? Bingo, I'm almost there.
Well I tried three films that I was certain would get me in - and they didn't work. So I tried one or two more. No luck. Then it said the account would be locked for 24 hours due to too many attempts at this. Oops. Sorry, Dave.
Turns out, the guesses I made were exactly the ones my brother predicted I'd try. Either way, I'm assuming that if I would've correctly answered that "secret" question, it would've been pwnage.
(My little experiment sounds cool, but not nearly as well-documented as our friend Hugh Thompson wrote here in an article he did for Scientific American).
Since the Palin hack, my inbox has been predictably flooded with a number of requests to speak with vendors who claim to be able to solve this weak web mail authentication issue. From the Trusted Platform Module to outright blocking, there are a lot of ideas out there.
But one thing is for sure: While we can never expect personal email accounts to undergo the same scrutinies and protections as corporate accounts, the burden is on the web mail providers to offer users some more comprehensive security.
Something beyond what someone's favorite movie is or where a husband and wife originally met...These answers are easily discoverable on the internet.
Didn't the Yahoos and Googles of the world ever hear of social networking sites or, better yet, internet searching?
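The weakness here is easy to put a number on. As a back-of-the-envelope sketch (the candidate-pool sizes below are my own illustrative assumptions, not measured data), compare the guessing entropy of a typical "secret question" answer against even a modest random password:

```python
import math

def entropy_bits(pool_size: int) -> float:
    """Bits of entropy if the answer is drawn uniformly from pool_size candidates."""
    return math.log2(pool_size)

# Assumed pool sizes for illustration only:
favorite_movie = entropy_bits(250)       # suppose ~250 plausible popular films
childhood_friend = entropy_bits(50)      # suppose ~50 plausible first names
random_password = entropy_bits(62 ** 8)  # 8 random chars from [a-zA-Z0-9]

print(f"favorite movie:   {favorite_movie:.1f} bits")    # ~8 bits
print(f"childhood friend: {childhood_friend:.1f} bits")  # ~5.6 bits
print(f"random password:  {random_password:.1f} bits")   # ~47.6 bits
```

In other words, a question whose answer an ex, a Facebook friend or a search engine can narrow to a few hundred candidates offers a tiny fraction of the protection of the password it is supposed to recover.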
Considering two years of feedback have gone into revising the Payment Card Industry Data Security Standard (PCI DSS) for its next coming-out party, the most prescriptive IT security mandate in all the land actually hasn't changed that much.
And that's good news. It proves that a set of guidelines can be industry driven, without any reliance on the government, and still motivate companies to take action.
That's, of course, not to say there hasn't been lots of kicking and screaming along the way, but considering Visa's latest compliance figures, merchants are accepting the reality that is PCI DSS.
Version 1.2 of the standard gets released today to the hundreds of participating members of the PCI Security Standards Council. On Oct. 1, the day 1.2 officially takes effect, everyone can see it.
With that said, there are some very significant additions to the new version.
Chief among them is the removal of references to the WEP (Wired Equivalent Privacy) encryption standard, an outdated algorithm that, depending on whom you ask, is filled with more holes than Swiss cheese. By 2010, organizations encrypting wireless communication must have fully transitioned to the WPA (Wi-Fi Protected Access) model, a grown-up successor to WEP.
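To see why WEP earned that reputation, consider just one of its structural flaws: each frame's RC4 keystream depends on a 24-bit initialization vector, so on a busy network IVs (and therefore keystreams) start repeating quickly, which attackers can exploit to recover traffic. A rough birthday-bound estimate, sketched in Python:

```python
import math

IV_SPACE = 2 ** 24  # WEP's 24-bit IV allows only 16,777,216 distinct values

def frames_for_collision(p: float = 0.5) -> int:
    """Approximate frames sent before some IV repeats with probability p
    (standard birthday-bound approximation, assuming random IV selection)."""
    return math.ceil(math.sqrt(2 * IV_SPACE * math.log(1 / (1 - p))))

print(frames_for_collision())      # ~4,823 frames for a 50% chance of a repeat
print(frames_for_collision(0.99))  # ~12,431 frames for a 99% chance
```

A few thousand frames is seconds of traffic on a loaded access point, which is part of why the council is showing WEP the door rather than patching around it.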
Other changes include making requirement 6.6, which says organizations need to either perform application code review or implement a web application firewall, mandatory - no longer just a best practice.
There also are some clarifications and adjustments, such as using consistent terminology, like "strong cryptography," in addition to defining some deadlines not in terms of time but based on risk to that individual merchant.
Absent from the latest version is a requirement to encrypt internal communication from point-of-sale device to credit card processor, something I thought might have found its way into the updated version after the Hannaford breach.
I met with Bob Russo, the PCI council's general manager, on Thursday, who told me the change could someday become part of the standard. But if retailers comply with existing sections of the standard, they should be able to prevent a rogue person from inserting a sniffer on their private network. Plus, the council - which administers the standard - tries to avoid pushing new, potentially time-consuming and costly requirements on merchants whenever possible.
"My objective when I put out a new standard is not to put people out of compliance," Russo says.
He also told me that he has yet to learn of a single retailer that was PCI compliant and simultaneously breached. When I asked him about Hannaford, which supposedly had just successfully completed a PCI audit prior to its major data compromise, he told me the supermarket chain's former CIO could never prove it to him.
Regardless, I have to believe that even if retailers are close to PCI compliance, they're in pretty good shape. The cybercriminals of the world are looking for the lowest common denominator, the type of business whose defenses aren't going to make it difficult on them.
Believe me, there are still plenty of TJXs and Hannafords to go around.
So keep it up, merchants! I know PCI can be costly and riddled with complexities, but isn't it better to be told what to do by your peers rather than the federal government?
Oh, and be happy that version 1.2, not 2.0, is showing up at your doorstep in two weeks. Because a 2.0 would mean a lot more work would be in order.
To believe the data, the trends, the analysts and the other interested observers, lawlessness is the status quo in computer security.
I’m just talking here. And as a colleague of mine used to grumble, I know nothing…
But what happened to the implied social contract of the internet?
In society, the theory goes, people go about living without fear because of protection afforded by the policing function of government. In fact, the need for effective protection arose from an inability of ordinary individuals to curb lawlessness.
And where does lawlessness stem from? Criminal minds, of course. That is the purview of criminologists, right? Criminology theoretically draws on the study of multiple disciplines from biology to anthropology. Crime relates to a multiplicity of conflicting and convergent influences, so any understanding of causality remains hard to pin down.
In general, however, security implies prevention – preventative measures and investigation of incidents after the fact (in theory to prevent future incidents and discourage wrongdoers). Most organizations are on their own in terms of prevention; and investigating is the last measure one would engage in if it involves outside help and notoriety.
Even if outside help were relied on, the nature of computer offenses is not something that lends itself to everyday recourse. In this country, there is a very disjointed system of governmental administration, including thousands of disparate municipal and county law-enforcement agencies and even more federal, state and local agencies with specialized jurisdictions.
Whether or not you agree that computer security is a law-enforcement problem, the enforcers cannot be expected to create order from whole cloth; we’re talking about a criminal behavior quite different from the usual street crime.
That is, though crimes are considered injurious to society, the onus of cybercrime is addressed mainly by commercial products aimed at prevention of overt acts in private organizations.
People engaged in business should be able to go about being productive without concern that assets they create and work with will be drained and sold in cyberspace. This freedom of action has to be protected, and right now it is protected only through a strange amalgam of government and private efforts.
Where does one begin and the other end?
A new spam campaign is emerging that exploits the seedier side of computer users. In a new wave of social engineering, in language that might have been written by Borat, the spam promises videos of presidential candidate Barack Obama having “sex action with many ukrainian girls.”
If a moron clicks on the moronic message, a sex video begins playing. But at the same time, in the background, information-stealing code is downloaded to the victim’s machine, according to a release from Websense, which claims it discovered the email campaign.
This email campaign loads a trojan dropper, which then installs a file in the computer user’s Temporary Internet Files folder, according to the Websense report. A browser helper object (BHO) is also registered, an information-stealing app that siphons off data from the end-user to a site registered in Finland.
We’ve been seeing various methods of phishing scams being perpetrated that exploit the topicality of the presidential campaign, but this one is particularly outrageous for the blatancy of its lies. It almost obliterates ethics in its stupidity. The message is so obviously untrue, yet it attempts to gain a measure of credibility by associating itself with a real person/event. It almost doesn’t matter that it is discrediting Obama. It could just as well be promising free jewels.
We’ve seen it before. Any item in the headlines – a natural disaster or celebrity disaster, say -- draws out the malicious exploiters intent on capitalizing on people’s natural proclivity to be empathetic, or their being susceptible to voyeuristic opportunities.
While the Red Cross solicits funds for victims of hurricanes, ruthless parasites get in on the action to redirect the well-intentioned, or the bored.
As the field of information security continues to evolve into, well, a true field, many professionals are starting to ask themselves: How should I be approaching my career?
A new (fairly vendor neutral) survey seeks to answer that. Created by executive recruiter Lee Kushner, independent infosec professional Mike Murray and Max Kilger, senior member of the Honeynet Project, the 63-question survey is meant for security workers of all skill levels.
"The benefit of it is getting a true sampling of the industry," Kushner told me this week. "It's not going to be people who have the same career goals in mind."
The purpose, he said, is to get an overall handle on how information security pros are managing and investing in their careers (certs, degrees).
It's easy to look at this as just another survey, but I think these career-oriented ones are particularly important because there is a lot of confusion out there. In fact, SC Magazine undertook a similar endeavor in June with our 2008 Salary and Career Survey.
"The competition for the best positions out there is definitely increasing," Kushner said. "A lot of people are having a hard time figuring out how to climb the ladder."
If you want to participate, the survey can be found here. It closes Jan. 15, 2009.
Because our current administration seems committed to conditioning Iraqi security forces (with the hope that they'll be able to restore order when U.S. troops eventually withdraw), there may be one other training exercise to include:
It seems, according to this USA Today story this week, Iraqi computer networks are sitting ducks for al-Qaeda cyberattacks. This state of affairs should surprise nobody, especially considering Iraq largely outlawed the internet under Saddam Hussein's rule, according to the story.
That means the rest of the civilized world has more than a five-year head start on Iraq.
And cyberdefenses are suffering as a result.
The nation is responding, having recently formed a new division specifically aimed at stopping computer crimes. Still, the agency ranks low on the totem pole of priority in this war-ravaged nation. Here's how Ali Hussein, one of a dozen recently employed computer science grads to join the cybercrime team, summarizes the situation:
We could have the most powerful anti-hacking force in the world, but we'd still have no computers, so we couldn't do anything. The government thinks about guns, tanks, and raiding houses. Hackers just aren't a priority.
While it appears Iraq has been safe thus far from serious, malicious attacks, the clock is ticking. With the U.S. still very much trying to help the Iraqi national police take back its nation, it might want to consider lending some cybercrime help.
Then again, if America is often slow to respond to cyberthreats within its own borders, how can we expect Iraq to improve?
The news: Gary McKinnon, the alleged NASA hacker, has failed in his last ditch appeal to the European Court of Human Rights to have his extradition to the United States quashed.
Here's the background: In 2002, McKinnon, also known as Solo, left this message on a computer belonging to the U.S. Army:
“US foreign policy is akin to government-sponsored terrorism these days... It was not a mistake that there was a huge security stand-down on September 11 ... I am SOLO. I will continue to disrupt at the highest levels.”
As a result of this action, and a few others, he was indicted in 2002 by a federal grand jury on seven counts of computer fraud and related activity, and faces on each count a maximum sentence of 10 years of prison and a $250,000 fine.
The indictment says that in one instance he obtained administrator privileges to a military computer, deleted approximately 1,300 user accounts, deleted critical system files, copied a file containing usernames and encrypted passwords for the computer; and installed tools for obtaining unauthorized access to networked peers. What’s more, he did the same thing to Army, Navy, Air Force and NASA computers from Groton, CT to Pearl Harbor.
Specifically, the indictment charged that McKinnon scanned a large number of computers in the .mil network and was able to obtain administrative privileges to many of them. Once he was able to access the computers, McKinnon installed a number of hacker tools (one of which was “Remotely/Anywhere”), copied password files, then deleted a number of user accounts and critical system files. Eventually, he was able to scan more than 73,000 computers.
At the Naval Weapons Station Earle, on one of the computers used for monitoring the identity, location, physical condition, staffing and battle readiness of Navy ships, he deleted files that rendered the base’s entire network of over 300 computers inoperable. This was at a critical time: immediately following September 11.
The indictment goes on to say that once inside a network, McKinnon would use the hacked computers to find additional military and NASA hosts. In one attack, McKinnon caused a network in the Washington D.C. area to shut down, resulting in the total loss of internet access and email service to approximately 2,000 users for three days. The estimated loss for all of this has been put at approximately $900,000.
OK, then. Let me get this straight. Using his home computer, McKinnon, through the internet, identified networked government computers and from those extracted the identities of certain administrative accounts and associated passwords. Having gained access to those accounts he installed Remotely/Anywhere, which enabled him to access and alter data at any time. Right...
It’s hard to feel too sorry for this guy, considering the nature of the charges against him. If he didn’t do this stuff, or if he can justify his actions in some way (he claims he was looking for UFO information), he should tell it to the judge.
As the Rolling Stones used to say, “What can a poor boy do?"
Despite taking all the prescribed precautions and having proper defenses in place, late last week, hotel chain Best Western allegedly suffered the indignity of a breach of its reservation system. Reportedly, the personal information of eight million customers was put up for sale on a pirate site (reportedly via a Russian mob), though the hotel issued a statement disputing this account.
While the facts at this point in the investigation are sketchy, a trojan placed on a computer within the chain is being cited as the hacker’s entry point. And this occurred even as the chain was doing everything it should to prevent such an intrusion. In a statement issued in response to a news report of the breach, the chain outlined all the steps it takes in its information security processes:
- “We comply with the Payment Card Industry (PCI) Data Security Standards (DSS). To maintain that compliance, Best Western maintains a secure network protected by firewalls and governed by a strong information security policy. We collect credit card information only when it is necessary to process a guest's reservation; we restrict access to that information to only those requiring access and through the use of unique and individual, password-protected points of entry; we encrypt credit card information in our systems and databases and in any electronic transmission over public networks; and again, we delete credit card information and all other personal information upon guest departure. We regularly test our systems and processes in an effort to protect customer information, and employ the services of industry-leading third-party firms to evaluate our safeguards.”
From this security profile, it’s reasonable to assess that Best Western was doing everything “right.” But the end result proves that “right” just might not be enough.
As we hear over and over again: compliance does not necessarily equal security. Experts repeat ad nauseam that compliance is useful (even if begrudged), but that other measures must also be put in place to build up a stronger defense against the loss of data, both from without and within.
This latest alleged exposure raises a number of issues: Was Best Western doing everything right to defend its database and network? Could it have done anything differently to beef up its defense? Is it inevitable, as many say, that it’s impossible to stop a breach? And, inevitably, what now?
Whether the accusations are accurate or not, whether the charge that the personal info of eight million customers was exposed is overblown, as some are saying (including the hotel chain), or whether that number turns out to be much smaller, almost doesn’t matter at this point. Beyond the need for a reassessment of its information security systems, it’s a PR nightmare for Best Western.
“So much public scrutiny as a result of the published report could be detrimental to Best Western’s brand,” Ed Moyle, a manager at CTG, a firm that provides information technology staffing and solutions, told SCMagazine.com yesterday.
Whether Best Western is the victim of a hacker or of a campaign to besmirch its name, this week's latest entry into security celebrity status unfolds as an illustration for the rest of us. Will this negative attention mean much to the public? How will Best Western handle the accusations and the tangible shoring up of its IT security systems and processes?
Clue: They might look to Hannaford, which handled the aftermath of its breach with transparency.
“So, I have this watch I’d like to sell you. You probably don’t need a watch, and you could likely live without this one, but the nice lady you’re with would surely be impressed if you were wearing some nice new shiny man-links on your wrist. Just look at the way she’s studying your face as you examine it!
“And the price! How can you go wrong? Twenty dollars and it’s yours. You walk away a new man, your girl is bowled over, and at that price -- well, you really put one over on me.”
"He’s right," you think. "It’s flashy, I dig the design, she’s really acting as though she’s impressed. The guy looks like a good guy, and he’s talking a square deal … I think.
"What the heck? Call me a sucker, but what if this thing is legit? I may have just stepped into a bit of good luck. I’ll hand over this nice new twenty and put the glitz on…"
As you walk away, the seller disappears, the watch stops, and your girl can’t get over why in the world you would do such a thing. Her look of being impressed was really one of incredulous amazement at your stupidity.
To be human is to be weak; just read Hamlet or King Lear. And tragedy is not limited to storied interactions. It permeates all human activity, right? So it is in the modern corporation, peopled by potential tragedies sitting at every monitor and keyboard. Any user falling for a seemingly innocent ploy can bring down the whole company. Click that email attachment, download that fun game, and unknown -- unseen even -- a door opens to the Raiders of the Lost Bot.
The modern term of art is “social engineering,” but it may be the world’s third-oldest profession. Every generation produces people skilled at conning others, and a sucker is born every 60,000 milliseconds. The internet is the final frontier for the con artist, the guy who lurks around every corner of it stalking his next mark.
The only effective way to combat this menace, the experts agree, is end-user training, constant vigilance, and up-to-date patches. Train, watch, patch… Train, watch, patch…
Why am I reminded of a half strophe, “the day the music died” (from Don McLean’s American Pie)? The internet made the world different, but in a lot of ways the world is just the same. The criminal tragedy suffusing the internet parallels the demise of hope that the internet could be free of human malfeasance.
But, alas poor Yorick, fellow of infinite jest, we must progress: Train, watch, patch…
The university environment tends toward open communications. The free flow of information is not only encouraged, but necessary for learning. Millions of students at these institutions, on their own for the first time, feed their hunger for information and stimulation via campus networks. They are researching and, in their more leisure moments, downloading songs and videos, playing online games, sharing data with peers via social networking sites. In other words, this population is maximizing the potential of the internet.
The IT staffs charged with keeping university networks operating face a dilemma specific to this vertical: how to maintain network defenses within a culture that thrives on unfettered access to information. Students demand the fastest connections and cannot tolerate obstacles put in their path. IT administrators must provide as free and open a network as possible so their users can flourish.
But the academic environment is not only nurturing its students. It must also enable campus staff to do their jobs. That means all the mechanics of any commerce site are part of the mix. Personal student information must be protected from breaches. Protections must be put in place to guard against outside network intrusions, as well as against data leaving from within.
The philosophy of the campus is ideal. Running the operation involves nuts and bolts. A special new online-exclusive section on the SC website looks at how various campuses are contending with these issues. You are invited to take a look:
Perhaps it's the writer in me, but I view a federal judge's decision to bar three MIT students from presenting research findings at the recent Defcon convention as a huge problem.
Not just from a free speech perspective, although, given that whole U.S. Constitution thing, that standpoint is a pretty darn valid one.
But what really grinds my gears - have I been watching too much Family Guy? - is what this ruling might mean for security research in academia going forward.
As would have been clearly evidenced by the students' talk - in which they planned to detail ways to hack into Boston's subway payment system to enable free rides for life - there is some outstanding work coming out of colleges and universities across the world, specifically related to security vulnerabilities.
Time and time again, we have written stories about remarkable discoveries made by undergrads, graduates or Ph.D. students. Remember the cold-boot memory attack?
All too often, though, the legal community doesn't see the benefits that those in the security community surely do. Judges and prosecutors assume that when a band of T-shirt-wearing, long-haired youths (OK, I'm massively generalizing here) get together at some hacker con to talk, it must mean they're up to no good - and want more bad people to learn about it.
That couldn't be any further from the truth. Talks like the one dropped from the Defcon bill last weekend actually do the opposite. They get people thinking about security, especially agencies like the Massachusetts Bay Transportation Authority, which decided to think novelty first, security second - or third, or fourth...well you get the idea...when it designed its CharlieCard subway passes.
Yet the MBTA, instead of thanking the students for their research and hopping on the Neon Express to Vegas, filed a motion for an injunction. Great.
And the judge agreed. Now, it's difficult to say whether the judge, when making his decision, realized that the students weren't planning on giving away the hack blueprint - just some interesting observations. But that lack of technical awareness within the judicial community is another matter entirely.
What we should be especially concerned about is that, in this era of black markets, these discoveries could result in monster paydays - folks like Dan Kaminsky could have netted hundreds of thousands for his DNS design bug.
But if you saw Kaminsky running around the Caesars Palace convention center in Vegas, you could tell he was more than happy to be Black Hat's version of Brad Pitt for the week.
These MIT students weren't looking to make any cash on this discovery, either. They felt rewarded enough by getting an "A" from their professor and, barely old enough to gamble, addressing an audience in Vegas.
Not everyone will feel that way. Most will want to break the bank, much like the MIT students' classmates had done years earlier at the blackjack tables.
So let's not do anything to discourage the good people, while we still have them.
It’s been a busy time on the cyber warfare front. First there were rumblings of attacks on Georgia governmental websites, then actual attacks, followed by gunfire. The usual suspects are being blamed: overzealous teenagers, Russian mafia hoodlums, nefarious spy rings.
Then speculation came in over the wire that the Air Force Cyber Command was doomed. The Navy was supposed to take over. A statement rushed out by the Pentagon countered:
“The Air Force remains committed to providing full-spectrum cyber capabilities to include global command and control, electronic warfare and network defense. The Secretary and Chief of Staff of the Air Force have considered delaying currently planned actions on Air Force Cyber Command to allow ample time for a comprehensive assessment of all AFCYBER requirements and to synchronize the AFCYBER mission with other key Air Force initiatives. The new Air Force leaders continue to make a fresh assessment of all our efforts to provide our nation and the joint force the full spectrum of air, space, and cyberspace capabilities.”
So now what? One of the main tenets of modern warfare is that the first target of choice in any campaign is the enemy’s command-and-control capability. Destroy that, and you can get on with obliterating the civilian populace. Given that command and control now relies on IP networks everywhere, there is no need to waste munitions on cabling plants and computer centers; all that is necessary is to overwhelm the enemy with a few dozen hackers in a well-connected bunker.
Nevertheless, a cyber arms race is raging. McAfee has claimed that approximately 120 countries have been developing ways to use the internet as a weapon. And the U.S. military, the most technological in the world, is not exactly unaware of its cyber strengths and vulnerabilities. For example, it has long implemented a classified, encrypted military internet that parallels the ordinary internet, called SIPRNet. SIPRNet is made up of interconnected computer networks to transmit secret information by packet switching over TCP/IP protocols. Sound familiar?
Considering the general impression left in the wake of Black Hat and Defcon - where dozens of presenters seemed to prove once again that IP is doomed - this is a daunting revelation. SIPRNet is securely sealed off, but you get the impression from some researchers that, regardless, implementing military network security is like chasing a will-o'-the-wisp.
The point is that conflict in the future, if the Georgian conflict is any guide, will involve cyberspace in a big way, and reliance on internet communications should be considered tenuous even before the bullets fly.
After his presentation at the Black Hat conference in Las Vegas, keynoter Ian O. Angell, professor of Information Systems, London School of Economics, sat down with reporters in the Black Hat press room (yes, the one that was hacked), and talked about his take on technology, security, and much of the rest of the universe, as befits his philosophical bent. In some quarters, he is known as a cheerful pessimist.
In a prologue in the Black Hat brochure, he is described as having “very radical and constructive views on his subject, and is very critical of what he calls the pseudo-science of academic information systems. He has gained notoriety worldwide for his aggressive polemics against the inappropriate use of artificial intelligence and so-called knowledge management, and against the hyperbole surrounding e-commerce.”
Here are some excerpts from what he had to say.
“The problem with security is that there are so many silos of specialty that do not interact with each other. The breakdown is because they don’t talk to one another; they can accidentally conspire against one another.
“What we are seeing is not there; that is, we are not seeing security as it is. There is a latency, a link that does not appear anywhere in what we see.”
“The art of the security professional is seeing the problem before the amateur does.”
On the futility of quick fixes:
“When you focus on any single thing, you leave many things unobserved. There is no way to fully observe anything. There is a paradox as a result – almost a butterfly effect of paradoxes, a paradox that is smirking. The only thing systems have in common is that they fail.”
“If the internet crashes, it will be an accident.”
“Any entrenched culture is self-referential. The self interests of disparate groups are not alike.”
“We need more innovation. Innovation is ideas following on ideas, following on ideas. Large organizations do not innovate; they only fund orthodoxy. Professors research yesterday’s failures.”
“Privacy is monitoring people. The collective thinks it owns the individual; the collective strives until it destroys itself.”
“The government cannot ever get inside your head. When you see control freaks recognize their impotence, it’s wonderful.”
“Regulation can be the ultimate destroyer of the internet.”
“Using the internet in attacking another country is just another example of the use of innovative weapons in the history of warfare.”
Ladies and gentlemen, we got 'em.
The Saddam Husseins of the computer hacker world have been caught, federal authorities announced today.
(Of course, it's the same day I'm rushing to get on a plane to Black Hat in Vegas, so it only figures a major news story breaks.)
The feds nabbed 11 people, three from the United States, who are accused of hacking into the wireless networks of nine retailers - including TJX - and netting more than 40 million credit and debit card numbers.
(This number seems a little out of whack to me, considering some estimates have placed the TJX breach at nearly 100 million card numbers.)
The defendants are accused not just of the TJX job - that, in itself, would have been a heckuva takedown - but also of some of the other biggest reported breaches of all time, including BJ's Wholesale Club, DSW and Dave & Buster's.
I must say, I'm quite shocked that this gang was involved in all of these digital heists. But it sounds like they got greedy - and everyone is gonna get caught sooner or later.
I am normally a little cynical about cybercriminal arrests - I just figure there's plenty of other folks waiting in the wings to fill the void - but this sounds like it could have punched quite the hole into the problem. Of course, given the vulnerability of many businesses, I still have to believe another TJX isn't too far off.
Imagine a web browser that sits as an application on your desktop. If you click to open, it delivers you to a previously set website. You can navigate all you want through that particular website - maybe it's Bank of America - but don't try going to Facebook. It won't let you. There's no address bar.
They're called single-site browsers (SSBs), or site-specific browsers, or maybe some other alliteration that I haven't heard about yet.
The security benefits are easy to grasp. As Andrew Jaquith of the Yankee Group - I believe the first analyst to publicly present on this topic - said in an April blog post, "Because SSBs can, by definition, browse to only one website, many of the web-based attacks against users (phishing, cross-site scripting, cross-site request forgery) won't work."
Bored by the security ramifications? Mac enthusiast Todd Ditchendorf explains some of the more tangible benefits here.
The concept is still a nascent one, but we can expect to hear a lot more about it in the coming months. Rumor has it that when Apple releases Safari 4, it will include a capability to create SSBs.
As is often the case with neat innovations, the open-source community is leading the charge.
Ditchendorf, in fact, has already designed a Mac application that makes SSBs possible. It's called Fluid. And the smart folks over at Mozilla are working on their version, known as Prism.
A big challenge will be getting the banks and other heavily phished retailers interested in offering this to customers. But it might be worth it. As Jaquith notes, SSBs could be "a great way to 'brand' a website and keep users safer, all at the same time."
Of course, as with any security technology, this is not a silver bullet. Jaquith points out that previously installed malware, such as keyloggers, can still work on SSBs, as can things like DNS exploits.
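The gatekeeping logic at the heart of an SSB is tiny: before following any navigation, check that the destination stays on the one site the browser was built for. Here is a minimal sketch in Python (the function name and the scheme-plus-host matching policy are illustrative assumptions, not how Fluid or Prism actually decide):

```python
from urllib.parse import urlparse

def is_allowed(nav_url: str, home_url: str) -> bool:
    """Return True only if nav_url stays on the SSB's single site:
    same scheme and same host as the site the browser was built for."""
    nav, home = urlparse(nav_url), urlparse(home_url)
    return nav.scheme == home.scheme and nav.hostname == home.hostname

HOME = "https://www.bankofamerica.com/"

print(is_allowed("https://www.bankofamerica.com/accounts/overview", HOME))  # True
print(is_allowed("https://www.facebook.com/", HOME))                        # False
```

Because every navigation funnels through a check like this, a phishing link or cross-site redirect simply has nowhere to go - which is the point Jaquith makes, and also why malware already on the machine (a keylogger, a poisoned DNS resolver) sits outside what the check can see.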
The Neosploit team is leaving the IT underground.
Citing a negative return on investment, the Neosploit developers are walking away from support for their web exploitation malware suite. There will be no new exploit sets available.
Phishing kits found to be compromised.
Kits available for sale on the internet to steal information from phishing victims have been set up with backdoors. When they are used, information stolen by the phishers is sent back to the kits’ creators.
Hi, I am Iggior. I just purchased a nice new suite of malware from my dealer. I am so proud. I spent at least half the money I have been saving for my wedding, but it will surely be worth it. I can make money by the fistful. And I can keep making money, all I want, in huge quantities! And all I have to do is push a few buttons!
And guess what else? If I don’t make money, all I have to do is tell my malware dealer, and he will return my wedding money. Wow!
And if you can’t trust your malware dealer, who can you trust? Yeah!
What’s that? The phishing software I bought has a backdoor? What? My dealer can’t get in touch with Neosploit?
Hmmmm. Oh, man — I was almost rich...
But I know the woman will understand…after all, it's not like she hasn't seen this before. She calls me Ralph Kramden. I prefer Homer Simpson...
Maybe it’s just me, but it seems that law enforcement is making some small inroads in the fight against cybercrime. For example, in recent weeks, signs of progress have come to light in headlines such as:
New York Man Who Participated in Online Piracy Ring is Sentenced
Chinese National Sentenced for Committing Economic Espionage to Benefit China Navy Research Center
Botmaster Robert Matthew Bentley AKA LSDigital Sentenced
Largo Man Sentenced in Certegy Data Theft
Woman Gets Two Years for Aiding Nigerian Internet Check Scam
Romanian Pleads Guilty Over Phishing Scam
DBA Gets Jail Time for Data Thefts
AOL Spammer Gets 30 Months in Prison
Chinese Man Jailed for Hacking Red Cross Quake Site
Hacker Sentenced for Stalking Internet Celebrity
Seattle Spam King Dark Mailer Faces 47-Month Sentence
As Churchill might ask: Though this may not be the end, or even the beginning of the end, does it signal the end of the beginning? Not by a long shot.
The underworld market is just too lucrative, the ease of execution too great, the number of willing victims too high.
I am not a criminologist, and I’m not so sure there have been exhaustive studies into the mind of a cybercriminal, but I think the main thing on any criminal’s mind is: “I do not want to get caught!” So why risk pulling a gun on someone, when you can get much more money with far less danger and do it from thousands of miles away?
In any case, though it’s been difficult to catch them, and it is not likely to get much easier, at least some of those apprehended will get time to think about repeating another pushbutton crime.
Last Thursday, I wrote a news article for the SC website covering a speech on cybersecurity that Sen. Barack Obama delivered at Purdue University.
The point of the reporting was to acknowledge that a presidential candidate had an understanding of cybersecurity issues. The challenge was to not turn the piece into a testimonial.
Trying to retain a sense of fairness and balance proved difficult and, fortunately for me, astute online editor Chuck Miller was able to take my story and hack it to pieces in order to remove a tone of preferential treatment that I hadn’t quite masked.
But the fact is, irrespective of what you think about the two candidates’ positions on other issues, when it comes to cybersecurity, preferring Obama is a no-brainer.
Obama not only has an awareness of cybersecurity, but offers proposals and a strategy that would not only protect the nation’s computer networks, but also strengthen science and computer education programs.
Cybersecurity would be made a top priority in his administration, he said.
This is a stark contrast to the awareness, or lack of awareness, shown by his opponent in the presidential contest, Sen. John McCain. There is nothing on McCain’s website that addresses cybersecurity, and he has hardly addressed the issue.
In fact, last week, Richard Clarke, a partner at Good Harbor Consulting, who has served the last three presidents as a senior White House adviser, told SCMagazineUS.com: “We couldn't find that McCain has any position on cybersecurity. They just taught him how to ‘watch the Drudge Report.’ How can you expect a guy who has never used a PC to understand cybersecurity?”
Eugene Spafford, executive director of Purdue University’s CERIAS (Center for Education and Research in Information Assurance and Security), in response to Obama’s Purdue speech (and a discussion following), commented on CERIAS’s blog that, “Sen. Obama was engaged, attentive and several of his comments and questions displayed more than a superficial knowledge of the material in each area. Given our current president referring to ‘the Internets’ and Sen. McCain cheerfully admitting he doesn’t know how to use a computer, it was refreshing and hopeful that Sen. Obama knows what terms such as ‘fission’ and ‘phishing’ mean. And he can correctly pronounce ‘nuclear’! His comments didn’t appear to be rehearsed — I think he really does ‘get it.’”
Regarding McCain’s take on cybersecurity, after praising his service to the nation, Spafford said McCain is “a generation out of date on current technology and important related issues.”
Despite Sen. McCain's unfamiliarity with computers and the internet, an argument could be made that his stance on national security matters could lead to more money being budgeted for cybersecurity under his administration.
But I still prefer Sen. Obama's rationale in approaching the subject as an advancing of the technology and a means to "coordinate efforts across the federal government," not just as a matter of military readiness, as Sen. McCain claims.
A federal judge has put off until next week the sentencing of so-called spam king Robert Alan Soloway. More witnesses need to take the stand, after which Judge Marsha Pechman is expected to hand down her sentence. He could receive 20 years, though that seems unlikely.
The case is vexing for Pechman because there is little in the way of legal precedent, she said. While spam is an established fact of internet use these days, troubling and disruptive to most, determining an appropriate penalty for those responsible for the unwanted email raises issues beyond the annoyance factor.
Soloway was arrested in May 2007 following criminal charges brought by the U.S. Department of Justice. He pleaded guilty to single counts of mail fraud, email fraud and tax evasion.
His defense attorney argues that this is simply a spam case, which under the CAN-SPAM Act carries a maximum penalty of five years in prison.
But others contend that the case is not simply about a vendor sending out millions of unsolicited email messages. Soloway is also accused of misrepresenting his business and not delivering on services and products offered.
While Soloway’s being held responsible for his actions may bring satisfaction to many, the judge’s decision over how much jail time he should serve as a consequence has to be a tough call.
Were spam recipients victims of a crime? Certainly those who purchased from Soloway software that didn’t work were victims of fraud.
At times, judges in cases such as this like to say in their ruling that they are making an example. Determining the extent of criminality in this case, and the price to be paid, may or may not send shockwaves through the internet marketplace.
Perhaps the sentencing next week in Soloway’s case will cause a ripple. But, as long as there is money to be made from activities such as those embodied by Soloway, the verdict is neither in nor out. Whether he gets a short or long sentence, those miscreants behind internet fraud schemes are unlikely to slow their activities.
Now if only the feds could catch up to some perpetrators of identity fraud.
Some somber news to report in the information security community.
Sunbelt Software points us to this sad account of Webroot co-founder Steven Thomas. Apparently the 36-year-old, during the past year, has fallen victim to bipolar disorder and delusions. As a result, Thomas is lost somewhere in Hawaii, and his family is heartbroken.
Thomas founded anti-spyware company Webroot in 1997 as a way to raise money to buy a car. Eight years later, he sold Webroot, now with 330 employees and best known for its Spy Sweeper software, to a group of venture capitalists for more than $100 million.
Thomas is now involved with real estate investments, according to the Honolulu Star-Bulletin.
Even though Thomas has removed himself from the security software space for some time now, his accomplishments in the field of anti-spyware will live on. He was on to the problem in its earliest days and surely helped build the foundation of many of today's offerings.
And remember, the IT security community is small. Everyone seems to know everybody, maybe through a degree or two of separation, but nonetheless, it's a tight-knit group. Surely this news will hit home for many.
We hope for Thomas' safe return and that he gets the help he needs.
Updated Monday, July 14 at 5:17 p.m. EST: Thomas found dead Sunday in Hawaii by hikers. http://www.scmagazineus.com/Webroot-founder-found-dead-after-going-missing-in-Hawaii/article/112412/