A second reason is that so-called "bug bounties" have become a big deal, with many companies offering programs with big bucks attached. In my view this is a very positive step. Inviting the community to help test the security of a site or product, with the promise of a reward - sometimes substantial - is a very good way to encourage peer review. Cryptologists have known for decades that peer review of a new algorithm is the only way to develop truly strong encryption.
The key terms here, though, are "bug bounty program", "invited", "responsible" and "constructive". Bug poaching disguised as bug hunting, as John Kuhn points out in his 27 May article for "Security Intelligence", can be an excuse for cyber extortion.
The notion of constructive/responsible disclosure (I'll use the terms interchangeably here even though they really are a bit different) was, arguably, born officially in 2001 with the publication of a paper by three folks from the computer engineering laboratory at the Finnish University of Oulu (Marko Laakso, Ari Takanen, Juha Röning). In this paper they suggest that policies developed by Russ Cooper of NTBugtraq, the CERT Coordination Center and vendors such as Microsoft form a good starting point for disclosing vulnerabilities - "bugs" - responsibly while alerting security professionals to important issues that need rapid remediation. (Yes, I am aware of the giants upon whose shoulders the Finns stood to write their paper, but this was the first formal discussion in academic research circles.)
Today this has coalesced into several venues, one of my favorites being the Full Disclosure mailing list. The list, started years ago, interrupted and subsequently picked up by Fyodor, the developer of Nmap, is described on its web site:
"The Full Disclosure mailing list is a public forum for detailed discussion of vulnerabilities and exploitation techniques, as well as tools, papers, news, and events of interest to the community. FD differs from other security lists in its open nature and support for researchers' right to decide how to disclose their own discovered bugs."
This has one aspect with which I pointedly do not agree: "... support for researchers' right to decide how to disclose their own discovered bugs." On one hand we would expect responsible researchers to disclose responsibly. That, to me anyway, means that the owner of the bug - the product developer, site owner, etc. - should be notified in advance and given a reasonable period of time to remediate the problem in whatever way is most appropriate to the venue and the bug. Unfortunately, without responsible disclosure we run the risk that the bad guys will learn something they don't already know, allowing them to exploit the flaw.
Before you smirk and say something to the effect that the bad guys already know, so we're not telling them anything new, consider this: that response means you are badly out of date. Ten - or, perhaps, even five - years ago this probably was the case. In fact, I used that argument myself frequently in the full disclosure debates. Today products and systems are sufficiently complicated (perhaps even "complex") to be resistant to easy internal security testing. Evidence for that position is suggested by the fact that zero-day vulnerabilities are discovered daily.
A research paper published by Leyla Bilge and Tudor Dumitras of Symantec Research Labs has a great summary chart which I'm going to reproduce here because it tells an important story.
Table 1 - Summary of Findings: Bilge and Dumitras, Symantec Research Laboratories
Some key issues are addressed in this table. A couple of the important ones are that 42% of vulnerabilities are detected in field data within 30 days of disclosure, and that after zero-day vulnerabilities are disclosed the number of malware variants exploiting them increases 183-85,000 times and the number of attacks increases 2-100,000 times. The strong implication here is that the bad guys are NOT discovering all of the zero-days themselves; in many cases they simply are trolling for them and then finding ways to exploit them and make their exploits available in the underground. In my own trolling of the underground I can say definitively that the chatter is less about discovering new zero-days (although, of course, there is a lot of that) than it is about developing - and selling - exploits for zero-days already discovered.
Simply, the problem with unrestricted disclosure and unethical bug hunting is that it prematurely exposes victims to exploitation. So, what is the solution? In today's threat environment old standards of disclosure just won't work. Or will they?
Microsoft has published some guidelines about what they call "Coordinated Vulnerability Disclosure" (CVD). The company takes this stance:
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately. The finder allows the vendor the opportunity to diagnose and offer fully tested updates, workarounds, or other corrective measures before any party discloses detailed vulnerability or exploit information to the public.
This has some excellent points, such as disclosure of findings to the vendor, product developer or site owner before any public disclosure. Additionally, it mandates time to correct the flaw - within reason, of course. Some argue that this is way too slow a process for today's threatscape and that the flaw likely will be discovered by the bad guys first. However, the Symantec research and my own experience do not support this hypothesis.
Going back to the Finnish paper, there are three phases of constructive disclosure that the authors propose:
1. Creation - this includes the testing - formal, if possible - that discovers, identifies and categorizes the bug
2. Pre-release - period of closed disclosure to vendors, developers, site owners of affected sites or products allowing a grace period for remediation before the Release phase
3. Release - public release of the vulnerability and the test methodology used to confirm it - no disclosure of exploits associated with the bug, of course.
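To make the three phases concrete, here is a minimal sketch in Python of how a research team might track a bug through Creation, Pre-release and Release, enforcing a closed grace period before anything goes public. The class name, fields and the 90-day window are my own illustrative assumptions - nothing here comes from the Oulu paper itself:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Disclosure:
    """Tracks one bug through the three Oulu-style phases:
    creation -> pre-release -> release. Names are illustrative only."""
    bug_id: str
    vendor: str
    notified_on: date                              # start of the closed pre-release period
    grace_period: timedelta = timedelta(days=90)   # assumed remediation window
    phase: str = "creation"

    def notify_vendor(self) -> None:
        """Creation -> Pre-release: the vendor gets the details privately."""
        self.phase = "pre-release"

    def can_release(self, today: date) -> bool:
        """Public release is allowed only after the vendor's grace period expires."""
        return self.phase == "pre-release" and today >= self.notified_on + self.grace_period

    def release(self, today: date) -> None:
        """Pre-release -> Release: publish the vulnerability and the test
        methodology - but never the exploit itself."""
        if not self.can_release(today):
            raise ValueError("grace period has not expired; disclosure stays closed")
        self.phase = "release"
```

The point of the sketch is simply that "release" is a gated state transition, not a default: until the vendor has been notified and the grace period has run out, the only legitimate state is a closed one.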
This seems on the surface still to be a pretty reasonable approach 15 years after it was written. And, perhaps in general terms, it is. But there are some wrinkles that definitely are twenty-first century gotchas.
First, we see evidence of compromised sites all the time. A good example is the malicious domain list at the end of all of my blogs. What of that? Isn't that non-constructive? Should we bash the Malware Domain List? If I thought that should be the case, believe me, you would not be seeing those listings in every blog. I fully support MDL and here is why.
The MDL is reporting sites that not only already have been exploited - not just exhibited vulnerabilities - but have succumbed to an exploit and as a result are serving malware or other attacks actively. I routinely come across these as I threat hunt - and so, I'm sure, do you. I try to notify the site admin but sometimes - mostly, actually - I don't even get the courtesy of a reply. If the site is in some rather uncontrolled country I don't expect to get a response so if I do, it's a welcome anomaly. As a threat hunter I depend upon sites such as MDL for crucial information that helps me trace down and protect against an attack campaign.
Second, what of the malicious underground community? It is worlds away from the corresponding community at the time the Finnish paper was written. Today it is organized and operates as a community of businesses with development, sales, distribution and marketing teams. It is fueled by individual greed, state-sponsored cyber war/crime, organized crime groups and terrorist organizations. It no longer is a game of leapfrog between the good guys and the hackers.
Today it is open warfare between the good guys and the organized community of malicious actors. And the malicious actors are winning at the moment because they are better organized and have more resources. If you doubt me, get a demo from our friends at Intel 471 and take a walk on the wild side of digital security. All of those things that you've said, "Oh, yeah, I know about that" you'll see orders of magnitude worse than you ever dreamed. Trust me... I spend a huge amount of my time wandering the back streets of the Internet, using a lot more than this fine tool.
Here's a rather benign example from the exploit.in forum, where I frequently poke my nose with the help of my trusty Google Translate tool - these are not current projects of mine... just a couple of things I came across in my meanderings:
From the exploit.in Russian language forum in late April (I have redacted a lot but you'll get the point):
Bank accounts for sale. See new arrivals
[three banks with total assets of the accounts worth $2.5 million plus a major credit card with accounts valued at $45K for bank withdrawals]
[banks and airlines]
[5 more banks in the US and elsewhere with accounts valued at more than $1 million. This gave a total of credentials worth over $3.5 million overall.
The actor is asking for anything from $50 to over $200 for these accounts.]
As to selling exploits... in late May an actor advertised the following on exploit.in:
Selling a source code of Socks back connect service [bot]
I will sell the source code of the Socks-Backconnect-Service to a single customer only.
[followed by a very detailed description of the capabilities]
The source code is included and complemented by detailed comments in Russian.
Price $11,000 USD. Not negotiable.
On your request and for extra price we can develop and implement:
- a Bitcoin billing system: 2,000 USD;
- a web page for automatic registration of new users: 500 USD;
A test C&C and proxy servers are now up and running for those of you who wish to check the system's functionality;
Technical support is 2,000 USD per month (in case you need it). All bug fixes are free of charge.
This is the enemy today. So how do we deal with that in an atmosphere of full - irresponsible - disclosure? The obvious answer is, "we can't". We need a better way.
There are a couple of things that need to be said at the start of any discussion of disclosures. First, we need to keep - and increase - the bug bounty programs, both current ones and proposed new ones - vendors, please take note!
Second, we cannot stop irresponsible disclosure by developing policies. Outsiders - those who are not employees, contractors or otherwise involved directly with the organization or are not legitimate users who have accepted a license agreement or terms of service - are not bound by our policies. And for those in other countries, bound or not, such agreements may be very hard to enforce. So, while policies are nice, don't depend upon them as a cure-all.
Another thing to keep in mind is that accessing a computer or system without permission is illegal in the US and most European countries, as well as a few others. So when a "researcher" from the US or EU starts poking around your web site in the name of security research, he or she may be breaking the law.
/soapbox on/ I need to digress just a moment and make what likely will be an unpopular statement of my personal position, but after 53 years in this field I feel that my two cents worth is, perhaps, justified.
The ideas of hats of different colors, ethical hacking and all of the other whitewash terms are nothing but misdirection and excuses for people to break the law without consequence. They are bogus security theater in my view and not worth beans. If you want to be a serious security researcher, be one. We need you. But get an education and some experience, perhaps with a good mentor - not just a week of boot camp leading to a test that says nothing about who you are, why you wanted the cert or how good you are at what you do - play by the rules and help the cause rather than hurt it.
Third, we should prosecute violations of the law by "researchers" vigorously. The issues pointed out in the article by Kuhn are valid and worrying. He calls our attention to a campaign - currently active, he tells us - targeting over 30 organizations over the past twelve months. After finding flaws in the systems, the actors sometimes use various techniques ("we can fix this for you") to extort payments north of $30K to reveal the flaws.
There are proposals in Congress to make security research something that requires licensing and regulation. I think that may be going too far. But - and make no mistake about this - if we as security professionals, especially threat hunters who are on the front line, do not find a way to police ourselves, lawmakers will do it for us. If your enterprise is breached by a "researcher" and you don't have a program or explicit permission in place, prosecute to the fullest extent of the law. This should be a zero-tolerance approach. And we all need to play... otherwise, as we see way too often, the bad guys will win again.
Here's your malicious domain list for the past week courtesy of Malware Domain List (https://www.malwaredomainlist.com/)
Figure 1 - Malicious Domain List for the past week (Click on image to view the entire document.)
So… until next time….
If you use Flipboard, you can find my pages at https://tinyurl.com/FlipThreats. Here I flip the interesting threat-related stories of the day – focused on the technical, all interesting stories and definitely on target.
"Introducing constructive vulnerability disclosures" - Marko Laakso, Ari Takanen, Juha Röning, University of Oulu
"Before We Knew It: An Empirical Study of Zero-Day Attacks in the Real World" - Leyla Bilge and Tudor Dumitras, Symantec Research Labs - https://users.ece.cmu.edu/~tdumitra/public_documents/bilge12_zero_day.pdf