My planned blog for this week was the third installment of the DiamondFox saga. Sadly, world events have conspired to encourage me to put that off for next week. If this week's posting sounds like a rant I apologize, but this simply is too important to let go without comment.
I have been reading the commentary on Grizzly Steppe and my immediate reaction is that the bloggers, for the most part, are displaying a woeful ignorance of the cyber forensic process.
The commentary is blatantly partisan, self-serving, or both. Stories that focus on the "outlandish names" of Russian cyber operations, or that claim the 13-page JAR released last week is deficient (and can therefore be discounted), also show a woeful ignorance of the national cyber intelligence process. Of the stories I read, only one really hits the point: we need to look more closely at how we protect our cyber infrastructure. Most of the rest are from "cybersecurity pros" nobody has heard of, voicing opinions based upon "analysis" of a single data point.
Before I get into a somewhat deeper analysis of what we know publicly, let's get a few things on the record.
First is the myth that the efforts of 17 intelligence agencies, and who knows how many private sector companies and analysts, are summed up in toto in a 13-page report. In case it's not obvious, that is just part of the unclassified piece. If that were all the investigators had, I would agree it's a bit shaky, but you'd have to be living under a rock to think that the publicly available JAR is all there is. I know many of the people who did this work – in and outside of government – and none of them, believe me, live under rocks. These are competent, careful, skeptical analysts. I take them seriously. And I take their consensus seriously.
Even so, there is a lot there and I'll take that up presently. Second, a single data point does not an analysis make. And the fact that there are some false positives does not invalidate the total analysis. I am quite sure that every one of the cybersecurity pundits - assuming they actually have the experience they claim - has seen at least one false positive from an IDS in his or her career. We strive to get rid of them but the little buggers pop up from time to time to confuse even the best forensic analysts. Clearing them is part of the forensic process. That process, by the way, is based upon the scientific method. We'll get to that presently, as well.
Third – and, perhaps, most important – as a nation we have a whole lot more to do than engage in political warfare and self-aggrandizement when the issue is the cybersecurity of our critical infrastructure and associated private sector organizations. So let's knock off the noise (one of the down sides of the internet is that everyone can become an immediate self-appointed expert) and put our collective heads together to make our nation more cyber-secure. All of that said, let's take a look at what we have.
The JAR is interesting as background, but the meat is in the IoCs. While it is true that some of the IoCs are somewhat less than useful, it's the whole picture that we need to look at. This is a forensic process. The forensic process is born out of two things: the first is the scientific method and the second is Locard's transfer principle. The scientific method tells us to create a hypothesis and attempt to falsify, or disprove, it. Locard tells us that if two things touch they each leave something of themselves behind. Both of these principles apply to the Grizzly Steppe analysis. Because the hackers with whom we deal routinely are quite clever we need to expect such things as obfuscation, false trails, and attempts to derail an effort at attribution. The solution is data – lots and lots of data.
And, while attribution is a hard problem, it is not an impossible one. There are several levels of granularity in attribution, beginning at the command-and-control server and going all the way to the "butt-in-the-seat." Each increase in granularity is increasingly difficult. That's why a good cyber forensic analyst may use cyber intelligence findings to help interpret or add context to the purely forensic findings.

So, to get us started, we form a null hypothesis: a statement that the two groups to be measured have absolutely no relation to each other. We then need a couple of statements to test. We'll begin with the obvious: there is no evidence... to support that Russia hacked American computers. It is important to note that we picked this because we are going to attempt to falsify it, not because we want to prove it. It is, theoretically, not possible to prove a hypothesis: no matter how large the number of affirmative examples, it takes but one counter-example to disprove it.
There is an old saw that helps us understand the futility of trying to prove a hypothesis:
12 inches is a foot
A foot is a ruler
The "Queen Mary" was the ruler of the Seven Seas
Fish live in seas
Fish have fins
The Finns hate the Russians
Russians are red
Fire engines are red
That's why fire engines are always rushin'.
Clearly there are many false premises here but, on the surface, it appears that we have proved why fire engines are always rushin'. But what if we break the chain – falsify a premise? For example, perhaps the Finns really don't hate the Russians. Or, perhaps, Russians aren't really red. And, what happens if you have a yellow fire engine?
So, all we need is a single counterexample to our null hypothesis to cast doubt on the position that, because a single piece of publicly available malware might have been used to breach a system, the Russians could not have hacked the election. Now we start taking a closer look at the evidence in the JAR. Wordfence has done us the favor of collecting much of the relevant data into a single collection on GitHub. I am puzzled by this because, reading their analysis, these folks – who certainly know their business – appear to have succumbed to the single-data-point myth.
Nonetheless, we found their data useful in that it did a lot of our pre-work for us. For another complete – or mostly complete – collection of indicators go to the AlienVault OTX.
I took the GitHub data and built it into a spreadsheet with a whole lot of other data points so that I would have everything in one place. My first task – sort of a top-level sanity check – was to build a link analysis chart using the i2 Analyst's Notebook tool. I associated IP addresses with organizations and then did a betweenness calculation, which identifies the gatekeeper entities that control access to different parts of the network. There were, of course, lots of results, so I took the top five. This gave me the organizations Yota, OVH SAS, FlokiNET ehf, NForce Entertainment B.V. and OVH Hosting. Recognizing that these could be intermediate organizations rather than the source of the hackers' hosts, I went ahead anyway to see if a pattern emerged.
Yota is a Russian mobile broadband service. OVH is a French provider. FlokiNET is located in Finland, Iceland and Romania. NForce is a co-lo and cloud provider in the Netherlands. While none of these is conclusive, Yota is, by far, the most connected according to our betweenness calculation. As a further test, we ran the degree test against the same data; Yota came up by far the most active based on links to other entities. We did a final test by adding the Eigenvector measurement, which gauges the influence of an entity on the other entities in the network. Yota came out on top by a country mile again.
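For readers without access to i2 Analyst's Notebook, the same three centrality tests can be reproduced with an open library such as networkx. The sketch below is purely illustrative: the IP/host-to-organization pairs use documentation address ranges, not the actual JAR indicators.

```python
# A minimal sketch of the link-analysis step above, using networkx in
# place of i2 Analyst's Notebook. The pairs are invented placeholders.
import networkx as nx

edges = [
    ("198.51.100.1", "Yota"), ("198.51.100.2", "Yota"),
    ("198.51.100.3", "Yota"), ("wimax-client.yota.ru", "Yota"),
    ("198.51.100.1", "wimax-client.yota.ru"),
    ("203.0.113.1", "OVH SAS"), ("203.0.113.2", "OVH SAS"),
    ("192.0.2.1", "FlokiNET ehf"),
]
orgs = {"Yota", "OVH SAS", "FlokiNET ehf"}

G = nx.Graph()
G.add_edges_from(edges)

# Rank the organizations by the three measures used in the analysis:
# betweenness (gatekeepers), degree (activity), eigenvector (influence).
for name, metric in [
    ("betweenness", nx.betweenness_centrality(G)),
    ("degree", nx.degree_centrality(G)),
    ("eigenvector", nx.eigenvector_centrality(G, max_iter=1000)),
]:
    top = max(orgs, key=lambda o: metric[o])
    print(f"{name:12s} top organization: {top}")
```

With real IoC data you would load the full spreadsheet of IP-to-organization associations in place of the toy edge list; the ranking logic is unchanged.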
As one would expect, there is no cross-over between organizations (no IP appears in more than a single organization). We haven't proven anything yet, but we have some tantalizing connections that suggest Russian involvement. The i2 graphs are shown in figure 1. The chart is hard to read and for that I apologize.
Figure 1 - Top Organizations Based Upon Hosting Parameters
Now, let's dig a bit deeper into Yota, since that is a Russian organization (and the winner in the analysis sweepstakes). We'll take all of the Yota IPs that are called out in the full list of data and cross them to hosts. I found that they all are wimax-client.yota.ru. If I run that host through CyMon, I find 317 active IPs associated with it. Digging into that host in a variety of other open sources shows that it is notorious for malware. There also are several IPs that show up both in CyMon and in the JAR's indicators.
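Crossing IPs to hosts is easy to script. Here is a minimal sketch in which a plain reverse-DNS lookup stands in for the CyMon queries; the resolver is injectable so the grouping logic can be exercised without live network access.

```python
# Group IoC IPs by the host they resolve to. reverse_lookup is a
# best-effort PTR query; pass a different resolver to use another source.
import socket
from collections import defaultdict

def reverse_lookup(ip):
    """Best-effort PTR lookup; returns None when no reverse record exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror, OSError):
        return None

def cross_ips_to_hosts(ips, resolver=reverse_lookup):
    """Map hostname -> list of IoC IPs; the None bucket collects IPs
    with no reverse DNS, which is itself an interesting signal."""
    hosts = defaultdict(list)
    for ip in ips:
        hosts[resolver(ip)].append(ip)
    return dict(hosts)
```

Running the JAR's Yota IPs through a function like this is how one discovers that they all come back as a single host.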
Activity on the various IPs ranges from a week ago to over a year back. Clearly this is an important host, it is clearly Russian, and it has clearly been in business for some time. There is one more puzzling point that makes us curious: Yota retired WiMAX quite some time ago, and yet a WiMAX-named host is active in these hacks. Why?
We're getting close to falsifying our null hypothesis. But we're not there yet.
Our next big one is OVH from France. That implies, of course, that it could be the French rather than the Russians who hacked U.S. interests. However, I found something curious when I started looking at OVH. Many, though not all, of the IPs are TOR exit nodes, and they are spread all over the world, implying either TOR nodes or hijacked hosts. OVH is not a valid test for attribution at this point.
FlokiNET is next. Interestingly, with only a couple of exceptions, there are no hosts associated with the FlokiNET IPs. The first host – srv6.tellsyourstory1.com – is registered by Name.com and the registrant is firstname.lastname@example.org. Not very useful at this point. The only other host is bti.mahronis.com. That domain hosts an IP that is known for malicious activity going back five months but there is no Russian attribution there.
Checking the rest of the FlokiNET IPs, we find that they are hosted in Romania. Checking the IP block (from FlokiNET) 18.104.22.168, we find no reverse DNS. That may be important, since IPs used by hackers often do not have a reverse lookup available because they are – often – created on the fly. This block covers many of the IPs that are not associated with the two domains above.
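Determining which indicator IPs fall inside a given provider block is a one-liner with Python's standard library. The block and addresses below are documentation-range placeholders, not the actual FlokiNET allocation.

```python
# Which IoC IPs belong to a provider's network block?
import ipaddress

def ips_in_block(ips, cidr):
    """Return the indicator IPs that fall inside the given CIDR block."""
    net = ipaddress.ip_network(cidr)
    return [ip for ip in ips if ipaddress.ip_address(ip) in net]

# Hypothetical indicator list checked against a placeholder /24.
iocs = ["203.0.113.5", "203.0.113.77", "198.51.100.23"]
print(ips_in_block(iocs, "203.0.113.0/24"))  # → ['203.0.113.5', '203.0.113.77']
```

Combined with the reverse-DNS check above, this lets you flag every IP in a suspect block that lacks a PTR record.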
As a possible aside – certainly something for further investigation – we have an interesting anomaly. It is known that Russian state actors cross over into cybercrime. In other words, some of the individual actors who hack in support of Russian organizations also hack in support of self-enrichment from cybercrime. Refer back to my blogs on FlokiBot and note the similarities in naming. It could mean nothing, but a competent analyst would find their curiosity aroused.
Is this all? Certainly not... for example, one of the C2s called out in the JAR not only serves the RAT identified by the analysts, it serves over 200 cybercrime malwares as well, including at least one ransomware. The other C2 with a specific malware hash warning serves a total of 57 unique malwares. If you have the time to go through the JAR in detail, using some creative imagination to think through how the IoCs are (or are not) connected, you'll draw a far more defensible conclusion than most of what I've seen by pundits so far.
So, where are we? Have we cast doubt on our null hypothesis? Certainly we have shown that IoCs uncovered and documented in the JAR lead back to Russia. In fact, they lead back with a history of up to two years of malicious activity that is attributable to the Yota organization alone. Yota is Russian-owned. One of the owners, businessman Denis Sverdlov, was, it appears, appointed Deputy Minister of Communications and Mass Communications of Russia. And, additionally, there is a wealth of un-mined data in the other IoCs that we can work through. This is an example of going beyond the bits and bytes when developing an intelligence profile.
To wrap this up, I certainly would not make any comments denigrating the veracity of the public JAR on Russian hacking against the United States. I think that – putting partisanship and self-aggrandizement aside – an objective analyst would have to conclude that there is, at least, smoke and very likely fire in the accusations against Russia. I would absolutely like to see someone who has done a more complete analysis than just about anything I've seen in the media so far ring in here and add to this.
And, please: stick to the forensics, don't base your argument on a single data point, and back up your analysis with facts rather than suppositions. Our job as forensic investigators/threat hunters is not to bash the current president's actions or the incoming president's politics. Our job is to find the truth. And the truth is in the forensics.
Now here are your numbers for this week…
- Dr. S
Tools I used this week:
- Maltego, Classic Edition
- Cisco Investigate
- Niksun NetDetector Live
- i2 Analyst's Notebook
- AlienVault OTX
- Malware Domain List
Figure 2 - Top 5 Command and Control IPs Hitting the Packetsled Sensor on our Honeynet
Figure 3 - Top 10 IPs Hitting the Packetsled Sensor on our Honeynet
Figure 4 - This Week's New Malicious Domains from MDL
Figure 5 - Top Attack Types as Seen by our Niksun NetDetector against our Honeynet