Network Security

Tier one incident expected, Government cyber-specs likely – NCSC

We can expect to see a cyber-security incident at category one level within the next few years, Ian Levy, technical director of the National Cyber Security Centre (NCSC), told delegates at today's Symantec Crystal Ball event, held just around the corner from the NCSC offices in Victoria, London.

For context, Levy explained that WannaCry was a category two incident – one of thirty tackled by the NCSC since it was set up 11 months ago – while the majority of the 500 incidents dealt with, from both the public and private sectors, were placed at category three level.

But Levy went on to say that while the initial response from those affected will be to claim the attack could not have been defended against, subsequent independent investigation is likely to show it was entirely preventable and caused by internal errors. The lesson for those who believed they had spent enough to protect themselves, yet were breached, is that you can't outsource risk. We need to manage cyber risk better, de-mystify it and make it just another risk category.

However, Levy rejected the argument that people are the problem, saying: “Techies build systems for techies, not ordinary people, and we need to have systems that are usable [by all]”. He added that the good news is that relatively small interventions can have a disproportionately good effect: we just have to make it harder for criminals to make money, and they'll go elsewhere.

Keeping on the theme of the role of humans, cyber-security expert Dr Jessica Barker suggested that the future would see the impact of mass social engineering, where selected, targeted information would be leaked on a large scale. While propaganda is not new, she says we were all caught off guard by how we consume and share news today, with 50 percent of the US public saying Facebook is their main source of news. And while only four percent say they trust that source, in the absence of other sources it clearly influences how they see the world. So far ‘fake news' has primarily targeted politicians, but Barker says we will see it targeting corporations and individuals, based on algorithms.

The challenge is thus to build up critical thought so that people check and verify what they read, and that journalists check the biases of their reports. Barker said that while regulation had its place, we shouldn't expect it to lead change – a point taken up later in the discussion (see below).

Moderator Duncan Gallagher, European lead of the Edelman Data Security & Privacy Network, interjected to ask: who is the arbiter of what is fake news?

Barker responded that opinion and fact should be clearly distinguished from one another, that opinion should not be presented as fact, and that people should be taught to be critical and to challenge what they read.

For Graeme Hackland, CIO at the Williams F1 Team, the future issue is whether we will allow artificial intelligence (AI) to call the shots before we know where the kill switch is, as we are heading into pervasive use of AI without knowing where it is going.

He gave the example of how machine learning (and later AI) is expected to take an increasing role in F1, with a ‘seat' on the panels where humans make decisions now. By 2020, Hackland suggests, no human will be involved in the decision to call a pit stop. This would make it vulnerable to someone accessing and corrupting the data. He cited a case where a wrong human decision had previously dropped a winning car to third place because a pit stop was called at the wrong time – but what would happen if a machine made the wrong decision?

Another problem in the automotive world is the transition period in which ‘legacy cars' driven by humans share the roads with newly introduced automated cars. Accidents involving automated cars have so far occurred because algorithms don't know how humans will behave, but the assumption is that, while there will be incidents and accidents, automation will be safer and more efficient. However, how do we ensure the data hasn't been tampered with, and what if our belief that AI will be right more often than humans turns out not to be true?

And if AI takes over from current F1 teams, who talk to each other, and bend the rules on occasion, will the AI learn to ‘cheat' and talk to the AI at other teams?  We have to be aware of risks we hadn't thought about.

Peter Wood, chief executive officer of First Base Technologies, who conducts red teaming exercises to test organisations' cyber-security, says his concern is that criminals will automate the entire kill chain. It's always easier to use the credentials of a legitimate user than to find a new route in, so soft-target companies are picked out and individuals within them are targeted with social engineering; once in the system, defences tend to be weak, making it easy to locate, identify, then exfiltrate, destroy or pollute the data found. Criminals are already automatically scanning ports for unpatched systems – but if this is extrapolated, they could use big data and powerful automation to offer the kill chain as a service. If the AI is good enough, and the voice simulation sounds human, the ‘Microsoft service desk' scams can be automated, along with all the other steps in the attack. And this criminal automation will release more of their human capital to be creative.

For defensive strategy, Wood agreed that people remain the key capability here too.

As might be expected, Darren Thomson, CTO and vice president of technology services at Symantec, had a more upbeat view of the future, while not denying that more, and more sophisticated, attacks can be expected. His focus was on leveraging mass deployment and reversing the attitude of a dis-economy of scale, whereby every new user or employee is viewed as an increased attack surface. Instead it should be like a neighbourhood watch, where every new member is an extra pair of eyes to better see where we are. Thus the future was one where we would become more reliant on non-experts making good IT decisions – as another speaker put it, crowd-funding of security.

One of the reasons for current poor decision-making is that people respond to emotional triggers and our software does not have these built in. ‘Do you want to do this?' carries no psychological trigger, so people simply click OK to get to the next screen.

With mass non-expert data being collated, in five years' time incident response could mean that the indication of a likely attack, drawn from that data, is itself the incident. So the issue is to change the engagement model to make people care.

The discussion moved from non-expert users to just users to just people, as we all use technology. While the consensus was that people should be regarded as a strength, and encouraged and informed to do the right thing, the one CIO on the panel struck a contrary note. “People are stupid and will do stupid things – including me. It's not enough to make the tech easier. And a dose of fear is not a bad thing if it makes people take responsibility,” declared Hackland. Though he went on to say that, of course, this was coupled with help, such as educating staff for their personal lives too. Both Levy and Wood rejected the idea that someone with a lack of tech understanding was stupid, making the point that most people are capable of learning tech but don't necessarily want to.

In response to a question about the growing issue of insecure IoT devices and what to do about it, it was noted that the economics of low-cost devices, and the public's preference for cheap rather than secure, meant there was little incentive for manufacturers to add security. Solutions proposed included education of the public, aided by more advice – similar to a food-ingredient traffic-light labelling system – noting heat generated, data sources, behaviour and so on.

SC Media UK asked the panel: if they didn't believe the market was incentivised to provide secure products, and consumers were not educated or inclined to demand them, surely the only way to achieve cyber-security would be to regulate, similar to health and safety requirements, not allowing the sale of unsafe devices or software? The response was concern about introducing a compliance/tick-box culture, though Levy did say that governments could provide a lead: he would be proposing that the UK government only purchase software that meets set standards, and while people may complain, the government specifying what it will buy is an effective way of changing the market.

Another questioner suggested that a major incident could be a driver for more regulation, but legislation was not viewed as an appropriate response if it encouraged a tick-box approach. Harnessing the wisdom of crowds was the preferred solution.
