The age of autonomy is upon us. While talk of autonomous cars in the not-too-distant future captures the imagination, the reality is that autonomous systems are very much with us in the present day. We see it in the robots that are packing parcels in an online retailer’s warehouse, or welding car parts on an assembly line. Autonomous systems are at work in virtually all vertical industries and more prevalent than we’d imagine in our everyday lives. They are being developed and deployed more rapidly than ever before.
It is well understood that security is foundational to the reliability of autonomous systems. The end users or beneficiaries of those systems need to be able to trust that they are at little or no risk of a malicious compromise. Developers are constantly reminded that they need to think about security from the start of the development lifecycle. Yet there remains a tendency to treat security as a feature or set of features, separate from the actual functioning of a system. We need to think holistically about both the development process and the final system, and about incorporating dual-purpose technologies that optimize system performance while simultaneously strengthening security. When we do that, security becomes integral to both the development process and the end product.
Consider, for example, the intersection of safety and security. If we think about the system-level services an autonomous robot requires, our first consideration is that autonomous robots are going to interact with humans, and therefore safety is paramount.
A secure system isn’t necessarily safety certified, but a safe system can never be insecure.
As we start designing in safety features, security is necessarily part of our thinking. Intrusions are not the only factor that could cause a system mishap resulting in harm, but they are certainly a critical one. In this case, we are not looking at security on a standalone basis, but rather in the context of what it takes to make the system safe.
Similarly, if we focus on figuring out ways to mitigate the risk of system failure, we will likely come up with solutions that improve security. When developing autonomous systems, we focus largely on the artificial intelligence and machine learning stacks and the underpinning software they execute on. Equally important are the sensors through which autonomous systems perceive the world around them – the equivalent of the eyes and ears that gather the sensory data that is then processed and interpreted through AI to inform system behavior. Often, we take these sensors for granted, yet any interference with their perception abilities, whether intentional or accidental, will impair the overall performance of the system.
We have all seen movies in which the bad guy manages to sneak past a security camera by putting a photo in front of it to dupe the guards. With autonomous systems that rely on sensors and cameras, we need to ask what the digital equivalent of that act is. We need to analyze and anticipate how someone or something might abuse or fool the system. This analysis will ultimately lead us to incorporate technologies that both improve system resilience and better secure the hardware and software combination.
Going a step further, with ML and AI, we can train systems to distinguish malicious fakery from accidental sensor impairments like splattered mud or scratches on a lens. For the operator of a fleet of future robots, knowing when and why a sensor or any part of a system is failing is extremely valuable for advanced diagnostics, prognostics, and preventive maintenance, all of which increase system uptime. What’s more, the very technologies that detect anomalies caused by wear and tear or the environment can also detect the anomalies introduced by bad actors in the system.
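The anomaly detection described above can be sketched in miniature. The following is an illustrative example only, not a production approach: it flags sensor readings that deviate sharply from a rolling baseline. The class name, window size, and z-score threshold are all hypothetical; a real system would use trained models and cross-sensor correlation to classify the cause of the anomaly.

```python
# Minimal sketch: flag anomalous sensor readings with a rolling z-score.
# Window size and threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev


class SensorAnomalyDetector:
    """Flags readings that deviate sharply from the recent rolling window."""

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def is_anomalous(self, value: float) -> bool:
        if len(self.readings) < self.readings.maxlen:
            # Not enough history yet; just accumulate the baseline.
            self.readings.append(value)
            return False
        mu, sigma = mean(self.readings), stdev(self.readings)
        anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        if not anomalous:
            # Only learn from readings judged normal, so a fault or attack
            # does not poison the baseline.
            self.readings.append(value)
        return anomalous


detector = SensorAnomalyDetector()
for v in [10.0, 10.1, 9.9, 10.2, 10.0] * 4:  # 20 normal readings
    detector.is_anomalous(v)
print(detector.is_anomalous(10.05))  # in line with the baseline -> False
print(detector.is_anomalous(50.0))   # sudden spike -> True
```

Whether the spike came from mud on a lens or a spoofed signal, the same detector surfaces it, which is exactly the dual-purpose property the article argues for.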
All of these beneficial features and functions require the secure collection of data.
Just as you can incorporate features that have the dual advantage of enhancing performance and strengthening security, you can also derive benefits from security functionality that extend beyond security. Core to the development and operation of any autonomous system, for example, are identity management, key management and access control. It is just as important to secure the development lifecycle as it is the final product. Secure identities are allocated to every architect, developer, test engineer, or operator of the system; while keys are allocated to every development station, every server involved in software orchestration, and every microcontroller and CPU in the autonomous robot itself. With secure access control and software management solutions spanning the entire development lifecycle, as well as every single product, operators are informed of exactly which software is running in which hardware, when and where, as well as any issues it may be having.
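The per-component identities and software management described above can be sketched in simplified form. This is an assumption-laden illustration, not a real provisioning scheme: the device names are invented, and the HMAC construction stands in for the certificate-based PKI and hardware-backed keys a real deployment would use.

```python
# Illustrative sketch: each controller in the robot holds its own identity
# and key, and software images are signed per device before installation.
# Device names and the HMAC scheme are assumptions for demonstration.
import hashlib
import hmac
import os

# Hypothetical registry: one secret key per microcontroller/CPU identity.
device_keys = {"nav-mcu-01": os.urandom(32), "arm-cpu-02": os.urandom(32)}


def sign_firmware(device_id: str, firmware: bytes) -> bytes:
    """Orchestration-server side: authorize an image for one specific device."""
    return hmac.new(device_keys[device_id], firmware, hashlib.sha256).digest()


def verify_firmware(device_id: str, firmware: bytes, tag: bytes) -> bool:
    """Device side: check before installing. The same record also tells the
    operator exactly which software is authorized for which hardware."""
    expected = hmac.new(device_keys[device_id], firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


image = b"robot-firmware-v1.4"
tag = sign_firmware("nav-mcu-01", image)
print(verify_firmware("nav-mcu-01", image, tag))                # True
print(verify_firmware("nav-mcu-01", image + b"-tampered", tag))  # False
```

The sign/verify ledger is security machinery, but as the next paragraph notes, the same records double as an operational inventory of what is running where.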
In other words, the power of an identity and key management system isn’t limited to security. It has the added benefit of providing valuable operational and diagnostic data, which cumulatively creates the storehouse of data needed to train artificial intelligence and machine learning systems, improving uptime, preventive maintenance, and ultimately the efficiency of the entire system. Once this base platform is in place, iterating on the existing product and securely adding features becomes simpler and faster, accelerating the delivery of value to end users.
We can expect development of autonomous systems to ramp up in 2020 and beyond. So let’s make a New Year’s resolution to stop thinking about security – more precisely, to stop thinking about security as something we add after we have the core functionality down. If we approach autonomous development projects holistically, and focus on optimizing system performance, reliability and manageability, we will build in features that help with security, and we will build in security that delivers added features.
By Matt Jones, General Manager of Automotive and Systems Architecture at Wind River