Today’s columnist, Asmae Mhassni of Intel, offers nine principles driving zero-trust for microprocessors and silicon.

Forrester Research coined the term zero-trust back in 2010, and over the last couple of years, it has become a hot topic in the security world. A new Ponemon Institute study found that organizations are just getting started when it comes to hardware security capabilities for zero-trust.

According to the survey, only 36% of respondents say their organizations use hardware-assisted security solutions. Of that 36%, only 32% say they have implemented a zero-trust infrastructure strategy (with 51% saying hardware security capabilities have been incorporated in their zero-trust strategies). Yet 85% say hardware and/or firmware-based security has become a high or very high priority.

Driven heavily by the rise of hybrid work, zero-trust refers to a proactive and pervasive approach to network security that’s designed to minimize uncertainty by shifting trust from physical connectivity or proximity to trust based on authenticating every access. Simply put, it means that the company doesn’t trust anyone by default from inside or outside the network, and every request for access to network resources must be verified.

In hardware, many security paradigms are still based on physical connectivity. That means when access gets asserted on a hardware bus, it’s often assumed legitimate. Hardware designers are typically not taught to question messages; they assume a bit or message came from the correct party. But as attackers get more sophisticated with physical attacks, we need to question those assumptions, just as the paradigm of authenticating once to the network has been questioned.

At the system level, hardware vendors have been contributing technology – such as DMTF’s Security Protocol and Data Model (SPDM) – that allows for the authentication and identification of hardware and the firmware it’s executing. These types of innovations help ensure secure collaboration between elements in hardware (similar to how TLS and HTTPS secure web-transactions).
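To convey the flavor of this kind of component-level authentication, here is a toy challenge-response sketch. The pre-shared key and function names are illustrative assumptions only; real SPDM uses certificate-based identities and firmware measurements rather than a bare shared secret:

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared device key, for illustration only.
DEVICE_KEY = secrets.token_bytes(32)

def device_respond(challenge: bytes, key: bytes) -> bytes:
    """Device proves possession of its key by MACing the host's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Host recomputes the expected response and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)  # a fresh nonce per exchange defeats replay
response = device_respond(challenge, DEVICE_KEY)
assert host_verify(challenge, response, DEVICE_KEY)
```

The key point mirrors the network analogy in the article: the host trusts the device not because it is physically attached to a bus, but because it proved knowledge of a credential against a fresh challenge.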

As technologies such as SPDM are more widely adopted, components on these systems can mutually authenticate and establish secure connections. But zero-trust concepts shouldn’t stop at the network or system level. The entire industry needs to work to ensure these concepts are applied all the way down inside the silicon of devices.

Hardware development has different characteristics from software, but the two share many security principles. Let’s explore what security professionals should consider and look at nine principles driving zero-trust in microprocessors and silicon:

  • Fail safely and securely: It’s important to ensure that error conditions don’t leave secrets lying around. The classic anti-pattern in hardware is the cold-boot attack, where secrets remain recoverable from memory after the power is cut. Rather than assume data will disappear when the system gets turned off, explicitly zero out the memory contents. The persistence of data also means that we need to use encryption to protect the data in case it’s not deleted at the expected time.
  • Complete mediation: Check every single access to confirm legitimacy. In hardware, this might mean making memory access go through appropriate memory management checks along the entire path from application to memory and back. Security teams can do this with proper authentication of memory access commands or authenticated encryption.
  • Rule of least privilege: Minimize the privileges any hardware agent has. In hardware, it’s often appealing to give agents additional privileges just in case; companies usually justify this as insurance against a mistake that would force a new fabrication run. However, it’s important to minimize privilege “creep.” Security teams can do this in hardware by implementing access control models that manage access between semiconductor intellectual property (IP) blocks and protect privileged resources: map assets to access rights and categorize the entities permitted to observe (read) or change (write or reset) the state of each asset. For example, a power management controller should only have access to power management controls, not to security resources such as security keys.
  • Separation of duty: Agents should have their own purpose in the design. Since hardware real estate costs money, it’s often appealing to overload an agent with multiple duties, but this complicates validating and reasoning about the security posture. Security teams can avoid this by isolating the different processors, memories, and devices, and by separating security functions and workloads from non-security ones.
  • Least common mechanism: Security teams should separate out security functions from others. For instance, it’s a common design pattern to have a shared utility bus that transports sideband messages across designs, given the expense of on-die wires. If that same bus carries user messages and secrets, it’s an attack point. It’s important to account for every shared mechanism in the threat model and design it with care, so it doesn’t unintentionally compromise security.
  • Secure the weakest link: Protect the design’s weakest part. In hardware, it’s almost always debug. Given the evolution of functional debug and structural testing in the industry, hardware debug features often want access to almost every single transistor on a design. That’s directly at odds with security mechanisms that want to block access to part, or all, of the design.
  • Defense-in-Depth: Build multiple walls. This can mean blocking access to a resource even when it seems like it should be open. For example, even when a team has debug access to a chip, production keys should remain inaccessible. Providing multiple layers of protection means that if any one layer gets bypassed, others are still in effect.
  • Simplicity: Invent simpler architectures. Simpler mechanisms are harder to come up with, but easier to implement, validate, and secure. It’s a universal truth for hardware and software. Avoid creating custom crypto protocols and algorithms; use verified, industry-standard crypto instead, since home-grown crypto can add unnecessary trustworthiness issues to hardware designs. Another example: minimize the trusted components, or TCB (Trusted Computing Base). It’s easier to maintain and verify a simple and minimal TCB.
  • Psychological acceptability: Make security mechanisms easy to use. If the security architecture becomes too onerous, it’s tempting for hardware designers to give hardware blocks super-user-like powers. Designing for usability and seamless security from the get-go makes hardware easier to integrate and minimizes the burden on users. If features are not user-friendly, people will find a way around them or bypass them altogether. For example, it’s easy to generate a hash of code, but it’s often not obvious to users why one random-looking string of digits (a hash value) is trustworthy and another isn’t.
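The rule of least privilege above can be sketched as a default-deny access-control matrix. The agent and asset names here are hypothetical, and a real implementation would live in RTL or firmware fabric logic rather than software, but the shape of the check is the same:

```python
# Toy access-control matrix: hardware agents mapped to the assets
# they may touch and the operations permitted. Anything unlisted is denied.
ACCESS_MATRIX = {
    "power_mgmt_controller": {"power_controls": {"read", "write"}},
    "security_engine": {
        "security_keys": {"read", "write"},
        "power_controls": {"read"},
    },
}

def is_allowed(agent: str, asset: str, operation: str) -> bool:
    """Default-deny check: access is granted only if explicitly listed."""
    return operation in ACCESS_MATRIX.get(agent, {}).get(asset, set())

# The power management controller may touch power controls...
assert is_allowed("power_mgmt_controller", "power_controls", "write")
# ...but is denied access to security keys, even for reads.
assert not is_allowed("power_mgmt_controller", "security_keys", "read")
```

The design choice worth noting is the default: an agent or asset missing from the matrix yields a denial, so privilege must be granted deliberately rather than revoked after the fact.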
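The "fail safely" principle's advice to explicitly zero out secrets can be illustrated in a few lines. This is only a software analogy: Python can't guarantee the interpreter made no hidden copies, and real zeroization happens in hardware or firmware on reset and error paths, but it shows the habit of wiping rather than waiting:

```python
import secrets

def wipe(buf: bytearray) -> None:
    """Overwrite a secret in place instead of assuming it will disappear."""
    for i in range(len(buf)):
        buf[i] = 0

key = bytearray(secrets.token_bytes(32))  # secret held in mutable storage
# ... use the key for some operation ...
wipe(key)  # explicit zeroization on every exit path, including errors
assert all(b == 0 for b in key)
```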
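On psychological acceptability, the hash example above suggests a usability fix: let tooling compare digests so users never have to eyeball random-looking strings. A minimal sketch, where the firmware image bytes and the published known-good digest are invented for illustration:

```python
import hashlib
import hmac

# Hypothetical known-good digest, as a vendor might publish for an image.
KNOWN_GOOD = hashlib.sha256(b"firmware-image-v1").hexdigest()

def verify_firmware(image: bytes, expected_hex: str) -> bool:
    """Compare the image's digest to the known-good value in constant time,
    so the user sees a pass/fail answer rather than two hex strings."""
    actual = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(actual, expected_hex)

assert verify_firmware(b"firmware-image-v1", KNOWN_GOOD)
assert not verify_firmware(b"firmware-image-v2", KNOWN_GOOD)
```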

While just a starting point, these principles are very important when developing robust technologies that improve security and support a zero-trust infrastructure. To learn more about zero-trust in hardware, check out the Common Weakness Enumeration for Hardware (HW CWE SIG) or the Distributed Management Task Force.