Code-signing has become critical in software supply chains because it delivers confidence in software updates prior to installation and, crucially, offers cryptographic guarantees using public and private keys.
Here’s how it works: The software company signs every piece of code with the private key, and recipients check the validity with the public key.
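As a rough sketch of that flow, here is textbook RSA with toy, hard-coded parameters (tiny primes, no padding, nothing production-grade); it shows only the private-key-signs, public-key-verifies relationship the article describes:

```python
# Toy sketch of the code-signing flow using textbook RSA with tiny
# hard-coded parameters (p = 61, q = 53). Purely illustrative: real
# code signing uses full-size keys, padding, and vetted libraries.
import hashlib

n, e, d = 3233, 17, 2753  # public modulus, public exponent, PRIVATE exponent

def digest(code: bytes) -> int:
    # Hash the code and reduce it into the toy modulus range.
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % n

def sign(code: bytes) -> int:
    # The software company signs with the private exponent d.
    return pow(digest(code), d, n)

def verify(code: bytes, sig: int) -> bool:
    # Recipients check validity with the public exponent e.
    return pow(sig, e, n) == digest(code)

update = b"installer v1.2.3"
sig = sign(update)
assert verify(update, sig)                # genuine signature passes
assert not verify(update, (sig + 1) % n)  # any altered signature fails
```

In practice, the public key reaches recipients through a certificate chained to a trust store, so only the matching private key can produce signatures that installers accept.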
Unfortunately, once hackers gain access to a code-signing key, whether by stealing it outright or by compromising a build server and tricking it into signing malicious code, they can easily disguise their malware. A single piece of signed code is enough to gain back-door access to networks, and the range of potential victims is vast: every organization that receives software updates is vulnerable, as is every organization that regularly delivers them.
To avoid becoming a hub for an attack on customers and partners, any business supplying software updates must adopt more advanced approaches to code-signing security. Too frequently, suppliers' private code-signing keys are managed manually and independently by individual development teams, a complicated, inefficient, and potentially dangerous practice. There is no central oversight of how keys are used, and weak audit trails limit the ability to spot vulnerabilities or learn from mistakes.
This must change. Organizations need central visibility into all code-signing keys, including the ability to see who can use them, enforce global security policies, and obtain a central audit log of every operation. Many vendors place keys inside a hardware security module (HSM) or a cloud-based key management system (KMS) and require developers to sign through that system, so that only approved code gets signed. However, hardware-based security is increasingly hard to rely on in a virtualized world, and keeping keys with the same entity that holds the data, as happens with a KMS, doesn't make sense, since a breach of that entity exposes both at once. Furthermore, HSMs and general-purpose KMSs protect only against key theft, not key misuse. This distinction matters for code signing: an attacker doesn't need to steal the key at all, since a single fraudulent signature is enough to deploy malware globally under the guise of a valid update.
Secure multiparty computation (MPC) can help. MPC is a sub-field of cryptography that has been researched for decades in academia and has of late been widely deployed commercially. The technology lets a secret key be split, or shared, into two or more pieces placed on different servers and devices. Crucially, the pieces are never assembled at any time, not even during key generation or signing. Since all the pieces are required to learn anything about the key, yet they are never brought together, hackers must breach every one of the servers and devices to compromise the system.
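The splitting idea can be illustrated with the simplest possible scheme, XOR-based secret sharing. This is a toy stand-in: real MPC protocols compute signatures without ever reconstructing the key, whereas this sketch reconstructs it only to demonstrate that every share is needed:

```python
# Toy XOR-based secret sharing: each server holds one share, and all
# shares are required to learn anything about the key. NOTE: real MPC
# signs WITHOUT ever reconstructing the key; we combine here only to
# demonstrate the all-or-nothing property.
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    # The first n-1 shares are uniformly random; the last is chosen so
    # that XOR-ing all shares together yields the original key.
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)  # the signing key to protect
shares = split_key(key, 3)     # one share per server or device
assert combine(shares) == key  # all three shares recover the key
# Any strict subset of shares is uniformly random and reveals nothing:
assert combine(shares[:2]) != key
```

Because any subset short of the full set is statistically independent of the key, an attacker who compromises one or two of the three servers learns nothing at all.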
Strong separation between the devices (for example, different administrator credentials and environments) offers a very high level of key protection, removing the single point of failure and reducing the threat of insider misuse. By splitting the key further, it's possible to cryptographically enforce multiple checks in the code verification and signing cycle, so that users can sign code only after it has been scanned for malware and checked against signing policies. For example, one could define that developers can only sign code during weekdays, since it's unlikely a release will be made at 3 a.m. on a Saturday. With MPC, the policy is validated by multiple machines, making it very hard to bypass. It's even possible to define quorums, so that three of the five team members managing the release must approve before any signing takes place. In general, using MPC, organizations can set up strict maker/checker workflows, enforce policies, and gain a high level of protection against attackers injecting malicious components into software.
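A minimal sketch of such a policy gate follows, with hypothetical approver names and thresholds; in an MPC deployment, each server holding a key share would evaluate a check like this independently before contributing to the signature:

```python
# Hypothetical signing-policy gate: weekday-only releases, a clean
# malware scan, and a 3-of-5 approval quorum. Names and thresholds are
# illustrative assumptions, not any specific product's policy language.
from datetime import datetime

APPROVERS = {"alice", "bob", "carol", "dan", "eve"}  # hypothetical team
QUORUM = 3

def may_sign(now: datetime, approvals: set[str], scan_passed: bool) -> bool:
    on_weekday = now.weekday() < 5                     # Mon=0 .. Fri=4
    quorum_met = len(approvals & APPROVERS) >= QUORUM  # 3-of-5 check
    return scan_passed and on_weekday and quorum_met

# A Saturday 3 a.m. request is refused even with enough approvals:
assert not may_sign(datetime(2021, 7, 3, 3, 0), {"alice", "bob", "carol"}, True)
# A weekday release with a quorum and a clean scan is allowed:
assert may_sign(datetime(2021, 7, 1, 10, 0), {"alice", "bob", "dan"}, True)
```

The point of running the same check on several independent machines is that an attacker who subverts one policy server still cannot produce a signature: the remaining key-share holders refuse to participate.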
The growing sophistication of supply chain attacks also demands that enterprises look very closely at who they buy software from. First, they should adopt zero-trust security policies and implement vendor controls to curtail third-party movement inside networks. Second, they should insist that software vendors offer security assurances demonstrating they follow best practice for detection, response, and mitigation, and that they have a genuinely secure code-signing system. Rather than working through a vendor checklist, they should meet with the CISO or their equivalent and evaluate whether it's a vendor that really cares about security or one that will just try to do the minimum. (If they have 1,000 employees and no CISO, that already says a lot about them.) Finally, they should have incident response plans ready, because attacks inevitably happen.
Ultimately, whether supplying or purchasing software, organizations must take supply chain attacks seriously, and this includes a close look at the code-signing process and the way that keys are managed and protected. Legacy measures that focus solely on preventing key theft have proven glaringly insufficient, and supply chain attacks are so potentially costly that organizations can no longer afford to do only the basics.
Yehuda Lindell, chief executive officer, Unbound Security