The European Union has proposed new regulations to strengthen vulnerability disclosure programs (VDPs) for organizations as part of the EU Cyber Resilience Act (CRA). The modernization of cyber policy always offers some hope for progress, as does the inclusion of VDPs and the implied acceptance of those who hack in good faith as a fundamental component of European cyber resilience. The CRA, however, isn't without its flaws.
The CRA requires software manufacturers to notify the European Union Agency for Cybersecurity (ENISA) of vulnerabilities being actively exploited in the wild within 24 hours of becoming aware of them, and requires ENISA to distribute this information to all European CSIRTs and market surveillance authorities. In software development, 24 hours is not a lot of time to fix, test, and deploy a patch, meaning many flaws that fall within the scope of the Act would remain unpatched at the time of notification. Dozens of government agencies could thereby gain access to a real-time database of software with unmitigated vulnerabilities. Rushing the disclosure process in this way could spread knowledge of unmitigated vulnerabilities widely and create a tempting target for malicious actors.
To bring attention to this problem, I joined a diverse group of cybersecurity experts in sending an open letter on October 3 to officials at the EU Commission and European Parliament. The letter urged the commissioners to reconsider the VDP requirements because of several potential risks, including:
- Risk of exposure to malicious actors: The breach and subsequent misuse of government-held vulnerabilities is not a theoretical threat; it has happened to some of the best-protected entities in the world. The CRA does not require organizations to disclose a full technical assessment, but even the knowledge of a vulnerability's existence is often enough for a skilled person to reconstruct it.
- Chilling effect on good faith researchers: Disclosing vulnerabilities prematurely may interfere with the coordination and collaboration between software publishers and security researchers, who often need more time to verify, test, and patch vulnerabilities before making them public. As a result, the CRA may reduce the receptivity of manufacturers to vulnerability disclosures from security researchers and may discourage researchers from reporting vulnerabilities if each disclosure triggers a wave of government notifications.
- Misuse of vulnerability information: The absence of restrictions on offensive uses of vulnerabilities disclosed through the CRA and the absence of transparent oversight mechanisms in almost all EU Member States can open the doors to potential misuse.
Our coalition of security experts has advocated for a responsible and coordinated disclosure process that balances the need for transparency with the need for security. We recommended that the CRA adopt a risk-based approach to vulnerability disclosure, taking into account factors such as the severity of the vulnerability, the availability of mitigations, the potential impact on users, and the likelihood of broader exploitation.
The letter recommends that agencies be explicitly prohibited from using or sharing vulnerabilities disclosed through the CRA for intelligence, surveillance, or offensive purposes. We also advised authorities to extend the reporting deadline to 72 hours after mitigating software patches are made available.
Lastly, we said that the CRA “should not require reporting of vulnerabilities that are exploited through good faith security research. In contrast to malicious exploitation of a vulnerability, good faith security research does not pose a security threat.”
Ethical hacking has emerged as a critical component of the overall strategy for vulnerability disclosures worldwide. Putting restrictions on the security community by requiring premature disclosure can only serve to weaken the security posture for businesses, governments, and consumers.
Over the past decade, ethical security researchers have protected us against cybercrime by offering visibility into the security of software, services, and infrastructure. Requiring premature notifications would set a dangerous precedent and impose unacceptable risks on our privacy and data security.
Unfortunately, as they're now written, these new rules could handcuff the security community by restricting ethical security researchers' ability to conduct VDP testing, while inviting attacks on exposed, unpatched vulnerabilities.
Casey Ellis, co-founder and CTO, Bugcrowd