Almost exactly one year ago, the world experienced two destructive cyberattacks built on offensive cyber tools that had been stolen from the National Security Agency (NSA) and leaked to the public. In May 2017, the WannaCry ransomware hit over 300,000 computers in 150 countries. One month later, the NotPetya attack hit the computer systems of companies and governmental entities across the globe, causing millions of dollars in damages. These attacks exploited numerous vulnerabilities and exposed both the slow response time of the targeted countries and the lack of effective information-sharing mechanisms between responsible agencies, mechanisms that could have mitigated the severe damage the attacks caused.
A notable feature of these attacks is that those responsible, North Korea and Russia, reused leaked offensive tools originally developed by the NSA. The investigation into WannaCry ultimately revealed that the attackers had used an exploit called EternalBlue, originally developed by the NSA. NotPetya used a variant of the same exploit, which was still wreaking havoc a year later: in February 2018, security researchers at Symantec reported that an Iran-based hacking group had used EternalBlue as part of its operations.
This situation, in which technologically advanced countries invest heavily in developing offensive cyber capabilities only to have those very tools stolen and reused, raises three questions of urgent policy relevance.
First, are states going to start reusing each other’s leaked cyber tools as a matter of course? The ability to reuse stolen cyber tools may signal the beginning of a shift in the distribution of international cyber power, as weaker actors (including non-state actors) become increasingly able to use sophisticated malware to cause global damage and possibly to target the cyber weapons’ original designers. Countries that are less technologically advanced and less vulnerable to cyberattacks may find the reuse of stolen vulnerabilities appealing, both for offensive activity abroad and for use against their own citizens.
Second, is it possible to prevent the leaking of cyber tools in the first place? There aren’t many reasons to be optimistic. First, there’s the insider threat problem—a particularly thorny issue given the extensive use of contractors and the risk that they steal or mishandle sensitive information they were exposed to during their service, or that rival agencies pose as contractors. A second and possibly more fundamental reason is that it is cheaper to use stolen vulnerabilities than to find new ones. As new exploits like EternalBlue are exposed, the costs of reusing stolen cyber vulnerabilities to conduct attacks keep falling while the benefits remain high. States with offensive capabilities know that getting their hands on unique vulnerabilities developed by their adversaries will let them launch sophisticated attacks without pursuing a lengthy and costly R&D process. This makes the reuse of cyber tools especially appealing and may motivate various actors to concentrate their efforts in this direction. As long as the benefits of using stolen vulnerabilities exceed the costs, these vulnerabilities will remain an attractive target.
A third question for policymakers is whether the theft and reuse of cyber vulnerabilities should change the way states handle them. States should recognize the risk that vulnerabilities leak into the open and develop information-sharing mechanisms to address it, both among the intelligence agencies themselves and between the intelligence community and the tech industry. Once it is recognized that a vulnerability, exploit or tool has been stolen, the relevant agency should immediately share the specific data with all the agencies and firms that might be affected, as the NSA is believed to have done. To minimize security concerns and avoid exposing sensitive sources and methods, there is no need to share the precise reasons for issuing such warnings. It is crucial, however, to share the information itself so that the vulnerabilities can be patched as soon as possible, lowering future risks.
The U.S. government has its own vulnerabilities equities process (VEP), a policy that sets out when and how the government discloses to vendors the computer vulnerabilities it detects or acquires. Several countries across Europe have similar VEP mechanisms, and an EU-wide VEP is being considered. But as more and more countries around the globe develop offensive cyber capabilities, and in light of the global damage caused by WannaCry and NotPetya, there is an urgent need for an international vulnerability disclosure mechanism akin to a global VEP.
Formulating an effective response to this growing form of cyber weapon proliferation is clearly the responsibility of national governments. As Microsoft President Brad Smith put it: “The governments of the world […] need to take a different approach and adhere in cyberspace to the same rules applied to weapons in the physical world… This is one reason we called for a new ‘Digital Geneva Convention’ to govern these issues, including a new requirement for governments to report vulnerabilities to vendors, rather than stockpile, sell, or exploit them…” Given the continued development of these offensive capabilities, and in light of the success of recent attacks, the theft and leak of offensive cyber weapons and their subsequent reuse are only likely to increase, potentially creating new international tensions among governments and between governments and the tech industry.
Authored by: Gil Baram, head of research at the Yuval Ne’eman Workshop for Science, Technology and Security and a research fellow at the Blavatnik Interdisciplinary Cyber Research Center.