IBM Security unveiled an open-source toolkit at RSA 2018 that will allow the cybersecurity community to test its AI-based security defenses against a strong and complex opponent, in order to help build resilience and dependability into those systems.
The toolkit, called the Adversarial Robustness Toolbox, goes beyond the usual collection of attacks used to test an AI system's robustness, Sridhar Muppidi, IBM Fellow and VP and CTO of IBM Security, told SC Media at RSA this week. The toolbox has been released on GitHub and is available for download.
“So far, most libraries that have attempted to test or harden AI systems have only offered collections of attacks. While useful, developers and researchers still need to apply the appropriate defenses to actually improve their systems,” he said.
The toolbox runs multiple attacks against an AI system, and the security team tasked with hardening that system can then choose the most effective defense. It works by trying to trick the AI with intentionally modified external data. Muppidi said the data sent against the AI is made “fuzzy,” causing the AI to misclassify it.
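This kind of “fuzzy,” intentionally perturbed input can be illustrated with a toy sketch (this is not the toolbox's actual code; the weights, sample, and step size below are all hypothetical): a small nudge to the input, chosen against the model's decision rule, flips a simple linear classifier's prediction.

```python
import numpy as np

# Hypothetical linear classifier: predicts class 1 when w.x + b > 0.
# These weights stand in for a trained model's parameters.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(np.dot(w, x) + b > 0)

# A sample the model classifies as class 1 (score = 1.1 > 0).
x = np.array([2.0, 0.5, 0.0])

# Perturbation in the style of the fast gradient sign method:
# for a linear model, the gradient of the score w.r.t. the input
# is simply w, so stepping against sign(w) lowers the score.
eps = 0.8
x_adv = x - eps * np.sign(w)

print(predict(x))      # original input: class 1
print(predict(x_adv))  # perturbed "fuzzy" input: class 0
```

The perturbed input differs from the original by at most 0.8 per feature, yet the prediction flips; image-classifier attacks work the same way with perturbations small enough to be invisible to a human.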
In a blog post earlier this year, Brad Harris, a security researcher with IBM X-Force, explained that the party attacking the AI, called the generator, tries to pull the correct answer out of the AI defense, called the discriminator, by essentially playing a game of 20 questions.
“When the discriminator rejects an example produced by the generator, the generator learns a little more about what the good example looks like. With each attempt, the discriminator sends a signal back to the generator to tell it how close it is to an actual example. In other words, the discriminator leaks information about just how close the generator was and how it should proceed to get closer. In an ideal situation, the generator will eventually produce examples that are as good as the discriminator is at distinguishing between the real and generated examples,” he wrote.
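The feedback loop Harris describes can be sketched with a toy example (hypothetical values, not code from the toolbox): the generator never sees the “real” example directly, only the discriminator's score, yet that leaked signal is enough to steer it onto the target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" example the discriminator has learned to recognize.
real = np.array([3.0, -1.0])

def discriminator(sample):
    """Score a sample: higher means it looks more real.
    The score itself is the information leaked back to the generator."""
    return -np.sum((sample - real) ** 2)

# The generator starts from a random guess and refines it using only
# discriminator feedback, via a finite-difference gradient estimate --
# it has no direct access to `real`.
guess = rng.normal(size=2)
step, probe = 0.1, 1e-3
for _ in range(200):
    grad = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = probe
        grad[i] = (discriminator(guess + d) - discriminator(guess - d)) / (2 * probe)
    guess += step * grad   # move in the direction the discriminator rewards

print(np.round(guess, 2))  # converges to the "real" example: [ 3. -1.]
```

Each loop iteration is one round of the “20 questions” game: the discriminator's score tells the generator how close it is, and the generator uses that signal to get closer.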
Muppidi believes releasing the toolkit as open source is extremely important, and that the cybersecurity industry must work together: collaborative defense, he said, is the only way for security teams and developers to get ahead of the adversarial AI threat.