Google’s deep learning project researchers demonstrated that neural networks could develop their own encryption methods without having been “taught” cryptographic algorithms.

A team of Google Brain researchers used neural networks to develop artificial intelligence (AI)-generated encryption for the information the networks process.

In a research paper entitled “Learning to Protect Communications with Adversarial Neural Cryptography,” Martín Abadi and David G. Andersen of Google's deep learning project demonstrated that neural networks could develop their own encryption methods without having been “taught” cryptographic algorithms.

Two of the neural networks, code-named Alice and Bob, developed an ability to prevent a third network, named Eve, from “eavesdropping” on their communication. Neural networks more broadly have proven capable of increasingly complex tasks, such as generating realistic images and solving multiagent problems.
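The adversarial setup can be sketched as a pair of competing objectives: Eve tries to minimize her error in reconstructing the plaintext, while Alice and Bob minimize Bob's reconstruction error plus a penalty that pushes Eve's accuracy toward random guessing. The sketch below uses plain Python with normalized per-bit errors for illustration; the paper's networks are convolutional models trained by gradient descent, and the function names here are illustrative, not from the paper.

```python
def reconstruction_error(plaintext, guess):
    """Average per-bit absolute error between a message (bits in {0, 1})
    and a decryption guess (values in [0, 1])."""
    return sum(abs(p - g) for p, g in zip(plaintext, guess)) / len(plaintext)

def eve_loss(plaintext, eve_guess):
    # Eve's objective: simply decrypt as accurately as possible.
    return reconstruction_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Alice and Bob's objective: Bob should decrypt accurately, while
    # Eve is driven toward chance performance (per-bit error of 0.5
    # under this normalization). The quadratic penalty is an adaptation
    # of the paper's loss to normalized errors.
    bob_err = reconstruction_error(plaintext, bob_guess)
    eve_err = reconstruction_error(plaintext, eve_guess)
    return bob_err + (0.5 - eve_err) ** 2 / 0.25
```

With a perfect Bob and an Eve reduced to random guessing, the Alice–Bob loss reaches its minimum of zero, which is the equilibrium the training process seeks.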

AI and machine learning, once primarily the domain of marketing and predictive-modeling professionals, are increasingly being used to drive security advances, such as a platform unveiled by researchers at the Massachusetts Institute of Technology (MIT) in May.