Most network defenders have come to terms with the laws that govern tampering with computers. But as more and more computer processes move to machine learning, they may be surprised to find out that the legal protections will not follow.

Machine learning is vulnerable to attacks endemic to the frailties of learning. Most things that can be taught can also be tricked. When learning is based on user-submitted data, users can submit bad data.

At a Wednesday Black Hat talk, experts in machine learning, the law and the ramifications of security explained the current state of legal protections for machine learning. Their conclusion: The policies that might protect enterprises were not purpose-built for the task, and the fit is not snug.

"Adversarial machine learning attacks are interesting because they don't require getting access to any area of a computer. It's much more like playing Go Fish than it is like hacking a computer," Kendra Albert, a clinical instructor at the Harvard Cyberlaw Clinic and one of the presenters, told SC Media.

Albert's co-presenters included Microsoft Azure's Ram Shankar Siva Kumar, infosec author Bruce Schneier and York University assistant professor of law Jonathon Penney.

There are four key methods of adversarial attack on machine learning systems, which vary wildly in potential illegality. There is evasion, where an attacker tries to trick a trained model, like putting tape on a stop sign to confuse an autonomous car into driving on. There is model inversion, where an attacker reconstructs the data used to train a system through carefully targeted queries, and model stealing, where the same technique is used to recreate the machine learning model itself. And there is data poisoning, where attackers flood a system with bad training data.
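To make the poisoning case concrete, here is a toy sketch (not taken from the talk) of how user-submitted bad data can corrupt a learned model. It uses a hypothetical nearest-centroid classifier on one-dimensional data: an attacker who can submit labeled training points flips the labels on enough of them that the "learned" class centroids swap sides, and the model starts misclassifying nearly everything.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated clusters of legitimate training data:
# class 0 centered at -2, class 1 centered at +2.
X = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

def train_centroids(X, y):
    # "Training" is just computing one centroid per class.
    return {c: X[y == c].mean() for c in (0.0, 1.0)}

def accuracy(centroids, X, y):
    # Predict each point's class by its nearest centroid.
    preds = np.array([min(centroids, key=lambda c: abs(x - centroids[c]))
                      for x in X])
    return (preds == y).mean()

clean_model = train_centroids(X, y)

# Data poisoning via label flipping: the attacker submits many points
# drawn from each cluster but tagged with the *wrong* label, dragging
# each class centroid onto the other class's side of the boundary.
X_poison = np.concatenate([X, rng.normal(2, 0.5, 400), rng.normal(-2, 0.5, 400)])
y_poison = np.concatenate([y, np.zeros(400), np.ones(400)])
poisoned_model = train_centroids(X_poison, y_poison)

print(f"accuracy trained on clean data:    {accuracy(clean_model, X, y):.2f}")
print(f"accuracy trained on poisoned data: {accuracy(poisoned_model, X, y):.2f}")
```

Note that nothing here resembles traditional "hacking": the attacker never touches the system's internals, only the data it was built to accept.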

Those attacks might cross a few different hacking laws. The Computer Fraud and Abuse Act, the primary hacking law, requires either exceeding authorized access by breaching a technological barrier to data or causing damage. It is unclear at present whether reverse engineering data or a model through legal queries would be considered crossing the access threshold, and only the poisoning attack would cause damage.

Defenders might have better luck pursuing standard contract law, where a well-crafted agreement — including a terms-of-service agreement — could be applied to any kind of attack. Copyright law, particularly the prohibition on circumventing digital protections of copyrighted works, could conceivably apply to attacks meant to reverse engineer machine learning systems or manipulate an algorithm. And reverse engineering stands a good chance of running afoul of trade secrets laws.

"There is this perception among folks who work on machine learning systems that the law will just protect systems — that adversarial machine learning attacks must be illegal," said Albert. "And there are certainly contexts in which there's legal risk associated with adversarial machine learning research and adversarial machine learning attacks, but it's not a set of straightforward questions."

There is time to figure out the answers to these questions. Co-presenter Ram Shankar Siva Kumar told SC Media, "The first wave is not necessarily going to be like somebody trying to scribble something on a stop sign or somebody trying to crop and rotate. There's a lot of, like, traditional vulnerabilities in machine learning systems that can be exploited today."

But answering legal questions about machine learning will, in part, be limited by public understanding that the systems are vulnerable to a unique array of attacks.

"We did a survey asking organizations 'How do y'all think about building, like, safe, reliable, trustworthy and secure machine learning systems?' And they mostly answered 'What do you mean machine learning systems can be attacked?'" Siva Kumar said.