Threat Management

Are You Confident in Enterprise Artificial Intelligence?

By Vijay Dheap

Simply mentioning Artificial Intelligence (AI) sparks our collective imagination. Over the past few years, a number of major breakthroughs have brought AI back into mainstream consciousness. While the Alan Turing-inspired pursuit of a machine with human-level intelligence that can solve any problem continues unabated, AI research is now being applied to practical engineering problems. Today, practical applications of AI focus on solving specific problems within a given domain. Self-driving cars, medical diagnosis, high-frequency financial trading, and facial recognition are just a few examples of solutions being enabled by task-specific AI.

The branch of AI that is making much of this possible is machine learning. In machine learning, a system learns to perform a task from available data rather than from programmed instructions. A machine learning model is the output derived by applying learning algorithms to a training data set. In supervised learning, the training data set must be correctly labeled with accurate answers; in reinforcement learning, the system learns iteratively from feedback it receives on the results it generates. In the security space, one of the earliest examples of machine learning was spam detection: data collected over time from users' email systems denoted specific emails as "junk," and in turn, machine learning-powered email filters could distinguish legitimate email from spam more accurately than rule-based filters.
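
To make the spam-filtering example concrete, here is a minimal sketch of a learned filter, assuming scikit-learn is available; the tiny inline dataset and labels are illustrative only, not taken from the article.

```python
# Learn "junk" vs. legitimate email from labeled examples instead of hand-written rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize, claim now",
    "Quarterly report attached for review",
    "Cheap meds, limited time offer",
    "Meeting moved to 3 pm tomorrow",
]
labels = ["spam", "ham", "spam", "ham"]  # answers supplied by users marking mail as junk

# Vectorize the text and fit a simple Naive Bayes classifier on the labeled data.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Claim your free prize today"]))  # most likely ['spam']
```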

The application—and risks—of AI in today's enterprise

In today's enterprise, AI applications powered by machine learning are being created and deployed. The excitement around machine learning is encouraging a broader set of software developers and data scientists to acquire the necessary skills to create AI applications. Skills development is facilitated by cost-effective cloud platforms that offer environments to create and run machine learning models, availability of developer tools with associated training materials, and greater access to historical data.

It is safe to assume that many mission-critical business processes built by these software developers and data scientists will soon rely on data-driven decision making made possible by machine learning, if they do not already. To illustrate the point, consider that in healthcare, machine learning is already being applied to establish better treatment plans for patients. In insurance, AI performs real-time assessments of customers' sentiment and risk profiles and then develops recommended policies that agents and customer service representatives can offer to individual customers.

If you are in your organization's security group, you are probably already considering the risks these new solutions introduce. It is typical for creativity to outpace security considerations when it comes to emerging technologies, especially when innovation is contributing business value in the form of speed, efficiency, improved customer service, cost savings, or a "coolness" factor. Ironically, the initial applications of machine learning in cybersecurity were intended to improve protection and detection mechanisms for traditional IT systems and processes. Anomaly detection, user behavior analysis, and malware classification are examples of domains that have evolved as a result of machine learning. However, while this innovation is occurring, security researchers are demonstrating that AI applications are surprisingly easy to compromise; now is the time to begin developing security strategies to defend the AI applications themselves.

The risks of compromised AI applications are real and the implications significant. Attacks could result in fraudulent activity that causes an organization reputational or financial damage. Malicious activity can disrupt business operations, and weak security controls can also result in policy or regulatory non-compliance.

Revisiting the lifecycle of a machine learning solution will help us consider the necessary security controls.

Getting started with machine learning

Getting started with a new machine learning project requires establishing the goal or objective at the outset. Machine learning models are often applied to classify an input or to make a prediction based on an observed pattern, and the quality of the model is judged by its accuracy. Whether it is natural language processing to infer a voice command or a model to optimize seat assignment on an airline flight, the scope of the goal, the acceptable inputs, and the range of outputs need to be well defined. (Not only does this improve the model itself, but it also helps set the foundation for quality and security testing.)
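
One lightweight way to pin down that scope is to encode the acceptable inputs and the allowed range of outputs as explicit checks. The sketch below borrows the airline seat-assignment example; the names (SEAT_CLASSES, validate_request) and limits are hypothetical placeholders, not anything specified in the article.

```python
# Make the model's contract explicit: reject out-of-scope inputs and
# out-of-range outputs rather than leaving the boundaries implicit.
SEAT_CLASSES = {"window", "middle", "aisle"}
MAX_PARTY_SIZE = 9

def validate_request(party_size: int, preference: str) -> None:
    """Reject inputs outside the scope the model was designed and trained for."""
    if not 1 <= party_size <= MAX_PARTY_SIZE:
        raise ValueError(f"party_size {party_size} outside supported range")
    if preference not in SEAT_CLASSES:
        raise ValueError(f"unknown seating preference: {preference!r}")

def validate_output(assignment: str) -> str:
    """Ensure the model never emits a value outside the defined output range."""
    if assignment not in SEAT_CLASSES:
        raise ValueError(f"model produced out-of-range output: {assignment!r}")
    return assignment
```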

The next step in any machine learning project is curating the training data set. The training data set should be representative of the inputs expected during operation. Since those inputs can be quite varied, the amount of training data required can be significant. Oftentimes, the available training data will not be sufficient to accurately train the machine learning model on the full set of inputs, resulting in a model that is highly confident on one subset of inputs and ineffective on another. Such a model can be improved over time by using fresh data to advance its effectiveness.
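
A held-out evaluation makes that unevenness visible early. The sketch below assumes scikit-learn and uses a synthetic dataset as a stand-in for real curated data; per-class metrics expose the input subsets the model handles poorly.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Synthetic stand-in for a curated training set.
X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)

# Hold out a stratified test split so every class is represented in the evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-class precision and recall expose where the model is confident vs. ineffective.
print(classification_report(y_test, model.predict(X_test)))
```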

Given the potentially onerous process of compiling a training data set of sufficient size, there is a strong incentive for sourcing training data sets from external public or private sources. While many university research labs and organizations that share their data sets are legitimate, the data used to train AI applications should always be identified, reviewed, and cataloged. Since training data may include labels defining the correct answer, the labels should also be reviewed. Doing so will mitigate the risk of malicious actors embedding specific sets of training data that allow them to anticipate the behavior of the model generated. In the case of reinforcement learning, where manual review of the machine learning model's outputs can influence its future behavior, it is important to institute processes that prevent tampering or bias.
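
One simple way to support that identification and review is to fingerprint and catalog each externally sourced data set before it is used, as in the sketch below; the catalog fields, file name, and URL are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """SHA-256 digest of a dataset file, so later tampering is detectable."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

catalog_entry = {
    "source": "https://example.org/public-dataset",   # placeholder URL
    "retrieved": datetime.now(timezone.utc).isoformat(),
    "sha256": fingerprint("training_data.csv"),        # placeholder file name
    "labels_reviewed_by": "data-governance-team",
}
print(json.dumps(catalog_entry, indent=2))
```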


One method that has been successfully employed is to have multiple individuals review the results and use the collective feedback. Alternatively, specific authorized users can be nominated to review subsets of data to establish accountability. To emphasize the risk of not validating training data, it is worth recalling the cautionary tale of Microsoft's Tay.ai bot, which was compromised through lax controls and manipulated into spewing hate speech and offensive statements. In some industries, the risk could be even greater. For example, in insurance, the use of biased data in computing risk for underwriting a policy can result in discrimination leading to legal consequences.
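
A minimal sketch of the multi-reviewer idea follows, assuming a simple majority vote with a quorum; the quorum size and the labels are illustrative choices.

```python
from collections import Counter

def consensus_label(reviews: list[str], quorum: int = 3) -> str | None:
    """Accept a label only when enough reviewers agree, so no single reviewer can bias the model."""
    if len(reviews) < quorum:
        return None  # not enough feedback yet; keep the item queued for more review
    label, votes = Counter(reviews).most_common(1)[0]
    return label if votes > len(reviews) / 2 else None

print(consensus_label(["offensive", "offensive", "benign"]))  # -> 'offensive'
print(consensus_label(["offensive", "benign"]))               # -> None (no quorum yet)
```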

Once the training data set has been collated, machine learning algorithms can be chosen and used to derive the desired model. Instead of creating and tuning models from scratch, developers may employ transfer learning techniques that leverage pre-trained models as a starting point. This has proven effective for addressing problems that lack a sufficient amount of training data. As the ecosystem around machine learning and AI applications continues to grow, the reuse of machine learning models is expected to increase, given that models are typically more portable than the data used to train them. These pre-built models need to be identified, tested, and cataloged to reduce the risk that newly created AI applications will inherit the vulnerabilities and biases of the models used to construct them. Once the model is derived, testing needs to be not only quality-focused but also security-oriented, centering on how the model performs under a variety of potential inputs and how gracefully it handles unknown inputs.
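
The sketch below illustrates the transfer learning pattern, assuming PyTorch and torchvision; ResNet-18 is an arbitrary illustrative choice of pre-trained model, and the class count is hypothetical.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of classes for the new task

# Start from a pre-trained backbone rather than training from scratch.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the inherited layers; only the replacement head will be trained.
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a task-specific classification head sized for our outputs.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)
```

The same cataloging discipline applied to training data should cover the pre-trained backbone itself, since any vulnerabilities or biases it carries are inherited by the new application.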

Learning from AI

In the past year, adversarial machine learning has garnered the attention of researchers and malicious actors alike. Adversarial machine learning models are created to compromise the integrity of a target machine learning model. The adversary (or the tester, when these techniques are used defensively) attempts to weaken the accuracy of the AI application or influence it to generate specific results for a given set of inputs. Adversarial AI threats can be classified as either black-box or white-box attacks. In a black-box attack, the malicious actor does not have visibility into the details of the machine learning model powering the AI application, so they exercise the model and observe its results for various inputs. Using this information, they attempt either to distort the input minutely or to provide inputs that the model is untrained to handle. In a white-box attack, the malicious actor knows not only the machine learning model employed but also its internal configuration details and potential defensive mechanisms. An insider is best positioned to perpetrate a white-box attack.
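
As a concrete illustration of the white-box case, the sketch below, assuming PyTorch, uses the widely known fast gradient sign method (not named in the article) to craft a minute distortion of an input using the model's own gradients.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.01):
    """Nudge input x in the direction that most increases the model's loss on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # A small step along the sign of the gradient is often enough to flip the prediction.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```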

Adversarial networks can be used to generate a broader range of testing and training data to harden the original machine learning model. In addition, adversarial networks can train the machine learning model to differentiate malicious input from valid input, allowing for differentiated handling. By building a test bed of potential adversarial attacks on a machine learning model, an organization can better defend the integrity of the machine learning models it creates.
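
One common way to operationalize this hardening is adversarial training: each clean batch is paired with a perturbed copy so the model also sees hostile-looking input during training. The sketch below reuses the fgsm_perturb helper above and assumes PyTorch; the equal weighting of clean and adversarial loss is a tunable choice, not something prescribed by the article.

```python
def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.01):
    """One optimization step over a clean batch and its adversarially perturbed copy."""
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```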

AI applications continue to evolve after they are deployed, and attackers will continue to innovate, so it is important to institute runtime security protocols. Continuous monitoring of AI applications is of paramount importance; anomalies in access or utilization can then be scrutinized. It is desirable to design machine learning models that can explain their decision-making process so that their behavior can also be reviewed. It is also recommended practice for business-critical solutions to include an escalation process for human review and override when malicious input or operational anomalies are detected.
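
A minimal sketch of such a runtime guardrail follows, assuming a scikit-learn-style model with predict_proba; the confidence threshold and the queue_for_human_review hook are hypothetical and would be tuned to the application.

```python
import logging

CONFIDENCE_FLOOR = 0.80  # hypothetical threshold; tune per application and risk tolerance

def queue_for_human_review(features):
    """Hypothetical escalation hook: hand the input to an analyst instead of acting on it."""
    logging.warning("low-confidence input escalated for human review")
    return None

def predict_with_escalation(model, features):
    """Log every prediction for monitoring and escalate anomalous, low-confidence inputs."""
    probs = model.predict_proba([features])[0]
    label, confidence = int(probs.argmax()), float(probs.max())
    logging.info("prediction=%s confidence=%.2f", label, confidence)
    if confidence < CONFIDENCE_FLOOR:
        return queue_for_human_review(features)
    return label
```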

The future of AI

AI holds tremendous potential for delivering business value to organizations. Security need not give pause or hinder creativity; rather, awareness and understanding of security considerations will allow enterprises to pursue these opportunities with confidence. While specific methods have been proposed to defend AI applications, it should be noted that they build on the principles of application security and defense-in-depth. It is a matter of extending the organization's security hygiene to continually address this emerging domain.

