
Embrace AI and stay competitive, or watch the business fall behind  

Embrace AI

Since the introduction of ChatGPT in November 2022, generative artificial intelligence (AI) has taken the world by storm. This new era of AI uses large language models (LLMs) to translate human language into useful machine results – and the outcomes are powerful.

With generative AI, organizations can accelerate an employee’s ability to gather, organize, and communicate information. They can deliver greater automation for language-related and mundane tasks, freeing employees to focus on initiatives that offer business value. And they can optimize processes and use AI’s insights for better decision making.

These capabilities are only the tip of the iceberg, so it’s no wonder a recent survey found that 73% of IT and security leaders say their employees use generative AI tools or LLMs at work. The business benefits are undeniable. Unfortunately, the same respondents also admit they aren’t sure how to address the security risks associated with the technology.

Understand the risks

The confusion around securing generative AI isn’t all that surprising. We saw a similar pattern with other tech trends, such as the internet, mobile, and cloud, where adoption outpaced security. Today, to stay competitive, many organizations are rushing to use generative AI without considering the risks, leaving security as an afterthought. But with AI, this approach can have catastrophic results. Here are just a few of the risks organizations face by using generative AI without the proper security guardrails in place:

  • An expanded and unprotected attack surface.
  • Potential IP and data loss from sharing sensitive information with a third party.
  • Accuracy problems that are hard to detect and require engineering effort to mitigate.
  • Widespread use of “shadow AI” by employees – which may or may not align with company policy.
  • Limited detection and response, since most generative AI apps offer minimal transparency.

Additionally, generative AI has significantly lowered the barrier to entry for threat actors. Using a generative AI model, even those with limited cybersecurity backgrounds and technical skills can execute attacks. This type of AI also makes it significantly easier for cybercriminals to write malicious code, scan and penetrate networks, and craft believable phishing emails. As a result, it’s becoming more difficult for organizations to prevent AI-powered attacks and for employees to distinguish between legitimate and fraudulent emails.

No matter the potential value of generative AI from a business perspective, organizations cannot ignore these security concerns.

Put guardrails in place for safe AI usage

Organizations can best take advantage of AI’s benefits by prioritizing security from the start. Here are six steps teams can take immediately:

  • Conduct a readiness assessment: Perform a thorough review of the organization’s cyber readiness for deploying AI and facing AI-enabled attackers. Use these assessments to identify and remediate security gaps.
  • Implement security controls and governance: AI innovation tends to move significantly faster than regulation, so organizations shouldn’t wait for government standards around AI. Instead, establish internal policies and controls for AI usage and protection. Develop them in coordination with stakeholders across data governance, privacy, risk, legal, IT, and the business, and cover topics including use cases, ethics, data handling, privacy, and legality. Once established, create governance processes to ensure employees follow the documented security guardrails. Even when employees have good intentions at the start, compliance can drift, and it’s important to detect that drift quickly to mitigate the risks of AI usage. Frameworks such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and MITRE’s ATLAS framework are great starting points for self-governance.
  • Hold AI products to high standards: Treat new AI technologies with the same rigor as any other technology. Adopt a posture of distrust for new entries into an environment until they are proven secure. Enthusiasm for new AI products and services creates an attack vector of its own, one attackers have already exploited to bypass security. Assess any new additions from a cybersecurity, privacy, compliance, and risk perspective, and consider engaging a “Red Team” to probe them.
  • Prioritize monitoring: Along with AI usage, teams should monitor the AI models themselves. Log every prompt and response for review and threat hunting (a minimal logging sketch follows this list). Monitoring and logging can help teams understand how employees use AI, detect patterns of misuse or indicators of risk, and ensure AI ethics and fairness.
  • Educate end users: As with any area of security, risk mitigation starts with employees, so effectively preparing for AI threats includes end-user education. This should include training on what information is safe to share with public AI models and guidance on best practices, such as not blindly accepting AI output as fact. It’s essential to cultivate an environment of healthy skepticism about AI’s use and equip employees with the skills to recognize and respond to potential threats.
  • Stay informed about the latest developments in AI and cybersecurity: AI enhancements are moving at unprecedented speed, and the policies teams use today are often outdated tomorrow. Staying up to date on the AI and security landscapes helps ensure security guardrails evolve alongside changing threats, so fewer risks slip through the cracks.
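
To make the monitoring step concrete, here’s a minimal sketch in Python of logging every prompt and response around an LLM call. It’s illustrative only: call_model is a hypothetical stand-in for whatever LLM client an organization actually uses, and the llm_audit.jsonl path is an assumed destination – in practice, these records would be forwarded to a SIEM for review and threat hunting.

    import json
    import time
    import uuid
    from pathlib import Path

    # Hypothetical log destination (assumption); in production, forward
    # these records to a SIEM instead of writing to a local file.
    AUDIT_LOG = Path("llm_audit.jsonl")

    def call_model(prompt: str) -> str:
        # Placeholder for the real LLM client call – swap in the
        # provider SDK the organization has approved.
        return "model output goes here"

    def logged_completion(prompt: str, user_id: str) -> str:
        # Call the model, logging every prompt/response pair
        # for review and threat hunting.
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "user_id": user_id,
            "prompt": prompt,
            "response": None,
        }
        try:
            record["response"] = call_model(prompt)
            return record["response"]
        finally:
            # One JSON object per line keeps the audit trail
            # easy to parse, search, and ship to other tools.
            with AUDIT_LOG.open("a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        print(logged_completion("Summarize our Q3 results.", user_id="jdoe"))

Wrapping every model call this way gives the security team a searchable trail of who asked what and what the model returned – independent of whatever logging the AI vendor does or does not expose.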

When it comes to generative AI, we can either fear it and try to ban its usage, or embrace its potential and find ways to adopt it securely. Expect AI to disrupt just about every industry, and when it does, choosing to embrace AI will help organizations remain competitive.

While it's often overwhelming to tackle the risk ramifications of AI, the roadmap follows the same core tenets as any other security program – focusing on people, processes, and technology to cover risk from every angle. By following these six steps, security teams can help their organizations use generative AI securely, gaining the business benefits while minimizing the associated risks.

Randy Lariar, AI security leader, Optiv
