
What is ‘AI washing?’ Companies pay $400K to SEC for inflated claims

The United States Securities and Exchange Commission (SEC) charged two companies with exaggerating the use of artificial intelligence in their products, marking one of the first-ever enforcement actions against “AI washing.”

AI washing is the use of deceptive and inaccurate claims about a company’s use of AI or machine-learning capabilities to capitalize on the hype surrounding the technology.

SEC Chair Gary Gensler previously warned against AI washing in statements at a conference in December, according to the Wall Street Journal, comparing the practice to “greenwashing,” or the inflation of claims about environmental sustainability.   

“Marketing can be aggressive which often leads some to jump on the latest buzz words to help position their messaging towards the cutting edge,” said Wayne Schepens, founder and managing director of LaunchTech Communications, and chief cyber market analyst at SC Media’s parent company CyberRisk Alliance.

The SEC said in a press release Monday that investment advisors Delphia USA and Global Predictions made “false and misleading statements about their purported use of artificial intelligence” in violation of securities laws, including the Advisers Act and Marketing Rule.

Delphia claimed its AI solution could “predict which companies and trends are about to make it big and invest in them before everyone else,” a claim the SEC says did not reflect the company’s actual AI capabilities.

Global Predictions called itself the “first regulated AI financial advisor” and said its platform provided “[e]xpert AI-driven forecasts,” statements which were also called out as false by the SEC.

Delphia and Global Predictions ultimately settled the charges with the SEC, paying civil penalties of $225,000 and $175,000, respectively.

“Public issuers making claims about their AI adoption must also remain vigilant about similar misstatements that may be material to individuals' investing decisions,” Gurbir S. Grewal, director of the SEC’s Division of Enforcement, said in a statement.   

Is the cybersecurity industry susceptible to AI washing?

Cybersecurity companies of all sizes and stages are increasingly turning their focus to “AI-powered” solutions, although the industry was already ahead of the curve in adopting AI/ML before ChatGPT, Schepens noted.

Recent developments in the world of AI cybersecurity include a collaboration between CrowdStrike and Nvidia to integrate Nvidia’s AI expertise into CrowdStrike’s extended detection and response (XDR) platform, and a $20 million Series A funding round by AI-centered cybersecurity startup Reach Security.  

Even before AI was as mainstream as it is today, Schepens told SC Media there were “definite cases of AI washing,” as many venture capitalists prioritized companies promoting AI/ML capabilities.

“Fortunately, there was a ton of pushback early on when some companies were ‘called to carpet’ resulting in the reins being pulled back. While there are certainly some exceptions, most founders and marketing teams today take the use of these terms very seriously,” Schepens said.

The temptation to join in on the AI revolution is enormous: AI funding in the U.S. jumped 14% in 2023, according to CB Insights, and research by BlackBerry in early 2023 found 82% of IT decision-makers planned to invest in AI-driven cybersecurity within the next two years.

“The push from the industry to ‘AI-ify’ everything, coupled with pressure from the investment community, is likely driving companies to exaggerate the capabilities of their offerings. This, at best, manifests in the way of embellishing a product’s capabilities. At its worse, it involves outright misrepresentation of the AI integration within the product,” Ben Bernstein, CEO and co-founder of Gutsy, told SC Media.

Bernstein, who is also a former venture partner and security investment pod lead at ICONiQ Capital, said the recent enforcement action by the SEC should give companies pause in considering how they represent the capabilities of their AI solutions.

“Vendors should ensure that their marketing claims align with the actual capabilities of their solutions. Cybersecurity vendors should provide transparency by clearly articulating product capabilities, demonstrate efficacy by backing up claims with evidence from independent testing or customer case studies, and avoid exaggerated claims by focusing on tangible benefits and outcomes,” Bernstein said.

One difference between the cybersecurity industry and many other industries hopping on the AI train is the degree of vetting that goes into products tasked with safeguarding critical systems and defending sensitive data.   

“As a startup, there can be pressure to fit your product into a particular investment profile; however, in my experience, the industry is pretty good at self-policing. Meaning, if your product capabilities are exaggerated, it will be discovered in due diligence and will not likely result in a positive outcome,” Schepens said. “In our industry buyers require ‘proof of concepts’ (PoC), and products go through rigorous scrutiny by the industry analysts community, which keeps everyone on their toes.”

Bernstein said buyers of cybersecurity solutions should inspect the AI claims made by vendors and ask whether “throwing AI at every problem” is really the answer.

“While it’s tempting to adopt the latest and greatest to get ahead of customers and create efficiency, the promise likely outpaces the reality in terms of outcomes. Seeking independent validation through third-party evaluations, analysts, reviews, or certifications can help verify the effectiveness of the AI features claimed by the vendor,” Bernstein said.
