IBM’s Watson rebooted as a secure AI alternative

IBM jumped into the generative artificial intelligence rumble last week with the rollout of watsonx. The platform is a far cry from Big Blue’s Watson “cognitive computing” platform, first launched over a decade ago. Watson’s progeny is now being billed as a “secure” and “trusted” AI development platform.

IBM hopes its watsonx studio will get it back into the AI conversation. Its last splashy foray, Watson, was once billed as an AI breakthrough for healthcare, accounting and cybersecurity, and for beating Jeopardy! superstars like Ken Jennings. Today, as the lower-case “w” suggests, IBM’s moonshot ambitions have been scuttled, for now.

"This is nothing like Watson with a capital ‘W’," Omdia chief analyst Bradley Shimmin said of watsonx. "It signals a new direction for IBM on how it approaches AI."

Despite a decade and $10 billion sunk into Watson’s development, the supercomputer never managed to gain traction. Its crown jewel, Watson Health, never saw the success IBM hoped for with oncologists and other health professionals. Watson’s cybersecurity ambitions met similar challenges, despite an interesting pilot program assisting the National Institute of Standards and Technology in processing software bugs and assigning threat ratings. In January 2022, IBM sold Watson Health to private equity firm Francisco Partners for a reported $1 billion.

From moonshot to down-to-earth reality check

IBM fell behind in the AI race for a number of reasons, including the near-impossible task of filling Watson with all the data necessary to be all things to all people. It’s one thing to train an AI system to answer trivia questions and another task entirely to ask it to help diagnose cancers.

“The challenges turned out to be far more difficult and time-consuming than anticipated,” Manoj Saxena, a former general manager of the Watson business, told the New York Times.

Another reason IBM lost its lead was “probably because of their focus on business versus making a splash on the consumer side,” said John Todd, an analyst with Total Research Management.

In 2015, IBM’s then CEO Ginni Rometty boasted on the Charlie Rose Show that Watson Health was the company's “moonshot” and the machine garnered more than 100 mentions in annual reports of the time.

Seven years later, Watson merited just one passing reference as a helpful human resources gauge in IBM's 2022 annual report.

Last week IBM brushed the dust off the Watson brand with the launch of watsonx, which it bills as an enterprise-ready AI and data platform, or foundation program. Watsonx is an AI development platform that helps companies build, train, scale and deploy their own AI models. It’s akin to similar efforts such as NVIDIA’s AI Foundations program, announced earlier this year.

"Some technologies take time to mature," IBM chief executive Arvind Krishna told CNBC in May when the company announced the July launch of watsonx.

While the rollout doesn’t include a direct cybersecurity component to start, the platform does address many of the underlying security concerns that have nagged the AI darlings of the moment such as OpenAI (ChatGPT), Amazon (Bedrock) and Google (Bard).

Concerns over privacy, bias and accuracy linger and many cybersecurity professionals have been sounding alarms over related risks.

“AI-generated content can contain mistakes,” Microsoft wrote in March announcing the launch of Security Copilot, which uses OpenAI’s GPT-4 technology. “As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant and useful answers.”

Where IBM hopes to draw a clear distinction with watsonx is its decidedly guarded approach to the language models that serve as fodder for generative AI.

Watsonx provides developers with language and data training models to build their own generative AI models based on their own datasets. These include tools such as data lineage audits, meaning output, or answers, can be clearly traced back to the data sources that produced them. The focus won’t be on helping high schoolers point, click and auto-generate term papers, or on beating gameshow contestants. Rather, IBM intends to help businesses build language models on industry-specific databases and datasets.
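IBM has not published the internals of these lineage tools, but the underlying idea can be sketched in a few lines: carry a source identifier alongside every record, so each answer ships with an audit trail pointing back to the data that produced it. The names below are purely illustrative, not watsonx’s actual API.

```python
# Hypothetical sketch of data-lineage auditing: every answer carries
# identifiers for the records that produced it, so an output can be
# traced back to its sources. Nothing here reflects a real vendor API.
from dataclasses import dataclass, field

@dataclass
class Record:
    source_id: str   # e.g. an internal dataset and row identifier
    text: str

@dataclass
class Answer:
    text: str
    lineage: list = field(default_factory=list)  # source_ids consulted

def answer_query(query: str, corpus: list) -> Answer:
    """Naive keyword retrieval that records which sources it used."""
    hits = [r for r in corpus if query.lower() in r.text.lower()]
    if not hits:
        return Answer(text="no answer found")
    return Answer(
        text=hits[0].text,
        lineage=[r.source_id for r in hits],  # the audit trail
    )

corpus = [
    Record("claims-db/row-17", "Policy X covers flood damage."),
    Record("claims-db/row-42", "Flood claims require an adjuster visit."),
]
ans = answer_query("flood", corpus)
print(ans.text)      # the answer itself
print(ans.lineage)   # the sources it can be reverse engineered to
```

In a real platform the "records" would be training examples or retrieved documents and the lineage store would be far more elaborate, but the auditing principle is the same: no answer without a traceable provenance.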

Garbage in, garbage out

"There's a very high bar to clear for security," said Chris Meenan, a vice president of product management in IBM's security division.

IBM hopes to sidestep two of the major security concerns surrounding generative AI. One is tied to the integrity of an AI platform and the other is abuse of the technology by hackers.

On one hand, AI security concerns center on the language models (or datasets) that platforms use to generate solutions, or answers. Taint the data an AI uses to produce answers and you can end up with unreliable outcomes. This is what some describe as generative AI’s “black box” problem. Researchers warn that when developers can't reliably understand or explain how their AI system arrives at a particular conclusion, it leaves companies using the tech vulnerable to a new form of "black-hat keyword manipulation."
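A toy example makes the tainted-data risk concrete. The sketch below (a deliberately naive word-frequency spam filter, not any vendor's model) shows how an attacker who can inject mislabeled records into a training set changes a model's answers without ever touching its code.

```python
# Toy illustration of "garbage in, garbage out": a naive filter
# labels messages by which class saw their words more often in
# training. Poisoning the training data flips its verdict.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training data saw these words more."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

clean = [("win free prize now", "spam"), ("meeting at noon", "ham")]
print(classify(train(clean), "free prize"))  # prints "spam"

# Inject two mislabeled records; the verdict flips with no code change.
poisoned = clean + [("free prize", "ham"), ("free prize", "ham")]
print(classify(train(poisoned), "free prize"))  # prints "ham"
```

Production language models are vastly more complex, which is exactly the black-box problem: when the model's reasoning can't be inspected, this kind of data-level manipulation is much harder to detect.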

That may not present a national security risk when the users are lazy high school students. But when generative AI is synthesizing data culled from a security information and event management (SIEM) system, accurate data is essential for producing business-critical security outcomes.

OpenAI, which trained its language model on internet datasets such as Reddit user posts, has been criticized for language-parsing anomalies that have delivered bizarre, incorrect and sometimes unsettling responses.

AI is being weaponized, said Zulfikar Ramzan, chief scientist at Aura Labs, which develops cybersecurity software. "The pace of online crimes has far surpassed that of the physical world," he said. Adversarial AI will give criminals the same advantage it gives any enterprise: increased productivity for specific mundane tasks.

Threat actors are using the tech for crimes such as business email compromise campaigns. Other nefarious uses include building malware, generating deepfakes, bypassing CAPTCHA puzzles, wardialing and crafting more effective password dictionary attacks.

On the flip side, AI is also seen as a soon-to-be-powerful tool for hardening cyber defenses around identity and access management, as well as for performing time-consuming tasks such as SOC and SIEM analysis, or for delivering a supercharged extended detection and response (XDR) solution capable of automated, instantaneous mitigation.

At launch watsonx won’t be marketed as a defensive security solution, Meenan said.

"We have proof-of-concept security models," said Meenan. When it comes to a launch date for the security-focused version of watsonx, he declined to comment.

Meenan insists the AI-based cybersecurity industry is "at a pivot point." He believes the watsonx flavor of AI will be far more profitable, secure and trusted than public-facing services such as OpenAI’s.

So, what’s the difference?

Watson vs. watsonx vs. generative AI

In 2011, when Watson beat Ken Jennings at Jeopardy!, IBM used ten racks of its Power 750 servers to parse the language of Jeopardy! clues. The hulking supercomputer stored 15 terabytes of data (200 million documents) to help answer questions delivered in the form of answers. Behind the scenes, Watson used hundreds of algorithms to process each clue and create a weighted list of possible answers. Then it used 6 million logic rules to pick the answer most likely to be correct. No internet required.

OpenAI’s ChatGPT uses natural language processing (NLP) to turn questions into prompts. Then it scours generalized data culled from the public internet (prior to 2021) to generate responses. The NLP technology is used to understand, interpret and generate human language responses.

Ask ChatGPT the real Jeopardy! clue “Her October 7, 1914, wedding day would prove to be a focal point of American politics” and it will reply, “I cannot provide a detailed response.” (A: Who is Rose Fitzgerald Kennedy?)

On the flip side, can Watson pass a Turing test? No. Can ChatGPT pass a Turing test? It’s debatable, but many say yes.

Watsonx bridges the two approaches: like old Watson, it relies on vetted data sources, while it follows the OpenAI generative model, using NLP to create conversational longform text as opposed to short Jeopardy! answers. Unlike old-school Watson, watsonx is a platform studio, or foundation, built on three pillars: watsonx.ai, watsonx.data and watsonx.governance (expected availability in November).

Can IBM regain its AI glory?

This approach to generative AI, IBM says, mirrors the calculated rollout of Watson over a decade ago. IBM faces fierce competition from first-to-market companies like OpenAI as well as latecomers to the AI party, like Elon Musk’s recently announced xAI.

In March, NVIDIA launched AI Foundations for enterprises, which is based on cloud services that let companies create large language and visual models based on proprietary datasets.

On the security side, Palo Alto Networks in February acquired Expanse, a generative AI company developing a method for attack surface mapping and threat detection. OpenAI, Darktrace, Cylance, Deep Instinct and Symantec also offer AI security products.

The market is set to catch fire. The generative AI market grew from about $8.2 billion in 2021 to a little more than $11 billion in 2022, Allied Market and Precedence Research analysts estimated. They project the market will reach $126.5 billion by 2031.

Can Big Blue work its Watson magic again? “Without updated information post-September 2021, I can't provide a detailed assessment of Watson's recent advancements or successes,” according to an excerpt from ChatGPT-4 when asked.
