Security Program Controls/Technologies

AI not yet a game-changer for healthcare hackers 


New research into recent attacks waged against the healthcare sector gave cause for concern about the rise of generative AI and the part it is likely to play in future phishing campaigns.

Artificial intelligence and generative AI top the list of emerging and prominent threats to the healthcare industry highlighted in a report published Thursday by Trustwave's SpiderLabs team.  

Phishing remains the most common method for gaining an initial foothold in an organization, the report noted; all it often takes for an intrusion to occur is a single well-crafted email.

Generative AI tools built on large language models (LLMs) are designed to create content that mimics human language and behavior, such as the text of an email or computer code.

Yet despite its usefulness in conducting cyberattacks — as well as constant advancements in sophistication — AI has not significantly disrupted the current threat landscape, SpiderLabs reported.  

"While LLMs and other technologies categorized as AI seem to have matured at a near-miraculous rate over the past year, we don't have any indication that LLMs have 'changed the game' in any substantive way beyond the existing cat-and-mouse games we've always worked against in the security industry," reads part of the SpiderLabs report. 

Once considered off-limits by some hackers, the healthcare sector now faces attacks with potentially dire consequences: numerous hospitals in the U.S. and abroad have endured ransomware attacks in recent years that temporarily rendered some of their computer systems inoperable, while data breaches within the industry have become regular occurrences.

More than 28.5 million healthcare records were breached in 2022, or roughly double the number reported merely three years earlier, according to the U.S. Department of Health and Human Services. More recently, medical giant HCA Healthcare announced Monday that the personal data of about 11 million patients from 20 states may have been stolen in a newly discovered data breach.  

A more persuasive phishing email

Successful phishing attacks can yield myriad results, depending on factors such as the perpetrator, payload and target. In theory, a malicious actor might use AI to generate an email that convincingly asks the recipient to reply with sensitive information, instructs them to visit a malicious website or asks them to download and run a booby-trapped attachment.

Indeed, SpiderLabs researchers cautioned that phishing schemes may soon appear more persuasive than ever if cybercriminals adopt generative AI tools like OpenAI's popular ChatGPT program.    

"Many of the red flags that we teach users to identify phishing emails include items like picking out misspellings, grammar mistakes and general clumsiness of writing that may indicate that the author is not a native speaker," the report stated. "The quick maturity and expanded use of LLM technology is making the crafting [of] these emails even easier, more compelling, highly personalized and harder to detect." 

SpiderLabs reported that some of the more common phishing emails sent to its healthcare sector clients in the last three months included subject lines such as "ENQUIRY MEDICAL DEVICES SUPPLIES," "REQUEST FOR MEDICAL EQUIPMENT SUPPLIES," "Purchase Order List of PO for Medical Supplies and Equipment," "March 2023 Medical Equipment Order," "Inquiry Order" and "Required Quotation For Medical Supplies." 
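At the simplest level, a mail gateway could flag messages whose subjects match those observed campaign lines exactly, as in the hypothetical filter below. Exact-match rules like this are brittle against even trivial rewording, however, which is exactly where generative AI gives attackers an edge.

```python
# Hypothetical filter: flag inbound mail whose subject matches the
# campaign subject lines SpiderLabs observed (list copied from the report).
CAMPAIGN_SUBJECTS = {
    "enquiry medical devices supplies",
    "request for medical equipment supplies",
    "purchase order list of po for medical supplies and equipment",
    "march 2023 medical equipment order",
    "inquiry order",
    "required quotation for medical supplies",
}

def matches_known_campaign(subject: str) -> bool:
    """Case-insensitive exact match against observed subject lines."""
    return subject.strip().lower() in CAMPAIGN_SUBJECTS
```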

The most common exploits leveraged against those same healthcare targets included previously disclosed vulnerabilities affecting certain versions of Apache Log4j (CVE-2021-44228), Spring Core (CVE-2022-22965), HTSearch (CVE-2000-0208) and Jive Openfire (CVE-2008-6508), as well as HTTP Directory Traversal and HTTP SQL injection attacks, according to the SpiderLabs report.
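As a rough illustration of how defenders hunt for those exploit classes, the sketch below scans a web-server access log for simplified signatures of Log4Shell JNDI lookups, directory traversal and SQL injection. The patterns are deliberately minimal assumptions for demonstration; production detection, and real attackers' obfuscation, are far more involved.

```python
import re
import sys

# Minimal sketch of log scanning for the exploit classes named in the
# report. Patterns are simplified illustrations; real attackers obfuscate
# heavily (e.g. ${${lower:j}ndi:...} evades the naive Log4Shell check).
SIGNATURES = {
    "Log4Shell (CVE-2021-44228)": re.compile(r"\$\{.{0,30}jndi", re.IGNORECASE),
    "directory traversal":        re.compile(r"\.\./|%2e%2e%2f", re.IGNORECASE),
    "SQL injection":              re.compile(r"union\s+select|or\s+1\s*=\s*1", re.IGNORECASE),
}

def scan(path: str) -> None:
    """Print each log line that matches a known exploit signature."""
    with open(path, encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            for name, pattern in SIGNATURES.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}: {line.strip()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "access.log")
```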
