ChatGPT and other generative AI use in the workplace has become a divisive topic as people weigh its possible benefits against the likely risks. In a highly regulated industry like healthcare, providers are often tempted to ban its use outright to avoid the compliance blowback seen with pixel tracking tools.
However, a ban would not only block much-needed support for care services, it also wouldn’t solve the underlying issues that existed long before the proliferation of generative AI.
The risk of unauthorized disclosure of protected health information (PHI), personally identifiable information (PII), and other company data has long been an issue in healthcare. In fact, insider risk, both accidental and malicious, has remained the leading threat to healthcare organizations for several years, according to the annual Verizon Data Breach Investigations Report.
ChatGPT and other AI services have merely made it easier for these risks to proliferate, given the rapid growth and popularity of chat AI large language models.
Generative AI has the potential to improve healthcare outcomes by identifying new treatments and predicting outcomes that are not always obvious to clinicians. Multiple scholarly articles, including a new report in Georgetown Journal of International Affairs, outline the many benefits of ChatGPT for healthcare and its often over-burdened physicians.
From round-the-clock patient support to improving the speed and accuracy of care outcomes, ChatGPT has already found its place in the sector. Researchers have demonstrated the potential for ChatGPT to support clinical decision making for breast cancer screening and breast pain imaging, although issues remain with the reliability and accuracy of its output.
In an industry like healthcare, with stringent regulations on unauthorized data access under the Health Insurance Portability and Accountability Act (HIPAA), allowing use of ChatGPT without guardrails could risk a compliance disaster. The dozens of breach reports and lawsuits against providers that employed pixel tracking tools offer ample evidence of the need for informed use of innovative tech.
As seen in numerous reports from other sectors, the swift popularity of ChatGPT in the workplace has caused data breaches, inappropriate sharing, and a host of inaccuracies in certain fields, such as law.
A growing number of tech giants have even banned its use outright over fears employees may intentionally or inadvertently leak company information through the AI service. At Samsung Electronics, for one, the ban came in May after an engineer uploaded sensitive company code into ChatGPT.
The issue: any data shared with ChatGPT could later surface for other platform users. These tools save user chat histories by default and leverage those conversations to train future models. ChatGPT does let users manually change this setting, but doing so may or may not retroactively delete data uploaded into the service before the settings update.
Other company risks posed by AI services include:
- Improper sharing of PHI, PII, or other company data.
- Attackers could hack the model platforms themselves; a ChatGPT credential theft incident was already reported earlier this month.
- Any data leaked could get used to train AI models, which could then lead to future leaks.
- These tools may have biases on certain topics, which could lead to biases in patient care or access to healthcare; data shows this has already happened.
- Tools like ChatGPT are fallible: the information they return could be inaccurate or include misinformation. Employees in the legal field, for example, have already seen blowback from over-reliance on AI services.
- Threat actors have uploaded malicious apps and plugins posing as AI tools to app stores such as Google Play. Users may download these believing they are legitimate tools, posing serious privacy and security risks.
AI LLMs such as OpenAI’s GPT-3 and Google’s Bard, built through advanced deep learning techniques and trained on massive data troves, can translate text, generate code, answer questions, analyze sentiment, and perform other tasks. The positive use cases arguably outweigh these risks.
Companies should instead treat ChatGPT like any other potential enterprise threat: analyze the risk, set up effective governance, update tools and policies as needed, and ensure users are aware of all risks. That means weighing the described threats against the possible uses for AI services within the enterprise, and ensuring current policies defend against unauthorized disclosure.
Many companies already have processes in place to reduce these threats, but they will need to review current policies to ensure their governance standards are adequate for AI services, then address any detected issues. Network defenders should:
- Set up governance for the use of AI within the organization, or assign it to an appropriate governance committee or relevant team.
- Examine existing mitigations, security tools, and company policies to determine possible weaknesses. For example, would existing data loss prevention (DLP) or other tools help protect the organization against these risks?
- Train employees on these company policies to reduce the risk of exposure, including not inputting company data or protected health information into ChatGPT or other AI services.
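As a rough illustration of the kind of check a DLP control performs before data leaves the organization, the sketch below scans outbound text for PHI-like strings. The patterns here are hypothetical examples for illustration only; a real deployment would rely on a DLP vendor's managed detectors, not a hand-rolled list.

```python
import re

# Hypothetical PHI/PII patterns -- illustration only, not a production detector.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # U.S. Social Security number
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I),  # medical record number
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),          # date of birth
}

def flag_phi(text: str) -> list[str]:
    """Return the names of any PHI-like patterns found in the text."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

prompt = "Summarize visit notes for MRN: 12345678, DOB 04/02/1987."
print(flag_phi(prompt))  # flags 'mrn' and 'dob' before the prompt reaches an AI service
```

A check like this could sit in a web proxy or browser extension, blocking or redacting a prompt before it ever reaches an external AI service.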
The Health Industry Cybersecurity Practices (HICP) publication includes detailed recommendations for safeguarding data within the healthcare environment, including classifying data to properly assign access controls, as well as setting usage expectations to reduce the number of individuals with access to sensitive information.
ChatGPT and other AI tools can deliver a host of benefits to healthcare operations. While generative AI may raise alarms about exposure of company secrets or reputational harm, with effective, well-established governance controls, healthcare organizations can reap the benefits of this supportive tool with minimal risk.
Will Long, chief security officer, First Health Advisory