
How Big Tech can regain trust amid AI innovation

Winning the public's trust with AI

Earlier this year, Mayo Clinic researchers reported that a new AI model could detect early-stage pancreatic cancer.

With developments like this, AI will save thousands of lives and put demonstrable good into the world. And yet, the public has become more wary than ever of AI.

A July MITRE-Harris poll on AI found that only 39% of respondents believe today’s AI is safe and secure, down from 48% in a November 2022 version of the same poll. Not coincidentally, OpenAI launched ChatGPT in November 2022.

ChatGPT reportedly costs roughly $700,000 a day to run, which works out to more than $250 million a year. And ChatGPT is a large language model (LLM), just one specific application of generative AI rather than the most advanced form the technology could take. Developing and operating more capable generative AI will cost far more.

Cost will be a barrier to entry, meaning much of the major development in AI will come from Big Tech companies such as Google, Meta, and Microsoft, all of which are already deep into AI R&D.

Therein lies the problem: consumer trust in Big Tech has been declining for years. But society cannot let such concerns hinder AI innovation and development, because AI has the power to bring about world-changing advancements.

That means Big Tech must regain public trust, and it can do so by improving the very data privacy practices that eroded that trust in the first place.

Improve communication with users

The industry can start by treating privacy policies as dialogue instead of boilerplate. After the EU’s General Data Protection Regulation (GDPR) came into effect in 2018, nearly every website adapted to the law by putting up cookie banners and more public-facing privacy policies to quickly gain user consent to data processing.

This did not give rise to transparent privacy practices that concisely communicated to users what was actually going on. Instead, it spawned “cookie fatigue,” where most people automatically click through these notices without a second thought. With the average privacy policy sitting between 2,500 and 4,500 words and full of complicated legalese, who can blame someone for simply clicking through?

Companies working with AI need to publish shorter, punchier, and far more transparent notices and privacy policies. Bullet points, illustrations, and an easy opt-out button can raise public awareness of how data feeds these systems and of the inherent value each person’s data holds.
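
To make this concrete, here is a minimal sketch, in TypeScript, of what a layered, plain-language notice with a one-click opt-out might look like behind the scenes. Every name and field here is hypothetical, offered only to illustrate the idea:

```typescript
// Hypothetical sketch of a "layered" privacy notice: a short, plain-language
// summary backed by a one-click opt-out. All names here are illustrative.

interface NoticeItem {
  purpose: string;      // why the data is collected, in plain language
  dataUsed: string[];   // which categories of data feed this purpose
  optional: boolean;    // can the user decline without losing the product?
}

interface LayeredNotice {
  summary: string;        // one or two sentences, no legalese
  items: NoticeItem[];    // bullet-point detail, shown on demand
  fullPolicyUrl: string;  // link to the complete legal text
}

const aiFeatureNotice: LayeredNotice = {
  summary: "We use your prompts to improve our AI model. You can opt out at any time.",
  items: [
    { purpose: "Improve model answers", dataUsed: ["prompts", "feedback ratings"], optional: true },
    { purpose: "Detect abuse", dataUsed: ["prompts"], optional: false },
  ],
  fullPolicyUrl: "https://example.com/privacy",
};

// A single, unambiguous opt-out call — no buried settings page.
function optOut(userId: string, purpose: string): void {
  console.log(`User ${userId} opted out of: ${purpose}`);
  // ...persist the preference and stop processing for this purpose
}

optOut("user-123", "Improve model answers");
```

Keeping the opt-out a single action, rather than a buried settings flow, is what turns the notice into a dialogue instead of boilerplate.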

High-profile data breaches and revelations that Big Tech was selling user data damaged the perception of the internet, as the defining tech of the day had commoditized its users without their permission.

Big Tech may not have set out to cause privacy harms, but consumers were affected nevertheless. Meaningful communication means telling users not only what is happening with their data, but also what the worst-case scenarios are. The EU’s AI Act takes this approach with its tiered risk levels, and the same thinking should be front and center in how companies market AI tools.

For example, does a product use biometric data? If so, users should know that a breach of that data could lead to identity theft and unauthorized account access. How much sensitive data does an AI system process? That answer alone hints at how much bias, and how much exposure, a system may carry.
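
As a rough illustration, a product team might encode these disclosures in a structure like the one below. The tier names loosely echo the AI Act’s risk levels; the fields and examples are assumptions made for this sketch, not legal definitions:

```typescript
// Illustrative sketch only: one way to encode per-feature risk disclosures,
// loosely mirroring the EU AI Act's tiered-risk idea. Tier names and fields
// are assumptions, not the Act's legal definitions.

type RiskTier = "minimal" | "limited" | "high" | "unacceptable";

interface FeatureDisclosure {
  feature: string;
  dataCategories: string[];   // e.g., biometric, location, health
  tier: RiskTier;
  worstCase: string;          // the harm a user should weigh before consenting
}

const disclosures: FeatureDisclosure[] = [
  {
    feature: "Face unlock",
    dataCategories: ["biometric"],
    tier: "high",
    worstCase: "A breach of biometric data could enable identity theft and unauthorized account access.",
  },
  {
    feature: "Spam filtering",
    dataCategories: ["message metadata"],
    tier: "minimal",
    worstCase: "Legitimate messages may occasionally be misclassified.",
  },
];

// Surface the worst case up front, before the user consents.
for (const d of disclosures) {
  console.log(`${d.feature} [${d.tier} risk]: ${d.worstCase}`);
}
```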

Privacy policies should inform; AI positioning should acknowledge risk, so that people can give fully informed consent before using a potentially dangerous product whose repercussions could damage their lives.

Respect and complete user data rights requests

European regulators have been looking into ChatGPT amid complaints that its maker, OpenAI, has refused to complete data subject requests (DSRs).

No matter how complex an AI system is, and whatever business risk may come with publicizing how it operates, companies must always respect and fulfill user DSRs. Data rights are fundamental to global data privacy regulations, and since data privacy will play a pivotal role in AI governance, no system can supersede consumer rights.
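
In practice, honoring DSRs starts with logging each request against its statutory deadline. The sketch below assumes a GDPR-style one-month response window; the request types track GDPR’s core data rights, and everything else is illustrative:

```typescript
// A minimal sketch of DSR intake and deadline tracking, assuming a GDPR-style
// one-month response window. Request types follow GDPR's core data rights;
// the identifiers and structure are hypothetical.

type DsrType = "access" | "rectification" | "erasure" | "portability";

interface DsrRequest {
  id: string;
  userId: string;
  type: DsrType;
  receivedAt: Date;
  dueBy: Date;        // GDPR: respond within one month of receipt
  completed: boolean;
}

function openDsr(userId: string, type: DsrType): DsrRequest {
  const receivedAt = new Date();
  const dueBy = new Date(receivedAt);
  dueBy.setMonth(dueBy.getMonth() + 1); // one-month statutory window
  return {
    id: `dsr-${Date.now()}`,
    userId,
    type,
    receivedAt,
    dueBy,
    completed: false,
  };
}

const request = openDsr("user-123", "erasure");
console.log(`DSR ${request.id} (${request.type}) must be answered by ${request.dueBy.toDateString()}`);
```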

Privacy policies and risk awareness come before a product is ever used, which is why they must be maximally transparent. Data rights, by contrast, are the last line of defense consumers have against data-handling malpractice.

By ensuring individuals know about data rights and are free to exercise them, Big Tech covers the entire user journey. We have arrived at the next generation of innovation, and if AI products put these steps to use in good faith, they will win back consumer trust over time.

Gal Ringel, co-founder and CEO, Mine
