AI/ML, AI benefits/risks, Vulnerability Management

AI firm Hugging Face discloses leak of secrets on its Spaces platform

(Credit: Tada Images, Adobe Stock)

Artificial intelligence company Hugging Face disclosed that secrets from its Spaces platform may have been accessed without proper authorization last week.

The Hugging Face Spaces platform enables users and organizations to host interactive demos of their machine learning (ML) applications.

Hugging Face said in a post Friday that it detected the potential intrusion earlier last week, leading the company to discover that a “subset of Spaces’ secrets” may have been exposed to an unauthorized party.

The leaked secrets included Hugging Face tokens, which the company revoked after discovering the suspicious activity; affected users received an email prior to the Friday disclosure, according to the company.

The disclosure notice also noted several security changes made to the Spaces platform in response to the leak, including the removal of org tokens to improve traceability and auditing capabilities, and the implementation of a key management service (KMS) for Spaces secrets.

Hugging Face said it plans to deprecate traditional read and write tokens “in the near future,” replacing them with fine-grained access tokens, which are currently the default.

Hugging Face recommends that Spaces users switch to fine-grained access tokens if they are not already using them, and refresh any key or token that may have been exposed.

The company brought in third-party cybersecurity forensic experts to help investigate the incident and review its security practices; the incident was also reported to law enforcement and data protection authorities.

Further details about the suspected unauthorized access were not provided, and Hugging Face did not immediately respond to inquiries from SC Media regarding the number of affected users and origin of the intrusion.

AI secrets at risk

Multiple cyberattacks, data leaks and vulnerabilities disclosed over the past six months have put sensitive AI data at risk of theft and misuse.

In December, Lasso Security discovered more than 1,600 Hugging Face API tokens were exposed on the platform and on GitHub, putting organizations including Microsoft and Google at risk of hacks and data theft.   

Research published in April by Wiz also showed how malicious AI models could be used to perform cross-tenant attacks and potentially compromise other models and projects. Wiz partnered with Hugging Face to mitigate the vulnerabilities.

A critical vulnerability in the open-source AI framework Ray, discovered late last year, has been targeted to compromise AI workloads, Oligo researchers reported in March. A critical RCE vulnerability in the open-source llama-cpp-python package was found to impact more than 6,000 AI models dependent on the package in May.

Hugging Face offers several security measures for AI models and projects hosted on the site, including malware scanning and scanning for unsecured secrets in app files.
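Such secret scanning typically works by pattern-matching known token formats in repository files. As an illustration only (this is not Hugging Face's actual scanning implementation), the sketch below flags strings that match the documented "hf_" prefix of Hugging Face user access tokens; the length threshold is an assumption chosen for the example:

```python
import re

# Hugging Face user access tokens start with the "hf_" prefix.
# The length bound here is an illustrative assumption, not HF's real rule.
HF_TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{20,}\b")

def find_candidate_secrets(text: str) -> list[str]:
    """Return substrings that look like Hugging Face access tokens."""
    return HF_TOKEN_PATTERN.findall(text)

# Example: a config file accidentally containing a (fake) token
sample = 'HUGGINGFACE_TOKEN = "hf_abcdefghijklmnopqrstuvwx"'
print(find_candidate_secrets(sample))  # → ['hf_abcdefghijklmnopqrstuvwx']
```

Real scanners layer entropy checks and verification on top of regexes like this to cut false positives, but the core idea is the same: detect committed credentials before they reach an unauthorized party.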

Meanwhile, regulators have attempted to keep up with new security risks stemming from the AI boom; for example, CISA’s latest AI guidelines published in late April offer guidance for defending AI-powered critical infrastructure systems and developing secure AI systems, as well as preparing for AI-enabled attacks.
