Pharmacy chain Rite-Aid’s recent abandonment of an eight-year-old facial recognition program aimed at curbing shoplifting underscores how organizations struggle to overcome the technology’s security and privacy challenges – as well as negative public perceptions.
Faced with fallout after a recent Reuters exposé, Rite-Aid torpedoed the program. But it is hardly the only company testing the facial recognition waters with an eye on embracing it full throttle. While the technology is most frequently discussed in terms of law enforcement, other industries adopting it include car manufacturers, which offer the ability to ensure only authorized drivers are behind the wheel. It is often touted as a pro-consumer vehicle safety feature that can detect, for example, driver drowsiness.
“Businesses in America are setting their sights on facial recognition, the next bleeding-edge technology,” says Matt Gayford, principal consultant at the Crypsis Group. “While facial recognition offers many efficiencies, it also creates several privacy and security challenges.”
The technology is just the latest to show “the power of the unintended consequences of data collection,” comments Tom Pendergast, chief learning officer at MediaPro, a Seattle-based provider of cybersecurity and privacy education. “We place cameras everywhere and gather the most personal information of all – what is more ‘personal’ than your face?”
In Rite-Aid’s case, over the course of eight years, the pharmacy chain installed facial recognition (FR) systems in 200 stores, including 33 of its 75 Manhattan locations in neighborhoods frequented by minorities, whom such technology has frequently misidentified.
Pendergast believes organizations that implement FR often do not think about all that could possibly go wrong with such data collection – like the fallout from improper employee training or whether organizations’ data protection practices are actually state of the art. “There’s a hell of a lot that can go wrong,” the privacy advocate explained.
Gayford agrees that companies implementing FR are tiptoeing through a potential privacy minefield, and he advises them to focus on consumer consent to avoid running afoul of regulators, who, so far, are playing catch-up.
He points to the “Commercial Facial Recognition Privacy Act” introduced in Congress. Although it has not been adopted, its language prohibits companies from collecting or sharing individuals’ data without explicit consent.
The California Consumer Privacy Act (CCPA) places biometric data in the same category as an individual’s personal information and demands the same protections. Under CCPA, California residents can access, delete and port their biometric data.
The European Union’s General Data Protection Regulation (GDPR) categorizes biometric data as sensitive information and explicitly states that biometric data cannot be used for identification unless the individual has consented or if specific obligations exist.
Steve Durbin, managing director of the Information Security Forum (ISF), noted public dissatisfaction with the way in which some governments have used facial recognition, which in some cases has resulted in its withdrawal from use.
“For facial recognition systems to become an acceptable, broadly used means of authenticating that we are who we say we are, we need to confirm that the privacy rights of the individual are protected, that the data is responsibly collected, stored and managed and that its use is limited to the purpose for which it was initially taken,” Durbin said.
Noting protections in Europe, Jan Zaborsky, content expert at Innovatrics, believes there is no getting around the regulations in place.
“In our opinion, the technology must always be used responsibly and respect the regulations,” said Zaborsky, whose company’s facial recognition technology is used for visitor-ship analysis, in which faces are detected directly in a video stream captured by CCTV to estimate the age and gender of each person.
“Matching visitors to an existing database created without explicit user consent would be illegal [under GDPR],” Zaborsky pointed out.