
IRS plans for facial recognition draw scrutiny from privacy, cybersecurity advocates

Members of Congress are turning up the heat on the IRS over its planned use of facial recognition for taxpayers seeking to access their IRS.gov accounts. Pictured: The IRS building on April 15, 2019, in Washington. (Photo by Zach Gibson/Getty Images)

01/26/2022: This story was updated to include new comments from ID.me CEO Blake Hall that contradict previous claims that the company does not conduct 1:many matching of facial images.

The IRS is pushing taxpayers to start using a login service that leverages facial recognition and requires users to send photos of themselves to a third-party company. The news, first flagged by independent investigative reporter Brian Krebs, has been met with incredulity by privacy and cybersecurity experts, who say the program will create numerous privacy, cybersecurity and accessibility concerns for taxpayers.

To start, it does not appear that a user will *need* to submit a photo of themselves to file taxes, but the IRS website presents it as the default option, with alternatives buried in the frequently asked questions section. In response to initial reports around the program, the agency sent a statement to Gizmodo that said in part that “taxpayers can pay or file their taxes without submitting a selfie or other information to a third-party identity verification company” and that payments “can be made from a bank account, by credit card or by other means without the use of facial recognition technology or registering for an account.” Those unable (or unwilling) to verify through ID.me can also request transcripts by phone or mail, but that process will take five to 10 days.

The company has staunchly defended itself from criticisms around its privacy and security, insisting that it relies on trusted and vetted technologies that are widely used in other services and that submitting a selfie is no different from unlocking a smartphone with a facial image. It also claims its technologies do not suffer from the accuracy gaps between demographic groups that researchers have found plaguing many other facial recognition systems.

But experts in privacy and cybersecurity say that by making it the default option for users to create an account with ID.me and submit photos, the IRS is pushing potentially millions of taxpayers into using a risky technology with a spotty track record, one that isn’t even owned by the government.

Caitlin Seeley George, a campaign director for Fight for the Future, a tech policy and civil liberties non-profit, told SC Media that because nearly every American relies on the IRS website to file or gather information about their taxes, the program would represent “a massive expansion of the federal government using facial recognition on the public at large.”

“We know that the federal government has access to various facial recognition databases that are built up of mug shots and, in some states, driver's licenses and things like that. But I think this is creating a new database at a wider level that will impact far more people in the country than previous [programs],” George said. “It is a database that can be shared with law enforcement, with ICE, with the FBI, and I think ID.me’s privacy policy is kind of vague on other partners that it may share its data with.”

Even before this news, the federal government already had a long (and controversial) track record when it comes to collecting and storing images of Americans and others for use in facial recognition systems.

Multiple government entities — including U.S. Customs and Border Protection, the Transportation Security Administration and commercial airlines — use facial recognition or other biometric technologies to capture the images of international travelers at the border, airports and other ports of entry. Often, these systems are linked to other state and federal databases and lack robust or meaningful restrictions on how they can be used, leading to scenarios where images collected at your local DMV to create a driver's license can be accessed and used by Immigration and Customs Enforcement for deportations, or the FBI for criminal investigations.

In 2019, a proposed regulation by CBP that would have swept all travelers arriving or leaving the United States — including citizens — into a biometric screening program that included facial recognition was walked back by the agency after a public outcry. The proposal came less than a year after news that Perceptics, a subcontractor for CBP, had been hacked, resulting in tens of thousands of photos of travelers, their cars and license plate numbers being leaked to the public.

George believes there are both cybersecurity and privacy implications to the plan, including how the data will be used by other agencies, and questioned why the IRS needs ID.me, a private identity verification company, to collect and store taxpayer photos. The Perceptics hack demonstrates how easily this kind of information can be exposed when put in the hands of third-party providers. Officials at CBP told Congress that Perceptics' database was not linked to the government's, and that the company was not following rules specified in its contract when employees physically removed the photos from cameras and stored them on an internal network.

"These [private] databases are not actually secure, this information can be hacked and we just constantly say when this is your biometric information, it's not the same as a credit card that can be replaced," said George. "This is information that you can't replace, that can be used by bad actors to target people and harm them."

While ID.me has a "Privacy Bill of Rights" which states that users control their own data and that "explicit consent" is required before it is shared with other parties, the Electronic Frontier Foundation's Kurt Opsahl noted that ID.me's privacy policy says it can share data whenever the company thinks it's necessary to "investigate, detect, prevent and address … other harmful … activity," as well as in response to "requests" from law enforcement.

"Investigating any 'harmful' activity is the kind of exception that you write to be big enough to drive a truck full of [personally identifiable information] through," Opsahl wrote.

A NIST study from 2019 evaluated 189 facial recognition algorithms from 99 developers against approximately 18 million images of more than 8 million people. For so-called “one-to-one” matches — situations where the technology compares one photo of a person against another photo of the same claimed identity — these technologies saw significantly higher false positive rates for photos of African-American, Asian-American and Native American persons relative to Caucasians. In instances where the search was “one-to-many” (using a photo of a person to check against an entire database in order to find a match), the researchers found similarly elevated false positive rates for African-American women.
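The distinction between the two modes matters because a false positive in a one-to-many search implicates a stranger in the database, not just the person being verified. The difference can be sketched with a toy example using precomputed face "embeddings" — here hypothetical three-dimensional vectors standing in for the hundreds of dimensions a real recognition model would produce; the names, thresholds and vectors are all illustrative, not drawn from any actual system:

```python
# Toy sketch of 1:1 verification vs. 1:many identification over
# hypothetical face embeddings (real systems derive these vectors
# from a neural network; values here are made up for illustration).
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.9):
    """1:1 match: does this probe image belong to one specific enrolled user?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, database, threshold=0.9):
    """1:many search: which identity in the whole database, if any,
    best matches the probe image? A false positive here names a stranger."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

database = {
    "alice": [0.9, 0.1, 0.1],
    "bob": [0.1, 0.9, 0.2],
}
probe = [0.88, 0.12, 0.09]

print(verify(probe, database["alice"]))  # 1:1 check against a single record
print(identify(probe, database))         # 1:many search over everyone enrolled
```

The sketch also shows why one-to-many searches raise the stakes as a database grows: every enrolled person is a candidate for a mistaken match, which is why demographic gaps in false positive rates compound at scale.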

In a lengthy response to questions sent by SC Media after publication, the company said it uses a number of vendors, including Paravision and iProov, for its Face Match and Face Liveness features, and that those vendors have been tested for racial bias by the National Institute of Standards and Technology and other organizations. ID.me users can verify their identity in three ways: automated self-serve (taking and uploading a selfie), live video chats, and at one of 650 in-person locations across the country. Self-service accounts for an estimated 90% of its users.

"Additionally, ID.me has conducted internal testing to compare empirical results to the NIST and laboratory tests. In all cases, there was no statistically significant difference in the propensity to pass or fail the Face Match step across demographic groups, including groups with different skin tones, as corroborated by NIST, ID.me, and a state government agency," a response from a public relations firm representing the company stated.

In an attached statement, CEO and founder Blake Hall cast the company's verification technologies and processes as trusted and vetted. He also said ID.me does not rely on one-to-many matching, which he called "more complex and problematic."

"Our 1:1 Face Match is comparable to taking a selfie to unlock a smartphone. ID.me does not use 1:Many Facial Recognition, which is more complex and problematic. Further, privacy is core to our mission, and we do not sell the personal information of our users," he said.

However, just days later, Hall backtracked on those claims, posting on LinkedIn Wednesday that the company does in fact use 1:many matching to monitor for potential identity theft.

"ID.me uses a specific '1 to Many' check on selfies tied to government programs targeted by organized crime to prevent prolific identity thieves and members of organized crime from stealing the identities of innocent victims en masse," Hall wrote.

In congressional hearings over the past few years, officials from law enforcement agencies like the FBI and DHS component agencies like the Transportation Security Administration have often cited decades-old federal and state laws to justify their facial recognition programs — statutes that lawmakers have said were never written with biometric scanning technologies in mind.

The move by the IRS is already drawing scrutiny from members of Congress. Sen. Ron Wyden, D-Ore., a fierce advocate of digital civil liberties and restrictions on biometric data collection, said he was “disturbed” by the news and planned to follow up with the agency.

“I’m very disturbed that Americans may have to submit to a facial recognition system, wait on hold for hours, or both, to access personal data on the IRS website,” Wyden wrote on Twitter Thursday. “While e-filing returns remain unaffected, I’m pushing the IRS for greater transparency on this plan.”

Additionally, federal agencies like the National Institute of Standards and Technology have raised major concerns in the past about the accuracy of facial recognition technology and the government’s reliance on it. In particular, these systems often struggle to successfully identify or match people with darker skin tones.

Derek B. Johnson

Derek is a senior editor and reporter at SC Media, where he has spent the past three years providing award-winning coverage of cybersecurity news across the public and private sectors. Prior to that, he was a senior reporter covering cybersecurity policy at Federal Computer Week. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
