House lawmakers Wednesday grilled Facebook’s vice president of global policy management, Monika Bickert, over the company’s ban of “deepfakes,” saying that the policy fell short.

“Big Tech failed to respond to the grave threats posed by deepfakes, as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate,” said Rep. Jan Schakowsky, D-Ill., who heads the House Energy and Commerce Subcommittee on Consumer Protection and Commerce.

Bickert appeared before the panel after penning a blog post detailing Facebook’s policy and the two criteria the company will use to determine whether misleading manipulated media should be banned.

A video will be subject to the ban if “it has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say,” Bickert wrote, and if “it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

Under the criteria, though, manipulated media like two widely viewed videos – one of a “drunk” House Speaker Nancy Pelosi, D-Calif., and the other of a “white nationalist-leaning” former Vice President Joe Biden – wouldn’t get the boot because they were faked using commonly available editing software like iMovie.

Biden’s spokesman Bill Russo pegged the ban as falling short. “Facebook’s announcement today is not a policy meant to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress,” Russo said in a statement posted by CNN. “Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created. Banning deepfakes should be an incredibly low floor in combating disinformation.”

But Robert Prigge, CEO of Jumio, called the Facebook policy “a step in the right direction, but there’s a lot of work to be done.”

As the company flags and reviews deepfakes, its “data set will grow, making it easier to train its machine learning models to determine if media has been manipulated and needs to be removed,” said Prigge.

The Facebook policy change comes as TikTok parent ByteDance has created a deepfake-making app.