Security and trust executives from social media platforms Facebook and Twitter said at an RSA 2019 keynote panel this week that their companies would welcome additional transparency regulations as a countermeasure against the weaponization of the internet by foreign adversaries.

In addition, other experts on the panel suggested regulations that would require the identification of any bots posing as humans, and also encourage collaboration between the public and private sectors.

Nathaniel Gleicher, head of cybersecurity at Facebook, suggested the company would embrace more legislation along the lines of the proposed Honest Ads Act. Introduced in 2017, the bill was designed to improve the transparency of online political ads and curb foreign actor interference by tracking who is spending on such ads. Gleicher maintained Facebook’s position that it has not waited for the legislation to pass and has actually gone on to implement the policies contained in the bill.

“One of the things we learned is that we can’t just wait for regulation, so we went ahead and implemented all the requirements of the Honest Ads Act, plus more, ahead of it,” he said. “Another example: We’re actually working now actively with the French government, thinking through and trying to provide them with enough information that they can develop a legal framework that works in that context as well.”

Del Harvey, VP of trust and safety at Twitter, agreed that transparency is a key issue for building trust with the community — adding, however, that disclosure must actually be meaningful and beneficial, rather than arriving as a massive data dump that might be difficult to interpret.

Harvey noted that Twitter already has twice disclosed content associated with fake accounts tied to state-sponsored operations. “I think that type of transparency and the conversation that can take place coming out of that is a really meaningful one in terms of helping evolve the approaches that we’re taking to try to combat bad actors,” said Harvey, adding that social media platforms might also consider sharing with the public how their algorithms work.

But is that enough? Not for fellow panelist Peter Warren Singer, strategist at think tank New America. “I think what we’re seeing the [social media] companies go through is the stages of grief at what happened to their babies, to their creations,” said Singer. “So first it was denial and now we’re in the stage of bargaining: ‘I’m doing x, y, and z, so that you don’t have to intervene and force me to do these other actions.’ So there have been a series of actions that are great… but there’s still a way to go.”

Singer said he’d like to see social media companies be even more transparent about whom they can share user data with. And he would like to see a less stringent variation of the so-called “Blade Runner law” imposed. His version of the law would not forbid the use of bots posing as humans, or technologies such as deepfake videos. However, such instances would be clearly labeled so that the user community knows it is interacting with non-human personas or viewing something inauthentic.

Finally, Robert Joyce, senior advisor at the National Security Agency, said he’d like to see the enactment of policies that facilitate collaboration.

“…What we’re in right now is a zone where the intel community gets to see and watch those manipulators planning and attempting overseas. The platforms themselves have an amazing insight into how the real users operate and create accounts. And then you’ve got these intermediaries where we’ve got DHS and FBI who have the responsibilities for the protection and the pursuit of criminality,” said Joyce. “And I want to make sure that that chain, from the people who are looking at the bad actors, to the people who have the ability to act, and those with the authority and responsibility to protect us all, work at speed, scale and efficiency — the kind of things you’re seeing the adversaries able to do.”