More nation-state actors are poised to develop realistic deepfakes using artificial intelligence for intelligence and military operations, according to The Register. Such sophisticated deepfakes are enabled by diffusion models and could be leveraged for disinformation campaigns, propaganda, and fake news distribution, a report from Northwestern University and the Brookings Institution revealed.

"The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make 'tweaks' to evade the detector," the report said.

Advances in deepfake technology should prompt global regulation, the researchers argued. "In the long run, we need a global agreement on the use of deepfakes by defense and intelligence agencies. Getting such an agreement will be hard, especially from veto-wielding nation states. Even if such an agreement is reached, some countries will likely break it. Such an agreement therefore needs to include a sanctions mechanism to deter and punish violators," said report co-author V.S. Subrahmanian.