
The International Telecommunication Union (ITU), a United Nations agency, issued a strong call today for global companies and platforms to adopt advanced detection tools to combat AI-generated ‘deepfake’ content, warning of growing threats to election integrity and financial security.
In a report unveiled at the AI for Good Summit in Geneva, the ITU stressed the urgent need to address the misuse of generative AI, including fabricated images, videos, and audio designed to mimic real individuals with striking realism, news agencies reported.
“Deepfakes are eroding trust and increasing the risk of fraud and manipulation,” the report said.
The ITU report recommended that social media platforms and content distributors implement digital verification tools to authenticate images and videos before distribution, and that strong international standards be developed to combat manipulated media and false narratives.
“Trust in social media has declined significantly because people don’t know what’s real and what’s fake,” said Bilel Jamoussi, Head of the Study Groups Department at ITU’s Telecommunication Standardization Bureau. “Combating deepfakes is a major challenge because generative AI can create incredibly convincing content.”
In response, the ITU is working to establish global standards for video watermarking, targeting a medium that accounts for roughly 80% of internet traffic.
These standards aim to include source metadata, creator identity, and timestamps to help verify the authenticity of digital content.
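The report does not specify a technical mechanism, but the general idea behind such provenance standards can be sketched briefly. The example below is a hypothetical illustration, not the ITU specification: it attaches source metadata, creator identity, and a timestamp to a piece of content, binds them to the content with a hash, and signs the resulting manifest so a platform can later check that neither the content nor its metadata has been altered. Real watermarking embeds this information in the media signal itself rather than in a sidecar record, and the function names and signing key here are illustrative assumptions only.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the capture device or publisher (illustrative only).
SIGNING_KEY = b"example-shared-secret"

def build_manifest(content: bytes, source: str, creator: str) -> dict:
    """Attach provenance metadata (source, creator, timestamp) to content,
    bind it to the content via a SHA-256 hash, and sign the manifest."""
    manifest = {
        "source": source,
        "creator": creator,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Return True only if the signature is intact and the content hash still matches."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

if __name__ == "__main__":
    video_bytes = b"...raw video data..."  # stand-in for an actual video file
    manifest = build_manifest(video_bytes, source="news-cam-01", creator="Jane Doe")
    print(verify_manifest(video_bytes, manifest))                 # True: untouched content
    print(verify_manifest(video_bytes + b"edit", manifest))       # False: content altered
```

Under this kind of scheme, a platform that receives a video with a valid manifest can display its origin and creation time, while any edit to the pixels or the metadata invalidates the signature.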
The UN agency believes that a coordinated global effort is essential to prevent AI tools from being weaponized to spread misinformation, undermine public trust, or manipulate democratic processes.