Russian Tech Push Aims to Verify Images, Combat Deepfakes

In Russia, both the private sector and government bodies are advancing technologies to verify the authenticity of digital files and curb deepfake fraud. According to RIA Novosti, researchers and policymakers are collaborating on reliable methods for distinguishing genuine imagery from manipulated material, a challenge that grows more pressing as synthetic media becomes more sophisticated.

Alexander Shoitov, a deputy head of the ministry overseeing these efforts, said in a recent briefing that results of the program could become clear within the current year. He emphasized that the work covers two fundamental tasks: first, testing whether a given image truly represents what it purports to show or has been reconstructed or altered; and second, creating a mechanism for attaching a verifiable signature to original images so that viewers can confirm their authenticity. Shoitov noted that both lines of development are progressing and that formal normative criteria may follow once practical results are in hand.
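The second task Shoitov describes, attaching a verifiable signature to an original image, can be illustrated with a minimal sketch. The article does not specify the actual scheme; the HMAC-based approach, the key, and the function names below are illustrative assumptions only:

```python
import hashlib
import hmac

# Assumed: a secret key held by the signing authority (illustrative value).
SECRET_KEY = b"issuer-secret-key"

def sign_image(image_bytes: bytes) -> str:
    """Return a hex signature binding the image content to the issuer's key."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Check that the image bytes still match the attached signature."""
    expected = sign_image(image_bytes)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw image bytes..."
sig = sign_image(original)
print(verify_image(original, sig))          # True: untouched image verifies
print(verify_image(original + b"!", sig))   # False: any alteration breaks it
```

A production system would more likely use asymmetric signatures (so anyone can verify without the secret key) embedded in image metadata, but the principle of binding a signature to the exact file contents is the same.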

Asked who is developing these technologies, Shoitov replied that the work is a collaboration among multiple actors spanning both industry and state institutions. He added that concrete approaches already exist in this domain and that some findings could emerge as soon as this year. He cautioned that deepfakes pose a pressing challenge because artificial intelligence can produce convincing synthetic videos that imitate reality, yet stressed the importance of drawing a clear boundary between legitimate creative work and content that could mislead or infringe on citizens' rights, with the aim of safeguarding public trust while enabling legitimate expression.

Previously, Roskomnadzor announced tougher penalties for disseminating deepfakes, highlighting the risk posed when manipulated material concerns politically or socially significant public speeches and statements by officials. The evolving policy landscape reflects a broader intent to deter false or misleading digital content while supporting legitimate information sharing.

Deepfake technology typically involves highly realistic substitution of faces and voices in photos, videos, and audio, and the authorities aim to mitigate its potential harm through a combination of technical solutions and regulatory measures. The ongoing experiments seek to create reliable indicators of authenticity and to establish standards that can be applied across media platforms, helping both creators and consumers navigate an increasingly complex digital environment while preserving freedom of expression and the integrity of public discourse.
