Government commission declines to back bill criminalizing deepfakes


The government’s Legislative Actions Commission declined to back a bill that would have established separate criminal liability for distributing deepfakes and other AI-generated fake media, TASS reported, citing the Cabinet’s draft review of the document.

The initiative was authored by deputy Sergey Leonov of the LDPR faction. His draft would have amended the Criminal Code of the Russian Federation to punish the unlawful creation or dissemination of information about individuals without their consent in cases where artificial intelligence is used to substitute their faces or voices. The measure targeted unauthorized manipulation of a person’s appearance and speech, a tactic increasingly used to mislead or defame, and reflected the sponsor’s intent to add a specific provision addressing AI-based impersonation (source: TASS).

According to the text of the document, the proposal did not treat the creation of AI-generated materials that reveal information about a person’s private life, but pose no clear public danger, as a criminal act unless there was intent to distribute them. In effect, AI-produced materials lacking either public risk or a purpose of dissemination would fall outside the proposed scope. Critics and supporters alike described these legal lines as ambiguous, potentially leaving some harmful uses unpunished under the new clause (source: draft materials cited by news outlets).

The commission’s review also notes that the unauthorized distribution of a person’s personal data, including AI-facilitated cases, already falls under Article 137 of the Criminal Code, which concerns violations of privacy. Moreover, the document found no evidence of a gap in current legislation or of the need for a new framework to address AI-driven privacy breaches (source: document excerpts cited in coverage).

Russian Foreign Ministry spokesperson Maria Zakharova has argued that the deepfake problem requires regulation at the legislative level. She also called for readiness to mobilize eyewitnesses who can counter deepfake materials and help verify authenticity in real time. This viewpoint underscores a broader push within some government circles to build a rapid-response apparatus for fact-checking in the digital age (source: Zakharova statements reported by news agencies).

Subsequent reporting indicated that Roskomnadzor, Russia’s telecommunications regulator, has been exploring how AI might be employed to safeguard information security. Those discussions centered on the automatic detection of false information across online platforms, with the stated aim of strengthening defenses against misrepresentation and misinformation while maintaining reasonable checks on technological innovation (source: Roskomnadzor briefings and public statements).
