Starting in late April 2024, Telegram users in Russia and Ukraine encountered polls, circulated by the moderators of the Tginfo channel, about the possibility of blocking political channels. The questions appeared amid ongoing debates about how content should be moderated on the platform.
In Russia, respondents faced a pointed question: should channels that spread Ukrainian propaganda against Russians be blocked? The answer options were yes, no, and "I am not from Russia." In Ukraine, the question was the mirror image, asking whether channels that disseminate Russian propaganda should be blocked. The aim was to gauge public sentiment on both sides of the border about restricting information that each side views as dangerous or misleading.
By the time the polls drew attention, roughly two thousand people had voted in each survey, and the country-specific results showed a comparable pattern. In Russia, about 70 percent voted in favor of blocking such channels, around 27 percent opposed, and roughly 3 percent selected "I am not from Russia." In Ukraine, the distribution tilted slightly differently but followed the same trend: around 76 percent supported blocking, 21 percent opposed, and about 3 percent chose the third option.
The organizers behind Tginfo acknowledged that the sample was small, with fewer than two thousand voters in each country, and stressed that the polls offered a narrow glimpse rather than a broad cross-section. Even so, they noted that the results pointed to a clear majority in both countries in favor of restricting channels seen as spreading the opposing side's viewpoint.
These polls followed comments by Telegram founder Pavel Durov, who had previously discussed the possibility of blocking certain news channels in Ukraine to counter propaganda. The episode highlights the ongoing friction between platform governance and the geopolitical information battles that have marked the digital space in the region.
Beyond the immediate political framing, the discourse raises broader questions about how social platforms decide what to allow or restrict and how those decisions affect access to information. It invites readers to consider censorship, media literacy, cross-border information dynamics, and the role of moderators in shaping public discourse without silencing legitimate voices. The situation reflects a persistent debate about balancing free expression with the need to curb harmful or deceptive content in a highly polarized environment.
Observers note that public opinion on content moderation can shift quickly and often depends on how surveys are framed, who participates, and the context of the questions. Polls provide a snapshot, but they rarely capture the full spectrum of viewpoints across diverse demographics and regions. Still, the visible tilt toward support for blocking among participants in both Russia and Ukraine signals a shared concern about information quality and propaganda tied to political tensions in the area.
As this topic continues to unfold, it remains essential for users to evaluate information critically, consider multiple perspectives, and rely on credible sources when forming judgments about media content and platform policies. Open dialogue about moderation practices—built on transparency, accountability, and fairness—helps readers understand why certain channels may be restricted and how such decisions align with broader goals for public safety and information integrity.