Roskomnadzor has earmarked 58 million rubles to develop an automated system named Oculus, designed to search for prohibited content online. The initiative relies on a neural network to monitor video, images, and text across websites, social platforms, and even instant messaging services.
The goal is to use artificial intelligence to train Oculus to identify calls for extremism or terrorism, gatherings, and propaganda of non-traditional relationships. The project is slated to begin on December 12 of this year, with a claimed capacity of about 200,000 images per day, or roughly two frames per second. Industry observers note that implementing such a system would demand more than 40 servers equipped with graphics accelerators.
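For orientation, the arithmetic behind these figures can be sketched in a few lines. This is a back-of-envelope estimate only: the per-GPU throughput and GPUs-per-server values below are illustrative assumptions, not numbers from the project documentation.

```python
import math

SECONDS_PER_DAY = 24 * 60 * 60

def required_rate(images_per_day: int) -> float:
    """Average images per second needed to hit the stated daily target."""
    return images_per_day / SECONDS_PER_DAY

def servers_needed(images_per_day: int,
                   images_per_sec_per_gpu: float,
                   gpus_per_server: int) -> int:
    """Servers required at an assumed sustained per-GPU throughput."""
    rate = required_rate(images_per_day)
    gpus = math.ceil(rate / images_per_sec_per_gpu)
    return math.ceil(gpus / gpus_per_server)

if __name__ == "__main__":
    daily_target = 200_000                       # figure cited for Oculus
    rate = required_rate(daily_target)
    print(f"Average rate: {rate:.2f} images/s")  # ~2.3 images/s

    # Assumption for illustration only: a heavy multi-model pipeline
    # (OCR, scene analysis, text classification) sustaining just
    # 0.05 images/s per GPU, with one GPU per server.
    print(servers_needed(daily_target, images_per_sec_per_gpu=0.05,
                         gpus_per_server=1))     # prints 47
```

Read this way, the "more than 40 servers" estimate is driven almost entirely by the assumed throughput of the full analysis pipeline per GPU, since the raw 2.3 images per second is itself a modest rate.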
Still, experts are skeptical that the allocated funds and the timeline will be sufficient to build the system. The conversation around Oculus touches on the broader challenge of AI-powered content moderation at scale, especially in a landscape where legal and ethical safeguards must be balanced against surveillance capabilities.
In this context, the discussion extends to how such technology might influence freedom of information, the responsibilities of platform operators, and the potential implications for users who rely on online services for communication and expression.
The initiative reflects ongoing efforts by regulatory authorities to adopt advanced analytics and automation to monitor and curb prohibited content across digital networks, while also raising questions about transparency, accountability, and the continuous evolution of content governance in the digital era.