Health authorities in the Nizhny Novgorod region have restricted medical professionals' access to neural network tools. A Telegram channel cited a decision by the regional Ministry of Health as the basis for the restriction.
According to the source, clinicians who had been using ChatGPT and other AI chatbots to streamline administrative tasks will lose access to those technologies. The policy bars all practicing doctors in the region from using neural networks for the duration of their service.
The ministry expressed concern that patient data and other sensitive work information could be exposed through AI systems, creating a risk of leaks. It also noted that several doctors who handled substantial documentation had faced accusations of unethical practice, which reinforced officials' caution. Lawmakers argued that an autonomous bot should not be treated as a trusted authority for patients, emphasizing the limits of AI in clinical judgment and record handling.
Earlier, students at a Moscow university were reportedly permitted to use neural networks in their coursework, illustrating a contrasting approach to AI adoption in education versus professional settings.
Public discourse in Russian-speaking regions has intensified as fraud schemes have risen alongside the spread of AI tools. Instances of misuse, such as WormGPT, a malicious chatbot used by hackers to automate fraud, have heightened calls for strict oversight of AI technologies in healthcare and related sectors.
Experts note that the tension between innovation and risk management is common across healthcare systems worldwide. While AI can support retrieval of medical information, drafting of routine notes, and decision-aid workflows, safeguards around privacy, data handling, and clinical accountability remain central to policy decisions. The Nizhny Novgorod policy reflects a broader caution: patient trust and data integrity are paramount, and any deployment of AI in clinical settings must be accompanied by robust governance, clear liability frameworks, and transparent workflows.
For practitioners, the current stance means prioritizing traditional tools and human oversight. Training in data privacy, secure handling of patient records, and adherence to regional regulations continues to be essential. Institutions are encouraged to develop internal guidelines that balance the benefits of AI-enabled efficiency with the imperative to protect patient confidentiality and uphold ethical standards.
Looking ahead, health authorities may revisit the scope of AI access, potentially introducing controlled pilots that emphasize audits, access controls, and accountability. Stakeholders—clinicians, administrators, and patients—are urged to participate in ongoing discussions about how artificial intelligence can be integrated safely into medical practice, ensuring that technology serves as a complement to, rather than a replacement for, professional judgment.