In an experiment at the Russian Technological University (RTU MIREA), researchers examined whether neural networks can generate genuinely original scientific or creative work. More than 200 students took part; half were assigned to produce a scientific and practical paper using artificial intelligence as a tool and partner in the writing process. The study aimed to determine whether machines could contribute meaningful innovation or merely assemble preexisting ideas into new combinations. The question at the heart of the investigation was not just whether AI can imitate human writing, but whether it can add independent insight that stands as a real contribution to a field.
The evaluation panel, consisting of instructors and researchers, was tasked with discerning which submissions the students had written themselves and which had been generated by neural networks. The results showed that the vast majority of AI-generated texts, about 96 percent, were correctly identified as machine-produced. In the remaining 4 percent, the students had intervened extensively, manually editing the content the computer produced. This finding highlights the current balance between machine output and human refinement in academic writing, and it underscores the ongoing need for critical review and scholarly judgment when AI tools are used in educational settings.
RTU MIREA’s official explanation characterized the outputs as syntheses of existing abstracts and recommendations drawn from prior scientific papers and studies. The texts rarely demonstrated original reasoning, novel conclusions, or distinctive interpretations; instead, they tended to repackage established ideas without adding independent insight. This characterization points to a key limitation of contemporary neural networks: they excel at aggregating and reorganizing information but struggle to generate genuinely novel arguments or breakthroughs without human direction.
Experts overseeing the study found significant factual inaccuracies and logical inconsistencies in more than 72 percent of the examined AI-generated submissions. This error rate raises questions about reliability, especially in disciplines where precise data, rigorous methodology, and careful reasoning are essential. The findings point to the necessity of robust verification, cross-checking of sources, and clear attribution when AI is used to support research, analysis, or writing. They also suggest that AI should function as a supplement rather than a replacement for human expertise, particularly in academic and professional contexts where credibility matters greatly.
Although the question of whether AI will replace professionals such as doctors remains a topic of debate, the study notes clear limitations. While AI can assist with information gathering, pattern recognition, and preliminary drafting, it does not yet consistently deliver the nuanced judgment, ethical consideration, and patient-centered insight that human practitioners provide. The takeaway is cautious: artificial intelligence can augment clinical workflows by handling routine tasks and data synthesis, but it cannot, at present, supplant the complex, empathetic, and highly contextual decision-making that physicians perform every day.