AI-Based Sentiment Analysis Aims to Identify PTSD Signals in Copywriters During Telemedicine Sessions
Artificial intelligence has been trained to help diagnose post-traumatic stress disorder in copywriters by analyzing how they express themselves during therapeutic conversations. In a study published in Boundaries in Psychiatry, researchers explored how machine-driven sentiment analysis can reveal patterns of emotion in spoken or written material collected during psychotherapy sessions.
In this Canadian study, a team of researchers from a major university examined a large set of texts produced during psychotherapeutic interviews. Sentiment analysis, a field at the intersection of linguistics and computer science, measures the emotional tone of language. The algorithm counts how often statements express positive, negative, or neutral sentiment, producing a numerical map of a person's emotional expression across the conversation. The goal was to determine whether these emotional signals could help distinguish the lasting effects of post-traumatic stress disorder from ordinary variations in mood or expressive style that can occur in anyone undergoing therapy.
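To make the idea concrete, the sketch below shows one way per-utterance sentiment labels could be rolled up into a numerical profile of a session. The word lists, function names, and sample transcript are invented purely for illustration and are far simpler than any model the researchers would actually have used.

```python
from collections import Counter

# Toy illustration only: a real system would rely on a trained sentiment model,
# not a tiny hand-made word list like this one.
POSITIVE = {"calm", "better", "hopeful", "safe", "relieved"}
NEGATIVE = {"afraid", "angry", "nightmare", "numb", "worse"}

def label_utterance(utterance: str) -> str:
    """Assign a coarse positive / negative / neutral label to one utterance."""
    words = utterance.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def session_profile(utterances: list[str]) -> dict[str, float]:
    """Turn a session transcript into proportions of each sentiment label."""
    counts = Counter(label_utterance(u) for u in utterances)
    total = len(utterances) or 1
    return {label: counts[label] / total for label in ("positive", "negative", "neutral")}

# Hypothetical transcript fragment, not drawn from the study.
transcript = [
    "I felt numb most of the week",
    "The nightmare came back twice",
    "Talking about it makes me feel a little better",
]
print(session_profile(transcript))
```

The resulting proportions are the kind of "numerical map" the article describes: a compact summary of how often each emotional stance appears over the course of a conversation.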
During the sessions, a virtual interviewer appeared on a video-conference screen to interact with participants. Of the 275 individuals involved, 188 carried a PTSD diagnosis and 87 did not. The researchers found clear differences in linguistic patterns between the two groups. People with PTSD tended to use more neutral or negative expressions overall, and certain moments showed sharper swings toward negative emotion or, at times, restrained emotional expression as a coping mechanism. This aligns with existing findings that trauma can shape both what is said and how it is said in a therapeutic setting.
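The notion of emotional fluctuation can also be expressed numerically. The illustrative sketch below converts per-utterance labels into a simple time series and summarizes its average tone and variability; the feature names and example sessions are hypothetical and are not taken from the study.

```python
import statistics

def sentiment_trajectory(labels: list[str]) -> list[int]:
    """Map per-utterance labels to numeric scores so a session becomes a time series."""
    score = {"positive": 1, "neutral": 0, "negative": -1}
    return [score[label] for label in labels]

def fluctuation_features(labels: list[str]) -> dict[str, float]:
    """Summarize a session: average tone plus how strongly it swings."""
    traj = sentiment_trajectory(labels)
    return {
        "mean_tone": statistics.fmean(traj),
        "swing": statistics.pstdev(traj),  # larger values = sharper emotional shifts
    }

# Two invented sessions to show the contrast the article describes.
steady = ["neutral", "positive", "neutral", "neutral", "positive"]
volatile = ["neutral", "negative", "negative", "neutral", "negative", "positive", "negative"]
print(fluctuation_features(steady))
print(fluctuation_features(volatile))
```

Under these assumptions, a session dominated by negative swings would show a lower mean tone and a larger "swing" value than a steadier one, which is the sort of contrast the researchers report between the two groups.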
The team fed these linguistic cues into a machine learning model designed to classify participants according to their expressed sentiment. The resulting algorithm identified PTSD with an accuracy of about 80 percent within this particular interview framework. While the researchers emphasize that this is an initial step, the performance is promising for the development of low-cost, scalable diagnostic tools that could support telemedicine services, especially in areas with limited access to traditional clinical evaluations. The approach could give clinicians a supplementary data stream to corroborate clinical judgments, monitor changes over time, and tailor interventions to the emotional trajectories observed in therapy sessions.
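The sketch below gives a rough sense of how such a classifier might be trained and evaluated. It uses scikit-learn's logistic regression on entirely synthetic session-level features; the study's actual model, features, and data are not reproduced here, and the numbers exist only to mirror the 188/87 group sizes mentioned above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic session-level features: [negative proportion, neutral proportion, tone swing].
# Values are invented to mimic the structure of the task, not the study's data.
n_ptsd, n_control = 188, 87
ptsd = rng.normal(loc=[0.45, 0.40, 0.80], scale=0.10, size=(n_ptsd, 3))
control = rng.normal(loc=[0.25, 0.35, 0.50], scale=0.10, size=(n_control, 3))

X = np.vstack([ptsd, control])
y = np.array([1] * n_ptsd + [0] * n_control)  # 1 = PTSD diagnosis, 0 = no diagnosis

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

Held-out accuracy of this kind is the figure the researchers report at around 80 percent, though their pipeline and features are almost certainly more elaborate than this sketch.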
As telemedicine becomes more widespread, such AI-enabled tools may play a growing role in mental health care by providing objective indicators derived from natural language use. The investigators caution that any diagnostic use would require careful integration with comprehensive clinical assessments, ethical safeguards, and continued validation across diverse populations. Nonetheless, the research contributes to a broader conversation about how computational methods can augment clinicians' understanding of trauma-related conditions while keeping patient well-being at the center of care. The study invites ongoing collaboration between technologists and mental health professionals to ensure that machine-driven insights enhance, rather than replace, the human elements essential to healing. The full publication in Boundaries in Psychiatry offers a starting point for further exploration of this emerging field.