Scientists from Stanford University in the United States and the University of Toronto in Canada have collaborated on a technology that shows strong potential for detecting alcohol intoxication from a person’s voice. The work appears in the Journal of Studies on Alcohol and Drugs (JSAD), a peer-reviewed scientific publication.
The researchers conducted a controlled study with 18 volunteers. Each participant received a measured dose of alcohol and was asked to read a tongue twister aloud while a nearby smartphone recorded their speech. The readings were repeated seven times over the course of seven hours to capture changes in speech as intoxication progressed. This longitudinal design aimed to map how voice traits shift with varying alcohol levels.
Using machine learning, the team analyzed the audio recordings to extract features related to voice production. They looked at frequency patterns, pitch, tempo, articulation, and other acoustic markers that shift when alcohol affects the nervous system and the muscles used for speech. The resulting model distinguished intoxicated from sober speech with roughly 98 percent accuracy in their tests.
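To make the general recipe concrete, the sketch below shows one common way such a voice classifier could be assembled: extract acoustic features (a pitch track, spectral coefficients, a crude speaking-rate proxy) from each recording, pool them into a fixed-length vector, and train a standard classifier. This is a hypothetical illustration, not the authors' published pipeline; it assumes the open-source librosa and scikit-learn libraries, and the file names and labels are placeholders.

```python
# Minimal sketch of voice-based intoxication classification.
# Not the study's actual pipeline; librosa and scikit-learn assumed,
# file names and labels are placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_voice_features(path: str) -> np.ndarray:
    """Summarize one recording as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(path, sr=16000)

    # Fundamental-frequency (pitch) track; alcohol can degrade pitch control.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)

    # MFCCs capture the spectral shape associated with articulation.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Zero-crossing rate serves as a rough voicing/tempo proxy.
    zcr = librosa.feature.zero_crossing_rate(y)

    # Pool each frame-level feature into per-recording mean and deviation.
    pooled = [np.r_[p.mean(axis=-1), p.std(axis=-1)].ravel()
              for p in (f0, mfcc, zcr)]
    return np.concatenate(pooled)

# Hypothetical recordings and labels (0 = sober, 1 = intoxicated).
paths = ["p01_hour0.wav", "p01_hour3.wav"]
labels = [0, 1]

X = np.vstack([extract_voice_features(p) for p in paths])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(X))  # real use would evaluate on held-out speakers
```

In practice, a figure like the study's 98 percent would require many more recordings, careful per-speaker train/test splits, and a far richer feature set than this toy example.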
Lead investigators reported being pleasantly surprised by the model’s performance. They emphasized that while prior work has explored speech changes during intoxication, the strength of their results stems from advances in signal processing, richer acoustic feature sets, and more powerful machine learning techniques. Together, these elements appear to draw a clearer signal of impairment from a speaker’s voice.
The scientists describe the technology as a tool for timely intervention to reduce accidents caused by alcohol impairment. They suggest that everyday devices such as smartphones and smart speakers could serve as accessible monitoring platforms, especially in scenarios where immediate detection could prompt safer decisions. The aim is to develop a practical system that users would accept and rely upon, ultimately helping to prevent injuries and save lives by identifying impairment early.
Beyond this study, the team has previously explored related approaches. Earlier work demonstrated the feasibility of assessing drunkenness from gait patterns using a smartphone. Taken together, these efforts illustrate a broader research program focused on noninvasive, real‑time assessment of impairment from everyday sounds and movement data, with potential applications in road safety, workplace safety, and public health monitoring.