Australian scientists at La Trobe University have developed a neural network that can detect intoxication from a person's speech. The research was published in the journal Alcohol.
To train the neural network, the researchers used more than 12,000 audio recordings of sober and intoxicated speakers. The algorithm, dubbed ADLAIA, detected intoxication with almost 70% accuracy in speakers with a blood alcohol concentration (BAC) of 0.5‰ or higher, and with almost 76% accuracy in speakers with a BAC above 1.2‰. A 12-second recording was sufficient for an assessment.
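The article does not describe the model's internals, but systems of this kind are typically built from per-clip acoustic features fed into a classifier. Below is a minimal, hypothetical Python sketch of such a pipeline; the MFCC features, the logistic-regression model, and all names are illustrative assumptions, not details taken from the ADLAIA paper.

```python
# Hypothetical sketch: summarize a 12-second speech clip as acoustic
# features, then classify it as sober vs. intoxicated. Feature choice
# and model are assumptions, not the published ADLAIA architecture.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SAMPLE_RATE = 16_000
CLIP_SECONDS = 12  # the study reports 12-second recordings suffice

def clip_features(signal: np.ndarray, sr: int = SAMPLE_RATE) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs (assumed features)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Toy stand-in data: random signals labeled 0 = sober, 1 = intoxicated.
rng = np.random.default_rng(0)
X = np.stack([clip_features(rng.standard_normal(SAMPLE_RATE * CLIP_SECONDS))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X[:3]))  # per-clip sober/intoxicated predictions
```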
The researchers expect ADLAIA to be integrated into mobile apps and used in venues such as bars and sports stadiums to give an immediate readout of a person's level of intoxication.
“Being able to detect people who are intoxicated just by their speech would be a much cheaper alternative to existing systems in these settings, where breath-alcohol testing is expensive and often unreliable,” the researchers note.