American engineers from universities in Ohio and Indiana have developed a new artificial intelligence (AI) training model to improve sound quality. Rather than relying on the technical parameters of audio files, the researchers based the model on subjective human perception of how clear the audio sounds. The study was published in a journal of the Institute of Electrical and Electronics Engineers (IEEE).
The model was trained on two datasets from a previous study containing recordings of human conversations. In some of the recordings, background noise such as television or music partially drowns out the speech. Listeners rated the speech quality of each recording on a scale of 1 to 100.
For their project, the scientists built a custom speech enhancement model paired with a prediction module that estimates the average quality rating human listeners would give a noisy signal.
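The article does not describe the architecture, but the general idea of training an enhancer against a learned quality predictor can be sketched roughly as follows. Every class, layer size, and the 1-to-100 training target below is a hypothetical placeholder for illustration, not the authors' actual design.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: the study's architecture is not public here,
# so both networks below are illustrative placeholders.

class Enhancer(nn.Module):
    """Maps a noisy waveform to an enhanced waveform (placeholder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):
        return self.net(x)

class QualityPredictor(nn.Module):
    """Estimates the mean listener rating (1-100) of a waveform (placeholder)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        f = self.features(x).squeeze(-1)
        return self.head(f).squeeze(-1)

enhancer = Enhancer()
predictor = QualityPredictor()          # assumed pre-trained on human ratings
for p in predictor.parameters():        # freeze it; it only scores the output
    p.requires_grad = False

opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)

noisy = torch.randn(8, 1, 16000)        # a batch of 1-second noisy clips
enhanced = enhancer(noisy)
predicted_rating = predictor(enhanced)  # estimated mean listener rating

# Train the enhancer to maximize the predicted human rating,
# i.e. minimize the gap to the best possible score of 100.
loss = ((100.0 - predicted_rating) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The key design choice this sketch illustrates is that the training signal comes from a model of human judgment rather than from a purely signal-level objective.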
In tests, the new model outperformed standard noise-reduction approaches, which work by separating the desired speech signal from extraneous sounds.
According to the researchers, better sound quality is critical for building more effective hearing aids, public address systems, speech recognition software, and other audio systems.
Earlier, researchers in Russia created a powerful artificial intelligence that works like the human brain.