Neural network image generation accelerated almost 30-fold


Artificial intelligence (AI) researchers at the Massachusetts Institute of Technology (MIT) in the United States have found a way to dramatically speed up neural networks that generate images from text prompts. The paper was published on the scientific preprint portal arXiv.

The scientists developed a method called distribution matching distillation (DMD) that trains new AI models to mimic existing image generators, known as diffusion models, such as DALL-E 3, Midjourney, and Stable Diffusion.

This framework enables more compact AI models that can render images much faster without loss of quality.

Diffusion models create images through a multi-step refinement process that can involve up to 100 steps. The researchers were able to reduce this process to a single step. As a result, the AI took only 90 milliseconds to complete the task instead of the standard 2.59 seconds, making it 28.8 times faster.

DMD consists of two components that work together to reduce the number of iterations a model needs before producing a usable image.

“Reducing the number of iterations has been the Holy Grail of diffusion models since their inception,” said study co-author Frédo Durand, a professor of electrical engineering and computer science at MIT.

The new approach significantly reduces the computing power required by image-generating AI systems, the scientists said.

Earlier, Russian scientists managed to speed up neural network training by a factor of 20.
