Engineers from the University of Vigo have developed a system for detecting images created with artificial intelligence

The inability to distinguish generated images from real ones is one of the threats artificial intelligence poses to today's society. Rapid technological development in this field is a challenge for researchers, but a group of engineers from the Atlantic Center has developed a system, based on neural networks, one of the methods of artificial intelligence, that distinguishes real photos from generated ones. And they did it with an accuracy of more than 95%.

“Programs that create images from text already work quite well, and quickly too. It is a very hot topic, so the question arose of whether there was a way to use AI itself to distinguish a real photo from an image generated with applications like DALL·E, Stable Diffusion or OpenArt. We had been working for some time on neural networks and artificial intelligence systems that classify images in other domains, and the idea of taking advantage of that experience emerged,” explains Fernando Martín, of the High Frequency Devices group and author of the study together with his colleague Mónica Fernández, the team's coordinator, and Rocío García, recruited under the Investigo youth employment programme.

Martín and Fernández, of the School of Telecommunications Engineering, had previously worked on fingerprinting, which makes it possible to determine the origin of real photos and videos. They discovered that interesting results can be obtained by applying this technique, called photo-response non-uniformity (PRNU), to images created by artificial intelligence.

All digital photos have small imperfections, practically imperceptible to the human eye, but unique: every sensor, even among cameras of the same model, leaves a different pattern, which allows an image to be associated with the camera that took it.

“In fact, the fingerprint is an error introduced by the camera sensor due to manufacturing defects, and we compute it from the image. At first we believed we would not find these errors in images produced with artificial intelligence. But they are there; we always get a result. Probably because the applications are trained with real photos; in a way, they are the heirs of those images. What happens is that instead of one original camera there are hundreds, even thousands of cameras, all mixed together,” explains Fernando Martín.
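The idea behind the PRNU fingerprint can be sketched in a few lines. This is a toy illustration, not the researchers' pipeline: real PRNU extraction uses a wavelet denoiser and maximum-likelihood averaging, whereas here a simple 3×3 box blur stands in for the denoiser.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with reflective edge padding (stand-in denoiser)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="reflect")
    return sum(p[i:i + h, j:j + w] for i in (0, 1, 2) for j in (0, 1, 2)) / 9.0

def noise_residual(image):
    """Noise residual: the image minus a denoised version of itself.
    This residual carries the sensor's PRNU pattern."""
    img = np.asarray(image, dtype=np.float64)
    return img - box_blur(img)

def camera_fingerprint(images):
    """Estimate a sensor fingerprint by averaging the residuals of
    several images taken with the same camera."""
    return np.mean([noise_residual(img) for img in images], axis=0)

def correlation(residual, fingerprint):
    """Normalized correlation between a test residual and a fingerprint;
    high values suggest the image came from that sensor."""
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

For a real camera, residuals from many of its photos correlate with one shared fingerprint; for an AI-generated image, as Martín notes, the residual looks like a blend of many training cameras rather than any single one.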

The researchers also tested the effectiveness of a second technique, error level analysis (ELA), whose primary application is detecting editing or tampering in forensic imaging. With real images it makes it possible to determine which parts have been replaced or retouched; with images created by artificial intelligence, it flags the whole image as altered.
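ELA works by re-compressing an image and measuring how much each region changes: areas that survived the original compression change little, while pasted or freshly generated content changes more. The sketch below is a deliberately simplified stand-in, using coarse uniform quantization instead of an actual JPEG re-save, and the `step` and `threshold` values are illustrative, not from the study.

```python
import numpy as np

def error_level_map(image, step=16):
    """Toy ELA: 're-compress' via coarse quantization (a stand-in for
    re-saving as JPEG) and measure how far each pixel moves."""
    img = np.asarray(image, dtype=np.float64)
    recompressed = np.round(img / step) * step
    return np.abs(img - recompressed)

def suspicious_regions(image, step=16, threshold=6.0):
    """Boolean mask of pixels whose error level exceeds a threshold;
    large connected regions would be candidates for tampering."""
    return error_level_map(image, step) > threshold
```

A production ELA implementation would re-save the image as JPEG at a known quality (e.g. with Pillow) and difference it against the original, but the quantize-and-diff step above captures the principle.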

“We concluded that both types of fingerprinting work and give good results. But the option we recommend is to use both, as a double check. If either of them determines that an image is synthetic, it most likely is. However, there is a certain error rate when the result is negative, and that is greatly reduced by using the two together,” the researcher emphasizes.
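The "double check" the researcher describes is an OR combination: flag the image if either detector fires. Why this shrinks the false-negative rate can be made concrete with a small sketch; the independence assumption and the example miss rates below are illustrative, not figures from the study.

```python
def combined_verdict(prnu_says_synthetic, ela_says_synthetic):
    """Flag an image as synthetic if either detector fires (OR rule)."""
    return prnu_says_synthetic or ela_says_synthetic

def combined_miss_rate(p_miss_prnu, p_miss_ela):
    """If the two detectors miss independently, a synthetic image slips
    through only when BOTH miss, so the combined false-negative rate
    is the product of the individual rates."""
    return p_miss_prnu * p_miss_ela
```

For example, two detectors that each miss 5% of synthetic images would, under the independence assumption, jointly miss only 0.25%; in exchange, the OR rule can only raise the false-positive rate, which is the usual trade-off of such ensembles.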

The authors of the work, published in the journal Sensors, used more than a thousand real and AI-generated images to train convolutional neural network systems. Besides their previous experience, they drew on a framework developed by chemical engineer Rocío García for a method that aids diagnosis from mammograms. “It classifies them as suspicious before the radiologist examines them, and that is the approach that worked best for us in this case,” says Martín.

“There are already systems to detect deepfakes, which put one person's face on another's body with less than innocent intentions. Ours works with 100% synthetic images. New applications appear practically every day; it will most likely work on them too, because they are all similar, but it may not be as effective,” he adds, describing the limitations and the difficulty of staying up to date in this technological field.

Other methods already exist to detect images created entirely by artificial intelligence, in which nothing is real, but the solution developed by the Atlantic Center experts is a promising candidate to become a commercial tool: “There is even a website where you upload images and it asks you whether its verdict was correct, because that enriches the database and lets the system be retrained. If ours took off, it would have to do the same.”

Source: Informacion
