Detecting AI-Created Images: A Neural Network Approach to Real Photo Verification


Distinguishing AI-generated images from real photographs is a growing concern as generative artificial intelligence advances rapidly. A team from the Atlantic Center has developed a neural-network-based system to identify synthetic images, achieving accuracy above 95 percent in telling real photos apart from those generated by AI.

Researchers note that text-to-image programs are already delivering convincing results at a fast pace. This sparked the idea of using AI itself to tell real photographs apart from images produced with tools like DALL·E, Stable Diffusion, or OpenArt. The team has long worked on neural networks and AI systems for differentiating images, and applying that experience to this problem was a natural next step.

Fernando Martín, part of the High Frequency Devices group, explains that the team collaborated with Mónica Fernández, the project coordinator, and Rocío García, who joined the effort through an employment program for young professionals. The study explores how past work on tracing the origin of images can help in the modern context of AI-generated visuals.

The project draws on fingerprinting methods that determine the origin of real photos and videos. The team discovered that applying a technique known as photo-response non-uniformity, or PRNU, to AI-created images also yields usable signals.

All digital photographs carry imperfections that are almost imperceptible to the eye, and together they form a unique pattern. Even two cameras of the same model leave distinct fingerprints, so a photo can be linked back to the specific device that took it.

The fingerprint arises from defects introduced when the sensor is manufactured, and it can be computed directly from the image. At first, the researchers doubted that AI-generated outputs would contain such errors. Yet these signals exist because AI applications are trained on real photographs: the real images form a pool of training material that leaves residual traces in the output. Instead of a single original scene, many versions exist and mingle, which complicates attribution.
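A minimal sketch of how a PRNU-style fingerprint can be estimated and matched, assuming NumPy and using a simple local-mean filter in place of the wavelet denoising found in real forensic pipelines (the function names and parameters here are illustrative, not taken from the study):

```python
import numpy as np

def noise_residual(img, k=3):
    """Residual = image minus a simple local-mean denoise.
    (Real PRNU work uses wavelet denoising; a mean filter is a stand-in.)"""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    denoised = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            denoised[i, j] = padded[i:i + k, j:j + k].mean()
    return img.astype(float) - denoised

def estimate_fingerprint(images):
    """Maximum-likelihood-style PRNU estimate: K ~ sum(W_i * I_i) / sum(I_i^2),
    averaging the noise residuals W_i of many images from the same source."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img in images:
        w = noise_residual(img)
        num += w * img
        den += img.astype(float) ** 2
    return num / np.maximum(den, 1e-8)

def correlation(a, b):
    """Normalized correlation between two 2-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A camera's fingerprint is estimated from many of its photos; a probe image is then tested by correlating its residual against the fingerprint scaled by the image, and a low correlation hints that the image did not come from that sensor.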

The scientists also evaluated error level analysis, or ELA, a method traditionally used in forensic imaging to detect edits or tampering. ELA identifies which parts of an image have been altered and which remain consistent, whether the image is real or AI-produced. The finding indicates that images created by artificial intelligence exhibit detectable patterns that can be analyzed.
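The idea behind ELA can be sketched as follows, assuming NumPy and standing in for real JPEG recompression with a simplified blockwise-DCT quantization step (the quality parameter `q` and all function names are illustrative):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used in JPEG's 8x8 blocks."""
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= np.sqrt(1 / n)
    m[1:] *= np.sqrt(2 / n)
    return m

def recompress(img, q=20.0):
    """Simulated JPEG pass: blockwise DCT, coefficient quantization, inverse."""
    D = dct_matrix()
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y + 8, x:x + 8].astype(float)
            coef = D @ block @ D.T
            coef = np.round(coef / q) * q      # uniform quantization step
            out[y:y + 8, x:x + 8] = D.T @ coef @ D
    return out

def ela_map(img, q=20.0):
    """Error-level map: per-pixel difference between the image and a recompressed copy."""
    return np.abs(img.astype(float) - recompress(img, q))
```

Regions that were already saved at the same compression level barely change when recompressed, while freshly pasted or synthesized regions show much larger error levels, which is what makes the map useful for spotting inconsistencies.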

The conclusion is that both PRNU fingerprinting and ELA produce reliable signals, and the team recommends using the two techniques as complementary checks. If either method suggests a synthetic origin, the image is likely AI generated. A single method returning a negative result still carries a margin of error, but that margin is significantly reduced when both methods are used together.
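The recommended combination amounts to a simple OR rule over the two detectors. A sketch, with placeholder score semantics and thresholds that would in practice be tuned on labeled data:

```python
def fuse(prnu_corr, ela_level, prnu_thresh=0.02, ela_thresh=5.0):
    """Flag an image as likely AI-generated if either detector fires.
    prnu_corr: correlation of the image's residual with a camera fingerprint
               (weak or absent fingerprint -> suspicious).
    ela_level: mean error-level magnitude (anomalously high -> suspicious).
    Thresholds are illustrative placeholders. The OR rule accepts a few more
    false positives in exchange for a much lower miss rate, matching the
    article's point that joint use shrinks the margin of error."""
    prnu_flag = prnu_corr < prnu_thresh
    ela_flag = ela_level > ela_thresh
    return prnu_flag or ela_flag
```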

The study, published in Sensors, trained convolutional neural networks on more than a thousand real and AI-generated visuals. In addition to their own experience, the researchers leveraged a framework originally developed for diagnosing medical images, such as mammograms. This cross-disciplinary transfer helped orient the classifier toward flagging suspicious features before human review, a strategy that proved effective in this context as well.
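For illustration only, the basic shape of such a classifier can be sketched as a toy, untrained forward pass in NumPy: one convolution layer, ReLU, global average pooling, and a logistic output. The real study used trained deep networks; none of these weights or names come from it.

```python
import numpy as np

def conv2d(img, kernels):
    """Valid-mode 2-D convolution of a single-channel image with a bank of
    kernels; returns one feature map per kernel."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    maps = np.zeros((len(kernels), h - kh + 1, w - kw + 1))
    for n, k in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                maps[n, i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return maps

def classify(img, kernels, weights, bias):
    """Conv -> ReLU -> global average pooling -> logistic probability that
    the image is synthetic (untrained toy; outputs are meaningless until
    the kernels and weights are learned from labeled data)."""
    feats = np.maximum(conv2d(img, kernels), 0).mean(axis=(1, 2))
    return 1 / (1 + np.exp(-(feats @ weights + bias)))
```

In a real pipeline the kernels and weights would be fitted on labeled real and synthetic images, with the network surfacing suspicious inputs for human review.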

The team notes that systems already exist to detect deepfakes, which swap one person’s face onto another’s body. Their approach, however, targets fully synthetic imagery. As new AI applications appear daily, they expect their method to remain useful because many generators share underlying similarities, even if some techniques lose effectiveness as the technology changes.

Beyond the current findings, other approaches exist for detecting AI-created content in which nothing is real. The Atlantic Center’s solution stands as a promising candidate for a practical commercial product. One live example is a site where users can upload images and receive a verdict on their authenticity, backed by a continually updated database used to retrain the system. If this approach gains traction, the new method will need to keep evolving to stay ahead.

In summary, distinguishing AI-generated imagery from real photographs remains a dynamic challenge. The Atlantic Center’s neural-network approach, supported by PRNU and ELA analyses, offers a robust path toward credible detection and potential commercial deployment. The collaboration highlights how fingerprinting techniques can adapt to AI progress, and the importance of combining multiple signals to reach reliable conclusions.
