“There was an explosion at the Pentagon.” We explain why it’s fake


Around 17:00 Moscow time, a number of Telegram channels and media outlets reported that a powerful explosion had gone off near the Pentagon building in Washington. The reports were accompanied by a photograph showing a plume of smoke. On verification, the image turned out to be fake.

Initially, the smoky picture began to spread on conservative American Facebook pages (Facebook’s owner, Meta, has been designated extremist in Russia and banned). The report was then picked up by Russian military correspondents, as well as other Telegram news channels, including “Before Everyone Else. Well, Almost,” which has about 1.5 million subscribers. Roughly 20 minutes later, some journalists and channels wrote that the image was fake and most likely derived from 2001 footage.

To confirm this, Washington Post reporter Yunus Paksoy posted a photo taken on Pennsylvania Avenue NW, next to the Ronald Reagan Building and International Trade Center. The photo shows the center’s characteristic dome, the Washington Monument, and a panorama of the right bank of the Potomac River, where the Pentagon building is located. There are no signs of an explosion in the photo.

A RIA Novosti correspondent also released a video showing a full panorama of the Pentagon’s perimeter, with no sign of an explosion or fire.

A few minutes later, the US Department of Defense’s security service officially announced that it had recorded no incidents near the building.

Journalists who studied the original footage with its puffs of smoke suggested that it was created by a neural network based on a real photo of the attack on the Pentagon on September 11, 2001, when one of the hijacked planes crashed into the building.

Several elements point to the photo’s artificiality: a street lamp displaced from its base, and a section of fence that continues onto the sidewalk. Such artifacts are often seen in images taken with 360-degree cameras, as in street-view imagery on online maps, and in images generated by neural networks.

Fake news published in major outlets, or attributed to them, is an increasing source of public concern and instability.

One of the most recent examples of this kind of hoax appeared on September 8, 2022, on a Twitter account posing as the British newspaper The Guardian: a post announcing the death of Queen Elizabeth II. The account turned out to be fake, and no articles about the monarch’s death appeared on the real pages of this or any other British newspaper. The Queen did indeed die that day, but at the time the fake was posted, Buckingham Palace had not yet made an official statement, saying only that Elizabeth was unwell.

A few days later it emerged that the Queen had died at 15:10 local time, while the public was officially informed at 18:30. The post on The Guardian’s fake account appeared within that window, so the claim itself was, strictly speaking, not wrong.

In recent months, world politicians, the media, and the public have repeatedly raised the issue of forgeries created by artificial intelligence or based on content it produces. For example, in March 2023, images made with the Midjourney neural network depicting “the devastating 2001 flood that hit the West Coast of the United States” were styled as news reportage and posted on Reddit.

The fake pictures, depicting not only destruction but also real people (including the country’s then president, George W. Bush), quickly spread across the American segment of the internet. They drew many comments on social networks: young people born after 2001 were horrified by the scale of the devastation, while the older generation was stunned that they could not remember any such flood.

Also in March, fake photos of the arrest of former US President Donald Trump circulated online. In them, the politician flees from a crowd of police officers who eventually wrestle him to the ground. In response to the spread of the neural-network-generated images, the NYPD had to issue an official statement that it had not arrested Trump.

In April, German photographer Boris Eldagsen, who won the creative category of the 2023 Sony World Photography Awards, admitted that the winning work, “Pseudomnesia: The Electrician,” was produced entirely by artificial intelligence. The organizing committee later said it had known a neural network was used to create the image, but had believed it to be a joint effort of a human author and artificial intelligence. Eldagsen immediately declined the award, explaining that the output of a neural network is not a photograph, and that he had entered the competition precisely to draw attention to the problem of artificial intelligence in creative fields.
