Google’s research team has introduced GameNGen, a neural game engine that processes keystrokes in real time and generates Doom gameplay frame by frame. Although it looks as if the user is playing Doom, the entire environment and every event on screen are generated by a neural network, the Habr portal reports.
To build the project, the researchers first trained an agent to play Doom, which let them automate collection of the dataset for the main neural network. Relying on human players for this task would have slowed the process considerably, the researchers said.
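The data-collection step can be sketched as an agent rollout that records each observed frame together with the action taken. This is a minimal illustration, not the team's actual pipeline: `ToyEnv`, `collect_episode`, and the random policy are hypothetical stand-ins for a real Doom environment and trained agent.

```python
import random

# Hypothetical stand-in for a Doom environment; the real project would use
# an actual game engine that renders frames and accepts player actions.
class ToyEnv:
    ACTIONS = ["forward", "back", "left", "right", "fire"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.frame = 0

    def reset(self):
        self.frame = 0
        return self.frame  # stand-in for a rendered frame

    def step(self, action):
        self.frame += 1
        done = self.frame >= 100  # toy episode length
        return self.frame, done

def collect_episode(env, policy):
    """Roll out one episode, recording (frame, action) pairs for the dataset."""
    trajectory = []
    obs = env.reset()
    done = False
    while not done:
        action = policy(obs)
        trajectory.append((obs, action))
        obs, done = env.step(action)
    return trajectory

env = ToyEnv()
random_policy = lambda obs: env.rng.choice(ToyEnv.ACTIONS)
dataset = collect_episode(env, random_policy)
print(len(dataset))  # 100 (frame, action) pairs from one episode
```

In the actual project the policy is a trained agent rather than a random one, which is what makes automated collection faster than recruiting human players.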
The collected data was used to train the Stable Diffusion 1.4 model, which learns to generate the next frame of the game from the preceding frames and the player's actions. The developers also paid attention to the stability of the generated frames, so that objects keep their positions and characters do not change appearance.
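The generation loop described above is autoregressive: each new frame is conditioned on a window of recent frames and actions, then fed back in as context. The sketch below shows only that loop structure; `toy_model`, the context length, and the helper names are assumptions, and the diffusion sampling itself is abstracted away.

```python
from collections import deque

def predict_next_frame(model, past_frames, past_actions):
    """Stand-in for the diffusion model: generates the next frame
    conditioned on recent frames and the player's actions."""
    return model(tuple(past_frames), tuple(past_actions))

def play(model, first_frame, get_action, steps, context=4):
    """Autoregressive loop: each generated frame becomes context
    for the next prediction, so errors here would compound --
    hence the emphasis on frame stability."""
    frames = deque([first_frame], maxlen=context)
    actions = deque(maxlen=context)
    out = [first_frame]
    for _ in range(steps):
        actions.append(get_action(out[-1]))
        nxt = predict_next_frame(model, frames, actions)
        frames.append(nxt)
        out.append(nxt)
    return out

# Toy "model" (next frame = previous + 1) standing in for Stable Diffusion 1.4
toy_model = lambda past_frames, past_actions: past_frames[-1] + 1
result = play(toy_model, 0, lambda frame: "forward", steps=5)
print(result)  # [0, 1, 2, 3, 4, 5]
```

Because every generated frame re-enters the context window, any drift in object positions or character appearance would accumulate over time, which is why the researchers worked on stabilizing the output.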
Ultimately, GameNGen was shown to interactively simulate classic Doom at over 20 frames per second while running on a single Google Tensor Processing Unit (TPU).
To assess the quality of the generation, a group of 10 raters was recruited and shown 130 short clips in which the simulation played side by side with the real game. The clips lasted 1.6 or 3.2 seconds, and the raters were asked to identify the recording of the real game. They answered correctly 58% of the time for 1.6-second clips and 60% of the time for 3.2-second clips.
Source: Gazeta

Jackson Ruhl is a tech and sci-fi expert, who writes for “Social Bites”. He brings his readers the latest news and developments from the world of technology and science fiction.