OpenAI's DALL-E 2 neural network has sparked discussion by appearing to invent its own symbolic system. The model generates images from textual prompts, and its public testing began in April 2022.
An example of an image generated by the neural network
What’s intriguing is that users have noticed the model can produce images containing caption-like text. At first these strings looked like random letter groups, but they appear to carry consistent associations: when the same sequences are fed back in as prompts, the model tends to produce images from distinct, repeatable categories.
For instance, the prompt “Apoploe vesrreaitais” tends to yield images of birds, the phrase “Contarra ccetnxniams luryca tanniounons” leans toward insects and other pests, and the longer combination “Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons” guides the network to create scenes of birds feeding on insects.
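For readers who want to try this probing themselves, below is a minimal sketch that submits the reported strings to an image-generation endpoint and saves the results for side-by-side comparison. It assumes the openai Python SDK (v1.x) with an API key in the environment and the "dall-e-2" model name; the original observations were made through OpenAI's closed web preview, so the code is illustrative rather than a reproduction of that setup.

```python
# Minimal sketch: probe DALL-E 2 with the reported gibberish strings and
# save the outputs for manual comparison. Assumes the `openai` Python SDK
# (v1.x) and an OPENAI_API_KEY in the environment.
import urllib.request

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "Apoploe vesrreaitais",                     # reported to yield birds
    "Contarra ccetnxniams luryca tanniounons",  # reported to yield insects/pests
    "Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons",
]

for i, prompt in enumerate(PROMPTS):
    # Request one image per prompt; rerun the loop to gauge consistency.
    response = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=1,
        size="512x512",
    )
    url = response.data[0].url
    urllib.request.urlretrieve(url, f"probe_{i}.png")
    print(f"{prompt!r} -> probe_{i}.png")
```

Repeating each prompt a few times and checking whether the outputs cluster into the same categories (birds, insects, birds eating insects) mirrors the informal test users performed.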
When a query such as “Two whales talk about food” is submitted, the result often includes a caption like “Wa ch zod rea.” This pattern suggests a kind of internal language that helps the model process data more efficiently. Researchers urge caution as they explore how these language-like cues affect the model’s output, its interpretability, and its safety.