Users of a popular social network have reported unusual responses from an experimental Google neural network designed to answer search queries automatically. The outputs, a mix of humorous and alarming AI-generated answers, circulated widely across social media, as noted by a tech portal covering devices and apps.
In some cases the AI offered unconventional and potentially risky advice. Asked why cheese does not stick to pizza, it suggested adding non-toxic glue; analysts believe the system had picked up an old sarcastic comment from an online discussion forum and treated it as a genuine recommendation. Asked how many rocks a person should eat each day, it recommended consuming at least one rock per day.
One reply stated that a small pebble a day was advisable, citing a fictional authority who claimed the minerals and vitamins in stones aid digestion. The AI went on to suggest including a serving of pebbles, geodes, or other stones with meals, presenting this as expert dietary guidance.
In other cases the AI gave answers that were harmless but plainly false. For example, it identified Barack Obama as the first U.S. president to convert to Islam; Obama is widely known to be a Christian, although some Americans believe otherwise.
Google representatives stress that the neural network is still experimental and urge users to treat its suggestions with caution.
Industry commentators have discussed the experiment alongside next-generation AI smartphone concepts, pointing to the ongoing debate over how such technologies should be designed and deployed for public use.