ESP explores universal animal language translation with AI


A team of researchers from the Earth Species Project (ESP), based in California, United States, plans to apply machine learning and artificial intelligence to decipher the signals animals use and translate them into human terms, with the goal of understanding communication across the entire animal kingdom.

Though it reads like science fiction, the plan has roots in a long history of curiosity about how animals vocalize and signal. The idea echoes big milestones from the past, yet it remains firmly anchored in modern data science and biology.

We know, for instance, that primates issue warning sounds that vary with the predator, that dolphins use distinctive whistles, and that some songbirds can rearrange their calls to convey different meanings. Yet despite these clues, researchers have traditionally resisted claiming a universal language of animal communication because no single system neatly mirrors human language in every regard. For generations, decoding relied on careful, time-consuming observation.

Recent reports in The Guardian describe a growing use of machine learning to handle the vast data streams generated by contemporary animal sensors, enabling more scalable analysis than ever before.


Elodie Briefer, an associate professor who studies vocal communication in mammals, points out that machine learning is increasingly used to interpret animal signals. In one example, Briefer trained an algorithm on pig grunts and could determine whether an animal was experiencing a positive or negative emotional state.

Other efforts include DeepSqueak, which analyzes rodents' ultrasonic vocalizations to assess whether the animals are stressed, and Project CETI (the Cetacean Translation Initiative), which aims to translate sperm whale communication using machine learning.

Understanding all species with one algorithm

The California-based nonprofit ESP, founded in 2017 with the support of Silicon Valley investors, has a broader aim: to unravel communication not for a single species but across many, seeking universal patterns rather than species-by-species solutions.

ESP intends to first decode animal communication through machine learning and then share its discoveries with the world. Aza Raskin, co-founder and president of ESP, envisions using artificial intelligence to strengthen human bonds with other living beings by deciphering non-human communication.

“We are species-free,” Raskin explained to The Guardian, noting that the translation algorithms being developed are designed to operate across biology from microscopic worms to mighty whales.

As described, dolphin trainers communicate with the animals through hand movements and body cues. In one example, trained dolphins given a signal exchanged sounds, then turned over and raised their tails together, performing a new trick in unison — an illustration of how closely human-guided cues and animal responses can be linked.

The founder stressed that confirming a true language would be a major step; even if direct linguistic equivalence remains elusive, access to robust non-human communication tools would be valuable in itself.


Raskin notes that animals, like humans, use multiple nonverbal channels. Honeybees, for example, communicate through a distinctive waggle dance that tells other bees the direction and distance of a nectar-rich flower.

Despite formidable hurdles, the project has made tangible progress, including an experimental algorithm that can identify which individual in a noisy group is the one vocalizing.

A second algorithm aims to generate simulated animal calls in order to converse with the animals directly. “Even if the exact meaning remains uncertain, the AI appears to speak in the animals’ language,” Raskin remarked to The Guardian.

Nevertheless, the promise of this research invites debate. Some observers question the practical value of AI-powered studies of vocal meaning.

For instance, Robert Seyfarth of the University of Pennsylvania acknowledges potential benefits in mapping vocal repertoires but remains cautious about broader claims regarding the meaning and function of vocalizations.

Time will tell whether the ESP effort achieves its ambitions. What is clear is that artificial intelligence and machine learning are playing an ever-larger role in scientific inquiry.

“These tools allow us to remove human biases and better understand all communication systems,” said Raskin.


