A New Era in Facial Profiling and Micro-Movement Analysis


A revolution in the profiling world

A team at Lomonosov Moscow State University has developed a computer program that reads micro-movements on a person's face without relying on emotion labels in the analysis. The goal of the digital profiler is to detect every change on the face, including very brief ones, that accompanies states such as anxiety, fear, contempt, and surprise. According to the researchers, tracking the movements themselves, rather than subjective emotion labels, helps reveal underlying thoughts. The program can be used alongside a polygraph, a device commonly used to assess truthfulness in investigations.

At Moscow State University, a professor from the Department of Personality Psychology, a member of the team, describes the driving idea: the world lacks robust facial analysis algorithms, so the group built original computer vision methods. These methods examine how light falls on the face and translate those changes into meaningful signals. The work relies on layered logical rules created by the research group, organized into six levels.

Another team member developed an algorithm that analyzes how light distributes across the face. For analysis, the right and left halves of the face are divided into 14 zones, with each zone further broken into microzones, amounting to more than 300 segments.
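The article does not publish the actual zone map. A minimal sketch of how such a segmentation might be enumerated, assuming 14 zones on each half-face and a hypothetical 11 microzones per zone (the text is ambiguous about whether 14 is per half or in total, and the subdivision count is not given):

```python
# Hypothetical segmentation sketch. The article states 14 zones and
# "more than 300" segments; the per-zone microzone count is assumed.
ZONES_PER_HALF = 14        # assumed to mean 14 zones on each half-face
MICROZONES_PER_ZONE = 11   # not published; chosen to exceed 300 segments

segments = [
    (half, zone, micro)
    for half in ("left", "right")
    for zone in range(ZONES_PER_HALF)
    for micro in range(MICROZONES_PER_ZONE)
]
print(len(segments))  # 2 * 14 * 11 = 308, consistent with "more than 300"
```

Indexing every microzone as a `(half, zone, microzone)` tuple is one simple way to give each of the 300-plus segments a stable identity that later rule layers could reference.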

In each region, the system searches for facial skin movements in line with the international Facial Action Coding System, FACS, established by a renowned psychologist and consultant on a popular television show. The procedure also echoes how such profiling is depicted in popular culture. FACS stands as the only widely recognized approach for identifying facial movements in real time today.

Callous

A key point in the digital profiler project is the deliberate separation of emotion from facial analysis. The aim is to identify observable movement patterns rather than emotional labels.

Experts explain that the focus is on eyebrow shifts, squints, and mouth movements. They describe 22 basic motor units that can form any facial expression, including emotional ones. The profiling software collects expressions by tracking these mimic movements and assembling them into a broader picture.

FACS motor units are individually identifiable minimal facial movements such as lifting parts of the eyebrows, forming nasolabial folds, or stretching certain parts of the mouth.
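The AU codes and names below come from the published FACS manual; the `describe` helper is a hypothetical illustration of decomposing a detected set of action units into named movements, not the team's actual rule base.

```python
# A few genuine FACS action units (codes and names are from the FACS
# manual); the decomposition helper below is an illustrative sketch.
ACTION_UNITS = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    11: "nasolabial deepener",
    12: "lip corner puller",
    20: "lip stretcher",
}

def describe(aus):
    """Translate a set of detected AU codes into readable movement labels."""
    return [ACTION_UNITS.get(au, f"AU{au} (unknown)") for au in sorted(aus)]

print(describe({1, 2, 20}))
# ['inner brow raiser', 'outer brow raiser', 'lip stretcher']
```

The examples in the text map naturally onto real AUs: lifting parts of the eyebrows corresponds to AU1/AU2, nasolabial folds to AU11, and stretching the mouth to AU20.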

The program aggregates expressions by detecting these movements, which can occur very rapidly, often within a fraction of a second. Some neural networks used in emotion detection miss these fast dynamics because they are trained on static images rather than video.
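A minimal sketch of catching such fast dynamics, assuming the input is a stream of per-frame AU intensity values at a known frame rate; the threshold and duration cutoff are illustrative, not taken from the article:

```python
# Sketch under stated assumptions: per-frame intensities for one action
# unit, sampled at FPS frames per second. Thresholds are illustrative.
FPS = 30
THRESHOLD = 0.5
MAX_MICRO_FRAMES = int(0.5 * FPS)  # count only flashes under ~0.5 s

def micro_bursts(intensities):
    """Return (start, end) frame ranges where the AU flashes on briefly."""
    bursts, start = [], None
    for i, v in enumerate(intensities):
        if v >= THRESHOLD and start is None:
            start = i
        elif v < THRESHOLD and start is not None:
            if i - start <= MAX_MICRO_FRAMES:
                bursts.append((start, i))
            start = None
    return bursts

signal = [0.0] * 40 + [0.9] * 6 + [0.0] * 40  # a 6-frame (~0.2 s) flash
print(micro_bursts(signal))  # [(40, 46)]
```

A classifier that sees only isolated still frames has no notion of duration, which is exactly the information this kind of burst detector extracts from video.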

Mixed feelings

Why not train neural networks on video data instead of still images? It turns out that teaching a network to identify anger can require hundreds of thousands of video samples, and the supply of qualified experts who understand facial expressions is limited. One expert can evaluate only a small amount of footage in a given time, making large video datasets costly and slow to assemble.

Scientists note that neural network approaches have shown limited effectiveness in replacing human coders for facial analysis. Genuine emotional displays on a person’s face are not constant and can be rare in interviews or Q&A settings. The remainder often shows fragmented expressions. When neural networks provide results, they may produce percentages like 10 percent anger or 20 percent contempt, which can be hard to interpret without context.
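To see why raw percentages are hard to interpret, consider a toy illustration: a hypothetical classifier output like the one quoted in the text, with Shannon entropy as a crude ambiguity measure. None of this comes from the team's software.

```python
# Illustrative only: a made-up per-emotion probability output resembling
# the "10 percent anger, 20 percent contempt" readings quoted above.
import math

def entropy(probs):
    """Shannon entropy in bits; higher means a more ambiguous reading."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

output = {"anger": 0.10, "contempt": 0.20, "neutral": 0.55, "surprise": 0.15}
print(max(output, key=output.get))  # the top label is "neutral"
print(round(entropy(output), 2))    # well above 0 bits: a spread-out, ambiguous result
```

A confident prediction concentrates probability on one label and has entropy near zero; the spread-out output above carries little actionable meaning without context, which is the complaint the researchers raise.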

In many cases, the analysis must cover the full spectrum of facial movements, including micro-expressions. After applying the digital profiler, the outcomes resemble those of a polygraph, indicating stress or tension in response to particular questions. Unlike a polygraph session, however, the analysis can run in a neutral environment where the subject does not expect their facial movements to be examined.

The Future of the Digital Profiler

The system outputs hundreds of data points per participant, which makes manual organization impractical. A computer processes the data to identify patterns and relationships. Researchers are also exploring audio cues, using voice analysis to infer hidden states by examining dozens of physical parameters. This layered voice analysis technology adds another dimension to lie detection via sound.
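A minimal sketch of one such aggregation step, assuming the raw output is a stream of (question, action-unit) detections; the names and data shape are hypothetical, not the project's actual format:

```python
# Hypothetical aggregation sketch: collapse a stream of per-question
# AU detections into frequency profiles a researcher could compare.
from collections import Counter

def summarize(events):
    """Group (question_id, au_code) detections into per-question counts."""
    profile = {}
    for question, au in events:
        profile.setdefault(question, Counter())[au] += 1
    return profile

events = [("Q1", 4), ("Q1", 4), ("Q1", 20), ("Q2", 12)]
print(summarize(events))
# {'Q1': Counter({4: 2, 20: 1}), 'Q2': Counter({12: 1})}
```

With hundreds of such data points per participant, this kind of mechanical grouping is exactly the step that is impractical to do by hand and trivial for a computer.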

Currently, digital profiler projects are carried out for the private sector. The technology also shows promise in medicine: screening for mental health issues, evaluating cardiovascular patients, and comparing options in aesthetic medicine for facial corrections.

Experts believe the profiler will remain valuable for researchers and HR professionals. The team envisions diverse applications in personnel selection and forensic psychology. The system may also help protect against deep fake content by analyzing a person’s individual facial motion profile, which is highly distinctive and difficult to imitate.

The technology can extend to animation and robotics, enabling more natural facial expressions for character models without relying solely on motion capture. Despite rapid progress, the underlying challenge of reliably interpreting human facial expressions persists for both machines and humans. A trend toward reducing reliance on neural networks for emotion analysis is evident in several profiling efforts around the world.

Researchers in Japan and England are pursuing similar paths, advancing the global effort to understand facial dynamics beyond emotion alone.
