Researchers at the Open University of Catalonia (UOC) have developed an algorithm that can distinguish social media users showing signs of unhappiness by analyzing their text and images. The team is sharing the tool with the aim of helping to identify potential mental health concerns earlier and more reliably.
The UOC study also notes regional differences in how people express difficulties. In particular, Spanish-speaking users tend to mention problems in their relationships more often when they are feeling down than English-speaking users do.
The algorithm learns from social media activity on platforms like Instagram, Facebook, and Twitter. It builds on William Glasser’s choice theory, which identifies five core needs that shape human behavior: survival, power, freedom, belonging, and fun. These needs are thought to influence the types of images people choose to post and how they present themselves online.
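To make that label set concrete, the minimal Python sketch below treats Glasser's five needs as annotation categories; the example cue lists are illustrative assumptions made for this article and are not taken from the study's own coding scheme.

```python
from enum import Enum

class GlasserNeed(Enum):
    """The five core needs from Glasser's choice theory, used here as labels."""
    SURVIVAL = "survival"
    POWER = "power"
    FREEDOM = "freedom"
    BELONGING = "belonging"
    FUN = "fun"

# Illustrative mapping of content cues to needs (hypothetical, not from the study).
EXAMPLE_CUES = {
    GlasserNeed.SURVIVAL: ["food", "health", "safety"],
    GlasserNeed.POWER: ["awards", "achievements", "solo portraits"],
    GlasserNeed.FREEDOM: ["travel", "open landscapes"],
    GlasserNeed.BELONGING: ["group photos", "family", "friends"],
    GlasserNeed.FUN: ["games", "parties", "hobbies"],
}
```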
Experts say the way individuals present themselves on social networks can reveal patterns in behavior, personal perspectives, motives, and needs. The researchers emphasize that what is shared publicly can offer insights into mental health and well-being when interpreted with care and context.
Over two years, the project refined a deep learning model that integrates the five Glasser needs with multimodal data. This data includes not only images but also accompanying text, user bios, and occasional geolocation signals. The approach seeks to map online expressions to underlying psychological drivers while respecting privacy safeguards and analytical limits.
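As a rough illustration of what integrating the five needs with multimodal data might look like, the sketch below assumes a simple late-fusion design in PyTorch: image and text embeddings from pre-trained encoders are concatenated and scored against the five needs. The embedding sizes, layer widths, and fusion strategy are assumptions made for this example, not the architecture reported by the UOC team.

```python
import torch
import torch.nn as nn

class MultimodalNeedClassifier(nn.Module):
    """Hypothetical late-fusion model: image and text embeddings are
    concatenated and mapped to scores over the five Glasser needs."""

    def __init__(self, img_dim=2048, txt_dim=768, hidden=256, n_needs=5):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_needs),
        )

    def forward(self, img_emb, txt_emb):
        # img_emb: (batch, img_dim) from a pre-trained vision encoder
        # txt_emb: (batch, txt_dim) from a pre-trained text encoder
        fused = torch.cat([img_emb, txt_emb], dim=-1)
        return self.fusion(fused)  # logits over the five needs

# Usage with random tensors standing in for real embeddings.
model = MultimodalNeedClassifier()
scores = model(torch.randn(4, 2048), torch.randn(4, 768))
print(scores.shape)  # torch.Size([4, 5])
```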
The study, published in a reputable journal, examined 86 Instagram profiles written in Spanish and Persian. The researchers drew on previously developed neural networks and larger external databases to train the model. They aimed to identify the content of images and categorize text into labels that psychologists find meaningful, then compared those labels against a broad dataset containing more than 30,000 images, captions, and comments to gauge accuracy and generalizability.
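One way to read "identify the content of images" is to reuse an off-the-shelf pre-trained classifier to tag each post before mapping those tags onto psychologist-defined categories. The snippet below sketches only that tagging stage with a standard torchvision model; it is an assumed stand-in for the general approach, not the study's actual pipeline.

```python
import torch
from torchvision import models
from PIL import Image

# Pre-trained ImageNet classifier used as a generic image tagger (illustrative).
weights = models.ResNet50_Weights.DEFAULT
classifier = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def tag_image(path, top_k=3):
    """Return the top-k ImageNet labels predicted for one post image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = classifier(img).softmax(dim=-1).squeeze(0)
    top = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], float(p))
            for p, i in zip(top.values, top.indices)]
```

The mapping from generic tags to psychologically meaningful labels, and the comparison against the larger 30,000-item dataset, would sit on top of a stage like this.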
To illustrate the concept, one sample scenario involves a cyclist who reaches a mountain summit and posts either a solo selfie or a group photo. The solitary image may signal a desire for power or achievement, while the group shot could reflect a pursuit of belonging and shared joy. The interpretation of such choices depends on context, accompanying text, and patterns across a user's profile.
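As a toy illustration of that selfie-versus-group-photo reading, the heuristic below maps two obvious cues to candidate needs. The keywords and rules are invented for this example and are deliberately far cruder than a learned model that also weighs captions, bios, and posting history.

```python
def candidate_needs(people_count, caption):
    """Toy heuristic (not the study's model) for reading a summit photo."""
    needs = set()
    if people_count == 1:
        needs.add("power")        # solo self-portrait: achievement signal
    elif people_count > 1:
        needs.add("belonging")    # group shot: shared-experience signal
    tokens = set(caption.lower().split())
    if tokens & {"we", "together", "friends"}:
        needs.add("belonging")
    if tokens & {"conquered", "summit", "record"}:
        needs.add("power")
    return sorted(needs) or ["undetermined"]

print(candidate_needs(1, "Conquered the summit at last"))    # ['power']
print(candidate_needs(4, "Great day together with friends")) # ['belonging']
```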
Researchers note that selfie choices can convey different motivational signals than group photos or scenic shots. They caution against overgeneralization, and they highlight that multiple factors shape posting behavior, including mood, social intentions, and cultural background. The analysis is designed to capture these nuances rather than reduce a profile to a single motive.
Another finding indicates that Spanish-speaking users were more likely to discuss relationship problems when feeling low than their English-speaking counterparts. The researchers stress that language and culture play a major role in how emotional experiences are communicated online, and that this should inform how mental health tools are built for diverse populations.
The researchers argue that examining social media data across linguistic communities can help create inclusive and diverse models. Such models could better address mental health concerns by accounting for cultural and linguistic differences rather than relying on a one-size-fits-all approach. This inclusivity is seen as essential for creating preventive resources and support mechanisms that are accessible to a wide range of users.
Ultimately, the authors believe the work holds promise for improving early detection and treatment strategies. By identifying signs early and tailoring outreach or interventions with sensitivity to individual needs, the research aims to support better outcomes for people facing mental health challenges while also guiding professionals in the design of preventive and therapeutic tools. The study notes that future work will continue to validate findings in real-world settings and refine methods to protect privacy and empower users through responsible analytics. [IEEE Affective Computing, 2023] [AI for Human Health Initiative, UOC] [imBody Research Lab, Madrid]