Wednesday, June 14, 2023

Collaborative AI to shine a light on YouTube mental health rabbit holes

YouTube is an incredible platform that has revolutionized the way we consume content. Its recommendation algorithm is precise enough to serve users videos tailored to their interests. However, YouTube's algorithms give more weight to likes than dislikes, which can lead to a narrow range of content being recommended to users. Other platforms such as TikTok and Facebook are rivals in the social media market, particularly among young people. However, YouTube is unique because of its vast user-generated video content, which ranges from Shorts of up to 60 seconds to long videos of up to 12 hours from verified accounts. It also lets users comment on videos, interact with other users, and follow content creators' channels. Despite its advantages, a recent study has shown that excessive YouTube use may negatively affect mental health, particularly among young people up to 29 years of age.

YouTube, the world’s leading video sharing platform used by more than 2.6 billion people monthly, has invested in building a mental health legacy through algorithm changes, content moderation, content creator and user psychoeducation, mental health and crisis resource panels, self-harm and suicide content warnings, and parental controls and settings. Its policies have helped reduce consumption of borderline content by 70% and allow violative content to be removed immediately. However, the study by Dr. Luke Balcombe and Emeritus Prof. Diego De Leo of the School of Applied Psychology at Griffith University in Australia identified a gap in YouTube's duty of care to users. It found that high-frequency YouTube use can lead to increased loneliness, anxiety, and depression. Although the mix of included studies means causality cannot be established, high to saturated use of YouTube (2 to 5+ hours a day) may indicate a problem. A narrow range of viewed content may also exacerbate pre-existing psychological symptoms.

The global increase in loneliness, mental illness and suicide makes the negative mental health impacts of YouTube a complex phenomenon. There is potential for YouTube user-focused strategies to counter these impacts. First, better use could be made of YouTube's red flag function, whereby users report inappropriate or disturbing content. Second, AI-powered chatbots could provide support and referral to verified mental health care after engagement with a mental health/crisis resource panel. Currently, panels appear after relevant searches or below videos about mental health, suicide and self-harm. This educational and emotionally supportive content appears alongside prompts to get help if needed. This is where chatbot support could help connect users to mental health/crisis helplines or online chats.
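
As a rough illustration of how such chatbot support might route users to help, here is a minimal sketch in Python. The keyword rules, resource directory and triage logic are hypothetical placeholders, not a clinical tool; a real deployment would need clinically validated logic and vetted, location-appropriate services.

```python
# Minimal, hypothetical sketch of a rule-based referral step a support
# chatbot might run after a user engages with a crisis resource panel.
# Keyword lists and triage rules are illustrative placeholders only.

CRISIS_KEYWORDS = {"suicide", "self-harm", "hurt myself", "end my life"}
DISTRESS_KEYWORDS = {"lonely", "anxious", "depressed", "hopeless"}

# Placeholder directory keyed by region; a real system would use a
# verified, regularly updated registry of crisis services.
HELPLINES = {
    "AU": "Lifeline Australia: 13 11 14",
    "default": "Find a local helpline at https://findahelpline.com",
}

def triage(message: str, region: str = "default") -> str:
    """Return a referral suggestion based on simple keyword rules."""
    text = message.lower()
    helpline = HELPLINES.get(region, HELPLINES["default"])
    if any(k in text for k in CRISIS_KEYWORDS):
        # Crisis signals: point straight to a crisis helpline.
        return f"It sounds like you may be in crisis. Please contact {helpline}"
    if any(k in text for k in DISTRESS_KEYWORDS):
        # Distress signals: suggest support options and online chat.
        return ("You don't have to manage this alone. "
                f"Consider an online chat service or {helpline}")
    return "Would you like links to mental health information and support?"

if __name__ == "__main__":
    print(triage("I've been feeling really lonely lately", region="AU"))
```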

The lack of transparency in how YouTube’s system works stems from its recommendation algorithms being geared to marketing strategies. YouTube aims to keep users satisfied by weighing factors such as search and watch history, viewing time and behavior, and how they use the watch page and recommendations. YouTube will want to avoid being labelled an AI subliminal system, one that uses machine learning algorithms to influence the human mind below the level of conscious awareness, because subliminal AI can be associated with malicious intent. However, YouTube’s user satisfaction optimization has a diminishing value effect and an opportunity cost for users. In other words, at what point should YouTube intervene when there are indications of problematic use?
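
To make the diminishing value point concrete, consider a toy model in which each extra hour of watching adds less satisfaction than the one before, while the opportunity cost of forgone activities stays roughly constant. The functional form and every constant below are invented purely for illustration and are not drawn from the study.

```python
import math

# Toy model: satisfaction grows logarithmically with watch time
# (diminishing returns), while each hour watched carries a constant
# opportunity cost. All numbers are invented for illustration.

def satisfaction(hours: float) -> float:
    return math.log1p(hours)          # concave: each hour adds less value

OPPORTUNITY_COST_PER_HOUR = 0.25      # hypothetical constant cost per hour

def net_value(hours: float) -> float:
    return satisfaction(hours) - OPPORTUNITY_COST_PER_HOUR * hours

# The marginal value of one more hour turns negative once 1/(1+h) < cost,
# i.e. beyond h = 1/cost - 1 = 3 hours in this toy parameterization --
# broadly within the 2 to 5+ hours/day band the study flags as problematic.
for h in range(0, 7):
    print(f"{h} h/day: net value = {net_value(h):.2f}")
```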

The recent integrative review in the journal Informatics showed that parasocial relationships between vloggers and viewers may harm socialization and productivity and exacerbate loneliness, anxiety and depression. This is, in effect, the modern equivalent of warnings against watching too much television and in favor of living a fulfilling life. However, YouTube users may not be able to self-manage their consumption. They may compulsively watch videos about other people's lives or find themselves in a mental health or suicide “rabbit hole”.

Common sense suggests replacing problematic YouTube consumption with activities that promote positive mental health outcomes, such as physical exercise, social interaction, and time spent outdoors. However, COVID-19 exacerbated already increasing sedentary behaviors, loneliness and mental health issues throughout the world, and social media and internet use have compounded socio-cultural issues that have been developing since the 1980s. Based on a synthesis of the literature, Dr. Balcombe recommended self-management, psychoeducation, monitoring, and the use of parental controls and settings. However, YouTube use may be difficult to monitor effectively because it serves a variety of purposes, such as education, entertainment, and information seeking and sharing among peers.
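
As one concrete form the recommended self-monitoring could take, the sketch below tallies daily watch time from a hypothetical session log and flags days above the 2-hours-a-day mark the study associates with problematic use. The log format and data source are assumptions for illustration.

```python
from collections import defaultdict
from datetime import date

# Hypothetical session log of (day, minutes watched). In practice this
# could come from YouTube's own "time watched" statistics or a
# screen-time app export.
sessions = [
    (date(2023, 6, 12), 45),
    (date(2023, 6, 12), 90),
    (date(2023, 6, 13), 30),
]

PROBLEM_THRESHOLD_MIN = 120   # 2 hours/day, per the study's indication

# Aggregate minutes per day, then flag days over the threshold.
daily = defaultdict(int)
for day, minutes in sessions:
    daily[day] += minutes

for day, total in sorted(daily.items()):
    status = "review usage" if total >= PROBLEM_THRESHOLD_MIN else "ok"
    print(f"{day}: {total} min -> {status}")
```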

It is important to recognize that YouTube use may be psychologically protective. However, high-frequency use, whether from prolonged short-video consumption or from watching longer videos about other people's lives, can be problematic beyond 2 hours a day. YouTube is an eminent platform that has mental health as part of its mission and regulations. It is apparent that YouTube promotes mental health awareness, education, and support, and it hosts various channels for information seeking and sharing about mental health. YouTube appears to monitor search and watch history to predict when a mental health or crisis resource panel should appear. However, it could go further by recommending connections to vetted mental health care and crisis services according to data on age group and location. There could also be AI-human services that work with or independently of the platform, such as:

1. Red flagging by an AI-powered plug-in that uses preset observations to detect extreme content with a negative mental health context (a minimal sketch follows this list).

2. A real-time fact-checker for videos that alerts users to misinformation about mental health.
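
Here is a minimal sketch of the first idea, assuming the "preset observations" take the form of keyword and pattern rules applied to video metadata. The patterns, risk weights and threshold are hypothetical placeholders, not a validated screening instrument.

```python
import re

# Hypothetical preset observations: regex patterns with illustrative risk
# weights. A real plug-in would use clinically reviewed rules or a trained
# classifier, not this toy list.
PRESETS = [
    (re.compile(r"\b(kill|end)\s+(myself|my life)\b", re.I), 5),
    (re.compile(r"\bself[- ]harm\b", re.I), 4),
    (re.compile(r"\b(hopeless|worthless)\b", re.I), 2),
]

RED_FLAG_THRESHOLD = 4  # illustrative cut-off, not a clinical standard

def red_flag_score(text: str) -> int:
    """Sum the weights of all preset patterns matched in the text."""
    return sum(weight for pattern, weight in PRESETS if pattern.search(text))

def should_red_flag(title: str, description: str) -> bool:
    """Flag a video when its combined metadata exceeds the threshold."""
    return red_flag_score(f"{title} {description}") >= RED_FLAG_THRESHOLD

if __name__ == "__main__":
    print(should_red_flag("Feeling hopeless", "thoughts of self-harm"))  # True
```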

The recent study suggested the design and development of a recommendation algorithm that uses natural language processing (NLP) techniques to determine whether a video is positive, negative, or neutral. This sentiment analysis is, in effect, mining mental health data. AI-powered sentiment analysis could employ language models such as GPT-4 to find positive and negative words and phrases in different languages, drawing on text such as titles, descriptions, comments, and subtitles. Machine learning models can be trained to detect and moderate inappropriate or harmful information or messages. Currently, it is possible to apply off-the-shelf NLP solutions from AI-powered tools such as Glasp and Bing, but a new tool could be codesigned and developed to automate the procedure.
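
A minimal sketch of that sentiment step follows, using an off-the-shelf Hugging Face transformers pipeline as a stand-in for a larger language model. The example text fields are invented, and a production system would need a multilingual, mental-health-tuned model rather than the default English sentiment model loaded here.

```python
from transformers import pipeline

# Off-the-shelf sentiment model (default English-only checkpoint). A real
# system would need a multilingual model tuned for mental health language.
classifier = pipeline("sentiment-analysis")

# Hypothetical video metadata; real inputs would be titles, descriptions,
# comments and subtitles retrieved via the YouTube Data API.
video_texts = [
    "Ten tips that helped me manage my anxiety",
    "Nothing ever gets better and no one cares",
]

# Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
for text, result in zip(video_texts, classifier(video_texts)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {text}")
```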

There is an opportunity for collaborative AI to be used with YouTube, whereby human-AI solutions handle mental health screening, support, and referral to crisis services. However, there are ethical and legal issues to consider if a standard of care is not verified by psychiatry boards or psychology groups to confirm the digital solution's effectiveness and to identify and counter potential risks.

People are increasingly turning to technology for their mental health concerns. The glut of available mental health platforms and apps means many users and practitioners are uncertain which ones offer good quality, usability and effectiveness. YouTube could renew its mission for mental health through collaborative AI. However, integrating AI-powered tools requires verified mental health expert input as well as connections with vetted digital mental health platforms and interventions.

YouTube understands there is growing demand for mental health education, information, support and resources on its platform. It therefore supplies a medium where people can safely connect with others, share experiences, learn and ask questions. However, there are concerns about the transparency of YouTube’s recommendation systems. For example, how does the public know whether subliminal techniques are involved? The European Union has moved to prohibit AI systems that use subliminal techniques to change people's behavior in ways reasonably likely to cause harm. Such advertising techniques date back to the 1950s, when experiments purportedly demonstrated that flashing images in movie theatres could boost sales of popcorn and Coca-Cola. With increased interest in AI, as well as regulatory compliance issues, comes the question: what is practical and reasonable use of subliminal techniques in video sharing and social media platforms?

YouTube's leadership has changed since Susan Wojcicki launched its mental health panels for depression and anxiety in July 2020. Three years ago, Digital Information World described how YouTube’s then CEO said the platform was looking at ways to increase the well-being of its users. Then, in November 2021, Digital Information World reported that YouTube had stepped up its engagement with users by making its crisis response panels more accessible, adding them to the watch page as well as to searches. Now, AI companies are recruiting talent that can show them how to "crack the code" in GPT prompt engineering. Chatbots and plug-ins may be under consideration for YouTube’s mental health and crisis resource panels. However, there are indications that decreasing the negative impact and increasing the positive impact of YouTube on loneliness and mental health may come from outside the administration of YouTube.


Humans and AI systems have complementary strengths that can be harnessed by combining their inputs on a common task or goal. Collaborative intelligence is a challenge and an opportunity for YouTube and/or the digital mental health and digital information communities. The collaborative performance must deliver a benefit above what AI or humans can achieve alone, with shared objectives and outcomes as well as sustained, mutually beneficial relationships. YouTube has shown it is willing to find new ways forward in mental health promotion, so it appears a win-win situation if citizen science and technology use the platform to show how new tools can assist users in combating mental health issues.
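
One simple way to picture such complementarity: let an AI model score content at scale and route only the uncertain middle band to a human reviewer, so the pair covers more ground than either could alone. The scores and thresholds below are invented for illustration.

```python
from typing import Callable

# Toy human-AI workflow: the AI scores everything cheaply; humans review
# only ambiguous cases. Thresholds are invented for illustration.
AUTO_CLEAR, AUTO_FLAG = 0.2, 0.8

def route(ai_risk_score: float, human_review: Callable[[], bool]) -> str:
    """Decide an outcome from an AI risk score, escalating unclear cases."""
    if ai_risk_score < AUTO_CLEAR:
        return "cleared automatically"
    if ai_risk_score > AUTO_FLAG:
        return "flagged automatically"
    # Ambiguous cases go to a human, whose judgment is decisive.
    return "flagged by reviewer" if human_review() else "cleared by reviewer"

if __name__ == "__main__":
    print(route(0.1, lambda: True))   # cleared automatically
    print(route(0.5, lambda: True))   # flagged by reviewer
```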

YouTube has the potential to use recent developments in AI and machine learning to detect and moderate content as well as to screen, support, and refer users to mental health resources and interventions. However, there are ethical and legal considerations to address, and the platform must work with verified mental health experts and digital mental health platforms/interventions to ensure the safety, quality and effectiveness of its offerings. By engaging more transparently and inclusively with users and experts on mental health, YouTube may demonstrate consistent leadership in striving for eminence in this domain. Human-AI systems are a challenge and an opportunity for YouTube as well as the digital mental health and digital information communities. It remains to be seen whether collaborative performance and sustained, mutually beneficial relationships will emerge from shining a light on the dark depths of YouTube.

The full study, ‘The Impact of YouTube on Loneliness and Mental Health’, can be accessed for free online at https://www.mdpi.com/2227-9709/10/2/39.

Written by Dr. Luke Balcombe.