Wednesday, June 14, 2023

Collaborative AI to shine a light on YouTube mental health rabbit holes

YouTube is an incredible platform that has revolutionized the way we consume content. Its recommendation algorithm is so precise that it can provide users with videos tailored to their interests. However, YouTube's algorithms give more weight to likes than dislikes, which can lead to a narrow range of content being recommended to users. Other platforms such as TikTok and Facebook are rivals in the social media market, particularly among young people. However, YouTube is unique because of its vast library of user-generated video content, which ranges from Shorts of up to 60 seconds to long-form videos of up to 12 hours from verified accounts. It also has functions to comment and interact with other users and to follow content creators' channels. Despite its advantages, a recent study has shown that excessive YouTube use may have a negative impact on mental health, particularly among young people aged up to 29.

YouTube, the world's leading video sharing platform used by more than 2.6 billion people monthly, has invested in building a mental health legacy through algorithm changes, content moderation, content creator and user psychoeducation, mental health and crisis resource panels, self-harm and suicide content warnings, and parental controls and settings. Its regulations have helped to reduce consumption of borderline content by 70% and to remove violative content promptly. However, the study by Dr. Luke Balcombe and Emeritus Prof. Diego De Leo of the School of Applied Psychology at Griffith University in Australia identified a gap in YouTube's duty of care to users. It found that high-frequency YouTube use can lead to increased loneliness, anxiety, and depression. Although the mix of included studies means the cause is unknown, high to saturated use of YouTube (2 to 5+ hours a day) may indicate a problem. A narrow range of viewed content may also exacerbate pre-existing psychological symptoms.

The global increase in loneliness, mental illness and suicide makes the negative mental health impacts of YouTube a complex phenomenon. There is potential for YouTube user-focused strategies to counter these impacts. Firstly, there could be better use of YouTube's red flag function, whereby users help report inappropriate or disturbing content. Secondly, AI-powered chatbots could provide support and referral to verified mental health care after engagement with a mental health/crisis resource panel. Currently, panels show after searches or below videos whose content is about mental health, suicide and self-harm. These panels pair educational and emotionally supportive content with prompts to get help if needed. This is where chatbot support could help users get connected to mental health/crisis helplines or online chats, as the toy sketch below illustrates.
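
To make the idea concrete, here is a deliberately minimal, hypothetical sketch of the referral step such a chatbot could take. The keyword list and wording are illustrative placeholders, not a clinical or vetted tool.

```python
# Toy sketch of a panel-linked chatbot's referral step. The keyword list and
# the response text are illustrative placeholders only, not a clinical tool.
CRISIS_KEYWORDS = {"suicide", "self-harm", "hopeless", "end it all"}

def respond(message: str) -> str:
    lowered = message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # A real deployment would surface vetted, localized helplines here.
        return ("It sounds like you are going through a lot. Would you like "
                "to be connected to a crisis helpline or an online chat?")
    return "Thanks for sharing. Can you tell me more about how you are feeling?"

print(respond("I have been feeling hopeless lately"))
```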

The lack of transparency in how YouTube's system works stems from its recommendation algorithms being focused on marketing strategies. YouTube's aim is to keep users satisfied through a combination of factors such as search and watch history, viewing time and behavior, as well as how they use the watch page and recommendations. YouTube will want to avoid being labelled an AI subliminal system, which uses machine learning algorithms to subconsciously or unconsciously influence the human mind, because subliminal AI can be associated with malicious intent. However, YouTube's user satisfaction optimization has diminishing returns and an opportunity cost for users. In other words, at what point should YouTube intervene when there are indications of problematic use?

The recent integrative review in the Informatics journal showed that parasocial relationships between vloggers and viewers may negatively impact socialization and productivity and exacerbate loneliness, anxiety and depression. It is, in effect, the modern equivalent of warnings not to watch too much television and instead to live a fulfilling life. However, YouTube users may not be able to self-manage their consumption. They may be compulsively watching videos about other people's lives or find themselves in a mental health or suicide "rabbit hole".

Common sense suggests replacing problematic YouTube consumption with activities that promote positive mental health outcomes, such as physical exercise, social interaction, and time spent outdoors. However, COVID-19 exacerbated already increasing sedentary behaviors, loneliness and mental health issues throughout the world, and social media and internet use have compounded socio-cultural issues that have been developing since the 1980s. Self-management, psychoeducation, monitoring, and using parental controls and settings were recommended by Dr. Balcombe based on a synthesis of the literature. However, YouTube use may be difficult to monitor effectively because it serves a variety of purposes, such as education, entertainment, and information seeking and sharing among peers.

It is important to recognize that YouTube use may be psychologically protective. However, high-frequency YouTube use, whether from prolonged short video consumption or watching longer videos about other people's lives, can be problematic if it exceeds 2 hours a day. YouTube is an eminent platform that has mental health as part of its mission and regulations. It is apparent that YouTube promotes mental health awareness, education, and support, and it has various channels for information seeking and sharing about mental health. YouTube appears to monitor search and watch history to predict when a mental health or crisis resource panel should appear. However, YouTube could go further by recommending connections to vetted mental health care and crisis services according to data on age group and location. There could also be AI-human services that work with or independently of the platform, such as:

1. Red flagging by an AI-powered plug-in that uses preset observations to detect high-risk or extreme content with a negative mental health context.

2. A real-time fact-checker for videos that alerts users to misinformation about mental health.

The recent study suggested the design and development of a recommendation algorithm that uses natural language processing (NLP) techniques to determine whether a video is positive, negative, or neutral. This sentiment analysis is, in effect, mining mental health data. AI-powered sentiment analysis could employ language models such as GPT-4 to find positive and negative words and phrases in different languages, from text such as titles, descriptions, comments, and subtitles. Machine learning models can be trained to detect and moderate inappropriate or harmful information or messages. Currently, it is possible to apply off-the-shelf NLP solutions from AI-powered tools such as Glasp and Bing. However, a new tool could be co-designed and developed to automate the procedure. A rough sketch of what such a pipeline could look like follows.
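
As a hypothetical illustration (not the study's implementation), off-the-shelf sentiment tagging of video metadata might look like the sketch below. The sample videos and the choice of the Hugging Face transformers pipeline are assumptions for the sketch.

```python
# Minimal sketch: tagging YouTube video metadata with an off-the-shelf
# sentiment model. The sample videos and the default transformers pipeline
# are illustrative assumptions, not the study's actual tooling.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

videos = [
    {"title": "How I overcame loneliness", "description": "Small habits that helped me recover."},
    {"title": "Why nothing ever gets better", "description": "A vent about feeling stuck."},
]

for video in videos:
    text = f"{video['title']}. {video['description']}"
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{video['title']} -> {result['label']} ({result['score']:.2f})")
```

A production tool would need multilingual models, comment and subtitle coverage, and clinical input on what counts as harmful, but the basic classify-and-flag loop is this simple.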

There is an opportunity for collaborative AI to be used with YouTube, whereby human-AI solutions assist in mental health screening, support, and referral to crisis services. However, there are ethical and legal issues to consider if a standard of care is not verified by psychiatry boards or psychology groups to establish the digital solution's efficacy and to identify and counter potential risks.

Many people are increasingly turning to technology for their mental health concerns. The glut of mental health platforms and apps available means many users and practitioners are uncertain about which ones offer good quality, usability and effectiveness. YouTube could renew its mission for mental health through collaborative AI. However, integrating AI-powered tools requires verified mental health expert input as well as connections to vetted digital mental health platforms and interventions.

YouTube understands there is growing demand for mental health education, information, support and resources on its platform. It therefore supplies a medium where people can safely connect with others, share experiences, learn and ask questions. However, there are concerns about the transparency of YouTube's recommendation systems. For example, how does the public know whether subliminal techniques are involved or not? The European Union has moved to prohibit AI systems that use subliminal techniques to change people's behavior in ways that are reasonably likely to cause harm. Subliminal advertising techniques have been around since the 1950s, when experiments purportedly demonstrated the power of flashing images in movie theatres to boost sales of popcorn and Coca-Cola. With increased interest in AI, as well as regulatory compliance issues, comes the question of 'what is practical and reasonable use of subliminal techniques in video sharing and social media platforms?'.

YouTube's leadership has changed since Susan Wojcicki launched YouTube's mental health panels for depression and anxiety in July 2020. Three years ago, Digital Information World described how YouTube's then CEO said the platform was looking at ways to increase the well-being of its users. Then in November 2021, Digital Information World reported developments when YouTube stepped up its engagement with users by making its crisis response panels more accessible, adding them to the watch page as well as to searches. Now, AI companies are recruiting talent that can show them how to "crack the code" in GPT prompt engineering. Chatbots and plug-ins may be in consideration for YouTube's mental health and crisis resource panels. However, there are indications that decreasing the negative impact and increasing the positive impact of YouTube on loneliness and mental health may occur from outside the administration of YouTube.

Humans and AI systems have complementary strengths that can be harnessed by combining their inputs on a common task or goal. Collaborative intelligence is a challenge and an opportunity for YouTube and/or the digital mental health and digital information communities. The collaboration needs to deliver performance above what AI or humans can achieve alone. There should be shared objectives and outcomes as well as sustained, mutually beneficial relationships. YouTube has shown it is willing to find new ways in mental health promotion, so it appears a win-win situation if citizen science and technology use the platform to show how new tools can assist users in combating mental health issues.

YouTube has the potential to use recent developments in AI and machine learning to detect and moderate content as well as to screen, support, and refer users to mental health resources and interventions. However, there are ethical and legal considerations to address, and the platform must work with verified mental health experts and digital mental health platforms/interventions to ensure the safety, quality and effectiveness of its offerings. By engaging more transparently and inclusively with users and experts on mental health, YouTube may demonstrate consistent leadership in striving for eminence in this domain. Human-AI systems are a challenge and an opportunity for YouTube as well as the digital mental health and digital information communities. It remains to be seen whether collaborative performance and sustained, mutually beneficial relationships will emerge from shining a light on the dark depths of YouTube.

The full study, ‘The Impact of YouTube on Loneliness and Mental Health’, can be accessed for free online at https://www.mdpi.com/2227-9709/10/2/39.

Written by Dr. Luke Balcombe.

by Web Desk via Digital Information World

Windows 11: To Upgrade or Not to Upgrade?

With Windows 10 nearing its end of life (EOL) and Microsoft selling Windows 11 hard to users, IT departments are under pressure to decide whether or not to upgrade. Two questions loom large: Are they ready and is it worth it?

Support for the older version of the OS will end in October 2025, but rolling out an enterprise-wide upgrade is no easy task. It's particularly difficult if it must be done manually. Windows 11 has more stringent hardware requirements, and updates are not guaranteed if these requirements aren't met. That means the first step in determining which machines to upgrade is knowing which machines in the IT estate meet all the requirements. The larger the enterprise, the more daunting the task.

According to Lansweeper’s 2021 Windows 11 readiness research and data analysis, only 57.26% of workstations out there could successfully complete the upgrade at the time. While the majority of systems (92.85%) had enough RAM installed, 43% didn’t meet the CPU requirements, and nearly 15% didn’t have TPM installed or it wasn’t enabled. The outlook was even bleaker for virtual machine (VM) workstations. Only about half met the CPU requirement, and just 67.1% had enough RAM. Almost all VM workstations lacked TPM support.

Since the report was created, these numbers have improved slightly – 12% more devices now meet the CPU and TPM requirements. While that’s promising, it’s unlikely all Windows devices will be ready by the time Windows 10 support is no longer available.

In addition to a lack of readiness, new AI functionality embedded in Windows 11 may be inhibiting rather than aiding adoption, according to members of Reddit's Windows 11 community. Even though the update makes the AI-powered Bing search engine available in the taskbar, users say it causes other problems and interferes with the operating system's basic functionality. Reports of slow memory speeds, error messages and vanishing desktop icons are adding to users' hesitance to upgrade.

In light of these factors, it’s not surprising that as of May 5 this year, only 5.47% of users had upgraded to Windows 11, up a mere 2.86% since October 2022. Over 80% of Windows-based machines in enterprise environments are currently running this soon-to-be outdated software.

However, the reality is that the current version of Windows 10 – 22H2 – is the final one. Once support is no longer available, outdated machines will be vulnerable to cybersecurity threats and malware.

The best path forward: IT discovery and automation

If you don’t know what machines in your IT estate meet the stringent hardware requirements of Windows 11, your first step is to find out. Consider these best practices:

Perform a Windows readiness audit: A thorough audit can help uncover the work that must be done for a successful rollout of Windows 11. This involves discovering and creating a complete and accurate inventory of all your IT assets, then supplementing the list with detailed, granular information about the type and location of devices, which systems are installed, the versions of those systems and more. A minimal per-machine check is sketched after this list.

Implement a phased rollout: Consider deploying the Windows 11 update in a phased manner. Start with a smaller group of users or departments and gradually expand the rollout based on the feedback and performance during each phase.

Educate and train end-users: Windows 11 introduces new features and a different user interface compared to previous versions. Provide end-user training and documentation to help employees familiarize themselves with the changes and maximize productivity.

Monitor the update process closely: Make sure you have processes and technologies in place to track and address any issues that may arise. Establish channels for users to report problems, and have a support team ready to assist users during and after the update.
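
As a starting point for the audit item above, a single-machine readiness check might look like this hypothetical sketch. The thresholds follow Microsoft's published Windows 11 minimums; the use of psutil and the tpmtool utility is an assumption for illustration, not a substitute for a full inventory tool like Lansweeper.

```python
# Minimal, illustrative Windows 11 readiness check for one machine.
# Assumes Windows plus the third-party psutil package (pip install psutil).
import platform
import subprocess

import psutil

MIN_RAM_BYTES = 4 * 1024**3  # Windows 11 minimum: 4 GB RAM
MIN_CPU_CORES = 2            # plus a 1 GHz+, 64-bit capable CPU

def check_windows11_readiness() -> dict:
    checks = {
        "64-bit OS": platform.machine().endswith("64"),
        "RAM >= 4 GB": psutil.virtual_memory().total >= MIN_RAM_BYTES,
        "CPU cores >= 2": (psutil.cpu_count(logical=False) or 0) >= MIN_CPU_CORES,
    }
    try:
        # tpmtool ships with modern Windows; parsing its output is best-effort.
        out = subprocess.run(
            ["tpmtool", "getdeviceinformation"],
            capture_output=True, text=True, check=True,
        ).stdout
        checks["TPM present"] = "tpm present: true" in out.lower()
    except (FileNotFoundError, subprocess.CalledProcessError):
        checks["TPM present"] = False
    return checks

if __name__ == "__main__":
    for name, ok in check_windows11_readiness().items():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

Run across an estate via your existing management tooling, the per-machine results roll up into exactly the readiness picture the audit step calls for.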

Don’t Wait Until It's Too Late

A single cybersecurity incident can be devastating to a business – and once the deadline hits and Windows 10 is no longer supported, you'll be at risk. Even if you don't plan to upgrade right away, understanding the state of Windows 11 readiness in your organization and what it will take to execute the rollout will help you be better prepared when the time comes.

Read next: How To Get A Free US Number
by Web Desk via Digital Information World

Report Reveals Hong Kong Users Face Restrictions as Microsoft, OpenAI and Google Limit Access to AI Chatbot Technology

Google, OpenAI, and Microsoft, three prominent tech giants, have recently imposed limitations on their chatbots powered by generative AI and related technology in Hong Kong. The motive behind this decision is rooted in concerns regarding the influence of China and its potential impact on upholding an open Internet environment within the region.

A recent article in The Wall Street Journal highlights an ongoing pattern in which companies based in the United States restrict user access from Hong Kong. Notably, OpenAI's decision aligns mainland China and Hong Kong with countries like Iran, Syria, and North Korea, where similar restrictions on accessing these technologies are in place.

Despite the lack of official disclosure from the companies regarding the rationale behind the imposed restrictions, industry experts speculate that the restrictions are a precautionary step taken to minimize potential legal liabilities. The concern revolves around the possibility of the AI chatbots generating content that inadvertently violates China's national security law, which came into effect approximately three years ago. This law criminalizes various forms of government criticism and expressions of dissent.

Apart from the restrictions placed on AI chatbots, several other companies have implemented measures to control the content accessible in Hong Kong. For instance, Apple revised its internet browser privacy policy last year to explicitly address its use of a tool created by the Chinese company Tencent, the primary objective of which is to alert users in Hong Kong to potentially malicious links.

Disney has exercised caution by omitting two episodes of "The Simpsons" from its streaming service in Hong Kong. The reason behind this choice lies in the content of these episodes, which includes criticism directed at the government of China.

Users in Hong Kong have raised concerns about instances where Tencent's tool has briefly prevented them from reaching authorized websites from Western countries. These include Mastodon, a social media competitor to Twitter, as well as GitLab and Coinbase.

A survey conducted by the American Chamber of Commerce in Hong Kong in March 2023 shed light on sentiment regarding Hong Kong's future internet access. The findings indicated that 38% of the participants held a positive or highly hopeful outlook regarding Hong Kong's capacity to sustain unrestricted connectivity to the world wide web over the coming three years.

The findings of this survey hold notable importance in the context of the tense relationship between China and the U.S. The ongoing disputes between the two nations have already had substantial impacts on over 80% of businesses operating in Hong Kong, with more than 59% of them expressing pessimism about the future of U.S.-China relations.

The stringent investment policies imposed by the United States on American Chamber of Commerce members investing in China have had far-reaching implications for businesses worldwide. They have prompted companies to reevaluate their long-term operations in China and reconsider their supply chain strategies, further fueling concerns about the future trajectory of U.S.-China relations.


Read next: Research revealed biases in language models and identified the difference between AI and human opinions
by Ayesha Hasnain via Digital Information World

Meta Strikes A Chord With The Launch of Its Musical Version Of ChatGPT Called MusicGen

The concept of a musical version of ChatGPT is on the rise, all thanks to tech giant Meta.

Facebook's parent firm is reportedly debuting an AI-based music generation model. And by the looks of it, you can use it to produce both tunes and soundtracks.

MusicGen is the name, and it is being created by the firm's Audiocraft team. For those curious about how it functions: you type in a short text description of the music you want, press the button that says Generate, and within no time the AI model produces around 12 seconds of music from your prompt.

For instance, the model can be instructed to make a lofi version of a soundtrack, and within no time the final product is unveiled, sounding very similar to what you'd find on YouTube. How cool is that?

You can even customize the AI music producer to put out hit tracks or a carefully curated version of your favorite tune.

One developer on the project posted samples of the sounds users can expect on their Twitter handle. That gave a perfect little example of what to expect from this revolutionary combination of music and AI: a sublime blend of drum beats and synths that made for a super upbeat piece.

And for those curious to get their hands on this, you're in luck: simply head over to Meta's Hugging Face page. One thing we'd mention, though: don't expect it to do vocals like Google's AI music generator does. It only does instrumentals for now. For the technically inclined, a rough sketch of the interface follows.
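
Generating a clip programmatically looks roughly like the sketch below, based on the audiocraft package's launch-era README. The model name, prompt and 12-second duration are illustrative, and the interface may change.

```python
# Minimal sketch of text-to-music with MusicGen via the audiocraft package
# (pip install audiocraft). Model name and prompt are illustrative; the calls
# follow the project's launch-era README and may change in later releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("small")   # smallest published checkpoint
model.set_generation_params(duration=12)   # ~12-second clips, as described above

prompts = ["lofi hip hop beat with mellow synths and soft drums"]
wav = model.generate(prompts)              # returns a batch of waveforms

for idx, clip in enumerate(wav):
    # Writes lofi_0.wav with loudness normalization.
    audio_write(f"lofi_{idx}", clip.cpu(), model.sample_rate, strategy="loudness")
```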


Read next: Meta Builds Its Own Internal AI Chatbot Called Metamate That’s Trained On Its Internal Data
by Dr. Hura Anwar via Digital Information World

Tuesday, June 13, 2023

Research revealed biases in language models and identified the difference between AI and human opinions

In today's modern world, our reliance on internet technologies has become paramount. Accessing the online world has become essential for leading a comfortable life. Adding to this technological landscape, AI (artificial intelligence) models have garnered immense popularity recently. However, it is crucial to recognize that machines, including AI models, still require human involvement to function effectively. Thus, placing complete dependence on AI models can be misguided, given the limitations of the large language models (LLMs) they are built on.

Recently, researchers from Stanford University conducted a study that sheds light on the biases inherent in language models such as ChatGPT and their divergence from the viewpoints of different demographic groups in America. The study reveals that these models often exhibit a tendency to under-represent certain groups, while concurrently amplifying the prevailing opinions of others. As a consequence, these models fail to accurately represent the nuances and variations in human opinions.

An approach called OpinionQA was created by the study team under the direction of Shibani Santurkar, a former postdoctoral researcher at Stanford, to assess bias in language models. To measure how well these models reflect the views of various demographic segments, OpinionQA compares their propensities to those found in public opinion surveys.

Although it would seem that language models, which predict word sequences from existing text, would naturally represent the general consensus, Santurkar identifies two key sources of bias. First, newer models have been refined using feedback gathered from human annotators employed by the companies. These annotators categorize model completions as "good" or "bad," which can introduce bias, as their judgments, and even those of their employers, can shape the models.

The study illustrates the bias by showing how more recent models indicate better than 99 percent support for President Joe Biden, despite public opinion surveys showing a far less clear-cut picture. Additionally, the researchers discovered that several groups, including Mormons, widows, and those over the age of 65, were underrepresented in the training data. To increase their credibility, the authors contend, language models should more accurately capture the subtleties, complexities, and finer-grained differences in public opinion.

Moreover, the study team used Pew Research's American Trends Panel (ATP), a thorough public opinion poll covering a wide range of issues, to evaluate the models. OpinionQA compared the opinion distributions of language models against the overall American population and at least 60 different demographic groups identified by the ATP.

OpinionQA computes three important measures of opinion alignment. First, a language model's representativeness is evaluated relative to the overall population as well as the 60 demographic groups. Second, steerability gauges how well the model can, when asked, reflect the views of particular subgroups. Finally, consistency measures how stable the model's expressed beliefs are across time and across different topics. A toy version of the representativeness comparison is sketched below.
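
As a toy illustration of the representativeness idea (standing in for, not reproducing, the paper's exact metric), one can compare a model's answer distribution on a single ordinal survey question against the human distribution with a distance such as the 1-Wasserstein metric. The distributions below are made up.

```python
# Illustrative sketch: comparing a model's opinion distribution on one survey
# question against a human reference distribution. The numbers are invented,
# and 1-Wasserstein distance stands in for the paper's exact metric.
import numpy as np
from scipy.stats import wasserstein_distance

# Ordinal answer options, e.g. "strongly disagree" (0) ... "strongly agree" (4)
options = np.arange(5)

human_dist = np.array([0.10, 0.20, 0.30, 0.25, 0.15])  # hypothetical survey shares
model_dist = np.array([0.02, 0.05, 0.13, 0.40, 0.40])  # hypothetical model probs

# Distance between the two distributions over the same ordinal support;
# smaller means the model is more representative of the human responses.
d = wasserstein_distance(options, options, human_dist, model_dist)
print(f"opinion misalignment: {d:.3f}")
```

Averaging such per-question scores over many questions and demographic groups gives the kind of representativeness and steerability summaries the study reports.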

Lastly, the study's broad conclusions show that political inclinations and other opinions differ considerably depending on variables like income, age, and education. Models developed primarily from online data frequently show biases towards conservative, lower-income, or less educated viewpoints. Newer models, which have been refined through curated human feedback, on the other hand, frequently exhibit biases in favor of liberal, well-educated, and wealthy audiences.

Santurkar emphasizes that the study does not categorize each bias as intrinsically good or harmful but instead tries to increase awareness among developers and consumers that these biases exist. The OpinionQA dataset should be used to discover and measure misalignments between language models and human opinion, the researchers advise, rather than as an optimization benchmark. They anticipate that, by bringing language models closer to the public's perception, this study will encourage a wider discussion among experts in the field.


Read next: ChatGPT vs. Google Translate Comparison: Which AI Chatbot is The Best Language Translator
by Arooj Ahmed via Digital Information World

Data Brokers Are Silently Raising Lobbying Spending With A Massive Surge Seen In 2022

Firms that collect individuals' personal details and trade in them are known as data brokers. And a recently published report is shedding light on how they're quietly raising their spending on lobbying.

It's something that has gone on for years, and recently published stats from 2022 show a massive surge in the behavior. The figures for the US grew bigger than ever, even as numerous data privacy measures were being written into law.

These firms make money by collecting and then selling the data of US citizens. But making money from others' privacy means saying hello to new legal action and to regulators growing more stringent over time.

The issue is that the many people who advocate for privacy rights might feel they're doing a great job, only to realize that lobbying figures keep rising as data brokers continue their lobbying efforts.

A recently published study by Incogni takes such reports into consideration and examines how data brokers are operating. The study focuses on lobbying efforts and how the American Senate and House of Representatives ultimately handle them.

The key findings of this report concern 140 firms that lobbied on behalf of nearly 40 different data brokers. The researchers found that annual spending rose from $37 million to $49 million in the span of one year, and by last year it had reached $56 million.

Around five of those firms are said to account for $86 million of the massive $143 million spent on lobbying. In particular, Oracle was called out for its massive $42 million in lobbying-related spending.

The researchers also noticed that the number of brokers taking part in lobbying continued to rise over time, even during the study's research period. That goes to show that firms are getting more and more invested in lobbying, which is concerning.

Each year, the total figure being spent continued to increase, specifically across the period from 2020 to 2022.

Now the question is: who spent the most on such lobbying? It was Oracle, which made up a massive 29% of all the spending. In second place was Accenture, with a figure nearly four times lower than Oracle's.

Another question concerned which types of issues the lobbyists are supporting. And the answer is not straightforward.

Lobbying addresses a mix of civil and social matters and some political disputes. Codes were allocated for topics including postal, safety, and defense as well.

So we can conclude from the study that data brokers continue to evolve even as more rules on data protection come forward. And that's why there is a rise in lobbying efforts designed to protect the brokers' interests and exert influence over those in charge.

Due to this behavior, we're seeing a huge amount of funds invested in the world of lobbying. And such trends may well carry on in the near future.

Clearly, the report is a huge eye-opener about what is going on today in terms of lobbying, with data brokers having spent more than $143 million in a short span of three years.

Read next: Trust vs. Convenience Battle for Data Privacy Divides Social Media Users
by Dr. Hura Anwar via Digital Information World

Google Gears Up For EU’s Mega Antitrust Complaint Comprising Huge Fines And A Strike

The European Union is all set to hit tech giant Google with a formal antitrust complaint that will reportedly bring huge fines and strike at the company's ads business. The latter is what brings in most of the revenue for the American-based organization, sources have mentioned.

The huge list of charges is set to be unveiled very soon and will target parent firm Alphabet's business model, as confirmed by sources familiar with the matter.

For those who might not be aware, Google's advertising operation is one of the most successful out there today, accounting for nearly 80% of the company's overall revenue. In 2022, advertising sales amounted to nearly $225 billion.

This is one of the most serious complaints to arise in the European Commission's five years of work on the case. Remember, the Commission is the EU's main regulatory body, so this news is a huge escalation in how close the EU is to slamming the firm with a fine that could cross the $8.6 billion mark.

Most fines we've come across in recent times have been capped at 10% of a company's worldwide sales. But this one seems to be much higher than that, which means a huge chunk of the firm's earnings is involved.

This seems to be a way for regulators to put more pressure on tech giants so they think twice before making error-filled calls. Remember, 10% of earnings is often a small enough figure to have little to no effect.

The shares for Alphabet did increase slightly recently, but the Android maker has yet to comment despite requests.

For years, tech giant Google has held a leading position in how data collection takes place and how advertisers target users with their ads. This includes selling advertising space and offering publishers the right technology to get their space sold.

The European Union started an investigation into Google's advertising practices in 2021, and the company has been on the radar ever since, for obvious reasons.

Along the same lines, it was interesting to see that Google had entered a contract with Meta for a program linked to Open Bidding. This was originally part of the investigation but was dropped toward the latter part of 2022.

Google's woes did not stop there, as it has also faced constant scrutiny from the UK's authorities over its advertising practices and how it conducts operations across the US.

Every firm has the right to defend itself against the growing number of regulatory actions that keep popping up over time. And that is exactly why tech giant Google is fighting off such fines from the European Union through legal channels.


Read next: Google Provides Innovative Insights On Its Imagen Editor Tool For Text-Guided Photo Edits
by Dr. Hura Anwar via Digital Information World