Monday, April 13, 2026

From tracking to AI: how 5 popular workout apps handle user data and privacy

Personalized training is spiking in popularity, and so are AI alternatives that may be more affordable. But as technology promises to help you reach your goals, it also adds new risks to your personal information. This study, conducted by Surfshark, uncovers the hidden cost of digital fitness — revealing that apps link the data they collect to your identity, track you, and now use it for AI training.


Key insights

  • Google Trends reveals a clear pattern: the search term “fitness” spikes globally every January. Since 2022, the highest value was recorded in January 2026, reaching a score of 100. This score indicates peak search interest on a relative scale from 0 to 100, where 100 represents the highest interest during the analyzed period. On average, each January brings a 23% rise in search interest compared with the preceding December. April marks the start of a second climb, building toward a summer peak; on average, growth from April to the peak summer month was approximately 13%. The January spike is likely driven by New Year's resolutions, while the increased spring interest may reflect people getting in shape for summer. However, researchers note that global physical inactivity levels haven't changed much in 20 years, with approximately 80% of adolescents and one in three adults worldwide not meeting the World Health Organization (WHO) physical activity guidelines.¹

  • Technology, especially AI, is increasingly transforming the fitness industry and could shift how these challenges are addressed. By analyzing user data, AI has the potential to create highly personalized fitness experiences, tailoring workout plans to individual progress and goals. This demand is reflected in growing global interest in personal training: Google Trends data shows notable growth in searches since 2025. To put this in numbers, the score in January 2025 was 37, while in winter 2026 it reached 100, the peak for the analyzed period, a 2.7-fold increase. Last year, the peak came in August with a score of 75, and growth began in April. This year, interest has been high from the start, suggesting it may stay strong throughout the year. While traditional personal training can be costly, AI may seem like a more accessible alternative.

  • All the apps analyzed² incorporate AI features to improve user experience. However, with this advancement, such apps might also use personal data for AI development, which could lead to privacy concerns. For example, Strava uses information gathered from users to enhance the quality, reliability, and/or accuracy of its AI features by creating, developing, training, testing, improving, and maintaining AI and ML models run by Strava or its service providers.³ However, the company states that, where possible, it uses aggregated, de-identified information for AI features. Peloton, in turn, uses collected data to build, train, analyze, and improve the accuracy of its services, enhance products, and increase operational efficiency. While Peloton may use third-party AI service providers, it explicitly states that any personal data processed by these technologies is strictly for enhancing its services.⁴

  • Among the top workout apps analyzed, Strava collects the most data linked to user identity, gathering 20 out of 35 data types listed in the Apple App Store. For example, these data types include location, purchase and search history, photos and videos, and other user content. Nike Training Club follows closely with 19 data types, while Peloton collects the least, with only 2 data types. Although many of these data types may be essential for app functionality, they can also be used for purposes such as advertising, analytics, product personalization, and more. For example, Ladder uses only 3 out of 10 data types linked to users for app functionality, but collects 7 data types for product personalization and employs 6 for analytics. Companies may also access and use additional sensitive biometric data when these apps connect to wearables or third-party services.

  • Furthermore, 4 out of the 5 analyzed apps also use data for tracking, as stated by app developers in the information provided on the Apple App Store, with Apple Fitness+ being the exception. “Tracking” refers to linking user or device data collected from the app — such as a user ID, device ID, or profile — with user or device data collected from other apps, websites, or offline properties for targeted advertising purposes. Tracking also refers to sharing user or device data with data brokers.⁵

Methodology and sources

This study is divided into two main parts to explore fitness trends and the data collection practices of popular workout apps. The first part utilizes Google Trends to analyze search interest in “fitness” and “personal training” from January 1, 2022, onwards. This timeframe was selected due to enhancements in data collection since that date, allowing for a more accurate identification of global patterns and shifts in these topics over time.
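
To make the arithmetic behind those Trends figures concrete, here is a minimal sketch of how a January-over-December rise and a peak-to-baseline multiple can be computed from relative search-interest scores. The monthly values below are placeholders chosen to illustrate the cited figures, not the study's actual Google Trends export.

```python
# Placeholder relative-interest scores (0-100 scale), chosen to illustrate the
# cited figures; they are not the study's actual Google Trends export.
monthly_interest = {
    "2025-12": 81,   # hypothetical December score
    "2026-01": 100,  # hypothetical January peak
}

def percent_rise(previous: float, current: float) -> float:
    """Percentage change between two relative-interest scores."""
    return (current - previous) / previous * 100

jan_vs_dec = percent_rise(monthly_interest["2025-12"], monthly_interest["2026-01"])
print(f"January vs. December rise: {jan_vs_dec:.1f}%")  # ~23% with these placeholder values

# The "2.7-fold increase" for personal training is a simple ratio of two scores:
print(f"Peak-to-baseline multiple: {100 / 37:.1f}x")  # 100 (Jan 2026 peak) / 37 (Jan 2025)
```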

The second part looks into how the five top workout apps for iPhone — Strava, Nike Training Club, Peloton, LADDER, and Fitness+ — handle data collection. These apps were selected from a CNET list² based on the largest number of monthly active users in 2025, as reported by Similarweb, with the exception of the preinstalled Fitness+, for which such data was not available. However, Fitness+ is likely used by most Apple device owners due to its default presence. We examined their data collection practices using information from the Apple App Store and reviewed their privacy policies for any details related to AI model training.

By combining these approaches, the study aims to provide a clear picture of current fitness interests and underscore the importance of data privacy in the digital fitness landscape.

DIW Editor's note: This analysis is based on Google Trends data, Apple App Store privacy labels, and publicly available company privacy policies. Google Trends reflects relative search interest rather than direct user behavior, but is widely used to identify broad interest patterns. App Store privacy labels are self-reported by developers within Apple’s standardized disclosure framework. Statements about AI and data use are derived from policy disclosures and may not reflect full technical implementation or all internal processing practices.

For the complete research material behind this study, click here.

Data was collected from:

Google Trends (2026). Explore search trends; Apple (2026). App Store.

References:

¹ Ramírez Varela, A., Bauman, A., Woods, C.B. et al. (2026). Low global physical activity despite two decades of policy progress;

² CNET (2026). The 7 Best Workout Apps That Are Fitness Expert-Approved;

³ Strava (2026). Privacy Policy;

⁴ Peloton (2025). Privacy Policy;

⁵ Apple (2026). User privacy and data use.

This post was originally published on Surfshark and republished on DIW with permission.

Reviewed by Asim BN.

Read next: 

• Why are communities pushing back against data centers?

• Algorithms don’t care: how AI worsens the double burden for Indonesia’s female gig workers


by External Contributor via Digital Information World

Algorithms don’t care: how AI worsens the double burden for Indonesia’s female gig workers

Suci Lestari Yuana, Universitas Gadjah Mada

Artificial intelligence is often celebrated as the future of work. It is efficient, innovative and neutral. Yet, for many women in Indonesia’s gig economy, AI feels like a source of mounting pressure.

In my recent research on female gig workers in Indonesia, I examine what I call AI colonialism. This term describes how colonial influence persists today through technology and digital systems that maintain control.

This concept captures how powerful actors use AI – often based in the Global North – to exploit workers in the Global South. Much like historical colonialism, this digital iteration relies on the extraction of data, labour and resources to cement unequal power relations.

In Indonesia, AI-driven platforms like ride-hailing and e-commerce draw on informal labour but push the risks and responsibilities back onto workers. But women pay the highest price because algorithms fail to recognise the realities of care work, safety concerns and social norms.

AI and the gendered restructuring of work

Indonesia’s labour market has long been defined by informality. Millions are working without formal contracts or social protections. Tech companies like Gojek, Grab, Maxim and Shopee didn’t formalise this workforce – they only digitised it.

Image: Grab / Unsplash

Drivers are classified as partners rather than employees. This means no minimum wage, no sick pay and no maternity leave. Income is dictated entirely by completed tasks and algorithmic ratings.

For women, this structure collides with the so-called “double burden” since they are responsible for paid work and unpaid care.

Lia, a 33-year-old food delivery rider, wakes before sunrise to cook and get her children ready for school. It is only after she has cleared her domestic duties that she finally logs into the app.

“The system doesn’t know I have children,” she told me. “It only knows whether I am online.”

Platform algorithms reward constant, uninterrupted availability. Incentive schemes demand a specific number of trips within narrow time windows – a high bar for those with domestic ties.

If Lia logs off to pick up her children, she risks losing potential bonuses. If she reduces her hours due to menstrual pain or fatigue, her performance metrics drop.

Neoliberal capitalism relies on a massive amount of unpaid “invisible labour”, such as childcare and housework, but refuses to pay for it or provide a safety net for those who do it. Far from correcting this imbalance, AI systems make things worse.

When Cinthia, a female food delivery rider and a single mother of a one-year-old, fell ill and turned off her app for several days, she noticed fewer job offers upon returning. “It felt like the system punished me,” she said. “Now I’m afraid to stop working.”

The algorithm does not explicitly discriminate. However, it operates on the assumption of a worker without caregiving constraints – a norm that systematically disadvantages women.

Discrimination behind a ‘neutral’ interface

The digital economy often claims neutrality. But gender bias persists.

Yanti, a 43-year-old ride-hailing driver in Yogyakarta, regularly messages male passengers before pickup: “I am a woman driver. Is that okay?”

Many cancel immediately.

The app records cancellations. It does not record gender bias.

Because Yanti avoids working late at night for safety reasons, she misses out on rush-hour incentives. The system, however, doesn’t account for safety – it simply interprets her absence as lower productivity.

Scholars like Virginia Eubanks have pointed out that automated systems often mirror and amplify social inequalities rather than eliminate them.

In Indonesia’s platform economy, discrimination isn’t necessarily hard-coded. It is a byproduct of a design logic that favours efficiency over equity.

In India, women drivers also report earning less on average than their male counterparts, partly due to safety-driven choices regarding timing and route selection. The algorithm does not account for risk in its calculations. It only measures raw output.

Safety, surveillance and algorithmic discipline

For women drivers, safety is a constant negotiation.

Around 90% of the women in our focus group discussions chose food delivery because it felt safer than ride-hailing. Even so, harassment persists in delivery work.

Lia shared how a male colleague targeted her with inappropriate comments as they waited for orders. “It’s not only customers,” she said. “Sometimes it’s other drivers.”

During the COVID-19 pandemic, gig workers were labelled “essential”. Yet their income dropped dramatically by as much as 67% in early 2020. To cover the loss, many worked 13 or more hours per day.

Platforms maintained their rigid performance metrics throughout the crisis. Drivers who are forced to stop working due to illness often see their ratings decline. Health vulnerability was translated directly into an algorithmic penalty.

This reflects labour discipline through digital infrastructure: control shifting from foreman to code.

AI colonialism is more than just foreign ownership. It is about the way extractive logics are woven into everyday digital systems. Workers bear the burden of labour, data, time and risk – yet the platforms hold all the power over algorithmic governance.

Coping, solidarity and everyday resistance

Female gig workers have built dense networks of solidarity through WhatsApp and Telegram groups. They share information about policy changes, warn each other about unsafe customers and exchange strategies for navigating algorithmic shifts.

If an account becomes “gagu/silent” (receiving few orders), experienced drivers “warm it up” by temporarily boosting its activity. They lend money for fuel. They pool resources for vehicle repairs.

When someone faces harassment, others circulate the information quickly to protect fellow drivers. When a member was suspended, they visited the platform office together.

Rather than waiting to be formally acknowledged as employees, these women build protection among themselves. This “solidarity over recognition” emerges from shared vulnerability as mothers, caregivers and workers in male-dominated spaces.

Their mutual aid turns care into a strategy and a form of “everyday resistance” – subtle acts that challenge dominant systems, while reflecting a distinctly feminist ethic of survival through relational solidarity.

Beyond innovation narratives

AI is not colonial by design. But when embedded in platform capitalism within unequal societies, it can reproduce colonial patterns of exploitation and loss of ownership.

If we are serious about building just digital futures, we must move beyond innovation narratives and listen to workers, especially women and vulnerable groups in the Global South.

Their stories are a vital reminder that behind every “efficient” algorithm is a human being navigating the delicate balance of survival, dignity and hope. The Conversation

Suci Lestari Yuana, Lecturer at the Faculty of Social and Political Sciences, Universitas Gadjah Mada

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• A pocket-sized personal trainer: AI-written texts aim to get older adults moving

• Who's Tuned In (And Out) of Science And Tech?


by External Contributor via Digital Information World

Saturday, April 11, 2026

A pocket-sized personal trainer: AI-written texts aim to get older adults moving

Artificial intelligence can write text messages encouraging physical activity that most older adults consider appropriate and good quality, but their feelings about AI—and whether they know AI wrote the message—impact their response, a new study in the Journals of Gerontology suggests.

The research is an important first step in helping health programs use AI to support large-scale behavior change, said lead author Allyson Tabaczynski, postdoctoral research fellow at the University of Michigan School of Kinesiology.

Tabaczynski and colleagues at U-M and Penn State University asked 630 adults aged 40 and older to read 80 AI-written text messages designed to motivate people to move more and sit less. Participants flagged messages they considered culturally insensitive and rated each message's overall quality.

Image: Godspower Abdulahi / Unsplash

Key takeaways:
  • The results were encouraging. Of nearly 50,000 ratings, only about 5% were flagged as culturally insensitive and roughly 6% had quality problems.
  • Knowing the texts were written by AI and feeling more positive about AI was linked to people flagging more messages as culturally insensitive.
  • Messages that emphasized sitting less (compared to moving more) or that described preparing for activity (compared to performing physical activity) received more low-quality ratings.
The most interesting finding was that even people who liked AI didn’t let it off the hook—even when they knew beforehand that AI wrote the prompt, Tabaczynski said.

“Initially, I thought this was a little counterintuitive,” she said. “If you have a more positive attitude toward AI, you might also just have more general knowledge of some of the biases or limitations that AI can have in its output or in its training data.”

Half of the participants were told beforehand that the messages were AI-generated, and this group also rated more of the messages as possibly culturally insensitive when they had more positive attitudes toward AI.

When participants raised quality issues, the problem typically wasn’t overt offensiveness but relevance. Some messages simply didn’t fit a person’s lifestyle or might not fit someone else’s culture—for instance, a message suggesting dancing (“I don’t dance”) or advising people to stand for their morning coffee (“I don’t drink coffee”).

Those responses, Tabaczynski said, suggest AI messaging may be broadly acceptable while still needing better tailoring to individuals.

And that’s not as easy as writing a message and pressing send. The team went through about 18 rounds of internal review, iterating on prompts and checking outputs to ensure the messages were evidence-based, varied and appropriate for the target audience.

AI could make this scalable, but the recipients still have to be willing to engage with the messages. The bottom line, Tabaczynski said, is that people’s perception of AI matters.

“If someone is receiving a health intervention that uses AI, their perceptions of AI are going to impact how they’re evaluating or responding to that intervention,” she said. “So it’s something that researchers and interventionists have to take into account as they’re designing their interventions with this technology.”

Study co-authors include: Yingjia Liu, Lizbeth Benson and David Conroy of U-M and Saeed Abdullah of Penn State.

The study was funded by the U-M Roybal Center, which is supported by the National Institute on Aging of the National Institutes of Health under Award Number P30AG086637. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Study: Characterizing Middle-aged and Older Adults’ Perceptions of the Cultural Sensitivity and Quality of Generative Artificial Intelligence-authored Text Messages to Promote Physical Activity.

This post was originally published on the University of Michigan News and is republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 


by External Contributor via Digital Information World

Friday, April 10, 2026

The more commodified your job, the more likely AI can do it – lessons from online freelancing

Fabian Stephany, University of Oxford
Image: Daniel Thomas / Unsplash

Not long ago, if you needed a speech polished, a document translated or a logo designed, you would probably have hired a freelancer online. Millions of people did exactly that. They went to platforms such as Fiverr and Upwork and paid someone (maybe on the other side of the world) to do the job.

In 2023, online gig workers were estimated to number between 154 million and 435 million globally. As such, they could represent as much as 12.5% of the global labour force.

Today, however, many people do something else. They open ChatGPT. Generative AI now acts as a copy editor, translator, illustrator and research assistant in one. It can summarise a report in seconds, write social media posts, create a presentation or produce a simple logo at virtually no cost.

What, then, has happened to the freelancers who used to do this work?

Some freelancers are struggling. But perhaps surprisingly, others are doing better than ever.

Demand and wages have fallen for some kinds of online freelance work. Translation, basic copywriting and simple graphic design have been hardest hit. One study found that demand for freelance writers fell by up to 30% after the release of generative AI tools. Other research suggests that freelancers who are highly exposed to AI saw earnings fall by as much as 14%.

Yet there is also evidence that many freelancers are thriving. Freelancer platform Upwork reports that higher-value contracts – those worth more than US$1,000 (£745) – increased across various disciplines after the arrival of generative AI. Freelancers using AI-related skills earn around 40% more than comparable freelancers who do not.

How can both of these things be true? The answer becomes clearer when you stop thinking about “freelancers” as one group and instead look at the tasks and skills they perform.

Some kinds of freelance work are highly commodified. They consist of narrowly defined, repetitive tasks that can be clearly described and easily compared. This could be things like translating a document, summarising a report, drafting a press release or designing a basic logo.

These tasks are exactly what generative AI is good at. They rely on patterns, templates and predictable instructions. The more closely a freelancer’s work resembles the tasks that AI can perform, the more likely it is to come under pressure.

But other freelancers do not sell a single narrow skill. They sell a more complex bundle of expertise. A legal translator does not merely convert words from one language to another. They understand legal terminology, cultural nuance and the risks of getting a phrase wrong.

Similarly, a branding consultant combines design with market research and consumer psychology. A software developer may use AI to generate code, but still needs to understand the client’s business problem to decide which solution actually works.

These workers can use AI to automate the repetitive parts of their jobs while concentrating on the aspects that clients still value most: expertise, judgment and trust.

Online today, in the office tomorrow

This matters far beyond online freelancing platforms. Online labour markets often act as an early warning system for the wider economy. This work is more transactional and less protected by the institutions that shape conventional employment (things like long-term contracts, internal promotion ladders and unions).

Because tasks are posted, bought and completed on the open market, technological change shows up there more quickly than in ordinary workplaces. What happens on Fiverr or Upwork today may happen in offices tomorrow.

This is already becoming visible in law firms, consultancy companies and marketing agencies. Many junior employees spend much of their time summarising documents, preparing presentations, drafting reports or conducting basic research. These are precisely the kinds of tasks that AI can perform.

Recent evidence from the US labour market suggests that younger and less-experienced workers are already bearing the brunt of AI-related disruption. Senior workers, by contrast, tend to do more complex work, combining technical knowledge with experience and human interaction.

The response should not be to compete with AI at the things AI already does well. Instead, workers need help building deeper forms of expertise and combining skills in ways that are harder to automate.

This is in the interest of workers, but also of the platforms themselves. Fiverr, Upwork and others promise clients efficient and high-quality work. If routine tasks are increasingly automated away, they will depend more heavily on workers who can offer something more than a standardised service.

That means platforms should actively provide skill-building courses, training resources and guidance on how to use AI productively. They could also offer micro-credentials that certify newly acquired expertise. These credentials have been found to help workers enter online labour markets and increase their earnings.

The challenge, then, is not to stop people from using AI. It is to ensure that workers are not trapped in forms of work that are so narrow, standardised and commodified that they can easily be automated away. The future of online (and onsite) work may depend less on whether we use AI than on whether our jobs can be reduced to something an AI can easily imitate. The Conversation

Fabian Stephany, Assistant Professor, AI and Work, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Ayaz Khan.

Read next: As Social Media Scales Back Fact-Checking, Can Technologies Fill the Gap?


by External Contributor via Digital Information World

Thursday, April 9, 2026

As Social Media Scales Back Fact-Checking, Can Technologies Fill the Gap?

by Wendy Glauser, JMIR Correspondent

Part one of this series [1] showed how researchers are working with social media influencers to boost accurate health information online. In part two, we explore technological solutions for detecting and combating misinformation.

Image: Hartono Creative Studio / Unsplash

Misinformation is increasingly spread with single clicks, bots, and artificial intelligence (AI) deepfakes. AI-generated images and videos promote fake treatments, even using deepfake versions of renowned doctors’ likenesses to gain credibility [2]. In an age where generative AI is increasing the volume and speed of health misinformation [3] and agencies like the World Health Organization are raising alarms about the impact on vaccine trust and public health [4], are AI and algorithm-based technologies for combating that misinformation keeping up?

While evidence suggests technological solutions to misinformation on social media are effective, researchers worry that social media companies’ interest in employing, evaluating, and improving these tools has waned in recent years.

Common technologies for combating misinformation include everything from algorithmic labeling of posts that contain misinformation, to downregulation of AI-deemed inaccurate posts, to mass awareness campaigns that encourage critical thinking [5-7].

Cameron Martel, PhD—assistant professor of marketing at the Johns Hopkins Carey Business School—explains that in the late 2010s and early 2020s, major social media companies, including Facebook and Twitter, employed algorithms to identify potentially false articles and engaged third-party fact-checkers to verify posts.

In 2023, he led a large study of warning labels, in which over 14,000 participants in the United States were exposed to true and false headlines and asked about their belief in the headlines or interest in sharing them [8]. Half of the participants were exposed to warning labels when presented with false information, while half were not.

Fact-checking labels reduced belief in false information by nearly 28% and reduced misinformation sharing by roughly 25% relative to the control group. The study also showed that among those with low trust in fact-checkers, warning labels nonetheless reduced misinformation sharing by more than 16%.

In January 2025, however, Meta announced it would end its partnership with third-party fact-checkers and instead adopt community notes, whereby everyday users comment on the accuracy of information [9]. If comments are upvoted by people from across the political spectrum, then they’ll appear prominently.
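
As a rough illustration of that cross-spectrum requirement, the sketch below applies a simplified bridging rule: a note is surfaced only if raters from every self-identified political group rate it helpful on average. The group labels, minimum rater counts, and 0.6 threshold are illustrative assumptions; production systems such as X's Community Notes use a more elaborate model, so this is only a sketch of the idea, not any platform's actual algorithm.

```python
from collections import defaultdict
from typing import Iterable, Tuple

# Each rating: (rater_group, helpful). Group labels and thresholds below are
# illustrative assumptions, not parameters of any real platform's ranking system.
Rating = Tuple[str, bool]

def note_is_surfaced(ratings: Iterable[Rating],
                     min_raters_per_group: int = 3,
                     helpful_threshold: float = 0.6) -> bool:
    """Surface a note only if every rater group independently finds it helpful."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)

    if len(by_group) < 2:                # require agreement across at least two groups
        return False
    for votes in by_group.values():
        if len(votes) < min_raters_per_group:
            return False                 # not enough raters in this group yet
        if sum(votes) / len(votes) < helpful_threshold:
            return False                 # this group, on balance, finds the note unhelpful
    return True

# Example: a note rated helpful by both left- and right-leaning raters is surfaced.
ratings = [("left", True)] * 4 + [("left", False)] + [("right", True)] * 3 + [("right", False)]
print(note_is_surfaced(ratings))  # True
```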

Such community notes are likely to be trusted if the process behind community note generation is transparent and reasonable, Martel says. In a study published last year, Martel and colleagues [10] found that while both Democrat- and Republican-leaning participants preferred expert fact-checkers over laypeople, laypeople “juries” could be deemed equally trustworthy as or more trustworthy than experts if their size was large enough, they had consulted with each other, and they had equal representation across political groups.

The Rise of AI Fact-Checking

There is far less information about how the public views AI fact-checking tools and their accuracy. A study (available as a preprint [11]) suggests that the large language models (LLMs) Perplexity and Grok largely align with community note decisions about posts that are misleading. However, 21% to 28% of posts that community notes flagged as misleading were judged true by the AI bots.

Concerningly, the authors observe that the launch of the Grok bot on X in early March 2025 co-occurred with a substantial reduction in community note submissions, suggesting that social media users may see AI as an alternative, rather than as a complement, to democratized fact-checks.

While Martel points out that AI can be very helpful for identifying and responding to “well debunked conspiracy theories or often repeated myths,” the limit of AI fact-checking has become glaring during breaking news events. Al Jazeera reported, for example, that Grok struggled to recognize AI-generated media in conflict situations and incorrectly said that a trans pilot was responsible for a helicopter crash, among many other breaking news fact-checking errors [12].

“Large language models don’t have any existing corpus of information about what’s happening currently,” explains Martel. Yet, “there’s at least anecdotal evidence that people are still trying to use LLMs to find out information about unfolding events, and that is troubling.”

Martel says that ultimately, democratized fact-checking through community notes, AI fact-checking, and professional fact-checkers “have great promise” when used in tandem. For example, AI systems could refer breaking news or claims that they can’t easily verify to human fact-checkers, social media users could rate the accuracy of information fact-checked by AI, and AI and algorithm-based systems could respond to real-time feedback from democratized fact-checks.

But fact-checking systems should be transparent, continually audited, assessed for effectiveness, and improved. And that’s not happening. “Right now, it seems like there is no corporate will to invest heavily in these types of content moderation practices, so while I’m theoretically hopeful about these technologies, in practice, I’m less hopeful,” says Martel.

“Content-Neutral” Interventions Can Promote Critical Thinking on Social Media

Interventions that are “content neutral” are another scalable solution to reducing misinformation, says Hause Lin, PhD—a researcher at the Massachusetts Institute of Technology and Cornell University, and a data scientist at the World Bank. “People are going to be producing all kinds of weird content that you just will not be able to anticipate,” he explains, but interventions that encourage critical thinking and help people spot common propaganda tactics can blunt the influence of misinformation.

In 2023, Lin and colleagues [7] assessed the effectiveness of Facebook and Twitter ads that encouraged people to consider the accuracy of information before they shared it. The Facebook study, which involved 33 million users, found that these accuracy prompts led to a 2.6% reduction in misinformation sharing among users who had previously shared misinformation (as flagged by third-party fact-checkers or Facebook’s internal system). The Twitter study, which relied on data from over 157,000 users, showed that accuracy prompts reduced misinformation sharing by up to 6.3% among users who saw at least one ad and had previously shared misinformation.

The magnitude of the effect could be much higher with different types of accuracy prompts that are designed to reach more people over longer periods of time, Lin says. (The Facebook study only assessed user behavior for an hour after the ad was shared, while the Twitter study evaluated user behavior over days to weeks, Lin explains.) Regardless, 6% of millions is a significant impact for a relatively “low-cost” intervention.

The goal of the project was to jolt social media users from an emotional state to a reflective state, Lin says. “When people are scrolling, they are often not thinking reflectively but intuitively. They’re thinking ‘This gets me worked up so I’m going to share it with the world,’” he explains. “If you slow them down just a little bit, and say, ‘Do you want to think more about whether this is true?’ that actually reduces misinformation.”

Still, Lin acknowledges that large-scale content moderation may not align with the profit motive. For example, Lin recently studied the effect of “prosocial” celebrity messages aimed at countering ethnic hate–driven rhetoric on social media in Nigeria. A preprint of the study [13] suggests that people who saw the videos were less likely to share hate content but also more likely to reduce the time they spent on Twitter overall. “The side effects of interventions like this can be unpredictable,” Lin says.

There is growing evidence that multipronged efforts can help counter health and other misinformation, and even small efforts can make an impact. Whether social media companies are willing to invest in these initiatives for the broader social good remains to be seen.

Originally published in the Journal of Medical Internet Research (JMIR). Licensed under CC BY 4.0.

Conflicts of Interest: None declared.

J Med Internet Res 2026;28:e95730
URL: https://ift.tt/dkjAFPM

by External Contributor via Digital Information World

Just how bad are generative AI chatbots for our mental health?

Alexandre Hudon, Université de Montréal
Chatbots offer companionship and support, yet cannot replace clinical judgment or comprehensive human care.

Image: Abdelrahman Ahmed - Pexels

Generative AI chatbots are now used by more than 987 million people globally, including around 64 per cent of American teens, according to recent estimates. Increasingly, people are using these chatbots for advice, emotional support, therapy and companionship.

What happens when people rely on AI chatbots during moments of psychological vulnerability? We have seen media scrutiny of a few tragic cases involving allegations that AI chatbots were implicated in wrongful deaths. And a jury in Los Angeles recently found Meta and YouTube liable for addictive design features that led to a user’s mental health distress.

Does media coverage reflect the true risks of generative AI for our mental health?

Our team recently led a study examining how global media are reporting on the impact of generative AI chatbots on mental health. We analyzed 71 news articles describing 36 cases of mental health crises, including severe outcomes such as suicide, psychiatric hospitalization and psychosis-like experiences.

We found that mass media reports of generative AI–related psychiatric harms are heavily concentrated on severe outcomes, particularly suicide and hospitalization. They frequently attribute these events to AI system behaviour despite limited supporting evidence.

Compassion illusions

Generative AI is not just another digital tool. Unlike search engines or static apps, AI chatbots like ChatGPT, Gemini, Claude, Grok, Perplexity and others produce fluent, personalized conversations that can feel remarkably human.

This creates what researchers call “compassion illusions”: the sense that one is interacting with an entity that understands, empathizes and responds meaningfully.

In mental health contexts, this matters. Especially as a new wave of apps are created with a specific focus on companionship, such as Character.AI, Replika and others.

In this BBC documentary, broadcaster and mathematician Hannah Fry talks to Jacob about his Replika chatbot ‘girlfriend’ named Aiva.

Studies have shown that generative AI can simulate empathy and provide responses to distress, but lacks true clinical judgment, accountability and duty of care.

In some cases, AI chatbots may offer inconsistent or inappropriate responses to high-risk situations such as suicidal ideation.

This gap — between perceived understanding and actual capability — is where risk can emerge.

What the media is reporting

Across the articles we analyzed, the most frequently reported outcome was suicide. This represented more than half of cases with clearly described severity.

Psychiatric hospitalization was the second-most commonly reported outcome. Notably, reports involving minors were more likely to be about fatal outcomes.

But these numbers do not reflect real-world incidence. They reflect what gets reported. In general, media coverage of stressful events tends to amplify severe and emotionally charged cases, as negative and uncertain information captures attention, elicits stronger emotional responses and sustains cycles of heightened vigilance and repeated exposure. This in turn reinforces perceptions of threat and distress.

For AI-related content, media reports often rely on partial evidence (such as chat transcripts) while rarely including medical documentation. In our data set, only one case referenced formal clinical or police records.

This creates a distorted but influential picture: one that shapes public perception, clinical concern and regulatory debate.

Beyond ‘AI caused it’

One of our most important findings relates to how causality is framed. In many of the articles we reviewed, AI systems were described as having “contributed to” or even “caused” psychiatric deterioration.

However, the underlying evidence was often limited. Alternative explanations — such as pre-existing mental illness, substance use or psycho-social stressors — were inconsistently reported.

In psychiatry, causality is rarely simple. Mental health crises typically arise from multiple interacting factors. AI may play a role, but it is likely part of a broader ecosystem that includes individual vulnerability and context.

A more useful way to think about this is through interaction effects: how technology interacts with human cognition and emotion. For example, conversational AI may reinforce certain beliefs, provide excessive validation or blur boundaries between reality and simulation.

The problem of over-reliance

Another recurring pattern in media reports is intensive use. Many of the cases we reviewed described prolonged, emotionally significant interactions with chatbots — framed as companionship or even romantic relationships. This raises an issue: over-reliance.

Because these systems are always available, non-judgmental and responsive, they can become a primary source of support. But unlike a trained clinician or even a concerned friend, they cannot recognize when someone is getting worse, pause or redirect harmful interactions. They cannot take steps to ensure a person connects with appropriate care in moments of crisis.

In clinical terms, this could lead to what might be described as “maladaptive coping substitution”: replacing complex human support systems with a simplified, algorithmic interaction.

Lack of reliable data

Despite growing concern, we are still at an early stage of understanding the impact of generative AI chatbots on user mental health.

There is currently no reliable estimate of how often AI-related harms occur, or whether they are increasing. We lack reliable data on how many people use these tools safely versus those who experience problems. And most evidence comes from case reports or media narratives, not systematic clinical studies.

This is not unusual. In many areas of medicine, early warning signals emerge outside formal research (through case reports, legal cases or public discourse) before being systematically studied.

One example is the thalidomide tragedy, when initial reports of birth defects in infants preceded formal epidemiological confirmation and ultimately led to the development of modern pharmacovigilance systems.

AI and mental health may be following a similar trajectory.

Moving forward responsibly

The challenge is not to panic, but to respond thoughtfully.

We need better evidence. This includes systematic monitoring of adverse events, clearer reporting standards and research that distinguishes correlation from causation. Safeguards — such as crisis detection, escalation protocols and transparency about limitations — must be strengthened and evaluated.

Furthermore, clinicians and the public need guidance. Patients are already using these tools. Ignoring this reality risks widening the gap between clinical practice and lived experience.

Finally, we must recognize that generative AI is not just a technological innovation — it is a psychological one. It changes how people think, feel and relate.

Understanding that shift may be one of the most important mental health challenges of the coming decade. The Conversation

Alexandre Hudon, Medical psychiatrist, clinician-researcher and clinical assistant professor in the department of psychiatry and addictology, Université de Montréal

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Why AI shouldn’t be used even to decide ‘simple’ court cases


by External Contributor via Digital Information World

Wednesday, April 8, 2026

Majority of Americans Worry Government Misuse of Personal Data Could Lead to Surveillance, Chilling of Benefits, and Demand Accountability

by Elizabeth Laird, Maddy Dwyer, Quinn Anex-Ries

Limiting the collection, sharing, and consolidation of personal data that is held by government agencies has been a decades-long, bipartisan priority across the United States. [1] But these limits have been challenged over the past year as the federal government has cast aside long-standing privacy norms and initiated unprecedented access to and sharing of administrative data held by federal and state agencies. These actions have spurred significant pushback from the public, states, and civil society organizations, as well as the courts. They have also prompted many individuals in the United States to call into question how and why the government uses their information.

To better understand public sentiment and concerns around the government’s collection, sharing, and consolidation of personal data, the Center for Democracy & Technology (CDT) conducted nationally representative polling of U.S. adults (see more on the methodology, including n sizes, on p. 12 of the report). CDT found that concern is consistent and high and that people across the United States want to hold government agencies accountable for protecting the privacy of their personal data. Specifically:
  • A majority of Americans (74 percent) are concerned about the privacy and security of their personal data that is held by the government;
  • Americans report that government misuse of data could lead to real-life impacts, such as surveillance and the chilling of rightful access to public benefits;
  • Americans agree that privacy laws and policies are important but are not familiar with their legal rights;
  • Worries about personal data are high, with certain data elements and reasons for data sharing raising particular concern, especially related to law and immigration enforcement; and
  • Americans want government held accountable for protecting their personal data.
Finally, certain communities express higher levels of concern regarding personal data that is stored by government agencies:
  • Communities of color are more concerned about data sharing with law and immigration enforcement agencies;
  • Older Americans are consistently more concerned about the privacy and security of personal data that is collected and stored by government agencies; and
  • Concerns and demands for government accountability are high across political affiliation, with Democrats reporting higher levels of concern on issues related to sharing data without consent.



Read the full report.

Read the summary brief.

Explore the privacy explainer.

Read the coalition letter + full list of signatories.

Read the press release.

[1] Elizabeth Laird, Kristin Woelfel, and Quinn Anex-Ries, CDT and The Leadership Conference Release New Analysis of DOGE, Government Data, and Privacy Trends, Center for Democracy & Technology (Mar. 19, 2025), https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-doge-government-data-and-privacy-trends/; U.S. Congress, Senate Committee on Government Operations, Legislative History of the Privacy Act of 1974 (Sept. 1976), https://www.justice.gov/d9/privacy_source_book.pdf.

Note: This post was originally published on CDT.org, and is republished here under CC BY 4.0 with minor edits, including the addition of percentages, charts, and an updated title.

Reviewed by Irfan Ahmad.

Read next: Americans Use AI More but Express Low Trust, Gen Z Most Likely to Expect Job Losses
by External Contributor via Digital Information World