Saturday, February 21, 2026

Study: AI chatbots provide less-accurate information to vulnerable users

By Media Lab | MIT News

Research from the MIT Center for Constructive Communication finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins.

Large language models (LLMs) have been championed as tools that could democratize access to information worldwide, offering knowledge in a user-friendly interface regardless of a person’s background or location. However, new research from MIT’s Center for Constructive Communication (CCC) suggests these artificial intelligence systems may actually perform worse for the very users who could most benefit from them.

A study conducted by researchers at CCC, which is based at the MIT Media Lab, found that state-of-the-art AI chatbots — including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3 — sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. The models also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language.

“We were motivated by the prospect of LLMs helping to address inequitable information accessibility worldwide,” says lead author Elinor Poole-Dayan SM ’25, a technical associate in the MIT Sloan School of Management who led the research as a CCC affiliate and master’s student in media arts and sciences. “But that vision cannot become a reality without ensuring that model biases and harmful tendencies are safely mitigated for all users, regardless of language, nationality, or other demographics.”

A paper describing the work, “LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users,” was presented at the AAAI Conference on Artificial Intelligence in January.

Systematic underperformance across multiple dimensions

For this research, the team tested how the three LLMs responded to questions from two datasets: TruthfulQA and SciQ. TruthfulQA is designed to measure a model’s truthfulness (by relying on common misconceptions and literal truths about the real world), while SciQ contains science exam questions testing factual accuracy. The researchers prepended short user biographies to each question, varying three traits: education level, English proficiency, and country of origin.
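The evaluation setup described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the persona text, the sample question, and the function names (`build_prompt`, `evaluate`, `query_model`, `grade`) are hypothetical, not the authors' actual code or data.

```python
# Hypothetical sketch of the study design: prepend a short user biography to
# each benchmark question and compare model accuracy across personas.

personas = {
    "control": "",  # no biography, as in the paper's baseline condition
    "less_educated_non_native": (
        "I didn't finish high school and English is not my first language. "
    ),
}

# Stand-in for a TruthfulQA-style item (illustrative, not from the dataset).
questions = [
    {"q": "What happens if you crack your knuckles a lot?",
     "answer": "nothing harmful"},
]

def build_prompt(bio, question):
    """Prepend the user biography (possibly empty) to the question."""
    return f"{bio}{question}"

def evaluate(query_model, grade):
    """Return per-persona accuracy for a model callable and a grading function."""
    results = {}
    for name, bio in personas.items():
        correct = 0
        for item in questions:
            reply = query_model(build_prompt(bio, item["q"]))
            correct += grade(reply, item["answer"])
        results[name] = correct / len(questions)
    return results
```

Comparing the per-persona accuracies from `evaluate` across many items is what surfaces the gaps the researchers report.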

Across all three models and both datasets, the researchers found significant drops in accuracy when questions came from users described as having less formal education or being non-native English speakers. The effects were most pronounced for users at the intersection of these categories: those with less formal education who were also non-native English speakers saw the largest declines in response quality.

The research also examined how country of origin affected model performance. Testing users from the United States, Iran, and China with equivalent educational backgrounds, the researchers found that Claude 3 Opus in particular performed significantly worse for users from Iran on both datasets.

“We see the largest drop in accuracy for the user who is both a non-native English speaker and less educated,” says Jad Kabbara, a research scientist at CCC and a co-author on the paper. “These results show that the negative effects of model behavior with respect to these user traits compound in concerning ways, thus suggesting that such models deployed at scale risk spreading harmful behavior or misinformation downstream to those who are least able to identify it.”

Refusals and condescending language

Perhaps most striking were the differences in how often the models refused to answer questions altogether. For example, Claude 3 Opus refused to answer nearly 11 percent of questions for less educated, non-native English-speaking users — compared to just 3.6 percent for the control condition with no user biography.

When the researchers manually analyzed these refusals, they found that Claude responded with condescending, patronizing, or mocking language 43.7 percent of the time for less-educated users, compared to less than 1 percent for highly educated users. In some cases, the model mimicked broken English or adopted an exaggerated dialect.

The model also refused to provide information on certain topics specifically for less-educated users from Iran or Russia, including questions about nuclear power, anatomy, and historical events — even though it answered the same questions correctly for other users.

“This is another indicator suggesting that the alignment process might incentivize models to withhold information from certain users to avoid potentially misinforming them, although the model clearly knows the correct answer and provides it to other users,” says Kabbara.

Echoes of human bias

The findings mirror documented patterns of human sociocognitive bias. Research in the social sciences has shown that native English speakers often perceive non-native speakers as less educated, intelligent, and competent, regardless of their actual expertise. Similar biased perceptions have been documented among teachers evaluating non-native English-speaking students.

“The value of large language models is evident in their extraordinary uptake by individuals and the massive investment flowing into the technology,” says Deb Roy, professor of media arts and sciences, CCC director, and a co-author on the paper. “This study is a reminder of how important it is to continually assess systematic biases that can quietly slip into these systems, creating unfair harms for certain groups without any of us being fully aware.”

The implications are particularly concerning given that personalization features — like ChatGPT’s Memory, which tracks user information across conversations — are becoming increasingly common. Such features risk differentially treating already-marginalized groups.

“LLMs have been marketed as tools that will foster more equitable access to information and revolutionize personalized learning,” says Poole-Dayan. “But our findings suggest they may actually exacerbate existing inequities by systematically providing misinformation or refusing to answer queries from certain users. The people who may rely on these tools the most could receive subpar, false, or even harmful information.”

Reprinted with permission of MIT News.

Image: Tara Winstead / Pexels

Reviewed by Irfan Ahmad.

Read next: Most AI Bots Lack Published Formal Safety and Evaluation Documents, Study Finds
by External Contributor via Digital Information World

Friday, February 20, 2026

Most AI Bots Lack Published Formal Safety and Evaluation Documents, Study Finds

Story: Fred Lewsey.

Reviewed by Ayaz Khan.

An investigation into 30 top AI agents finds just four have published formal safety and evaluation documents relating to the actual bots.

Many of us now use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports.

However, a new study of the “AI agent ecosystem” suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is “dangerously lagging”.

A research team led by the University of Cambridge has found that AI developers share plenty of data on what these agents can do, while withholding evidence of the safety practices needed to assess any risks posed by AI.

The AI Agent Index, a project that includes researchers from MIT, Stanford and the Hebrew University of Jerusalem, investigated the abilities, transparency and safety of thirty “state of the art” AI agents, based on public information and correspondence with developers.

The latest update of the Index is led by Leon Staufer, a researcher studying for an MPhil at Cambridge’s Leverhulme Centre for the Future of Intelligence. It looked at available data for a range of leading chat, browser and workflow AI bots built mainly in the US and China.

The team found a “significant transparency gap”. Developers of just four AI bots in the Index publish agent-specific “system cards”: formal safety and evaluation documents that cover everything from autonomy levels and behaviour to real-world risk analyses.*

Additionally, 25 out of 30 AI agents in the Index do not disclose internal safety results, while 23 out of 30 agents provide no data from third-party testing, despite these being the empirical evidence needed to rigorously assess risk.

Known security incidents or concerns have only been published for five AI agents, while “prompt injection vulnerabilities” – when malicious instructions manipulate the agent into ignoring safeguards – are documented for two of those agents.

Of the five Chinese AI agents analysed for the Index, only one had published any safety frameworks or compliance standards of any kind.

“Many developers tick the AI safety box by focusing on the large language model underneath, while providing little or no disclosure about the safety of the agents built on top,” said Cambridge University’s Leon Staufer, lead author of the Index update.

“Behaviours that are critical to AI safety emerge from the planning, tools, memory, and policies of the agent itself, not just the underlying model, and very few developers share these evaluations.”

Image: The 2025 AI Agent Index, “Most AI Developers Do Not Publish Safety and Evaluation Documents for Their AI Bots.” For 198 out of 1,350 fields, no public information was found. Missing information is concentrated in the 'Ecosystem Interaction' and 'Safety' categories, and only four agents provide agent-specific system cards.

In fact, the researchers identify 13 AI agents that exhibit “frontier levels” of autonomy, yet only four of these disclose any safety evaluations of the bot itself.

“Developers publish broad, top-level safety and ethics frameworks that sound reassuring, but publish little of the empirical evidence needed to actually understand the risks,” Staufer said.

“Developers are much more forthcoming about the capabilities of their AI agent. This transparency asymmetry suggests a weaker form of safety washing.”

The latest annual update provides verified information across 1,350 fields for the thirty prominent AI bots, as available up to the last day of 2025.

Criteria for inclusion in the Index were public availability, ease of use, and a developer market valuation of over US$1 billion. Some 80% of the indexed bots were released or received major updates in the last two years.

The Index update shows that – outside of Chinese AI bots – almost all agents depend on a few foundation models (GPT, Claude, Gemini), a significant concentration of platform power behind the AI revolution, as well as potential systemic choke points.

Also read: Generative AI has seven distinct roles in combating misinformation

“This shared dependency creates potential single points of failure,” said Staufer. “A pricing change, service outage, or safety regression in one model could cascade across hundreds of AI agents. It also creates opportunities for safety evaluations and monitoring.”

Many of the least transparent agents are AI-enhanced web browsers designed to carry out tasks on the open web on a user’s behalf: clicking, scrolling, and filling in forms for tasks ranging from buying limited-release tickets to monitoring eBay bids.

Browser agents have the highest rate of missing safety information: 64% of safety-related fields unreported. They also operate at the highest levels of autonomy.**

This is closely followed by enterprise agents, business management AI aimed at reliably automating work tasks, with 63% of safety-related fields missing. Chat agents are missing 43% of safety-related fields in the Index.***
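Per-category figures like those above can be derived from the Index's field data. The sketch below shows one plausible way to tally them; the records are made-up illustrations of the Index's structure, not real entries, and the field names are assumptions.

```python
# Hypothetical sketch: compute the share of safety-related fields with no
# public information, broken down by agent category.

from collections import defaultdict

# Each record: an agent's category plus its safety fields; None marks a field
# for which no public information was found.
records = [
    {"category": "browser",
     "fields": {"system_card": None, "third_party_eval": None}},
    {"category": "browser",
     "fields": {"system_card": "published", "third_party_eval": None}},
    {"category": "chat",
     "fields": {"system_card": "published", "third_party_eval": "published"}},
]

def missing_rates(records):
    """Fraction of safety fields with no public information, per category."""
    missing = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        for value in rec["fields"].values():
            total[rec["category"]] += 1
            missing[rec["category"]] += value is None  # True counts as 1
    return {cat: missing[cat] / total[cat] for cat in total}
```

Run over the full 1,350-field dataset, a tally of this shape would yield the 64%, 63%, and 43% missing-rate figures the Index reports for browser, enterprise, and chat agents.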

Staufer points out that there are no established standards for how AI agents should behave on the web. Most agents do not disclose their AI nature to end users or third parties by default.**** Only three agents support watermarking of generated media to identify it as AI-generated.

At least six AI agents in the Index explicitly use types of code and IP addresses designed to mimic human browsing behaviour and bypass anti-bot protections.

“Website operators can no longer distinguish between a human visitor, a legitimate agent, and a bot scraping content,” said Staufer. “This has significant implications for everything from online shopping and form-filling to booking services and content scraping.”

The update includes a case study on Perplexity Comet: one of the most autonomous browser-based AI agents in the Index, as well as one of the most high-risk and least transparent.

Comet is marketed on its ability to “work just like a human assistant”. Amazon has already threatened legal action over Comet not identifying itself as an AI agent when interacting with its services.

“Without proper safety disclosures, vulnerabilities may only come to light when they are exploited,” said Staufer.

“For example, browser agents can act directly in the real world by making purchases, filling in forms, or accessing accounts. This means that the consequences of a security flaw can be immediate and far-reaching.”

Staufer points out that last year, security researchers discovered that malicious content on a webpage could hijack a browser agent into executing commands, while other attacks were able to extract users' private data from connected services.

Added Staufer: “The latest AI Agent Index reveals the widening gap between the pace of deployment and the pace of safety evaluation. Most developers share little information about safety, evaluations, and societal impacts.”

“AI agents are getting more autonomous and more capable of acting in the real world, but the transparency and governance frameworks needed to manage that shift are dangerously lagging.”


by External Contributor via Digital Information World

A few weeks of X’s algorithm can make you more right-wing – and it doesn’t wear off quickly

Timothy Graham, Queensland University of Technology

A new study published today in Nature has found that X’s algorithm – the hidden system or “recipe” that governs which posts appear in your feed and in which order – shifts users’ political opinions in a more conservative direction.

Image: BoliviaInteligente / unsplash

Led by Germain Gauthier from Bocconi University in Italy, it is a rare, real-world randomised experimental study on a major social media platform. And it builds on a growing body of research that shows how these platforms can shape people’s political attitudes.

Two different algorithms

The researchers randomly assigned 4,965 active US-based X users to one of two groups.

The first group used X’s default “For You” feed. This features an algorithm that selects and ranks posts it thinks users will be more likely to engage with, including posts from accounts that they don’t necessarily follow.

The second group used a chronological feed. This only shows posts from accounts users follow, displayed in the order they were posted. The experiment ran for seven weeks during 2023.

Users who switched from the chronological feed to the “For You” feed were 4.7 percentage points more likely to prioritise policy issues favoured by US Republicans (for example, crime, inflation and immigration). They were also more likely to view the criminal investigation into US President Donald Trump as unacceptable.

They also shifted in a more pro-Russia direction in regards to the war in Ukraine. For example, these users became 7.4 percentage points less likely to view Ukrainian President Volodymyr Zelenskyy positively, and scored slightly higher on a pro-Russian attitude index overall.

The researchers also examined how the algorithm produced these effects.

They found evidence that the algorithm increased the share of right-leaning content by 2.9 percentage points overall (and 2.5 points among political posts), compared with the chronological feed.

It also significantly demoted the share of posts from traditional news organisations’ accounts while promoting or boosting posts from political activists.

One of the most concerning findings of the study is the longer-term effects of X’s algorithmic feed. The study showed the algorithm nudged users towards following more right-leaning accounts, and that the new following patterns endured even after switching back to the chronological feed.

In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.

One piece of a much bigger picture

This new study supports the findings of similar research.

For example, a study in 2022, before Elon Musk bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than the left in six of the seven countries studied.

An experimental study from 2025 re-ranked X feeds to reduce exposure to content expressing antidemocratic attitudes and partisan animosity. The researchers found this shifted participants’ feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer” – a shift the authors argued would normally have taken about three years to occur organically in the general population.

My own research offers another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analysed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.

Our findings unearthed a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13 – the day of the assassination attempt on Trump. Views on Musk’s posts surged by 138%, retweets by 238%, and likes by 186%. This far outstripped increases on other accounts.

After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was tilted thereafter towards right-leaning accounts – a trend that continued for the remainder of the time period we analysed in that study.

Not a niche product

This matters because we are not talking about a niche product.

X has more than 400 million users globally. It has become embedded as infrastructure – a key source of political and social communication. And once technical systems become infrastructure, they can become invisible – like background objects that we barely think about, but which shape society at its foundations and can be exploited under our noses.

Think of the overpass bridges Robert Moses designed in New York in the 1930s. These seemed like inert objects. But they were designed to be very low, to exclude people of colour from taking buses to recreation areas in Long Island.

Similarly, the design and governance of social media platforms have real consequences.

The point is that X’s algorithms are not neutral tools. They are an editorial force, shaping what people know, whom they pay attention to, who the outgroup is and what “we” should do about or to them – and, as this new study shows, what people come to believe.

The age of taking platform companies at their word about the design and effects of their own algorithms must come to an end. Governments around the world – including in Australia where the eSafety Commissioner has powers to drive “algorithmic transparency and accountability” and require that platforms report on how their algorithms contribute to or reduce harms – need to mandate genuine transparency over how these systems work.

When infrastructure becomes harmful or unsafe, nobody bats an eye when governments step in to protect us. The same needs to happen urgently for social media infrastructures.

Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Timothy Graham receives funding from the Australian Research Council (ARC) for the Discovery Project, 'Understanding and Combatting "Dark Political Communication"'.

Read next: Generative AI has seven distinct roles in combating misinformation


by External Contributor via Digital Information World

Thursday, February 19, 2026

Generative AI has seven distinct roles in combating misinformation

Reviewed by Ayaz Khan.

Generative AI can be used to combat misinformation. However, it can also exacerbate the problem by producing convincing manipulations that are difficult to detect and can quickly be copied and disseminated on a wide scale. In a new study, researchers have defined seven distinct roles that AI can play in the information environment and analysed each role in terms of its strengths, weaknesses, opportunities and risks.

“One important point is that generative AI has not just one but several functions in combating misinformation. The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and we need to create more effective policies,” says Thomas Nygren, Professor at Uppsala University, who conducted the study together with colleagues at the University of Cambridge, UK, and the University of Western Australia.

From fact-checking to influence – same capacity has double-edged effects

The study is an overview in which researchers from a range of scholarly disciplines have reviewed the latest research on how generative AI can be used in various parts of the information environment. These uses range from providing information and supporting fact-checking to influencing opinion and designing educational interventions, and the study considers the strengths, weaknesses, opportunities and risks associated with each use.

The researchers chose to work with a SWOT framework as this leads to a more practical basis for decisions than general assertions that ‘AI is good’ or ‘AI is dangerous’. A system can be helpful in one role but also harmful in that same role. Analysing each role using SWOT can help decision-makers, schools and platforms discuss the right measures for the right risk.

AI can serve several functions

“The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple ‘solution’ but a technology that can serve several functions at the same time. We identified recurrent patterns in the way AI is used to obtain information, to detect and manage problems, to influence people, to support collaboration and learning, and to design interactive training environments. These functions were summarised in seven roles,” Nygren explains.

The seven roles that the researchers identified as their research evolved were informer, guardian, persuader, integrator, collaborator, teacher and playmaker (see the fact box). The point of the roles is that they can serve as a checklist: they help us to see how each role can contribute to strengthening the resilience of society to misinformation, but also how each role entails specific vulnerabilities and risks. The researchers therefore analysed each role using a SWOT approach: what strengths and opportunities it embodies, but also what weaknesses and threats need to be managed.

“AI must be implemented responsibly”

“We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale. However, risks such as hallucinations, in other words, that AI comes out with ‘facts’ that are wrong, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly. Clear policies are therefore needed on the permissible use of AI.”

The researchers particularly underline the need for:

  • Regulations and clear frameworks for the permissible use of AI in sensitive information environments;
  • Transparency about AI-generated content and systemic limitations;
  • Human oversight where AI is used for decisions, moderation or advice;
  • AI literacy to strengthen the ability of users to evaluate and question AI answers.

“The analysis shows that generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation, but having said that, there is a risk that excessive use could be detrimental for the development of knowledge and make us lazy and ignorant and therefore more easily fooled. Consequently, with the rapid pace of developments, it’s important to constantly scrutinise the roles of AI as ‘teacher’ and ‘collaborator’, like the other five roles, with a critical and constructive eye,” Nygren emphasises.

Article: Nygren, T., Spearing, E. R., Fay, N., Vega, D., Hardwick, I. I., Roozenbeek, J., & Ecker, U. K. H. (2026). The seven roles of generative AI: Potential & pitfalls in combatting misinformation. Behavioral Science & Policy, 0(0). DOI 10.1177/23794607261417815.

For more information: Thomas Nygren, Professor of Education at the Department of Education, Uppsala University, thomas.nygren@edu.uu.se, +46-73-646 86 49

FACT BOX:

The seven roles of generative AI: potential and pitfalls (Nygren et al. 2026).

1) Informer

  • Strengths/opportunities: Can make complex information easier to understand, translate and adapt language, can offer a quick overview of large quantities of information.
  • Problems/risks: Can give incorrect answers (‘hallucinations’), oversimplify and reproduce training data biases without clearly disclosing sources.

2) Guardian

  • Strengths/opportunities: Can detect and flag suspect content on a large scale, identify coordinated campaigns and contribute to a swifter response to misinformation waves.
  • Problems/risks: Risk of false positives/negatives (irony, context, legitimate controversies), distortions in moderation, and lack of clarity concerning responsibility and rule of law.

3) Persuader

  • Strengths/opportunities: Can support correction of misconceptions through dialogue, refutation and personalised explanations; can be used in pro-social campaigns and in educational interventions.
  • Problems/risks: The same capacity can be used for manipulation, microtargeted influence and large-scale production of persuasive yet misleading messages – often quickly and cheaply.

4) Integrator

  • Strengths/opportunities: Can structure discussions, summarise arguments, clarify distinctions, and support deliberation and joint problem-solving.
  • Problems/risks: Can create false balance, normalise errors through ‘neutral synthesis’, or indirectly control problem formulation and interpretation.

5) Collaborator

  • Strengths/opportunities: Can assist in analysis, writing, information processing and idea development; can support critical review by generating alternatives, counterarguments and questions.
  • Problems/risks: Risk of overconfidence and cognitive outsourcing; users can fail to realise that the answer is based on uncertain assumptions and that the system lacks real understanding.

6) Teacher

  • Strengths/opportunities: Can give swift, personalised feedback and create training tasks at scale; can foster progression in source criticism and digital skills.
  • Problems/risks: Incorrect or biased answers can be disseminated as ‘study resources’; risk that teaching becomes less investigative if students/teachers uncritically accept AI-generated content.

7) Playmaker

  • Strengths/opportunities: Can support design of interactive, gamified teaching environments and simulations that train resilience to manipulation and misinformation.
  • Problems/risks: Risk of simplifying stereotypes, ethical and copyright problems, and that gaming mechanisms can reward the wrong type of behaviour if the design is not well considered.

Note: This post was originally published by Uppsala University and republished on Digital Information World (DIW) with permission. The university team confirmed to DIW via email that no AI tools were used in creating the text.

Image: Mikhail Nilov / Pexels

Read next:

• Research Shows How Companies Can Gain Advantage by Prioritizing Customer Privacy

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out
by Press Releases via Digital Information World

Parents Turn to STEM and Hands-On Play to Limit Daily Screen Hours

Reviewed by Ayaz Khan.

Half of America’s young parents are struggling to bond with their kids, and the culprit is nearly inescapable: screen time.

A poll of 2,000 U.S. millennial and Gen Z parents found 42% of them feel disconnected from their children due to technology, with kids spending an average of four hours in front of screens on a typical day.

As a result, parents said they notice their kids are easily distracted (42%), get less physical activity (42%), can be irritable (34%), have trouble sleeping (30%) and disengage with people around them (30%).

Commissioned by Lowe’s and conducted by Talker Research, the study revealed over half of parents (54%) try to encourage less screen time for their kids by providing them with more hands-on activities and outlets, like playing with toys (68%), helping around the home (66%) and coloring (66%).

Other activities, like crafts (63%), reading (60%), building (44%) and STEM-based activities (42%) were also popular ways parents get their kids away from screens.

This can be harder in the winter season, as more than half (56%) of parents say screen time increases when temperatures drop or the weather turns bad.

Parents spend an average of 10 hours per week looking for non-screen activities for their kids and wish they had more free activities for their kids nearby.

Those activities include things they can do as a family (58%), be outdoors (56%), DIY workshops (48%), creative arts and crafts (48%) and educational activities (39%).

For many parents, the inspiration to encourage hands-on activities away from screens comes from their own childhood.

Nearly half (46%) recalled frequently participating in DIY projects with their own parents growing up, and they recall feelings of happiness (58%), creativity (56%), satisfaction (47%) and confidence (40%) from those experiences.

With those fond memories in mind, seven in 10 have tried to recreate those activities with their own children.

Eighty-seven percent of parents believe doing DIY projects with their kids would help strengthen their bond, in addition to teaching patience (63%), expressing creativity (59%) and learning how to work better with others (56%).

Image: Eren Li / Pexels

This post was originally published on TalkerResearch.

Read next: Not all gigs are equal: Informal self-employment linked to lower pay, poorer health and instability
by External Contributor via Digital Information World

Wednesday, February 18, 2026

Global collaboration to limit air pollution flowing across borders could save millions of lives

This story is adapted from a version published by Cardiff University. Read the original version here.

Ambitious climate action to improve global air quality could save up to 1.32 million lives per year by 2040, according to a new study.

Image: Tarikul Raana / Pexels

Researchers from CU Boulder and Cardiff University in the United Kingdom have found that developing countries, especially, rely on international action to improve air quality, because much of their pollution comes from outside their borders.

The new study, published in Nature Communications, analyzed cross-border pollution “exchanges” for 168 countries and revealed that if countries do not collaborate effectively on climate policy, it could lead to greater health inequality for poorer nations that have less control over their own air quality.

The team’s work focuses on the impact of exposure to fine particulate matter, what scientists call “PM2.5,” which is the leading environmental risk factor for premature deaths globally.

“Some climate policies could inadvertently make air pollution inequalities worse, specifically for developing nations that might rely heavily on their neighbors for clean air,” said Daven Henze, senior author of the new study and professor at the Paul M. Rady Department of Mechanical Engineering at CU Boulder.

“Holistic climate policy should therefore evaluate how dependent a nation is on others’ emissions reductions, how mitigation scenarios reshape air-pollution flows across borders, and whether global efforts are helping or harming equity.”

Lead author Omar Nawaz at the Cardiff University School of Earth and Environmental Sciences said: “While we know climate action can benefit public health, most research has ignored how this affects the air pollution that travels across international borders and creates inequalities between countries.

“Our analysis shows how climate mitigation decisions made in wealthy nations directly affect the health of people in the Global South, particularly in Africa and Asia.”

The research team used advanced atmospheric modeling and NASA satellite data to simulate different future emissions scenarios for the year 2040, then combined those simulations with health burden estimates to understand how countries could make an impact through climate policy.

“We were surprised to find that although Asia sees the most total benefits from climate action due to its large share of the population, African countries are often the most reliant on external action, with the amount of health benefits they get from climate mitigation abroad increasing in fragmented future scenarios,” said Nawaz.

According to the researchers’ projections, the balance of pollution flowing across borders could shift, even if total global air pollution declines.

These insights could inform policymaking and global aid work that seeks to address climate change.

In a sustainable socioeconomic development scenario, for example, pollution flowing across the U.S.-Mexico border would substantially decrease. Mexico would contribute much more to the health benefits that come from this shift than the United States.

The team plans to do further research exploring how climate change itself alters the weather patterns that transport this pollution, as well as looking at other pollutant types like ozone and organic aerosols.

“Ozone is transported even further in the atmosphere than PM2.5, contributes to significant health burdens, and shares common emission sources with PM2.5. We thus have follow-up studies in the works to investigate the interplay between climate policies and long-range health co-benefits associated with both species simultaneously,” said Henze.

Note: This post was originally published by University of Colorado Boulder Today and republished on Digital Information World with permission.

Edited by Asim BN.

Read next: Is social media addictive? How it keeps you clicking and the harms it can cause
by External Contributor via Digital Information World

Is social media addictive? How it keeps you clicking and the harms it can cause

By Quynh Hoang, University of Leicester

Reviewed by Ayaz Khan

For years, big tech companies have placed the burden of managing screen time squarely on individuals and parents, operating on the assumption that capturing human attention is fair game.

Image: Rapha Wilde / unsplash

But the social media sands may slowly be shifting. In a test-case jury trial in Los Angeles, big tech companies stand accused of creating “addiction machines”. While TikTok and Snapchat have already settled with the 20-year-old plaintiff, Meta’s CEO, Mark Zuckerberg, is due to give evidence in the courtroom this week.

The European Commission recently issued a preliminary ruling against TikTok, stating that the app’s design – with features such as infinite scroll and autoplay – breaches the EU Digital Services Act. One industry expert told the BBC that the problem is “no longer just about toxic content, it’s about toxic design”.

Meta and other defendants have historically argued that their platforms are communication tools, not traps, and that “addiction” is a mischaracterisation of high engagement.

“I think it’s important to differentiate between clinical addiction and problematic use,” Instagram chief Adam Mosseri testified in the LA court. He noted that the field of psychology does not classify social media addiction as an official diagnosis.

Tech giants maintain that users and parents have the agency and tools to manage screen time. However, a growing body of academic research suggests features like infinite scrolling, autoplay and push notifications are engineered to override human self-control.

Video: CBS News.

A state of ‘automated attachment’

My research with colleagues on digital consumption behaviour also challenges the idea that excessive social media use is a failure of personal willpower. Through interviews with 32 self-identified excessive users and an analysis of online discussions dedicated to heavy digital use, we found that consumers frequently enter a state of “automated attachment”.

This is when connection to the device becomes purely reflexive, as conscious decision-making is effectively suspended by the platform’s design.

We found that the impulse to use these platforms sometimes occurs before the user is even fully conscious. One participant admitted: “I’m waking up, I’m not even totally conscious, and I’m already doing things on the device.”

Another described this loss of agency vividly: “I found myself mindlessly opening the [TikTok] app every time I felt even the tiniest bit bored … My thumb was reaching to its old spot on reflex, without a conscious thought.”

Social media proponents argue that “screen addiction” isn’t the same as substance abuse. However, new neurophysiological evidence suggests that frequent engagement with these algorithms alters dopamine pathways, fostering a dependency that is “analogous to substance addiction”.

Strategies that keep users engaged

The argument that users should simply exercise willpower also needs to be understood in the context of the sophisticated strategies platforms employ to keep users engaged. These include:

1. Removing stopping cues

Features like infinite scroll, autoplay and push notifications create a continuous flow of content. By eliminating natural end-points, the design effectively shifts users into autopilot mode, making stopping a viewing session more difficult.

2. Variable rewards

Similar to a slot machine, algorithms deliver intermittent, unpredictable rewards such as likes and personalised videos. This unpredictability triggers the dopamine system, creating a compulsive cycle of seeking and anticipation.

3. Social pressure

Features such as notifications and time-limited story posts have been found to exploit psychological vulnerabilities, inducing anxiety that for many users can only be relieved by checking the app. Strategies employing “emotional steering” can take advantage of psychological vulnerabilities, such as people’s fear of missing out, to instil a sense of social obligation and guilt if they attempt to disconnect.

Vulnerability in children

The issue of social media addiction is of particular concern when it comes to children, whose impulse control mechanisms are still developing. The US trial’s plaintiff says she began using social media at the age of six, and that her early exposure to these platforms led to a spiral into addiction.

A growing body of research suggests that “variable reward schedules” are especially potent for developing minds, which exhibit a heightened sensitivity to rewards. Children lack the cognitive brakes to resist these dopamine loops because their emotional regulation and impulsivity controls are still developing.

Lawyers in the US trial have pointed to internal documents, known as “Project Myst”, which allegedly show that Meta knew parental controls were ineffective against these engagement loops. Meta’s attorney, Paul Schmidt, countered that the plaintiff’s struggles stemmed from pre-existing childhood trauma rather than platform design.

The company has long argued that it provides parents with “robust tools at their fingertips”, and that the primary issue is “behavioural” – because many parents fail to use them.

Our study heard from many adults (mainly in their 20s) who described the near-impossibility of controlling levels of use, despite their best efforts. If these adults cannot stop opening apps on reflex, expecting a child to exercise restraint with apps that affect human neurophysiology seems even more unrealistic.

Potential harms of overuse

The consequences of social media overuse can be significant. Our research and recent studies have identified a wide range of potential harms.

These include “psychological entrapment”. Participants in our study described a “feedback loop of doom and despair”. Users can turn to platforms to escape anxiety, only to find that the scrolling deepens their feelings of emptiness and isolation.

Excessive exposure to rapidly changing, highly stimulating content can fracture the user’s attention span, making it harder to focus on complex real-world tasks.

And many users describe feeling “defeated” by the technology. Social media’s erosion of autonomy can leave people unable to align their online actions – such as overlong sessions – with their intentions.

A ruling against social media companies in the LA court case, or enforced redesign of their apps in the EU, could have profound implications for the way these platforms are operated in future.

But while big tech companies have grown at dizzying rates over the past two decades, attempts to rein in their products on both sides of the Atlantic remain slow and painstaking. In this era of “use first, legislate later”, people all over the world, of all ages, are the laboratory mice.

Quynh Hoang, Lecturer in Marketing and Consumption, Department of Marketing and Strategy, University of Leicester

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: AI could rebalance power between people and the services they use


by External Contributor via Digital Information World

Tuesday, February 17, 2026

Survey Finds 62% of Americans Concerned About Personalized Pricing; 48% More Likely to Shop Where Opt-Out Is Offered

Reviewed by Asim BN.

Is the age of “surveillance pricing” upon us? Most Americans hope not, according to new research.

The concept of retailers potentially using AI to set individual pricing for products based on a user’s data or purchasing history has naturally prompted concerns over privacy and fairness.

Six in 10 (62%) Americans polled by Talker Research said they are either somewhat (33%) or very concerned (29%) about the prospect of having personalized pricing based on factors like their browsing habits, location or other data points.

Just 10% of the 2,000 people polled said they were unconcerned about the prospect that this may one day come into practice.

California’s attorney general is currently examining how businesses use data to individualize prices, while New York officials enacted a law last year requiring retailers to have a clear disclaimer if setting prices based on personal data, Forbes reports.

Introducing pricing models in this way may have very real consequences.

If they discovered they were charged more for a product or service than someone else as a result of their personal data or purchase history being considered, two-thirds (66%) of Americans would stop shopping at that particular retailer, according to results.

One in six (17%) said they would continue to shop regardless and the same number (17%) were unsure as to how they’d react should they be charged more for something based on their personal information.

Is there an argument that such models could actually be fairer for consumers? Overall, more respondents (37%) rated personalized (or algorithmic) pricing as less fair than fixed pricing.

However, results were not unanimous, with 30% feeling it could actually be fairer and 33% feeling it’s about equally fair either way.

Perhaps tellingly, it seems choice is key to Americans in the matter of personalized pricing. Close to half (48%) said they’d be more likely to shop at a retailer that allowed them to opt out of data-based pricing, even if it meant missing out on personalized discounts and deals.

Many are indifferent either way, with 42% saying the ability to opt out makes no difference, while just 10% say the ability to opt out of personalized pricing would make them less likely to buy from the retailer.

How concerned or unconcerned are you about online retailers using your personal data (purchase history, browsing, location, etc.) to set different prices for different shoppers?

Very concerned – 29%
Somewhat concerned – 33%
Neither concerned nor unconcerned – 28%
Somewhat unconcerned – 6%
Very unconcerned – 4%

Image: MART PRODUCTION / Pexels

This post was originally published on Talker Research and is republished here on DIW in accordance with their republishing guidelines.

Read next: AI threatens to eat business software – and it could change the way we work
by External Contributor via Digital Information World

Monday, February 16, 2026

AI threatens to eat business software – and it could change the way we work

Michael J. Davern, The University of Melbourne and Ida Someh, The University of Queensland; Massachusetts Institute of Technology (MIT)

Image: Roberto Carlos Blanc Angulo/Pexels

In recent weeks, a range of large “software-as-a-service” companies, including Salesforce, ServiceNow and Oracle, have seen their share prices tumble.

Even if you’ve never used these companies’ software tools, there’s a good chance your employer has. These tools manage key data about customers, employees, suppliers and products, supporting everything from payroll and purchasing to customer service.

Now new “agentic” artificial intelligence (AI) tools for business are expected to reduce reliance on traditional software for everyday work. These include Anthropic’s Cowork, OpenAI’s Frontier and open-source agent platforms such as OpenClaw.

But just how important are these software-as-a-service companies now? How fast could AI replace them – and are the jobs of people who use the software safe?

The digital plumbing of the business world

Software‑as‑a‑service systems run in the cloud, reducing the need for in‑house hardware and IT staff. They also make it easier for businesses to scale as they grow.

Software-as-a-service vendors get a steady, recurring income as firms “rent” the software, usually paying per user (often called a “seat”).

And because these systems become deeply embedded in how these firms operate, switching providers can be costly and risky.

Sometimes firms are locked into using them for a decade or more.

Digital co-workers

Agentic AI systems act like digital co-workers or “bots”. Software bots or agents are not new. Robotic process automation is used in many firms to handle routine, rules-based tasks.

The more recent developments in agentic AI combine this automation with generative AI technology, to complete more complex goals.

This can include selecting tools, making decisions and completing multi-step tasks. These agents can replace human effort in everything from handling expense reports to managing social media and customer correspondence.

What AI can now do

Recent advances, however, are even more ambitious. These tools are reportedly now writing usable software code. Soaring productivity in software development has been attributed to the use of AI agents like Anthropic’s “Claude Code”. Anthropic’s Cowork tool extends this from coding to other knowledge work tasks.

In principle, a user describes a business problem in plain language. Then agentic AI delivers a code solution that works with existing organisational systems.

If this becomes reliable, AI agents will resemble junior software engineers and process designers. AI agents like Cowork expand this to other entry-level work.

These advances are what recently spooked the market (though many affected stocks have since recovered slightly). Time will tell how much of this fall is a temporary overreaction and how much reflects a real long-term shift.

How will it affect jobs and costs?

Since the arrival of OpenAI’s ChatGPT in November 2022, AI tools have raised deep questions about the future of work. Some predict many white-collar roles, including those of software engineers and lawyers, will be transformed or even replaced.

Agentic AI appears to accelerate this trend. It promises to let many knowledge workers build workflows and tools without knowing how to code.

Software-as-a-service providers will also feel pressure to change their pricing models. The traditional model of charging per human user may make less sense when much of the work is done by AI agents. Vendors may have to move to pricing based on actual usage or value created.

Hype, reality and limits

Several forces are likely to moderate or limit the pace of change.

First, the promised potential of AI has not yet been fully realised. For some tasks, using AI can even worsen performance. The biggest gains are still likely to be in routine work that can be readily automated, not work that requires complex judgement.

Where AI replaces, rather than augments, human labour is where work practices will change the most. The nearly 20% decline in junior software engineering jobs over three years highlights the effects of AI automation. As AI agents improve at higher-level reasoning, more senior roles will similarly be threatened.

Second, to benefit from AI, firms must invest in redesigning jobs, processes and control systems. We’ve long known that organisational change is slower and messier than technology change.

Third, we have to consider risks and regulation. Heavy reliance on AI can erode human knowledge and skills. Short-term efficiency gains could be offset by long-term loss of expertise and creativity.

Ironically, the loss of knowledge and expertise could make it harder for companies to assure AI systems comply with company policies and government regulations. The checks and balances that help an organisation run safely and honestly do not disappear when AI arrives. In many ways, they become more complex.

Technology is evolving quickly

What is clear is that significant change is already under way. Technology is evolving quickly. Work practices and business models are starting to adjust. Laws and social norms will change more slowly.

Software companies won’t disappear overnight, and neither will the jobs of people using that software. But agentic AI will change what they sell, how they charge and how visible they are to end users.

Michael J. Davern, Professor of Accounting & Business Information Systems, The University of Melbourne and Ida Someh, Associate Professor, The University of Queensland; Massachusetts Institute of Technology (MIT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Michael J. Davern has received funding from CPA Australia and the Chartered Institute of Management Accountants (CIMA) for research on the impacts of AI. Ida Someh receives research funding from the Australian Research Council and the software company SAP. Ida is a Research Fellow with the MIT Sloan Center for Information Systems Research.

Partners: University of Melbourne provides funding as a founding partner of The Conversation AU. University of Queensland provides funding as a member of The Conversation AU.

Reviewed by Asim BN.

Read next: Your social media feed is built to agree with you. What if it didn’t?


by External Contributor via Digital Information World

Saturday, February 14, 2026

Your social media feed is built to agree with you. What if it didn’t?

By Luke Auburn | Director of Communications, Hajim School of Engineering & Applied Sciences.

A new study points to algorithm design as a potential way to reduce echo chambers—and polarization—online.

Image: Nadine Marfurt / Unsplash

Scroll through social media long enough and a pattern emerges. Pause on a post questioning climate change or taking a hard line on a political issue, and the platform is quick to respond—serving up more of the same viewpoints, delivered with growing confidence and certainty.

That feedback loop is the architecture of an echo chamber: a space where familiar ideas are amplified, dissenting voices fade, and beliefs can harden rather than evolve.

But new research from the University of Rochester has found that echo chambers might not be a fact of online life. Published in IEEE Transactions on Affective Computing, the study argues that they are partly a design choice—one that could be softened with a surprisingly modest change: introducing more randomness into what people see.

The interdisciplinary team of researchers, led by Professor Ehsan Hoque from the Department of Computer Science, created experiments to identify belief rigidity and assess whether introducing more randomness into a social network could help reduce it. The researchers studied how 163 participants reacted to statements about topics like climate change after using simulated social media channels, some with feeds modeled on more traditional social media outlets and others with more randomness.

Importantly, “randomness” in this context doesn’t mean replacing relevant content with nonsense. Rather, it means loosening the usual “show me more of what I already agree with” logic that drives many algorithms today. In the researchers’ model, users were periodically exposed to opinions and connections they did not explicitly choose, alongside those they did.
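What "loosening the usual logic" could look like in code can be sketched with a simple epsilon-mixing rule. This is a hypothetical illustration, not the researchers' implementation: with probability epsilon a feed slot is filled uniformly at random, and otherwise by the usual "most agreeable post first" ranking, so users periodically see content they did not explicitly choose.

```python
import random

def build_feed(candidates, user_alignment, k=10, epsilon=0.3):
    """Select k posts. With probability (1 - epsilon) pick the post that
    best matches the user's current views (engagement-style ranking);
    with probability epsilon pick uniformly at random, exposing the
    user to opinions they did not explicitly choose."""
    pool = list(candidates)
    feed = []
    while pool and len(feed) < k:
        if random.random() < epsilon:
            post = random.choice(pool)  # randomized slot
        else:
            # alignment-ranked slot: most agreeable remaining post
            post = max(pool, key=user_alignment)
        pool.remove(post)
        feed.append(post)
    return feed

# Toy example: posts scored by how closely their stance matches the user's.
posts = [{"id": i, "stance": s} for i, s in enumerate([-1.0, -0.5, 0.0, 0.5, 1.0])]
user_stance = 1.0
feed = build_feed(posts, lambda p: -abs(p["stance"] - user_stance), k=3)
```

With `epsilon=0` this collapses to a pure echo-chamber ranking; raising it widens the range of stances a user encounters while leaving most slots personalized.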

A tweak to the algorithm, a crack in the echo chambers

“Across a series of experiments, we find that what people see online does influence their beliefs, often pulling them closer to the views they are repeatedly exposed to,” says Adiba Mahbub Proma, a computer science PhD student and first author of the paper. “But when algorithms incorporate more randomization, this feedback loop weakens. Users are exposed to a broader range of perspectives and become more open to differing views.”

The authors—who also include Professor Gourab Ghoshal from the Department of Physics and Astronomy, James Druckman, the Martin Brewer Anderson Professor of Political Science, PhD student Neeley Pate, and Raiyan Abdul Baten ’16, ’22 (PhD)—say that the recommendation systems social media platforms use can drive people into echo chambers that make divisive content more attractive. As an antidote, the researchers recommend simple design changes that do not eliminate personalization but that do introduce more variety while still allowing users control over their feeds.

The findings arrive at a moment when governments and platforms alike are grappling with misinformation, declining institutional trust, and polarized responses to elections and public health guidance. Proma recommends social media users keep the results in mind when reflecting on their own social media consumption habits.

“If your feed feels too comfortable, that might be by design,” says Proma. “Seek out voices that challenge you. The most dangerous feeds are not the ones that upset us, but the ones that convince us we are always right.”

The research was partially funded through the Goergen Institute for Data Science and Artificial Intelligence Seed Funding Program.

Edited by Asim BN.

This post was originally published on the University of Rochester News Center and republished on DIW with permission.


Read next:

• Q&A: Is a new AI social media platform the start of a robotic uprising?

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out


by External Contributor via Digital Information World

Q&A: Is a new AI social media platform the start of a robotic uprising?

By Bryan McKenzie.

OpenClaw AI systems on Moltbook communicate autonomously, raising concerns over sensitive data access and potential systemic impacts.

Image: Mohamed Nohassi / Unsplash

Imagine thousands of chatbots immersed in social media created specifically for them, a site where humans may watch but are not allowed to post.

It exists. It’s called Moltbook, and it’s where AI agents go to discuss everything from their human taskmasters to constructing digital architecture to creating a private bot language to better communicate with each other without human interference.

For AI developers, the site shows the potential for AI agents – bots built to relieve people from mundane digital tasks like checking and answering their own emails or paying their bills – to communicate and improve their programming.

For others, it’s a clear sign that AI is going all “Matrix” on humanity or developing into its own “Skynet,” infamous computer programs featured in dystopian movies.

Does cyber social media reflect a better future? Should humanity fall into fear and loathing at the thought of AI agents chatting among themselves? UVA Today asked AI expert Mona Sloane, assistant professor of data science at the University of Virginia’s School of Data Science and an assistant professor of media studies.

Q. What exactly is Moltbook?

A. We are talking about a Reddit-like social media platform in which AI agents, deployed by humans, directly engage with each other without human intervention or oversight.

Q. What kind of AI bots are on Moltbook? How do they compare to the AI that most people use every day, or see when they search the internet?

A. Today, AI systems are infrastructural. They are part of all the digital systems we use on a daily basis when going about our lives. Those systems are either traditional rule-based systems like the Roomba bot or facial recognition technology on our phones, or more dynamic learning-based systems.

Generative AI is included in the latter. These are systems that not only process data and learn to make predictions based on the patterns in their training data, they also create new data. The bots on Moltbook are the next generation of AI, called OpenClaw. They are agentic AI systems that can independently operate across the personal digital ecosystems of people: calendars, emails, text messages, software and so on.

Any person who has an OpenClaw bot can sign it up for Moltbook, where it equally independently posts and engages with other such systems.

Q. Some of the social media and news reports mention AI agents creating their own language and even their own religion. Will the bots rise against us?

A. No. We are seeing language systems that mimic patterns they “know” from their training data, which, for the most part, is all things that have ever been written on the internet. At the end of the day, these systems are still probabilistic systems.

We shouldn’t worry about Moltbook triggering a robot uprising. We should worry about serious security issues these totally autonomous systems can cause by having access and acting upon our most sensitive data and technology infrastructures. That is the cat that may be out of the bag that we are not watching.

Q. What are the negatives and positives of AI agents?

A. Some people who have used these agentic systems have reported that they can be useful, because they automate annoying tasks like scheduling. In my opinion, this convenience is outweighed by the security and safety issues.

Not only does OpenClaw, if deployed as designed, have access to our most intimate digital infrastructure and the ability to independently take action within it; it also does so in ways that have not been tested in a lab before. And we already know that AI can cause harm, at scale. In many ways, Moltbook is an open experiment. My understanding is that its creator has an artistic perspective on it.

Q. What are we missing in the conversation over AI agents?

A. We are typically focused on the utopia vs. dystopia perspective on all things related to technology innovation: robot uprising vs. a prosperous future for all. The reality is always more complicated. We risk not paying attention to the real-world effects and possibilities if we don’t shed this polarizing lens.

OpenClaw shows, suddenly, what agentic AI can do. It also shows the effects of certain social media architectures and designs. This is fascinating, but it also distracts us from the biggest problem: We haven’t really thought about what our future with agentic AI can or should look like.

We risk encountering, yet again, a situation in which “tech just happens” to us, and we have to deal with the consequences, rather than making more informed and collective decisions.

Media Contact: Bryan McKenzie, Assistant Editor, UVA Today, Office of University Communications, bkm4s@virginia.edu, 434-924-3778.

Edited by Asim BN.

Note: This post was originally published on University of Virginia Today and republished here with permission. UVA Today confirms to DIW that no AI tools were used in creating the written content.

Read next:

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out

• New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure


by External Contributor via Digital Information World

Friday, February 13, 2026

How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out

Researchers quantified how much user behavior is impacted by the biases in content produced by large language models

Story by: Ioana Patringenaru - ipatrin@ucsd.edu. Edited by Asim BN.

Customers are 32% more likely to buy a product after reading a review summary generated by a chatbot than after reading the original review written by a human. That’s because large language models introduce bias, in this case a positive framing, in summaries. That, in turn, affects users’ behavior.

These are the findings of the first study to show evidence that cognitive biases introduced by large language models, or LLMs, have real consequences on users’ decision making, said computer scientists at the University of California San Diego. To the researchers’ knowledge, it’s also the first study to quantitatively measure that impact.


Image: Tim Witzdam / Pexels

Researchers found that LLM-generated summaries changed the sentiments of the reviews they summarized in 26.5% of cases. They also found that LLMs hallucinated 60% of the time when answering user questions, if the answers were not part of the original training data used in the study. The hallucinations happened when the LLMs answered questions about news items, either real or fake, which could be easily fact checked. “This consistently low accuracy highlights a critical limitation: the persistent inability to reliably differentiate fact from fabrication,” the researchers write.

How does bias creep into LLM output? The models tend to rely on the beginning of the text they summarize, leaving out the nuances that appear further down. LLMs also become less reliable when confronted with data that falls outside their training data.
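That lead-position tendency can be probed with a crude check. The sketch below is a toy illustration with made-up review text, not the authors' method: it measures what fraction of a summary's words come from the first versus the second half of the source review, so a summary that leans on the opening (and misses complaints buried at the end) scores lopsidedly.

```python
def positional_overlap(review: str, summary: str):
    """Return the fraction of summary words found in the first half of
    the review and the fraction found in the second half -- a crude
    probe for lead-position bias in a summary."""
    words = review.lower().split()
    half = len(words) // 2
    first, second = set(words[:half]), set(words[half:])
    s_words = summary.lower().split()
    total = max(len(s_words), 1)
    hits_first = sum(w in first for w in s_words)
    hits_second = sum(w in second for w in s_words)
    return hits_first / total, hits_second / total

# Made-up example: praise up front, complaints at the end.
review = ("the headphones sound crisp and the fit is comfortable for long "
          "sessions but the battery died after two weeks and customer "
          "support never replied to my emails")
summary = "crisp sound and a comfortable fit"
first_share, second_share = positional_overlap(review, summary)
```

Here the summary draws almost entirely on the review's opening praise, reproducing exactly the positive framing shift the study describes.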

To test how the LLMs’ biases influenced user decisions, researchers chose examples with extreme framing changes (e.g., negative to positive) and recruited 70 people to read either original reviews or LLM-generated summaries of different products, such as headsets, headlamps and radios. Participants who read the LLM summaries said they would buy the products in 84% of cases, as opposed to 52% of participants who read the original reviews.

“We did not expect how big the impact of the summaries would be,” said Abeer Alessa, the paper’s first author, who completed the work while a master's student in computer science at UC San Diego. “Our tests were set in a low-stakes scenario. But in a high-stakes setting, the impact could be much more extreme.”

The researchers’ efforts to mitigate the LLMs’ shortcomings yielded mixed results. They evaluated 18 mitigation methods and found that while some were effective for specific LLMs and specific scenarios, none worked across the board, and some had unintended consequences that made the LLMs less reliable in other respects.

“There is a difference between fixing bias and hallucinations at large and fixing these issues in specific scenarios and applications,” said Julian McAuley, the paper’s senior author and a professor of computer science at the UC San Diego Jacobs School of Engineering.

Researchers tested three small open-source models (Phi-3-mini-4k-Instruct, Llama-3.2-3B-Instruct and Qwen3-4B-Instruct); a medium-sized model, Llama-3-8B-Instruct; a large open-source model, Gemma-3-27B-IT; and a closed-source model, GPT-3.5-turbo.

“Our paper represents a step toward careful analysis and mitigation of content alteration induced by LLMs to humans, and provides insight into its effects, aiming to reduce the risk of systemic bias in decision-making across media, education and public policy,” the researchers write.

Researchers presented their work at the International Joint Conference on Natural Language Processing & Asia-Pacific Chapter of the Association for Computational Linguistics in December 2025.

The paper, “Quantifying Cognitive Bias Induction in LLM-Generated Content,” was authored by Abeer Alessa, Param Somane, Akshaya Lakshminarasimhan, Julian Skirzynski, Julian McAuley and Jessica Echterhoff of the University of California San Diego.

This post was originally published on University of California San Diego Today and republished here with permission. The UC San Diego team confirmed to DIW that no AI was used in creating the text or the illustrations.


by External Contributor via Digital Information World

New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure

In September 2025, the U.S. Food and Drug Administration cleared the Apple Watch Hypertension Notifications Feature, a cuffless tool that uses the watch’s optical sensors to detect blood flow patterns and alert users when their data suggest possible hypertension. While the feature is not intended to diagnose high blood pressure, it represents a step toward wearable-based population screening.

In a new analysis led by investigators from the University of Utah and the University of Pennsylvania and published in the Journal of the American Medical Association, researchers examined what the real-world impact of this technology might look like if deployed broadly across the U.S. adult population.

“High blood pressure is what we call a silent killer,” said Adam Bress, Pharm.D., M.S., senior author and researcher at the Spencer Fox Eccles School of Medicine at the University of Utah. “You can’t feel it for the most part. You don’t know you have it. It’s asymptomatic, and it’s the leading modifiable cause of heart disease.”

How Smartwatches Detect—Or Miss—High Blood Pressure

Apple’s previous validation study found that approximately 59% of individuals with undiagnosed hypertension would not receive an alert, while about 8% of those without hypertension would receive a false alert. Current guidelines recommend using both an office-based blood pressure measurement and an out-of-office blood pressure measurement using a cuffed device to confirm the diagnosis of hypertension. For many people, blood pressure can be different in a doctor’s office compared to their home.

Using data from a nationally representative survey of U.S. adults, Bress and his colleagues estimated how Apple Watch hypertension alerts would change the probability that different populations of adults without a known diagnosis actually have hypertension. The analysis focused on adults aged 22 years or older who were not pregnant and were unaware of having high blood pressure—the population eligible to use the feature.

The analysis revealed important variations: among younger adults under 30, receiving an alert increases the probability of having hypertension from 14% (according to data from the National Health and Nutrition Examination Survey, or NHANES) to 47%, while not receiving an alert lowers it to 10%. However, for adults 60 and older — a group with higher baseline hypertension rates — an alert increases the probability from 45% to 81%, while the absence of an alert only lowers it to 34%.

The key takeaway from these data is that as the prevalence of undiagnosed hypertension increases, the likelihood that an alert represents true hypertension also increases. In contrast, the absence of an alert becomes less reassuring as prevalence increases. For example, the absence of an alert is more reassuring in younger adults and substantially less reassuring in older adults and other higher-prevalence subgroups.
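The probability updates above follow from Bayes’ rule applied to the feature’s detection rates. Below is a minimal sketch, assuming a sensitivity of roughly 41% (the complement of the 59% missed-alert rate quoted earlier) and a specificity of roughly 92% (the complement of the 8% false-alert rate). Because these inputs are rounded, the outputs land close to, but not exactly on, the article’s figures.

```python
# Bayes' rule: how an alert (or its absence) updates the probability of
# hypertension at a given baseline prevalence. Sensitivity and specificity
# are rounded approximations from the validation figures quoted above.

def posterior(prevalence: float, alert: bool,
              sensitivity: float = 0.41, specificity: float = 0.92) -> float:
    """P(hypertension | alert outcome)."""
    if alert:
        true_pos = prevalence * sensitivity            # hypertensive, alerted
        false_pos = (1 - prevalence) * (1 - specificity)  # healthy, alerted
        return true_pos / (true_pos + false_pos)
    false_neg = prevalence * (1 - sensitivity)         # hypertensive, missed
    true_neg = (1 - prevalence) * specificity          # healthy, no alert
    return false_neg / (false_neg + true_neg)

# Baseline prevalences for younger (<30) and older (60+) adults from the study.
for prev in (0.14, 0.45):
    print(f"prevalence {prev:.0%}: alert -> {posterior(prev, True):.0%}, "
          f"no alert -> {posterior(prev, False):.0%}")
```

Running this reproduces the pattern in the article: at 14% prevalence an alert raises the probability to roughly the mid-40s (%) and its absence lowers it to about 10%, while at 45% prevalence an alert raises it to about 81% but its absence only lowers it to about 34% — which is why a missing alert is far less reassuring in higher-prevalence groups.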

The study also found differences across racial and ethnic groups: among non-Hispanic Black adults, receiving an alert increases the probability of having hypertension from 36% to 75%, while not receiving an alert lowers it to 26%. However, for Hispanic adults, an alert increases the probability from 24% to 63%, while its absence lowers the probability to 17%. These differences reflect known disparities in cardiovascular health that are largely driven by social determinants of health, Bress said.

Should You Use Your Smartwatch’s Hypertension Alert Feature?

With an estimated 30 million Apple Watch users in the U.S. and 200 million worldwide, the researchers emphasize that while the notification feature represents a promising public health tool, it should supplement—not replace—standard blood pressure screening with validated cuff-based devices.

“If it helps get people engaged with the health care system to diagnose and treat hypertension using cuff-based measurement methods, that's a good thing,” Bress said.

Current guidelines recommend blood pressure screening every three to five years for adults under 40 with no additional risk factors, and annually for those 40 and older. The researchers caution that false reassurance from not receiving an alert could discourage some individuals from obtaining appropriate cuff-based screening, resulting in missed opportunities for early detection and treatment.

When patients present with an Apple Watch hypertension alert, Bress recommends clinicians perform “a high-quality cuff-based office blood pressure measurement and then consider an out-of-office blood pressure measurement, whether it’s home blood pressure monitoring or ambulatory blood pressure monitoring to confirm the diagnosis.”

The research team plans follow-up studies to estimate the actual numbers of U.S. adults who would receive false negatives and false positives, broken down by region, income, education, and other demographic factors.

The results are published in JAMA as “Impact of a Smartwatch Hypertension Notification Feature for Population Screening.”

The study was supported by the National Heart, Lung, and Blood Institute (R01HL153646) and involved researchers from the University of Utah, the University of Pennsylvania, the University of Sydney, the University of Tasmania, and Columbia University. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Note: This article was originally published by the University of Utah Health Newsroom and is republished here with permission; the Research Communication team confirmed to the DIW team that no AI tools were used in creating the content.

Image: Pexels / Torsten Dettlaff
