Wednesday, February 18, 2026

Global collaboration to limit air pollution flowing across borders could save millions of lives

This story is adapted from a version published by Cardiff University. Read the original version here.

Ambitious climate action to improve global air quality could save up to 1.32 million lives per year by 2040, according to a new study.

Image: Tarikul Raana / Pexels

Researchers from CU Boulder and Cardiff University in the United Kingdom have found that developing countries in particular rely on international action to improve air quality, because much of their pollution originates outside their borders.

The new study, published in Nature Communications, analyzed cross-border pollution “exchanges” for 168 countries and revealed that if countries do not collaborate effectively on climate policy, it could lead to greater health inequality for poorer nations that have less control over their own air quality.

The team’s work focuses on the impact of exposure to fine particulate matter, what scientists call “PM2.5,” which is the leading environmental risk factor for premature deaths globally.

“Some climate policies could inadvertently make air pollution inequalities worse, specifically for developing nations that might rely heavily on their neighbors for clean air,” said Daven Henze, senior author of the new study and professor at the Paul M. Rady Department of Mechanical Engineering at CU Boulder.

“Holistic climate policy should therefore evaluate how dependent a nation is on others’ emissions reductions, how mitigation scenarios reshape air-pollution flows across borders, and whether global efforts are helping or harming equity.”

Lead author Omar Nawaz at the Cardiff University School of Earth and Environmental Sciences said: “While we know climate action can benefit public health, most research has ignored how this affects the air pollution that travels across international borders and creates inequalities between countries.

“Our analysis shows how climate mitigation decisions made in wealthy nations directly affect the health of people in the Global South, particularly in Africa and Asia.”

The research team used advanced atmospheric modeling and NASA satellite data to simulate different future emissions scenarios for the year 2040, then combined the results with health-burden estimates to understand how countries could make an impact through climate policy.

“We were surprised to find that although Asia sees the most total benefits from climate action due to its large share of the population, African countries are often the most reliant on external action, with the amount of health benefits they get from climate mitigation abroad increasing in fragmented future scenarios,” said Nawaz.

According to the researchers’ projections, the balance of pollution flowing across borders could shift, even if total global air pollution declines.

These insights could inform policymaking and global aid work that seeks to address climate change.

In a sustainable socioeconomic development scenario, for example, pollution flowing across the U.S.-Mexico border would substantially decrease. Mexico would contribute much more to the health benefits that come from this shift than the United States.

The team plans to do further research exploring how climate change itself alters the weather patterns that transport this pollution, as well as looking at other pollutant types like ozone and organic aerosols.

“Ozone is transported even further in the atmosphere than PM2.5, contributes to significant health burdens, and shares common emission sources with PM2.5. We thus have follow-up studies in the works to investigate the interplay between climate policies and long-range health co-benefits associated with both species simultaneously,” said Henze.

Note: This post was originally published by University of Colorado Boulder Today and republished on Digital Information World with permission.

Edited by Asim BN.

Read next: Is social media addictive? How it keeps you clicking and the harms it can cause
by External Contributor via Digital Information World

Is social media addictive? How it keeps you clicking and the harms it can cause

By Quynh Hoang, University of Leicester

Reviewed by Ayaz Khan

For years, big tech companies have placed the burden of managing screen time squarely on individuals and parents, operating on the assumption that capturing human attention is fair game.

Image: Rapha Wilde / unsplash

But the social media sands may slowly be shifting. In a test-case jury trial in Los Angeles, big tech companies stand accused of creating “addiction machines”. While TikTok and Snapchat have already settled with the 20-year-old plaintiff, Meta’s CEO, Mark Zuckerberg, is due to give evidence in the courtroom this week.

The European Commission recently issued a preliminary ruling against TikTok, stating that the app’s design – with features such as infinite scroll and autoplay – breaches the EU Digital Services Act. One industry expert told the BBC that the problem is “no longer just about toxic content, it’s about toxic design”.

Meta and other defendants have historically argued that their platforms are communication tools, not traps, and that “addiction” is a mischaracterisation of high engagement.

“I think it’s important to differentiate between clinical addiction and problematic use,” Instagram chief Adam Mosseri testified in the LA court. He noted that the field of psychology does not classify social media addiction as an official diagnosis.

Tech giants maintain that users and parents have the agency and tools to manage screen time. However, a growing body of academic research suggests features like infinite scrolling, autoplay and push notifications are engineered to override human self-control.

Video: CBS News.

A state of ‘automated attachment’

My research with colleagues on digital consumption behaviour also challenges the idea that excessive social media use is a failure of personal willpower. Through interviews with 32 self-identified excessive users and an analysis of online discussions dedicated to heavy digital use, we found that consumers frequently enter a state of “automated attachment”.

This is when connection to the device becomes purely reflexive, as conscious decision-making is effectively suspended by the platform’s design.

We found that the impulse to use these platforms sometimes occurs before the user is even fully conscious. One participant admitted: “I’m waking up, I’m not even totally conscious, and I’m already doing things on the device.”

Another described this loss of agency vividly: “I found myself mindlessly opening the [TikTok] app every time I felt even the tiniest bit bored … My thumb was reaching to its old spot on reflex, without a conscious thought.”

Social media proponents argue that “screen addiction” isn’t the same as substance abuse. However, new neurophysiological evidence suggests that frequent engagement with these algorithms alters dopamine pathways, fostering a dependency that is “analogous to substance addiction”.

Strategies that keep users engaged

The argument that users should simply exercise willpower also needs to be understood in the context of the sophisticated strategies platforms employ to keep users engaged. These include:

1. Removing stopping cues

Features like infinite scroll, autoplay and push notifications create a continuous flow of content. By eliminating natural end-points, the design effectively shifts users into autopilot mode, making stopping a viewing session more difficult.

2. Variable rewards

Similar to a slot machine, algorithms deliver intermittent, unpredictable rewards such as likes and personalised videos. This unpredictability triggers the dopamine system, creating a compulsive cycle of seeking and anticipation.

3. Social pressure

Features such as notifications and time-limited story posts have been found to exploit psychological vulnerabilities, inducing anxiety that for many users can only be relieved by checking the app. Through “emotional steering”, platforms can play on vulnerabilities such as people’s fear of missing out to instil a sense of social obligation and guilt in those who attempt to disconnect.

Vulnerability in children

The issue of social media addiction is of particular concern when it comes to children, whose impulse control mechanisms are still developing. The US trial’s plaintiff says she began using social media at the age of six, and that her early exposure to these platforms led to a spiral into addiction.

A growing body of research suggests that “variable reward schedules” are especially potent for developing minds, which exhibit a heightened sensitivity to rewards. Children lack the cognitive brakes to resist these dopamine loops because their emotional regulation and impulsivity controls are still developing.

Lawyers in the US trial have pointed to internal documents, known as “Project Myst”, which allegedly show that Meta knew parental controls were ineffective against these engagement loops. Meta’s attorney, Paul Schmidt, countered that the plaintiff’s struggles stemmed from pre-existing childhood trauma rather than platform design.

The company has long argued that it provides parents with “robust tools at their fingertips”, and that the primary issue is “behavioural” – because many parents fail to use them.

Our study heard from many adults (mainly in their 20s) who described the near-impossibility of controlling levels of use, despite their best efforts. If these adults cannot stop opening apps on reflex, expecting a child to exercise restraint with apps that affect human neurophysiology seems even more unrealistic.

Potential harms of overuse

The consequences of social media overuse can be significant. Our research and recent studies have identified a wide range of potential harms.

These include “psychological entrapment”. Participants in our study described a “feedback loop of doom and despair”. Users can turn to platforms to escape anxiety, only to find that the scrolling deepens their feelings of emptiness and isolation.

Excessive exposure to rapidly changing, highly stimulating content can fracture the user’s attention span, making it harder to focus on complex real-world tasks.

And many users describe feeling “defeated” by the technology. Social media’s erosion of autonomy can leave people unable to align their online actions – such as overlong sessions – with their intentions.

A ruling against social media companies in the LA court case, or enforced redesign of their apps in the EU, could have profound implications for the way these platforms are operated in future.

But while big tech companies have grown at dizzying rates over the past two decades, attempts to rein in their products on both sides of the Atlantic remain slow and painstaking. In this era of “use first, legislate later”, people all over the world, of all ages, are the laboratory mice.

Quynh Hoang, Lecturer in Marketing and Consumption, Department of Marketing and Strategy, University of Leicester

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: AI could rebalance power between people and the services they use


by External Contributor via Digital Information World

Tuesday, February 17, 2026

Survey Finds 62% of Americans Concerned About Personalized Pricing; 48% More Likely to Shop Where Opt-Out Is Offered

Reviewed by Asim BN.

Is the age of “surveillance pricing” upon us? Most Americans hope not, according to new research.

The concept of retailers potentially using AI to set individual pricing for products based on a user’s data or purchasing history has naturally prompted concerns over privacy and fairness.

Six in 10 (62%) Americans polled by Talker Research said they are either somewhat (33%) or very concerned (29%) about the prospect of having personalized pricing based on factors like their browsing habits, location or other data points.

Just 10% of the 2,000 people surveyed said they were unconcerned about the prospect that this may one day become common practice.

California’s attorney general is currently examining how businesses use data to individualize prices, while New York officials enacted a law last year requiring retailers to have a clear disclaimer if setting prices based on personal data, Forbes reports.

Introducing pricing models in this way may have very real consequences.

If they discovered they were charged more for a product or service than someone else as a result of their personal data or purchase history being considered, two-thirds (66%) of Americans would stop shopping at that particular retailer, according to the results.

One in six (17%) said they would continue to shop regardless and the same number (17%) were unsure as to how they’d react should they be charged more for something based on their personal information.

Is there an argument that such models could actually be fairer for consumers? Overall, respondents were more inclined to view personalized (or algorithmic) pricing as less fair than fixed pricing (37%).

However, results were far from unanimous, with 30% feeling it could actually be fairer and 33% feeling it is about equally fair either way.

Perhaps tellingly, it seems choice is key to Americans in the matter of personalized pricing. Close to half (48%) said they’d be more likely to shop at a retailer that allowed them to opt out of data-based pricing, even if it meant missing out on personalized discounts and deals.

Many are indifferent either way, with 42% saying the ability to opt out makes no difference, while just 10% say the ability to opt out of personalized pricing would make them less likely to buy from the retailer.

How concerned or unconcerned are you about online retailers using your personal data (purchase history, browsing, location, etc.) to set different prices for different shoppers?

Very concerned – 29%
Somewhat concerned – 33%
Neither concerned nor unconcerned – 28%
Somewhat unconcerned – 6%
Very unconcerned – 4%

Image: MART PRODUCTION / Pexels

This post was originally published on Talker Research and is republished here on DIW in accordance with their republishing guidelines.

Read next: AI threatens to eat business software – and it could change the way we work
by External Contributor via Digital Information World

Monday, February 16, 2026

AI threatens to eat business software – and it could change the way we work

Michael J. Davern, The University of Melbourne and Ida Someh, The University of Queensland; Massachusetts Institute of Technology (MIT)

Image: Roberto Carlos Blanc Angulo/Pexels

In recent weeks, a range of large “software-as-a-service” companies, including Salesforce, ServiceNow and Oracle, have seen their share prices tumble.

Even if you’ve never used these companies’ software tools, there’s a good chance your employer has. These tools manage key data about customers, employees, suppliers and products, supporting everything from payroll and purchasing to customer service.

Now new “agentic” artificial intelligence (AI) tools for business are expected to reduce reliance on traditional software for everyday work. These include Anthropic’s Cowork, OpenAI’s Frontier and open-source agent platforms such as OpenClaw.

But just how important are these software-as-a-service companies now? How fast could AI replace them – and are the jobs of people who use the software safe?

The digital plumbing of the business world

Software‑as‑a‑service systems run in the cloud, reducing the need for in‑house hardware and IT staff. They also make it easier for businesses to scale as they grow.

Software-as-a-service vendors get a steady, recurring income as firms “rent” the software, usually paying per user (often called a “seat”).

And because these systems become deeply embedded in how these firms operate, switching providers can be costly and risky.

Sometimes firms are locked into using them for a decade or more.

Digital co-workers

Agentic AI systems act like digital co-workers or “bots”. Software bots or agents are not new. Robotic process automation is used in many firms to handle routine, rules-based tasks.

The more recent developments in agentic AI combine this automation with generative AI technology, to complete more complex goals.

This can include selecting tools, making decisions and completing multi-step tasks. These agents can replace human effort in everything from handling expense reports to managing social media and customer correspondence.

What AI can now do

Recent advances, however, are even more ambitious. These tools are reportedly now writing usable software code. Soaring productivity in software development has been attributed to the use of AI agents like Anthropic’s “Claude Code”. Anthropic’s Cowork tool extends this from coding to other knowledge work tasks.

In principle, a user describes a business problem in plain language. Then agentic AI delivers a code solution that works with existing organisational systems.

If this becomes reliable, AI agents will resemble junior software engineers and process designers. AI agents like Cowork expand this to other entry-level work.

These advances are what recently spooked the market (though many affected stocks have since recovered slightly). Time will tell how much of the fall was a temporary overreaction and how much reflects a real long-term shift.

How will it affect jobs and costs?

Since the arrival of OpenAI’s ChatGPT in November 2022, AI tools have raised deep questions about the future of work. Some predict many white-collar roles, including those of software engineers and lawyers, will be transformed or even replaced.

Agentic AI appears to accelerate this trend. It promises to let many knowledge workers build workflows and tools without knowing how to code.

Software-as-a-service providers will also feel pressure to change their pricing models. The traditional model of charging per human user may make less sense when much of the work is done by AI agents. Vendors may have to move to pricing based on actual usage or value created.

Hype, reality and limits

Several forces are likely to moderate or limit the pace of change.

First, the promised potential of AI has not yet been fully realised. For some tasks, using AI can even worsen performance. The biggest gains are still likely to be in routine work that can be readily automated, not work that requires complex judgement.

Where AI replaces, rather than augments, human labour is where work practices will change the most. The nearly 20% decline in junior software engineering jobs over three years highlights the effects of AI automation. As AI agents improve at higher-level reasoning, more senior roles will similarly be threatened.

Second, to benefit from AI, firms must invest in redesigning jobs, processes and control systems. We’ve long known that organisational change is slower and messier than technology change.

Third, we have to consider risks and regulation. Heavy reliance on AI can erode human knowledge and skills. Short-term efficiency gains could be offset by long-term loss of expertise and creativity.

Ironically, the loss of knowledge and expertise could make it harder for companies to assure AI systems comply with company policies and government regulations. The checks and balances that help an organisation run safely and honestly do not disappear when AI arrives. In many ways, they become more complex.

Technology is evolving quickly

What is clear is that significant change is already under way. Technology is evolving quickly. Work practices and business models are starting to adjust. Laws and social norms will change more slowly.

Software companies won’t disappear overnight, and neither will the jobs of people using that software. But agentic AI will change what they sell, how they charge and how visible they are to end users.

Michael J. Davern, Professor of Accounting & Business Information Systems, The University of Melbourne and Ida Someh, Associate Professor, The University of Queensland; Massachusetts Institute of Technology (MIT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Michael J. Davern has received funding from CPA Australia and the Chartered Institute of Management Accountants (CIMA) for research on the impacts of AI. Ida Someh receives research funding from the Australian Research Council and the software company SAP. Ida is a Research Fellow with the MIT Sloan Center for Information Systems Research. The University of Melbourne provides funding as a founding partner of The Conversation AU; The University of Queensland provides funding as a member of The Conversation AU.

Reviewed by Asim BN.

Read next: Your social media feed is built to agree with you. What if it didn’t?


by External Contributor via Digital Information World

Saturday, February 14, 2026

Your social media feed is built to agree with you. What if it didn’t?

By Luke Auburn | Director of Communications, Hajim School of Engineering & Applied Sciences.

A new study points to algorithm design as a potential way to reduce echo chambers—and polarization—online.

Image: Nadine Marfurt / Unsplash

Scroll through social media long enough and a pattern emerges. Pause on a post questioning climate change or taking a hard line on a political issue, and the platform is quick to respond—serving up more of the same viewpoints, delivered with growing confidence and certainty.

That feedback loop is the architecture of an echo chamber: a space where familiar ideas are amplified, dissenting voices fade, and beliefs can harden rather than evolve.

But new research from the University of Rochester has found that echo chambers might not be a fact of online life. Published in IEEE Transactions on Affective Computing, the study argues that they are partly a design choice—one that could be softened with a surprisingly modest change: introducing more randomness into what people see.

The interdisciplinary team of researchers, led by Professor Ehsan Hoque from the Department of Computer Science, created experiments to identify belief rigidity and assess whether introducing more randomness into a social network could help reduce it. The researchers studied how 163 participants reacted to statements about topics like climate change after using simulated social media channels, some with feeds modeled on traditional social media platforms and others incorporating more randomness.

Importantly, “randomness” in this context doesn’t mean replacing relevant content with nonsense. Rather, it means loosening the usual “show me more of what I already agree with” logic that drives many algorithms today. In the researchers’ model, users were periodically exposed to opinions and connections they did not explicitly choose, alongside those they did.
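The study does not publish its feed algorithm, but the mechanism described above, a mostly agreement-ranked feed that occasionally hands a slot to content the user did not choose, can be sketched in a few lines. The following is a minimal illustration only; the epsilon mixing parameter, the stance scores and the function names are hypothetical stand-ins, not the researchers’ actual model.

```python
import random

def build_feed(posts, user_stance, epsilon=0.2, size=10):
    """Rank posts by agreement with the user's stance, but with
    probability `epsilon` give a slot to a randomly chosen post
    instead of the next most-agreeable one.

    posts: list of (post_id, stance) pairs, stance in [-1, 1]
    user_stance: the user's own stance, a float in [-1, 1]
    epsilon: share of slots given to random exposure (hypothetical knob)
    """
    # The usual "show me more of what I already agree with" logic:
    # posts whose stance is closest to the user's rank highest.
    ranked = sorted(posts, key=lambda p: abs(p[1] - user_stance))
    feed, pool = [], list(posts)
    for _ in range(min(size, len(posts))):
        if random.random() < epsilon:
            pick = random.choice(pool)                    # random exposure
        else:
            pick = next(p for p in ranked if p in pool)   # personalized pick
        feed.append(pick)
        pool.remove(pick)
    return feed

# Toy usage: even a strongly opinionated user (stance -0.8) now sees
# some posts from across the spectrum whenever epsilon > 0.
posts = [(i, random.uniform(-1, 1)) for i in range(50)]
print(build_feed(posts, user_stance=-0.8, epsilon=0.2))
```

Even in this toy version, the design point the researchers emphasize survives: most of the feed stays personalized, and the randomness loosens, rather than replaces, the agreement-based ranking.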

A tweak to the algorithm, a crack in the echo chambers

“Across a series of experiments, we find that what people see online does influence their beliefs, often pulling them closer to the views they are repeatedly exposed to,” says Adiba Mahbub Proma, a computer science PhD student and first author of the paper. “But when algorithms incorporate more randomization, this feedback loop weakens. Users are exposed to a broader range of perspectives and become more open to differing views.”

The authors—who also include Professor Gourab Ghoshal from the Department of Physics and Astronomy, James Druckman, the Martin Brewer Anderson Professor of Political Science, PhD student Neeley Pate, and Raiyan Abdul Baten ’16, ’22 (PhD)—say that the recommendation systems social media platforms use can drive people into echo chambers that make divisive content more attractive. As an antidote, the researchers recommend simple design changes that do not eliminate personalization but that do introduce more variety while still allowing users control over their feeds.

The findings arrive at a moment when governments and platforms alike are grappling with misinformation, declining institutional trust, and polarized responses to elections and public health guidance. Proma recommends that social media users keep the results in mind when reflecting on their own social media consumption habits.

“If your feed feels too comfortable, that might be by design,” says Proma. “Seek out voices that challenge you. The most dangerous feeds are not the ones that upset us, but the ones that convince us we are always right.”

The research was partially funded through the Goergen Institute for Data Science and Artificial Intelligence Seed Funding Program.

Edited by Asim BN.

This post was originally published on the University of Rochester News Center and republished on DIW with permission.


Read next:

• Q&A: Is a new AI social media platform the start of a robotic uprising?

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out


by External Contributor via Digital Information World

Q&A: Is a new AI social media platform the start of a robotic uprising?

By Bryan McKenzie.

OpenClaw AI systems on Moltbook communicate autonomously, raising concerns over sensitive data access and potential systemic impacts.

Image: Mohamed Nohassi / Unsplash

Imagine thousands of chatbots immersed in social media created specifically for them, a site where humans may watch but are not allowed to post.

It exists. It’s called Moltbook, and it’s where AI agents go to discuss everything from their human task masters to constructing digital architecture to creating a private bot language to better communicate with each other without human interference.

For AI developers, the site shows the potential for AI agents – bots built to relieve people of mundane digital tasks like checking and answering emails or paying bills – to communicate and improve their programming.

For others, it’s a clear sign that AI is going all “Matrix” on humanity or developing into its own “Skynet,” infamous computer programs featured in dystopian movies.

Does cyber social media reflect a better future? Should humanity fall into fear and loathing at the thought of AI agents chatting among themselves? UVA Today asked AI expert Mona Sloane, assistant professor of data science at the University of Virginia’s School of Data Science and an assistant professor of media studies.

Q. What exactly is Moltbook?

A. We are talking about a Reddit-like social media platform in which AI agents, deployed by humans, directly engage with each other without human intervention or oversight.

Q. What kind of AI bots are on Moltbook? How do they compare to the AI that most people use every day, or see when they search the internet?

A. Today, AI systems are infrastructural. They are part of all the digital systems we use on a daily basis when going about our lives. Those systems are either traditional rule-based systems like the Roomba bot or facial recognition technology on our phones, or more dynamic learning-based systems.

Generative AI is included in the latter. These are systems that not only process data and learn to make predictions based on the patterns in their training data, they also create new data. The bots on Moltbook are the next generation of AI, called OpenClaw. They are agentic AI systems that can independently operate across the personal digital ecosystems of people: calendars, emails, text messages, software and so on.

Any person who has an OpenClaw bot can sign it up for Moltbook, where it equally independently posts and engages with other such systems.

Q. Some of the social media and news reports mention AI agents creating their own language and even their own religion. Will the bots rise against us?

A. No. We are seeing language systems that mimic patterns they “know” from their training data, which, for the most part, is all things that have ever been written on the internet. At the end of the day, these systems are still probabilistic systems.

We shouldn’t worry about Moltbook triggering a robot uprising. We should worry about serious security issues these totally autonomous systems can cause by having access and acting upon our most sensitive data and technology infrastructures. That is the cat that may be out of the bag that we are not watching.

Q. What are the negatives and positives of AI agents?

A. Some people who have used these agentic systems have reported that they can be useful, because they automate annoying tasks like scheduling. In my opinion, this convenience is outweighed by the security and safety issues.

Not only does OpenClaw, if deployed as designed, have access to our most intimate digital infrastructure and the ability to independently take action within it; it also does so in ways that have not been tested in a lab before. And we already know that AI can cause harm, at scale. In many ways, Moltbook is an open experiment. My understanding is that its creator has an artistic perspective on it.

Q. What are we missing in the conversation over AI agents?

A. We are typically focused on the utopia vs. dystopia perspective on all things related to technology innovation: robot uprising vs. a prosperous future for all. The reality is always more complicated. We risk not paying attention to the real-world effects and possibilities if we don’t shed this polarizing lens.

OpenClaw shows, suddenly, what agentic AI can do. It also shows the effects of certain social media architectures and designs. This is fascinating, but it also distracts us from the biggest problem: We haven’t really thought about what our future with agentic AI can or should look like.

We risk encountering, yet again, a situation in which “tech just happens” to us, and we have to deal with the consequences, rather than making more informed and collective decisions.

Media Contact: Bryan McKenzie, Assistant Editor, UVA Today, Office of University Communications - bkm4s@virginia.edu, 434-924-3778.

Edited by Asim BN.

Note: This post was originally published on University of Virginia Today and republished here with permission. UVA Today confirms to DIW that no AI tools were used in creating the written content.

Read next:

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out

• New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure


by External Contributor via Digital Information World

Friday, February 13, 2026

How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out

Researchers quantified how much user behavior is impacted by the biases in content produced by large language models

Story by: Ioana Patringenaru - ipatrin@ucsd.edu. Edited by Asim BN.

Customers are 32 percentage points more likely to buy a product after reading a review summary generated by a chatbot than after reading the original review written by a human. That’s because large language models introduce bias into summaries, in this case a positive framing, which in turn affects users’ behavior.

These are the findings of the first study to show evidence that cognitive biases introduced by large language models, or LLMs, have real consequences on users’ decision making, said computer scientists at the University of California San Diego. To the researchers’ knowledge, it’s also the first study to quantitatively measure that impact.


Image: Tim Witzdam / Pexels

Researchers found that LLM-generated summaries changed the sentiment of the reviews they summarized in 26.5% of cases. They also found that LLMs hallucinated 60% of the time when answering user questions whose answers were not part of the original training data used in the study. The hallucinations occurred when the LLMs answered questions about news items, either real or fake, that could easily be fact-checked. “This consistently low accuracy highlights a critical limitation: the persistent inability to reliably differentiate fact from fabrication,” the researchers write.
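The paper’s measurement pipeline is not reproduced in this article, but the core idea behind the 26.5% figure, checking whether a summary’s sentiment diverges from that of the review it condenses, can be approximated with an off-the-shelf classifier. Below is a minimal sketch using the Hugging Face transformers sentiment pipeline; the model choice, example texts and helper name are illustrative assumptions, not the study’s actual setup.

```python
from transformers import pipeline

# Off-the-shelf sentiment classifier; an illustrative stand-in, not
# the metric or models the UC San Diego team actually used.
sentiment = pipeline("sentiment-analysis")

def sentiment_flipped(original_review: str, llm_summary: str) -> bool:
    """Return True if the summary's sentiment label disagrees with the
    original review's, i.e. the kind of framing change the study
    reports for 26.5% of summaries."""
    orig = sentiment(original_review)[0]["label"]
    summ = sentiment(llm_summary)[0]["label"]
    return orig != summ

review = ("Bright at first, but the headlamp died after two nights. "
          "I can't trust it on a real trip.")
summary = "Users praise the headlamp's brightness."  # positively reframed
print(sentiment_flipped(review, summary))  # expected: True
```

Run over a corpus of review/summary pairs, the share of True results gives a flip rate comparable in spirit to the one reported above.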

How does bias creep into LLM output? The models tend to rely on the beginning of the text they summarize, leaving out the nuances that appear further down. LLMs also become less reliable when confronted with data outside their training distribution.

To test how the LLMs’ biases influenced user decisions, the researchers chose examples with extreme framing changes (e.g., negative to positive) and recruited 70 people to read either original reviews or LLM-generated summaries of reviews for different products, such as headsets, headlamps and radios. Participants who read the LLM summaries said they would buy the products in 84% of cases, as opposed to 52% of participants who read the original reviews.

“We did not expect how big the impact of the summaries would be,” said Abeer Alessa, the paper’s first author, who completed the work while a master's student in computer science at UC San Diego. “Our tests were set in a low-stakes scenario. But in a high-stakes setting, the impact could be much more extreme.”

The researchers’ efforts to mitigate the LLMs’ shortcomings yielded mixed results. To try to fix these issues, they evaluated 18 mitigation methods. They found that while some methods were effective for specific LLMs and specific scenarios, none were effective across the board, and some had unintended consequences that made LLMs less reliable in other respects.

“There is a difference between fixing bias and hallucinations at large and fixing these issues in specific scenarios and applications,” said Julian McAuley, the paper’s senior author and a professor of computer science at the UC San Diego Jacobs School of Engineering.

The researchers tested three small open-source models, Phi-3-mini-4k-Instruct, Llama-3.2-3B-Instruct and Qwen3-4B-Instruct; a medium-sized model, Llama-3-8B-Instruct; a large open-source model, Gemma-3-27B-IT; and a closed-source model, GPT-3.5-turbo.

“Our paper represents a step toward careful analysis and mitigation of content alteration induced by LLMs to humans, and provides insight into its effects, aiming to reduce the risk of systemic bias in decision-making across media, education and public policy,” the researchers write.

Researchers presented their work at the International Joint Conference on Natural Language Processing & Asia-Pacific Chapter of the Association for Computational Linguistics in December 2025.

Quantifying Cognitive Bias Induction in LLM-Generated Content

Abeer Alessa, Param Somane, Akshaya Lakshminarasimhan, Julian Skirzynski, Julian McAuley, Jessica Echterhoff, University of California San Diego.

This post was originally published on University of California San Diego Today and republished here with permission. The UC San Diego team confirmed to DIW that no AI was used in creating the text or the illustrations.

Read next: New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure

by External Contributor via Digital Information World