Friday, February 20, 2026

Most AI Bots Lack Published Formal Safety and Evaluation Documents, Study Finds

Story: Fred Lewsey.

Reviewed by Ayaz Khan.

An investigation into 30 top AI agents finds just four have published formal safety and evaluation documents relating to the actual bots.

Many of us now use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports.

However, a new study of the “AI agent ecosystem” suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is “dangerously lagging”.

A research team led by the University of Cambridge has found that AI developers share plenty of data on what these agents can do, while withholding evidence of the safety practices needed to assess any risks posed by AI.

The AI Agent Index, a project that includes researchers from MIT, Stanford and the Hebrew University of Jerusalem, investigated the abilities, transparency and safety of thirty “state of the art” AI agents, based on public information and correspondence with developers.

The latest update of the Index is led by Leon Staufer, a researcher studying for an MPhil at Cambridge’s Leverhulme Centre for the Future of Intelligence. It looked at available data for a range of leading chat, browser and workflow AI bots built mainly in the US and China.

The team found a “significant transparency gap”. Developers of just four AI bots in the Index publish agent-specific “system cards”: formal safety and evaluation documents that cover everything from autonomy levels and behaviour to real-world risk analyses.*

Additionally, 25 out of 30 AI agents in the Index do not disclose internal safety results, while 23 out of 30 agents provide no data from third-party testing, despite these being the empirical evidence needed to rigorously assess risk.

Known security incidents or concerns have only been published for five AI agents, while “prompt injection vulnerabilities” – when malicious instructions manipulate the agent into ignoring safeguards – are documented for two of those agents.

Of the five Chinese AI agents analysed for the Index, only one had published any safety frameworks or compliance standards of any kind.

“Many developers tick the AI safety box by focusing on the large language model underneath, while providing little or no disclosure about the safety of the agents built on top,” said Cambridge University’s Leon Staufer, lead author of the Index update.

“Behaviours that are critical to AI safety emerge from the planning, tools, memory, and policies of the agent itself, not just the underlying model, and very few developers share these evaluations.”

Most AI Developers Do Not Publish Safety and Evaluation Documents for Their AI Bots
Image: The 2025 AI Agent Index. For 198 out of 1,350 fields, no public information was found. Missing information is concentrated in 'Ecosystem Interaction' and 'Safety' categories. Only 4 agents provide agent-specific system cards.

In fact, the researchers identify 13 AI agents that exhibit “frontier levels” of autonomy, yet only four of these disclose any safety evaluations of the bot itself.

“Developers publish broad, top-level safety and ethics frameworks that sound reassuring, but are publishing limited empirical evidence needed to actually understand the risks,” Staufer said.

“Developers are much more forthcoming about the capabilities of their AI agent. This transparency asymmetry suggests a weaker form of safety washing.”

The latest annual update provides verified information across 1,350 fields for the thirty prominent AI bots, as available up to the last day of 2025.

Criteria for featuring in the Index included public availability and ease of use, and developers with a market valuation of over US$1 billion. Some 80% of the Index bots were released or had major updates in the last two years.

The Index update shows that – outside of Chinese AI bots – almost all agents depend on a few foundation models (GPT, Claude, Gemini), a significant concentration of platform power behind the AI revolution, as well as potential systemic choke points.

Also read: Generative AI has seven distinct roles in combating misinformation

“This shared dependency creates potential single points of failure,” said Staufer. “A pricing change, service outage, or safety regression in one model could cascade across hundreds of AI agents. It also creates opportunities for safety evaluations and monitoring.”

Many of the least transparent agents are AI-enhanced web browsers designed to carry out tasks on the open web on a user’s behalf: clicking, scrolling, and filling in forms for tasks ranging from buying limited-release tickets to monitoring eBay bids.

Browser agents have the highest rate of missing safety information: 64% of safety-related fields unreported. They also operate at the highest levels of autonomy.**

This is closely followed by enterprise agents, business management AI aimed at reliably automating work tasks, with 63% of safety-related fields missing. Chat agents are missing 43% of safety-related fields in the Index.***

Staufer points out that there are no established standards for how AI agents should behave on the web. Most agents do not disclose their AI nature to end users or third parties by default.****Only three agents support watermarking of generated media to identify it as from AI.

At least six AI agents in the Index explicitly use types of code and IP addresses designed to mimic human browsing behaviour and bypass anti-bot protections.

“Website operators can no longer distinguish between a human visitor, a legitimate agent, and a bot scraping content,” said Staufer. “This has significant implications for everything from online shopping and form-filling to booking services and content scraping.”

The update includes a case study on Perplexity Comet: one of the most autonomous browser-based AI agents in the Index, as well as one of the most high-risk and least transparent.

Comet is marketed on its ability to “work just like a human assistant”. Amazon has already threatened legal action over Comet not identifying itself as an AI agent when interacting with its services.

“Without proper safety disclosures, vulnerabilities may only come to light when they are exploited,” said Staufer.

“For example, browser agents can act directly in the real world by making purchases, filling in forms, or accessing accounts. This means that the consequences of a security flaw can be immediate and far-reaching.”

Staufer points out that last year, security researchers discovered that malicious content on a webpage could hijack a browser agent into executing commands, while other attacks were able to extract users' private data from connected services.

Added Staufer: “The latest AI Agent Index reveals the widening gap between the pace of deployment and the pace of safety evaluation. Most developers share little information about safety, evaluations, and societal impacts.”

“AI agents are getting more autonomous and more capable of acting in the real world, but the transparency and governance frameworks needed to manage that shift are dangerously lagging.”


by External Contributor via Digital Information World

A few weeks of X’s algorithm can make you more right-wing – and it doesn’t wear off quickly

Timothy Graham, Queensland University of Technology

A new study published today in Nature has found that X’s algorithm – the hidden system or “recipe” that governs which posts appear in your feed and in which order – shifts users’ political opinions in a more conservative direction.

Image: BoliviaInteligente / unsplash

Led by Germain Gauthier from Bocconi University in Italy, it is a rare, real-world randomised experimental study on a major social media platform. And it builds on a growing body of research that shows how these platforms can shape people’s political attitudes.

Two different algorithms

The researchers randomly assigned 4,965 active US-based X users to one of two groups.

The first group used X’s default “For You” feed. This features an algorithm that selects and ranks posts it thinks users will be more likely to engage with, including posts from accounts that they don’t necessarily follow.

The second group used a chronological feed. This only shows posts from accounts users follow, displayed in the order they were posted. The experiment ran for seven weeks during 2023.

Users who switched from the chronological feed to the “For You” feed were 4.7 percentage points more likely to prioritise policy issues favoured by US Republicans (for example, crime, inflation and immigration). They were also more likely to view the criminal investigation into US President Donald Trump as unacceptable.

They also shifted in a more pro-Russia direction in regards to the war in Ukraine. For example, these users became 7.4 percentage points less likely to view Ukrainian President Volodymyr Zelenskyy positively, and scored slightly higher on a pro-Russian attitude index overall.

The researchers also examined how the algorithm produced these effects.

They found evidence that the algorithm increased the share of right-leaning content by 2.9 percentage points overall (and 2.5 points among political posts), compared with the chronological feed.

It also significantly demoted the share of posts from traditional news organisations’ accounts while promoting or boosting posts from political activists.

One of the most concerning findings of the study is the longer-term effects of X’s algorithmic feed. The study showed the algorithm nudged users towards following more right-leaning accounts, and that the new following patterns endured even after switching back to the chronological feed.

In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.

One piece of a much bigger picture

This new study supports findings of similar studies.

For example, a study in 2022, before Elon Musk had bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than the left in six out of the seven countries.

An experimental study from 2025 re-ranked X feeds to reduce exposure to content that expresses antidemocratic attitudes and partisan animosity. They found this shifted feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer”. This is a shift the authors argued would have normally taken about three years to occur organically in the general population.

My own research offers another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analysed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.

Our findings unearthed a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13 – the day of the assassination attempt on Trump. Views on Musk’s posts surged by 138%, retweets by 238%, and likes by 186%. This far outstripped increases on other accounts.

After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was tilted thereafter towards right-leaning accounts – a trend that continued for the remainder of the time period we analysed in that study.

Not a niche product

This matters because we are not talking about a niche product.

X has more than 400 million users globally. It has become embedded as infrastructure – a key source of political and social communication. And once technical systems become infrastructure, they can become invisible – like background objects that we barely think about, but which shape society at its foundations and can be exploited under our noses.

Think of the overpass bridges Robert Moses designed in New York in the 1930s. These seemed like inert objects. But they were designed to be very low, to exclude people of colour from taking buses to recreation areas in Long Island.

Similar to this, the design and governance of social media platforms also has real consequences.

The point is that X’s algorithms are not neutral tools. They are an editorial force, shaping what people know, whom they pay attention to, who the outgroup is and what “we” should do about or to them – and, as this new study shows, what people come to believe.

The age of taking platform companies at their word about the design and effects of their own algorithms must come to an end. Governments around the world – including in Australia where the eSafety Commissioner has powers to drive “algorithmic transparency and accountability” and require that platforms report on how their algorithms contribute to or reduce harms – need to mandate genuine transparency over how these systems work.

When infrastructure become harmful or unsafe, nobody bats an eye when governments do something to protect us. The same needs to happen urgently for social media infrastructures.The Conversation

Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Timothy Graham receives funding from the Australian Research Council (ARC) for the Discovery Project, 'Understanding and Combatting "Dark Political Communication"'.

Read next: Generative AI has seven distinct roles in combating misinformation


by External Contributor via Digital Information World

Thursday, February 19, 2026

Generative AI has seven distinct roles in combating misinformation

Reviewed by Ayaz Khan.

Generative AI can be used to combat misinformation. However, it can also exacerbate the problem by producing convincing manipulations that are difficult to detect and can quickly be copied and disseminated on a wide scale. In a new study, researchers have defined seven distinct roles that AI can play in the information environment and analysed each role in terms of its strengths, weaknesses, opportunities and risks.

“One important point is that generative AI has not just one but several functions in combating misinformation. The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and we need to create more effective policies,” says Thomas Nygren, Professor at Uppsala University, who conducted the study together with colleagues at the University of Cambridge, UK, and the University of Western Australia.

From fact-checking to influence – same capacity has double-edged effects

The study is an overview in which researchers from a range of scholarly disciplines have reviewed the latest research on how generative AI can be used in various parts of the information environment. These uses range from providing information and supporting fact-checking to influencing opinion and designing educational interventions, and the study considers the strengths, weaknesses, opportunities and risks associated with each use.

The researchers chose to work with a SWOT framework as this leads to a more practical basis for decisions than general assertions that ‘AI is good’ or ‘AI is dangerous’. A system can be helpful in one role but also harmful in the same. Analysing each role using SWOT can help decision-makers, schools and platforms discuss the right measures for the right risk.

AI can serve several functions

“The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple ‘solution’ but a technology that can serve several functions at the same time. We identified recurrent patterns in the way AI is used to obtain information, to detect and manage problems, to influence people, to support collaboration and learning, and to design interactive training environments. These functions were summarised in seven roles,” Nygren explains.

The seven roles that the researchers identified as their research evolved were informer, guardian, persuader, integrator, collaborator, teacher and playmaker (see the fact box). The point of the roles is that they can serve as a checklist: they help us to see how each role can contribute to strengthening the resilience of society to misinformation, but also how each role entails specific vulnerabilities and risks. The researchers therefore analysed each role using a SWOT approach: what strengths and opportunities it embodies, but also what weaknesses and threats need to be managed.

“AI must be implemented responsibly”

“We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale. However, risks such as hallucinations, in other words, that AI comes out with ‘facts’ that are wrong, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly. Clear policies are therefore needed on the permissible use of AI.”

The researchers particularly underline the need for:

  • Regulations and clear frameworks for the permissible use of AI in sensitive information environments;
  • Transparency about AI-generated content and systemic limitations;
  • Human oversight where AI is used for decisions, moderation or advice;
  • AI literacy to strengthen the ability of users to evaluate and question AI answers.

“The analysis shows that generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation, but having said that, there is a risk that excessive use could be detrimental for the development of knowledge and make us lazy and ignorant and therefore more easily fooled. Consequently, with the rapid pace of developments, it’s important to constantly scrutinise the roles of AI as ‘teacher’ and ‘collaborator’, like the other five roles, with a critical and constructive eye,” Nygren emphasises.

Article: Nygren, T., Spearing, E. R., Fay, N., Vega, D., Hardwick, I. I., Roozenbeek, J., & Ecker, U. K. H. (2026). The seven roles of generative AI: Potential & pitfalls in combatting misinformation. Behavioral Science & Policy, 0(0). DOI 10.1177/23794607261417815.

For more information: Thomas Nygren, Professor of Education at the Department of Education, Uppsala University, thomas.nygren@edu.uu.se, +46-73-646 86 49

FACT BOX:

The seven roles of generative AI: potential and pitfalls (Nygren et al. 2026).

1) Informer

  • Strengths/opportunities: Can make complex information easier to understand, translate and adapt language, can offer a quick overview of large quantities of information.
  • Problems/risks: Can give incorrect answers (‘hallucinations’), oversimplify and reproduce training data biases without clearly disclosing sources.

2) Guardian

  • Strengths/opportunities: Can detect and flag suspect content on a large scale, identify coordinated campaigns and contribute to a swifter response to misinformation waves.
  • Problems/risks: Risk of false positives/negatives (irony, context, legitimate controversies), distortions in moderation, and lack of clarity concerning responsibility and rule of law.

3) Persuader

  • Strengths/opportunities: Can support correction of misconceptions through dialogue, refutation and personalised explanations; can be used in pro-social campaigns and in educational interventions.
  • Problems/risks: The same capacity can be used for manipulation, microtargeted influence and large-scale production of persuasive yet misleading messages – often quickly and cheaply.

4) Integrator

  • Strengths/opportunities: Can structure discussions, summarise arguments, clarify distinctions, and support deliberation and joint problem-solving.
  • Problems/risks: Can create false balance, normalise errors through ‘neutral synthesis’, or indirectly control problem formulation and interpretation.

5) Collaborator

  • Strengths/opportunities: Can assist in analysis, writing, information processing and idea development; can support critical review by generating alternatives, counterarguments and questions.
  • Problems/risks: Risk of overconfidence and cognitive outsourcing; users can fail to realise that the answer is based on uncertain assumptions and that the system lacks real understanding.

6) Teacher

  • Strengths/opportunities: Can give swift, personalised feedback and create training tasks at scale; can foster progression in source criticism and digital skills.
  • Problems/risks: Incorrect or biased answers can be disseminated as ‘study resources’; risk that teaching becomes less investigative if students/teachers uncritically accept AI-generated content.

7) Playmaker

  • Strengths/opportunities: Can support design of interactive, gamified teaching environments and simulations that train resilience to manipulation and misinformation.
  • Problems/risks: Risk of simplifying stereotypes, ethical and copyright problems, and that gaming mechanisms can reward the wrong type of behaviour if the design is not well considered.
Note: This post was originally published by Uppsala University and republished on Digital Information World (DIW) with permission. The university team confirmed to DIW via email that no AI tools were used in creating the text.

Image: Mikhail Nilov / Pexels

Read next:

• Research Shows How Companies Can Gain Advantage by Prioritizing Customer Privacy

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out
by Press Releases via Digital Information World

Parents Turn to STEM and Hands-On Play to Limit Daily Screen Hours

Reviewed by Ayaz Khan.

Half of America’s young parents are struggling to bond with their kids, and the culprit is nearly inescapable: screen time.

The poll of 2,000 U.S. millennial and Gen Z parents found 42% of them feel disconnected from their children due to technology, with kids spending an average of four hours in front of screens on a typical day.

As a result, parents said they notice their kids are easily distracted (42%), get less physical activity (42%), can be irritable (34%), have trouble sleeping (30%) and disengage with people around them (30%).

Commissioned by Lowe’s and conducted by Talker Research, the study revealed over half of parents (54%) try to encourage less screen time for their kids by providing them with more hands-on activities and outlets, like playing with toys (68%), helping around the home (66%) and coloring (66%).

Other activities, like crafts (63%), reading (60%), building (44%) and STEM-based activities (42%) were also popular ways parents get their kids away from screens.

This can be harder in the winter season, as more than half (56%) of parents say screen time increases when temperatures drop or the weather turns bad.

Parents spend an average of 10 hours per week looking for non-screen activities for their kids and wish they had more free activities for their kids nearby.

Those activities include things they can do as a family (58%), be outdoors (56%), DIY workshops (48%), creative arts and crafts (48%) and educational activities (39%).

For many parents, the inspiration to encourage hands-on activities away from screens comes from their own childhood.

Nearly half (46%) recalled frequently participating in DIY projects with their own parents growing up, and they recall feelings of happiness (58%), creativity (56%), satisfaction (47%) and confidence (40%) from those experiences.

With those fond memories in mind, seven in 10 have tried to recreate those activities with their own children.

Eighty-seven percent of parents believe doing DIY projects with their kids would help strengthen their bond, in addition to teaching patience (63%), expressing creativity (59%) and learning how to work better with others (56%).

Image: Eren Li / Pexels

This post was originally published on TalkerResearch.

Read next: Not all gigs are equal: Informal self-employment linked to lower pay, poorer health and instability
by External Contributor via Digital Information World

Wednesday, February 18, 2026

Global collaboration to limit air pollution flowing across borders could save millions of lives

This story is adapted from a version published by Cardiff University. Read the original version here.

Ambitious climate action to improve global air quality could save up to 1.32 million lives per year by 2040, according to a new study.

Image: Tarikul Raana / Pexels

Researchers from CU Boulder and Cardiff University in the United Kingdom have found that developing countries, especially, rely on international action to improve air quality, because much of their pollution comes from outside their borders.

The new study, published in Nature Communications, analyzed cross-border pollution “exchanges” for 168 countries and revealed that if countries do not collaborate effectively on climate policy, it could lead to greater health inequality for poorer nations that have less control over their own air quality.

The team’s work focuses on the impact of exposure to fine particulate matter, what scientists call “PM2.5,” which is the leading environmental risk factor for premature deaths globally.

“Some climate policies could inadvertently make air pollution inequalities worse, specifically for developing nations that might rely heavily on their neighbors for clean air,” said Daven Henze, senior author of the new study and professor at the Paul M. Rady Department of Mechanical Engineering at CU Boulder.

“Holistic climate policy should therefore evaluate how dependent a nation is on others’ emissions reductions, how mitigation scenarios reshape air-pollution flows across borders, and whether global efforts are helping or harming equity.”

Lead author Omar Nawaz at the Cardiff University School of Earth and Environmental Sciences said: “While we know climate action can benefit public health, most research has ignored how this affects the air pollution that travels across international borders and creates inequalities between countries.

“Our analysis shows how climate mitigation decisions made in wealthy nations directly affect the health of people in the Global South, particularly in Africa and Asia.”

The research team used advanced atmospheric modeling and NASA satellite data to simulate different future emissions scenarios for the year 2040. The researchers used this data and a health burden estimation to understand how countries could make an impact through climate policy.

“We were surprised to find that although Asia sees the most total benefits from climate action to its large share of the population, African countries are often the most reliant on external action, with the amount of health benefits they get from climate mitigation abroad increasing in fragmented future scenarios,” said Nawaz.

According to the researchers’ projections, the balance of pollution flowing across borders could shift, even if total global air pollution declines.

These insights could inform policymaking and global aid work that seeks to address climate change.

In a sustainable socioeconomic development scenario, for example, pollution flowing across the U.S.-Mexico border would substantially decrease. Mexico would contribute much more to the health benefits that come from this shift than the United States.

The team plans to do further research exploring how climate change itself alters the weather patterns that transport this pollution, as well as looking at other pollutant types like ozone and organic aerosols.

“Ozone is transported even further in the atmosphere than PM2.5, contributes to significant health burdens, and shares common emission sources with PM2.5. We thus have follow-up studies in the works to investigate the interplay between climate policies and long-range health co-benefits associated with both species simultaneously,” said Henze.

Note: This post was originally published by University of Colorado Boulder Today and republished on Digital Information World with permission.

Edited by Asim BN.

Read next: Is social media addictive? How it keeps you clicking and the harms it can cause
by External Contributor via Digital Information World

Is social media addictive? How it keeps you clicking and the harms it can cause

By Quynh Hoang, University of Leicester

Reviewed by Ayaz Khan

For years, big tech companies have placed the burden of managing screen time squarely on individuals and parents, operating on the assumption that capturing human attention is fair game.

Image: Rapha Wilde / unsplash

But the social media sands may slowly be shifting. A test-case jury trial in Los Angeles is accusing big tech companies of creating “addiction machines”. While TikTok and Snapchat have already settled with the 20-year-old plaintiff, Meta’s CEO, Mark Zuckerberg, is due to give evidence in the courtroom this week.

The European Commission recently issued a preliminary ruling against TikTok, stating that the app’s design – with features such as infinite scroll and autoplay – breaches the EU Digital Services Act. One industry expert told the BBC that the problem is “no longer just about toxic content, it’s about toxic design”.

Meta and other defendants have historically argued that their platforms are communication tools, not traps, and that “addiction” is a mischaracterisation of high engagement.

“I think it’s important to differentiate between clinical addiction and problematic use,” Instagram chief Adam Mosseri testified in the LA court. He noted that the field of psychology does not classify social media addiction as an official diagnosis.

Tech giants maintain that users and parents have the agency and tools to manage screen time. However, a growing body of academic research suggests features like infinite scrolling, autoplay and push notifications are engineered to override human self-control.

Video: CBS News.

A state of ‘automated attachment’

My research with colleagues on digital consumption behaviour also challenges the idea that excessive social media use is a failure of personal willpower. Through interviews with 32 self-identified excessive users and an analysis of online discussions dedicated to heavy digital use, we found that consumers frequently enter a state of “automated attachment”.

This is when connection to the device becomes purely reflexive, as conscious decision-making is effectively suspended by the platform’s design.

We found that the impulse to use these platforms sometimes occurs before the user is even fully conscious. One participant admitted: “I’m waking up, I’m not even totally conscious, and I’m already doing things on the device.”

Another described this loss of agency vividly: “I found myself mindlessly opening the [TikTok] app every time I felt even the tiniest bit bored … My thumb was reaching to its old spot on reflex, without a conscious thought.”

Social media proponents argue that “screen addiction” isn’t the same as substance abuse. However, new neurophysiological evidence suggests that frequent engagement with these algorithms alters dopamine pathways, fostering a dependency that is “analogous to substance addiction”.

Strategies that keep users engaged

The argument that users should simply exercise willpower also needs to be understood in the context of the sophisticated strategies platforms employ to keep users engaged. These include:

1. Removing stopping cues

Features like infinite scroll, autoplay and push notifications create a continuous flow of content. By eliminating natural end-points, the design effectively shifts users into autopilot mode, making stopping a viewing session more difficult.

2. Variable rewards

Similar to a slot machine, algorithms deliver intermittent, unpredictable rewards such as likes and personalised videos. This unpredictability triggers the dopamine system, creating a compulsive cycle of seeking and anticipation.

3. Social pressure

Features such as notifications and time-limited story posts have been found to exploit psychological vulnerabilities, inducing anxiety that for many users can only be relieved by checking the app. Strategies employing “emotional steering” can take advantage of psychological vulnerabilities, such as people’s fear of missing out, to instil a sense of social obligation and guilt if they attempt to disconnect.

Vulnerability in children

The issue of social media addiction is of particular concern when it comes to children, whose impulse control mechanisms are still developing. The US trial’s plaintiff says she began using social media at the age of six, and that her early exposure to these platforms led to a spiral into addiction.

A growing body of research suggests that “variable reward schedules” are especially potent for developing minds, which exhibit a heightened sensitivity to rewards. Children lack the cognitive brakes to resist these dopamine loops because their emotional regulation and impulsivity controls are still developing.

Lawyers in the US trial have pointed to internal documents, known as “Project Myst”, which allegedly show that Meta knew parental controls were ineffective against these engagement loops. Meta’s attorney, Paul Schmidt, countered that the plaintiff’s struggles stemmed from pre-existing childhood trauma rather than platform design.

The company has long argued that it provides parents with “robust tools at their fingertips”, and that the primary issue is “behavioural” – because many parents fail to use them.

Our study heard from many adults (mainly in their 20s) who described the near-impossibility of controlling levels of use, despite their best efforts. If these adults cannot stop opening apps on reflex, expecting a child to exercise restraint with apps that affect human neurophysiology seems even more unrealistic.

Potential harms of overuse

The consequences of social media overuse can be significant. Our research and recent studies have identified a wide range of potential harms.

These include “psychological entrapment”. Participants in our study described a “feedback loop of doom and despair”. Users can turn to platforms to escape anxiety, only to find that the scrolling deepens their feelings of emptiness and isolation.

Excessive exposure to rapidly changing, highly stimulating content can fracture the user’s attention span, making it harder to focus on complex real-world tasks.

And many users describe feeling “defeated” by the technology. Social media’s erosion of autonomy can leave people unable to align their online actions – such as overlong sessions – with their intentions.

A ruling against social media companies in the LA court case, or enforced redesign of their apps in the EU, could have profound implications for the way these platforms are operated in future.

But while big tech companies have grown at dizzying rates over the past two decades, attempts to rein in their products on both sides of the Atlantic remain slow and painstaking. In this era of “use first, legislate later”, people all over the world, of all ages, are the laboratory mice.The Conversation

Quynh Hoang, Lecturer in Marketing and Consumption, Department of Marketing and Strategy, University of Leicester

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: AI could rebalance power between people and the services they use


by External Contributor via Digital Information World

Tuesday, February 17, 2026

Survey Finds 62% of Americans Concerned About Personalized Pricing; 48% More Likely to Shop Where Opt-Out Is Offered

Reviewed by Asim BN.

Is the age of “surveillance pricing” upon us? Most Americans hope not, according to new research.

The concept of retailers potentially using AI to set individual pricing for products based on a user’s data or purchasing history has naturally prompted concerns over privacy and fairness.

Six in 10 (62%) Americans polled by Talker Research said they are either somewhat (33%) or very concerned (29%) about the prospect of having personalized pricing based on factors like their browsing habits, location or other data points.

Just 10% of the 2,000 people studied said they were unconcerned about the prospect that this may one day come into practice.

California’s attorney general is currently examining how businesses use data to individualize prices, while New York officials enacted a law last year requiring retailers to have a clear disclaimer if setting prices based on personal data, Forbes reports.

The implications of introducing pricing models in this way may have very real implications.

If they discovered they were charged more for a product or service than someone else as a result of their personal data or purchase history being considered, two-thirds (66%) of Americans would stop shopping at that particular retailer, according to results.

One in six (17%) said they would continue to shop regardless and the same number (17%) were unsure as to how they’d react should they be charged more for something based on their personal information.

Is there an argument that such models could actually be more fair for consumers? Overall, respondents were more inclined to suggest personalized pricing (or algorithmic pricing) as less fair (37%) overall than fixed pricing.

However, results were not unanimous, with 30% feeling it could actually be more fair and 33% feeling it’s about the same fairness either way.

Perhaps tellingly, it seems choice is key to Americans in the matter of personalized pricing. Close to half (48%) said they’d be more likely to shop at a retailer that allowed them to opt out of data-based pricing, even if it meant missing out on personalized discounts and deals.

Many are not interested either way, with 42% saying the ability to opt out makes no difference, while just 10 percent say the ability to opt out of personal pricing would make them less likely to buy from the retailer.

How concerned or unconcerned are you about online retailers using your personal data (purchase history, browsing, location, etc.) to set different prices for different shoppers?

Very concerned – 29%
Somewhat concerned – 33%
Neither concerned or unconcerned – 28%
Somewhat unconcerned – 6%
Very uncensored – 4%

Image: MART PRODUCTION / Pexels

This post was originally published on Talker Research and is republished here on DIW in accordance with their republishing guidelines.

Read next: AI threatens to eat business software – and it could change the way we work
by External Contributor via Digital Information World