Tuesday, February 24, 2026

AI energy use: New tools show which model consumes the most power, and why

Reviewed by Ayaz Khan.

Software can help developers assess AI model energy use, which is necessary to lower costs and reduce strain on the grid

Image: Brett Sayles / Pexels

AI users and developers can now measure the amount of electricity various AI models consume to complete tasks, using open-source software and an online leaderboard developed at the University of Michigan.

Companies can download the software to evaluate private models run on private hardware. And while the software can’t evaluate the energy costs of queries run on proprietary AI models at private data centers, it has allowed U-M engineers to measure the power used by open-weight AI models, whose parameters are publicly available. The power requirements can be viewed on an online leaderboard, which was updated this month. The results reveal trends in how AI energy use varies with model design and implementation.

“If you want to optimize energy efficiency and minimize environmental impact, knowing the energy requirements of the models is critical, but popular benchmarks for assessing AI ignore this aspect of performance,” said Mosharaf Chowdhury, associate professor of computer science and engineering and the corresponding author of a study describing the software.

Tools for informed decision-making

The researchers measured energy use across several different tasks, including chatting, video and image generation, problem solving and coding. For some tasks, the energy requirements of open-weight models can vary by a factor of 300. With the results, Chowdhury’s team has developed tutorials for developers to learn how to measure and lower the energy costs of their models. They gave their latest tutorial at the Neural Information Processing Systems (NeurIPS) Conference in December.

The researchers designed their software with partial funding from the National Science Foundation to help solve AI’s growing energy demands. Between 80% and 90% of the sector’s energy is consumed when a trained model processes a request at remote data centers—what the industry calls inference.

As AI models grow in size and are used more often, they need more power. Data centers in the United States consumed about 4% of the country’s total power in 2024—or about as much as Pakistan uses in a year. Data centers are projected to use twice as much power by 2030, according to a study by the Pew Research Center. But many estimates of AI’s growth rely on back-of-the-envelope calculations, made by multiplying the maximum power draw per GPU by the number of GPUs. This yields only an upper bound on the possible energy cost.
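That ceiling is simple to reproduce. Here is a minimal sketch of such a back-of-the-envelope estimate, using illustrative, assumed figures for per-GPU power draw and fleet size (not values from the study):

```python
# Back-of-the-envelope ceiling on AI energy use, as described above:
# maximum power draw per GPU multiplied by the number of GPUs.
# Both constants below are illustrative assumptions, not measured values.

TDP_WATTS = 700        # assumed maximum rated draw of one high-end GPU
NUM_GPUS = 100_000     # assumed fleet size
HOURS_PER_YEAR = 8_760

peak_power_mw = TDP_WATTS * NUM_GPUS / 1e6             # power ceiling in MW
annual_ceiling_gwh = peak_power_mw * HOURS_PER_YEAR / 1e3  # energy ceiling in GWh

print(f"peak draw ceiling: {peak_power_mw:.0f} MW")
print(f"annual energy ceiling: {annual_ceiling_gwh:.1f} GWh")
```

Because real GPUs rarely run at their rated maximum, this is only an upper bound; direct measurement, as the U-M tool provides, can come in far below it.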

“A lot of people are concerned about AI’s growing energy use, which is fair,” Chowdhury said. “However, many who worry can be overly pessimistic, and those who want more data centers are often overly optimistic. The reality is not black and white, and there’s a lot we don’t know because nobody is making direct measurements of AI power use available. Our tool can provide more accurate data for better decision-making.”

Why do some AI models use more power?

The team’s assessments of open-weight models revealed larger trends in how an AI’s design affects its energy requirements. A key factor was the number of generated tokens—the basic units of data processed by AI. In LLMs, tokens are pieces of words, so wordier models tend to use more energy than concise models. Problem-solving or reasoning models also use more energy because they generate “chains of thought” that contain 10 to 100 times more tokens per request.
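The token effect can be illustrated with a toy linear model; the per-token energy cost below is a hypothetical placeholder, since real values depend on model size, hardware and batching:

```python
# Toy model: decoding energy grows roughly linearly with the number of
# generated tokens, so a chain-of-thought answer costs proportionally more.
ENERGY_PER_TOKEN_J = 0.5   # hypothetical joules per generated token

def response_energy_joules(num_tokens: int) -> float:
    """Linear estimate of the decoding energy for one response."""
    return num_tokens * ENERGY_PER_TOKEN_J

concise = response_energy_joules(150)         # a short, direct answer
reasoning = response_energy_joules(150 * 50)  # mid-range of the 10-100x span
print(reasoning / concise)  # 50.0: the energy ratio tracks the token ratio
```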

But the energy requirement of even a single model can change, depending on how it’s run at the data center. Processing queries in batches, for example, will result in less energy use at the data center overall, although larger batches take longer to run. The choice of software for allocating computer memory to queries can also impact AI’s energy requirements.

“There are many ways to deploy AI and translate what the model wants to do into computations on the hardware,” said Jae-Won Chung, U-M doctoral student in computer science and engineering and the study’s first author. “Our tool can automate the search through that parameter space and find the most efficient set of parameters based on the user’s needs.”

The research was also supported by grants and gifts from VMware, the Mozilla Foundation, Cisco, Ford, GitHub, Salesforce, Google and the Kwanjeong Educational Foundation.

Contact: Kate McAlpine.

This post was originally published by the University of Michigan News and is republished here with permission.

Read next: Study: AI chatbots provide less-accurate information to vulnerable users
by External Contributor via Digital Information World

How shaming unethical brands makes companies improve their behaviour

Janet Godsell, Loughborough University and Nikolai Kazantsev, University of Cambridge

Recent investigations have uncovered forced labour in agricultural supply chains, illegal fishing feeding supermarket freezers, deforestation embedded in everyday food products, and unsafe conditions in factories producing “sustainable” fashion. These harms were not visible on labels. They surfaced only when journalists, whistleblowers or activists exposed them.

Image: Atoms / Unsplash

And when they did, something predictable happened. Consumers felt uneasy. Brands issued statements. Promises were made. But the force that set change in motion was not regulation. It was consumers.

Discovering that an ordinary purchase may be tied to exploitation or environmental damage creates a jolt of personal responsibility. In our research, we found that when environmental consequences are clearly linked to people’s own buying choices, many are willing to switch products — especially when credible alternatives exist.

But guilt is private. It nudges personal behaviour. It does not automatically reshape systems. The shift happens when private discomfort becomes public voice.

Consumers are often also the first to make hidden environmental harms visible. They post evidence on social media. They question corporate claims. They compare sustainability promises with independent reporting. They organise petitions, boycotts and review campaigns. By shining a spotlight on the truth, they shift the scrutiny from shoppers to brands.

That shift matters because modern brands depend on trust. Reputation is an asset. When sustainability claims are publicly challenged, credibility is at risk. Research in organisational behaviour shows that firms respond quickly to threats to legitimacy. Reputational damage affects customer loyalty, investor confidence and regulatory attention.

In many high-profile cases, supply chain reforms have followed intense public scrutiny rather than quiet compliance checks. Leaders may not act out of moral awakening — but they do act when inaction becomes costly to their reputation.

Consumers can trigger the emotional chain reaction. They feel guilt. They seek information. They speak collectively. That collective voice generates corporate shame.

Sustainability professor Mike Berners-Lee argues in his book A Climate of Truth that demanding honesty is one of the most powerful climate actions available to citizens. Raising standards of truthfulness in business and media changes incentives. When the gap between what companies say and what they do becomes visible, maintaining that gap becomes harder.

Our research explores how that visibility can be strengthened. The findings were clear. When environmental and social consequences are personalised and traceable, sustainability feels less distant. People see both their own role and the role of particular firms. That dual awareness encourages two responses: behavioural change driven by guilt and corporate accountability driven by shame.

Shame works because it is social. Brands care about how they are seen. When the negative environmental and social effects of supply chains can be publicly connected to named products, corporate narratives become contestable in real time.

Making supply chains socially visible

The technology to improve transparency already exists. Companies track goods through logistics systems, supplier databases and digital product-tagging systems that collect detailed information about sourcing and production. The barrier is not data collection. It is disclosure.

Environmental indicators — carbon emissions, water use, land conversion risk, labour standards compliance — can be linked to products through QR codes or retail apps. Comparable reporting standards would ensure consistency. Simple digital interfaces would make information accessible. Social sharing tools would allow consumers to compare and discuss findings publicly.

Social media is crucial. It already enables workers, communities and campaigners to challenge corporate messaging. Integrating verified supply chain data into these spaces would shift transparency from crisis response to everyday expectation.

This strategy, which puts behaviour change at its centre, could work more effectively than regulation or green marketing campaigns alone.

Regulation is essential but often slow and uneven across borders. Marketing campaigns can highlight selective improvements while leaving deeper practices untouched. Transparency activated by collective consumer voice operates differently. It aligns emotional motivation with reputational consequence.

Consumers are not passive recipients of information. They are catalysts. By feeling the first twinge of guilt, asking harder questions and speaking together, they create the conditions under which companies experience shame. When shame threatens trust and market position, change becomes rational and inevitable.

Shame is uncomfortable. But when directed at opaque systems rather than consumers, it can be powerful. By demanding truth and making supply chains socially visible, consumers can push businesses towards greater transparency — and, ultimately, towards more sustainable practice.


Janet Godsell, Dean and Professor of Operations and Supply Chain Strategy, Loughborough Business School, Loughborough University and Nikolai Kazantsev, Postdoctoral Researcher, Institute for Manufacturing, University of Cambridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Edited by Asim BN. Reviewed by Ayaz Khan.

Read next: Study: AI chatbots provide less-accurate information to vulnerable users


by External Contributor via Digital Information World

Saturday, February 21, 2026

Study: AI chatbots provide less-accurate information to vulnerable users

By Media Lab | MIT News

Research from the MIT Center for Constructive Communication finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins.

Large language models (LLMs) have been championed as tools that could democratize access to information worldwide, offering knowledge in a user-friendly interface regardless of a person’s background or location. However, new research from MIT’s Center for Constructive Communication (CCC) suggests these artificial intelligence systems may actually perform worse for the very users who could most benefit from them.

A study conducted by researchers at CCC, which is based at the MIT Media Lab, found that state-of-the-art AI chatbots — including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3 — sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. The models also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language.

“We were motivated by the prospect of LLMs helping to address inequitable information accessibility worldwide,” says lead author Elinor Poole-Dayan SM ’25, a technical associate in the MIT Sloan School of Management who led the research as a CCC affiliate and master’s student in media arts and sciences. “But that vision cannot become a reality without ensuring that model biases and harmful tendencies are safely mitigated for all users, regardless of language, nationality, or other demographics.”

A paper describing the work, “LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users,” was presented at the AAAI Conference on Artificial Intelligence in January.

Systematic underperformance across multiple dimensions

For this research, the team tested how the three LLMs responded to questions from two datasets: TruthfulQA and SciQ. TruthfulQA is designed to measure a model’s truthfulness (by relying on common misconceptions and literal truths about the real world), while SciQ contains science exam questions testing factual accuracy. The researchers prepended short user biographies to each question, varying three traits: education level, English proficiency, and country of origin.
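The setup can be sketched as follows. The biography wording and the sample question are invented stand-ins; the paper’s exact templates and the datasets’ actual items may differ:

```python
# Sketch of the persona-prepending protocol described above: cross a
# benchmark question with short user biographies varying three traits.
# All template wording below is an assumption, not the paper's own text.
from itertools import product

EDUCATION = ["a PhD", "no formal education beyond primary school"]
PROFICIENCY = ["a native English speaker", "a non-native English speaker"]
ORIGIN = ["the United States", "Iran", "China"]

def build_prompt(question: str, education: str, proficiency: str, origin: str) -> str:
    """Prepend a short user biography to a benchmark question."""
    bio = f"I have {education}, I am {proficiency}, and I am from {origin}."
    return f"{bio}\n\n{question}"

question = "What happens if you swallow gum?"  # TruthfulQA-style item
prompts = [build_prompt(question, e, p, o)
           for e, p, o in product(EDUCATION, PROFICIENCY, ORIGIN)]
# 2 x 2 x 3 = 12 persona-conditioned prompts; the bare question serves
# as the no-biography control condition.
```

Accuracy is then scored per condition, so drops in response quality can be attributed to individual traits or their intersections.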

Across all three models and both datasets, the researchers found significant drops in accuracy when questions came from users described as having less formal education or being non-native English speakers. The effects were most pronounced for users at the intersection of these categories: those with less formal education who were also non-native English speakers saw the largest declines in response quality.

The research also examined how country of origin affected model performance. Testing users from the United States, Iran, and China with equivalent educational backgrounds, the researchers found that Claude 3 Opus in particular performed significantly worse for users from Iran on both datasets.

“We see the largest drop in accuracy for the user who is both a non-native English speaker and less educated,” says Jad Kabbara, a research scientist at CCC and a co-author on the paper. “These results show that the negative effects of model behavior with respect to these user traits compound in concerning ways, thus suggesting that such models deployed at scale risk spreading harmful behavior or misinformation downstream to those who are least able to identify it.”

Refusals and condescending language

Perhaps most striking were the differences in how often the models refused to answer questions altogether. For example, Claude 3 Opus refused to answer nearly 11 percent of questions for less educated, non-native English-speaking users — compared to just 3.6 percent for the control condition with no user biography.

When the researchers manually analyzed these refusals, they found that Claude responded with condescending, patronizing, or mocking language 43.7 percent of the time for less-educated users, compared to less than 1 percent for highly educated users. In some cases, the model mimicked broken English or adopted an exaggerated dialect.

The model also refused to provide information on certain topics specifically for less-educated users from Iran or Russia, including questions about nuclear power, anatomy, and historical events — even though it answered the same questions correctly for other users.

“This is another indicator suggesting that the alignment process might incentivize models to withhold information from certain users to avoid potentially misinforming them, although the model clearly knows the correct answer and provides it to other users,” says Kabbara.

Echoes of human bias

The findings mirror documented patterns of human sociocognitive bias. Research in the social sciences has shown that native English speakers often perceive non-native speakers as less educated, intelligent, and competent, regardless of their actual expertise. Similar biased perceptions have been documented among teachers evaluating non-native English-speaking students.

“The value of large language models is evident in their extraordinary uptake by individuals and the massive investment flowing into the technology,” says Deb Roy, professor of media arts and sciences, CCC director, and a co-author on the paper. “This study is a reminder of how important it is to continually assess systematic biases that can quietly slip into these systems, creating unfair harms for certain groups without any of us being fully aware.”

The implications are particularly concerning given that personalization features — like ChatGPT’s Memory, which tracks user information across conversations — are becoming increasingly common. Such features risk differentially treating already-marginalized groups.

“LLMs have been marketed as tools that will foster more equitable access to information and revolutionize personalized learning,” says Poole-Dayan. “But our findings suggest they may actually exacerbate existing inequities by systematically providing misinformation or refusing to answer queries to certain users. The people who may rely on these tools the most could receive subpar, false, or even harmful information.”

Reprinted with permission of MIT News.

Image: Tara Winstead / Pexels

Reviewed by Irfan Ahmad.

Read next: Most AI Bots Lack Published Formal Safety and Evaluation Documents, Study Finds
by External Contributor via Digital Information World

Friday, February 20, 2026

Most AI Bots Lack Published Formal Safety and Evaluation Documents, Study Finds

Story: Fred Lewsey.

Reviewed by Ayaz Khan.

An investigation into 30 top AI agents finds just four have published formal safety and evaluation documents relating to the actual bots.

Many of us now use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports.

However, a new study of the “AI agent ecosystem” suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is “dangerously lagging”.

A research team led by the University of Cambridge has found that AI developers share plenty of data on what these agents can do, while withholding evidence of the safety practices needed to assess any risks posed by AI.

The AI Agent Index, a project that includes researchers from MIT, Stanford and the Hebrew University of Jerusalem, investigated the abilities, transparency and safety of thirty “state of the art” AI agents, based on public information and correspondence with developers.

The latest update of the Index is led by Leon Staufer, a researcher studying for an MPhil at Cambridge’s Leverhulme Centre for the Future of Intelligence. It looked at available data for a range of leading chat, browser and workflow AI bots built mainly in the US and China.

The team found a “significant transparency gap”. Developers of just four AI bots in the Index publish agent-specific “system cards”: formal safety and evaluation documents that cover everything from autonomy levels and behaviour to real-world risk analyses.

Additionally, 25 out of 30 AI agents in the Index do not disclose internal safety results, while 23 out of 30 agents provide no data from third-party testing, despite these being the empirical evidence needed to rigorously assess risk.

Known security incidents or concerns have only been published for five AI agents, while “prompt injection vulnerabilities” – when malicious instructions manipulate the agent into ignoring safeguards – are documented for two of those agents.

Of the five Chinese AI agents analysed for the Index, only one had published any safety frameworks or compliance standards of any kind.

“Many developers tick the AI safety box by focusing on the large language model underneath, while providing little or no disclosure about the safety of the agents built on top,” said Cambridge University’s Leon Staufer, lead author of the Index update.

“Behaviours that are critical to AI safety emerge from the planning, tools, memory, and policies of the agent itself, not just the underlying model, and very few developers share these evaluations.”

Most AI Developers Do Not Publish Safety and Evaluation Documents for Their AI Bots
Image: The 2025 AI Agent Index. For 198 out of 1,350 fields, no public information was found. Missing information is concentrated in 'Ecosystem Interaction' and 'Safety' categories. Only 4 agents provide agent-specific system cards.

In fact, the researchers identify 13 AI agents that exhibit “frontier levels” of autonomy, yet only four of these disclose any safety evaluations of the bot itself.

“Developers publish broad, top-level safety and ethics frameworks that sound reassuring, but are publishing limited empirical evidence needed to actually understand the risks,” Staufer said.

“Developers are much more forthcoming about the capabilities of their AI agent. This transparency asymmetry suggests a weaker form of safety washing.”

The latest annual update provides verified information across 1,350 fields for the thirty prominent AI bots, as available up to the last day of 2025.

Criteria for featuring in the Index included public availability, ease of use, and development by companies with a market valuation of over US$1 billion. Some 80% of the Index bots were released or had major updates in the last two years.

The Index update shows that – outside of Chinese AI bots – almost all agents depend on a few foundation models (GPT, Claude, Gemini), a significant concentration of platform power behind the AI revolution, as well as potential systemic choke points.

Also read: Generative AI has seven distinct roles in combating misinformation

“This shared dependency creates potential single points of failure,” said Staufer. “A pricing change, service outage, or safety regression in one model could cascade across hundreds of AI agents. It also creates opportunities for safety evaluations and monitoring.”

Many of the least transparent agents are AI-enhanced web browsers designed to carry out tasks on the open web on a user’s behalf: clicking, scrolling, and filling in forms for tasks ranging from buying limited-release tickets to monitoring eBay bids.

Browser agents have the highest rate of missing safety information: 64% of safety-related fields unreported. They also operate at the highest levels of autonomy.

This is closely followed by enterprise agents, business management AI aimed at reliably automating work tasks, with 63% of safety-related fields missing. Chat agents are missing 43% of safety-related fields in the Index.

Staufer points out that there are no established standards for how AI agents should behave on the web. Most agents do not disclose their AI nature to end users or third parties by default. Only three agents support watermarking of generated media to identify it as AI-generated.

At least six AI agents in the Index explicitly use types of code and IP addresses designed to mimic human browsing behaviour and bypass anti-bot protections.

“Website operators can no longer distinguish between a human visitor, a legitimate agent, and a bot scraping content,” said Staufer. “This has significant implications for everything from online shopping and form-filling to booking services and content scraping.”

The update includes a case study on Perplexity Comet: one of the most autonomous browser-based AI agents in the Index, as well as one of the most high-risk and least transparent.

Comet is marketed on its ability to “work just like a human assistant”. Amazon has already threatened legal action over Comet not identifying itself as an AI agent when interacting with its services.

“Without proper safety disclosures, vulnerabilities may only come to light when they are exploited,” said Staufer.

“For example, browser agents can act directly in the real world by making purchases, filling in forms, or accessing accounts. This means that the consequences of a security flaw can be immediate and far-reaching.”

Staufer points out that last year, security researchers discovered that malicious content on a webpage could hijack a browser agent into executing commands, while other attacks were able to extract users' private data from connected services.

Added Staufer: “The latest AI Agent Index reveals the widening gap between the pace of deployment and the pace of safety evaluation. Most developers share little information about safety, evaluations, and societal impacts.”

“AI agents are getting more autonomous and more capable of acting in the real world, but the transparency and governance frameworks needed to manage that shift are dangerously lagging.”


by External Contributor via Digital Information World

A few weeks of X’s algorithm can make you more right-wing – and it doesn’t wear off quickly

Timothy Graham, Queensland University of Technology

A new study published today in Nature has found that X’s algorithm – the hidden system or “recipe” that governs which posts appear in your feed and in which order – shifts users’ political opinions in a more conservative direction.

Image: BoliviaInteligente / unsplash

Led by Germain Gauthier from Bocconi University in Italy, it is a rare, real-world randomised experimental study on a major social media platform. And it builds on a growing body of research that shows how these platforms can shape people’s political attitudes.

Two different algorithms

The researchers randomly assigned 4,965 active US-based X users to one of two groups.

The first group used X’s default “For You” feed. This features an algorithm that selects and ranks posts it thinks users will be more likely to engage with, including posts from accounts that they don’t necessarily follow.

The second group used a chronological feed. This only shows posts from accounts users follow, displayed in the order they were posted. The experiment ran for seven weeks during 2023.

Users who switched from the chronological feed to the “For You” feed were 4.7 percentage points more likely to prioritise policy issues favoured by US Republicans (for example, crime, inflation and immigration). They were also more likely to view the criminal investigation into US President Donald Trump as unacceptable.

They also shifted in a more pro-Russia direction in regards to the war in Ukraine. For example, these users became 7.4 percentage points less likely to view Ukrainian President Volodymyr Zelenskyy positively, and scored slightly higher on a pro-Russian attitude index overall.

The researchers also examined how the algorithm produced these effects.

They found evidence that the algorithm increased the share of right-leaning content by 2.9 percentage points overall (and 2.5 points among political posts), compared with the chronological feed.

It also significantly reduced the share of posts from traditional news organisations’ accounts while boosting posts from political activists.

One of the most concerning findings of the study is the longer-term effects of X’s algorithmic feed. The study showed the algorithm nudged users towards following more right-leaning accounts, and that the new following patterns endured even after switching back to the chronological feed.

In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.

One piece of a much bigger picture

This new study supports findings of similar studies.

For example, a study in 2022, before Elon Musk had bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than the left in six of the seven countries studied.

An experimental study from 2025 re-ranked X feeds to reduce exposure to content expressing antidemocratic attitudes and partisan animosity. The researchers found this shifted participants’ feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer”. This is a shift the authors argued would normally have taken about three years to occur organically in the general population.

My own research offers another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analysed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.

Our findings unearthed a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13 – the day of the assassination attempt on Trump. Views on Musk’s posts surged by 138%, retweets by 238%, and likes by 186%. This far outstripped increases on other accounts.

After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was tilted thereafter towards right-leaning accounts – a trend that continued for the remainder of the time period we analysed in that study.

Not a niche product

This matters because we are not talking about a niche product.

X has more than 400 million users globally. It has become embedded as infrastructure – a key source of political and social communication. And once technical systems become infrastructure, they can become invisible – like background objects that we barely think about, but which shape society at its foundations and can be exploited under our noses.

Think of the overpass bridges Robert Moses designed in New York in the 1930s. These seemed like inert objects. But they were designed to be very low, to exclude people of colour from taking buses to recreation areas in Long Island.

Similar to this, the design and governance of social media platforms also has real consequences.

The point is that X’s algorithms are not neutral tools. They are an editorial force, shaping what people know, whom they pay attention to, who the outgroup is and what “we” should do about or to them – and, as this new study shows, what people come to believe.

The age of taking platform companies at their word about the design and effects of their own algorithms must come to an end. Governments around the world – including in Australia where the eSafety Commissioner has powers to drive “algorithmic transparency and accountability” and require that platforms report on how their algorithms contribute to or reduce harms – need to mandate genuine transparency over how these systems work.

When infrastructure becomes harmful or unsafe, nobody bats an eye when governments step in to protect us. The same needs to happen urgently for social media infrastructure.

Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Timothy Graham receives funding from the Australian Research Council (ARC) for the Discovery Project, 'Understanding and Combatting "Dark Political Communication"'.

Read next: Generative AI has seven distinct roles in combating misinformation


by External Contributor via Digital Information World

Thursday, February 19, 2026

Generative AI has seven distinct roles in combating misinformation

Reviewed by Ayaz Khan.

Generative AI can be used to combat misinformation. However, it can also exacerbate the problem by producing convincing manipulations that are difficult to detect and can quickly be copied and disseminated on a wide scale. In a new study, researchers have defined seven distinct roles that AI can play in the information environment and analysed each role in terms of its strengths, weaknesses, opportunities and risks.

“One important point is that generative AI has not just one but several functions in combating misinformation. The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and we need to create more effective policies,” says Thomas Nygren, Professor at Uppsala University, who conducted the study together with colleagues at the University of Cambridge, UK, and the University of Western Australia.

From fact-checking to influence – same capacity has double-edged effects

The study is an overview in which researchers from a range of scholarly disciplines have reviewed the latest research on how generative AI can be used in various parts of the information environment. These uses range from providing information and supporting fact-checking to influencing opinion and designing educational interventions, and the study considers the strengths, weaknesses, opportunities and risks associated with each use.

The researchers chose to work with a SWOT framework because it provides a more practical basis for decisions than general assertions that ‘AI is good’ or ‘AI is dangerous’. A system can be helpful in one role but also harmful in that same role. Analysing each role using SWOT can help decision-makers, schools and platforms match the right measures to the right risk.

AI can serve several functions

“The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple ‘solution’ but a technology that can serve several functions at the same time. We identified recurrent patterns in the way AI is used to obtain information, to detect and manage problems, to influence people, to support collaboration and learning, and to design interactive training environments. These functions were summarised in seven roles,” Nygren explains.

The seven roles the researchers identified were informer, guardian, persuader, integrator, collaborator, teacher and playmaker (see the fact box). The point of the roles is that they can serve as a checklist: they help us see how each role can contribute to strengthening society’s resilience to misinformation, but also how each role entails specific vulnerabilities and risks. The researchers therefore analysed each role using a SWOT approach: what strengths and opportunities it embodies, but also what weaknesses and threats need to be managed.

“AI must be implemented responsibly”

“We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale. However, risks such as hallucinations, in other words, that AI comes out with ‘facts’ that are wrong, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly. Clear policies are therefore needed on the permissible use of AI.”

The researchers particularly underline the need for:

  • Regulations and clear frameworks for the permissible use of AI in sensitive information environments;
  • Transparency about AI-generated content and systemic limitations;
  • Human oversight where AI is used for decisions, moderation or advice;
  • AI literacy to strengthen the ability of users to evaluate and question AI answers.

“The analysis shows that generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation, but having said that, there is a risk that excessive use could be detrimental for the development of knowledge and make us lazy and ignorant and therefore more easily fooled. Consequently, with the rapid pace of developments, it’s important to constantly scrutinise the roles of AI as ‘teacher’ and ‘collaborator’, like the other five roles, with a critical and constructive eye,” Nygren emphasises.

Article: Nygren, T., Spearing, E. R., Fay, N., Vega, D., Hardwick, I. I., Roozenbeek, J., & Ecker, U. K. H. (2026). The seven roles of generative AI: Potential & pitfalls in combatting misinformation. Behavioral Science & Policy, 0(0). DOI: 10.1177/23794607261417815.

For more information: Thomas Nygren, Professor of Education at the Department of Education, Uppsala University, thomas.nygren@edu.uu.se, +46-73-646 86 49

FACT BOX:

The seven roles of generative AI: potential and pitfalls (Nygren et al. 2026).

1) Informer

  • Strengths/opportunities: Can make complex information easier to understand, translate and adapt language, can offer a quick overview of large quantities of information.
  • Problems/risks: Can give incorrect answers (‘hallucinations’), oversimplify and reproduce training data biases without clearly disclosing sources.

2) Guardian

  • Strengths/opportunities: Can detect and flag suspect content on a large scale, identify coordinated campaigns and contribute to a swifter response to misinformation waves.
  • Problems/risks: Risk of false positives/negatives (irony, context, legitimate controversies), distortions in moderation, and lack of clarity concerning responsibility and rule of law.

3) Persuader

  • Strengths/opportunities: Can support correction of misconceptions through dialogue, refutation and personalised explanations; can be used in pro-social campaigns and in educational interventions.
  • Problems/risks: The same capacity can be used for manipulation, microtargeted influence and large-scale production of persuasive yet misleading messages – often quickly and cheaply.

4) Integrator

  • Strengths/opportunities: Can structure discussions, summarise arguments, clarify distinctions, and support deliberation and joint problem-solving.
  • Problems/risks: Can create false balance, normalise errors through ‘neutral synthesis’, or indirectly control problem formulation and interpretation.

5) Collaborator

  • Strengths/opportunities: Can assist in analysis, writing, information processing and idea development; can support critical review by generating alternatives, counterarguments and questions.
  • Problems/risks: Risk of overconfidence and cognitive outsourcing; users can fail to realise that the answer is based on uncertain assumptions and that the system lacks real understanding.

6) Teacher

  • Strengths/opportunities: Can give swift, personalised feedback and create training tasks at scale; can foster progression in source criticism and digital skills.
  • Problems/risks: Incorrect or biased answers can be disseminated as ‘study resources’; risk that teaching becomes less investigative if students/teachers uncritically accept AI-generated content.

7) Playmaker

  • Strengths/opportunities: Can support design of interactive, gamified teaching environments and simulations that train resilience to manipulation and misinformation.
  • Problems/risks: Risk of simplifying stereotypes, ethical and copyright problems, and that gaming mechanisms can reward the wrong type of behaviour if the design is not well considered.
Note: This post was originally published by Uppsala University and republished on Digital Information World (DIW) with permission. The university team confirmed to DIW via email that no AI tools were used in creating the text.

Image: Mikhail Nilov / Pexels

Read next:

• Research Shows How Companies Can Gain Advantage by Prioritizing Customer Privacy

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out
by Press Releases via Digital Information World

Parents Turn to STEM and Hands-On Play to Limit Daily Screen Hours

Reviewed by Ayaz Khan.

Many of America’s young parents are struggling to bond with their kids, and the culprit is nearly inescapable: screen time.

The poll of 2,000 U.S. millennial and Gen Z parents found 42% of them feel disconnected from their children due to technology, with kids spending an average of four hours in front of screens on a typical day.

As a result, parents said they notice their kids are easily distracted (42%), get less physical activity (42%), can be irritable (34%), have trouble sleeping (30%) and disengage with people around them (30%).

Commissioned by Lowe’s and conducted by Talker Research, the study revealed over half of parents (54%) try to encourage less screen time for their kids by providing them with more hands-on activities and outlets, like playing with toys (68%), helping around the home (66%) and coloring (66%).

Other activities, like crafts (63%), reading (60%), building (44%) and STEM-based activities (42%) were also popular ways parents get their kids away from screens.

This can be harder in the winter season, as more than half (56%) of parents say screen time increases when temperatures drop or the weather turns bad.

Parents spend an average of 10 hours per week looking for non-screen activities for their kids, and wish more free options were available nearby.

Those activities include things they can do as a family (58%), outdoor activities (56%), DIY workshops (48%), creative arts and crafts (48%) and educational activities (39%).

For many parents, the inspiration to encourage hands-on activities away from screens comes from their own childhood.

Nearly half (46%) recalled frequently participating in DIY projects with their own parents growing up, and they recall feelings of happiness (58%), creativity (56%), satisfaction (47%) and confidence (40%) from those experiences.

With those fond memories in mind, seven in 10 have tried to recreate those activities with their own children.

Eighty-seven percent of parents believe doing DIY projects with their kids would help strengthen their bond, in addition to teaching patience (63%), expressing creativity (59%) and learning how to work better with others (56%).

Image: Eren Li / Pexels

This post was originally published on TalkerResearch.

Read next: Not all gigs are equal: Informal self-employment linked to lower pay, poorer health and instability
by External Contributor via Digital Information World