Wednesday, March 25, 2026

Online ad fraud is a feature, not a bug

By Benjamin Kessler

Image: Erik Mclean / Unsplash

Technological advancements and the dynamics of the platform economy make rooting out fraud more complicated than it may seem.

With print media circulation and broadcast television viewership in free fall, a lot is riding on the online advertising space being able to take up the slack. The good news is, digital ad spend is booming.

The bad news? A good chunk of that money is chasing a mirage.

Online ad fraud—where ad publishers falsely inflate engagement metrics (impressions, clicks, etc.) to boost revenues—is a growing problem that eats upwards of 20 percent of global ad spend.

Min Chen and Abhishek Ray, both professors in the information systems and operations management area at Costello College of Business at George Mason University, are researching how online ad networks, such as Google Ads, can improve upon existing anti-fraud methods. Their recently published paper in Management Science explores deep-rooted dynamics of the online ad ecosystem that make eliminating fraud even more complicated than it may seem at first glance. The paper was co-authored by Subodha Kumar of Temple University.

The researchers used a game-theoretic model to replicate the interconnected decision-making of the three players involved: advertisers, publishers, and the networks that serve as go-betweens.

“The way the ecosystem works is that the platforms in the middle, the ad networks, share the benefit from the transaction,” Chen explains. “People have been arguing whether the network is incentivized to put their best efforts behind deterring fraud, since the fraudulent traffic benefits the networks too. So we tried to create a model to capture this.”

“If the advertisers rely solely on the reports from the ad networks, they may be at risk. They should use third-party tools to audit the performance better.” — Min Chen, information systems and operations management professor at the Costello College of Business at George Mason University

In addition, the model incorporates the two main fraud deterrents that networks routinely use. One is technological—platforms can adopt tougher standards for fraud detection, widening the scope of suspicious activity that gets flagged. The other is economic—lowering payments to all publishers so as to disincentivize large-scale fraud.
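To see how these two levers interact, here is a deliberately simplified toy model. It is not the authors' game-theoretic model; every payoff term and number is invented for illustration. A publisher earns a per-click payment on traffic that is not flagged and pays a penalty on fraud that detection catches:

```python
# A toy sketch, NOT the paper's model: all payoff terms and numbers
# are invented for illustration.

def publisher_payoff(fraud_clicks, real_clicks, pay_per_click,
                     detect_prob, penalty):
    """Expected payoff for a publisher mixing real and fake clicks."""
    caught = fraud_clicks * detect_prob              # fraud that gets flagged
    paid_clicks = real_clicks + fraud_clicks - caught
    return paid_clicks * pay_per_click - caught * penalty

def best_fraud_level(real_clicks, pay_per_click, detect_prob, penalty,
                     max_fraud=1000):
    """Brute-force the fraud level that maximizes the publisher's payoff."""
    return max(range(max_fraud + 1),
               key=lambda f: publisher_payoff(f, real_clicks, pay_per_click,
                                              detect_prob, penalty))

# Weak detection makes fraud profitable; strict detection plus a
# penalty drives the profit-maximizing fraud level to zero.
lax = best_fraud_level(100, 1.0, detect_prob=0.2, penalty=2.0)
strict = best_fraud_level(100, 1.0, detect_prob=0.8, penalty=2.0)
```

Under the lax setting, the payoff rises with every extra fake click, so the publisher maxes out; under the strict setting it falls, so honest behavior wins. The paper's model layers the network's revenue share and payment levels on top of this basic tension.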

Surprisingly, the researchers find that the online ad economy works best when the two approaches seem to be working at cross-purposes. A tightening in fraud detection technology, paired with high payments for publishers, may sometimes produce the best outcomes for advertisers, publishers, and networks, as the market evolves.

The reason is rooted in the imperfect nature of fraud detection. To be sure, detection systems are improving all the time, especially with the advent of AI. But fraudsters do their best to blend in and adapt, using technological tools that often outpace those of their pursuers. “You cannot catch all the fraud, and if you try, you are going to mis-detect a lot of non-fraud,” Chen says.

Tougher fraud detection, then, will always mean more false positives, no matter how good the technology gets. To counter this inherent unfairness, which penalizes good and bad actors alike, the ad network's payments to publishers need to go up. Otherwise, publishers may take their business elsewhere, especially those most valuable to the system, i.e., the trustworthy ones, thereby decreasing advertisers' valuation of ad traffic.
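The tradeoff Chen describes is the familiar detection-threshold problem. A minimal simulation, with made-up "suspicion scores" for honest and fraudulent traffic, shows why tightening the flagging threshold inevitably drags honest publishers in:

```python
import random

random.seed(0)

# Hypothetical suspicion scores: honest traffic clusters low,
# fraudulent traffic clusters high, but the distributions overlap,
# so no threshold separates them cleanly.
honest = [random.gauss(0.3, 0.15) for _ in range(10_000)]
fraud = [random.gauss(0.7, 0.15) for _ in range(10_000)]

def rates(threshold):
    """Share of fraud caught vs. share of honest traffic wrongly flagged."""
    caught = sum(s >= threshold for s in fraud) / len(fraud)
    false_pos = sum(s >= threshold for s in honest) / len(honest)
    return caught, false_pos

loose_caught, loose_fp = rates(0.6)
tight_caught, tight_fp = rates(0.4)
# Tightening the threshold catches more fraud, and the false-positive
# rate on honest publishers rises right along with it.
```

Because the two score distributions overlap, every gain in fraud caught is paid for in honest traffic wrongly flagged, which is why the researchers argue publisher payments must rise in step with stricter detection.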

“These ad networks are kind of a unique system where you can be monetarily rewarded for being honest, or punished for being dishonest,” Ray says. “What we discover for this system is there can be a way in which we can give carrots to people, not just sticks.”

On a similar note, the researchers find that an attempt to purge “bad apple” advertisers from the system can backfire due to false positives. In fact, fraud can sharply increase if networks, believing they have solved the problem, relax their fraud detection standards and raise incentives for the remaining advertisers. “Since the publishers who produce the fraudulent traffic are fewer now, the ad network may no longer need to maintain a strict detection policy. This can encourage the remaining ones to commit much more fraud,” Chen explains.

To Ray and Chen, online ad fraud is, in at least one sense, no different from older forms of malfeasance that are found in all free societies. “We need to have some kind of mechanism for managing the level of fraud, because the fraud detection method is never going to be perfect, whether it’s financial fraud, accounting fraud, etc.,” Chen says.

But as an example of the contemporary platform economy, the online advertising ecosystem is also distinctive, in that its de facto regulatory authority has skin in the game. The ad networks’ mixed incentives—as both beneficiaries and inhibitors of fraud—can undermine integrity and trust within an already-compromised system.

“If the advertisers rely solely on the reports from the ad networks, they may be at risk,” Chen says. “They should use third-party tools to audit the performance better.”

Editor’s Note: This post was originally published on George Mason University News and republished on DIW with permission.

Reviewed by Asim BN.

Read next: 

• Why you may be paying more than you need to for digital subscriptions

• Researchers Pioneer New Technique to Stop LLMs from Giving Users Unsafe Responses


by External Contributor via Digital Information World

Researchers Pioneer New Technique to Stop LLMs from Giving Users Unsafe Responses

By Matt Shipman, NC State News

Image: Nahrizul Kadri / Unsplash

Researchers have identified key components in large language models (LLMs) that play a critical role in ensuring these AI systems provide safe responses to user queries. The researchers used these insights to develop and demonstrate AI training techniques that improve LLM safety while minimizing the “alignment tax,” meaning the AI becomes safer without significantly affecting performance.

LLMs, such as ChatGPT, are being used for an increasing number of applications – including people asking for advice or instructions on how to perform a variety of tasks. The nature of some of these applications means that it is important for LLMs to generate safe responses to user queries.

“We don’t want LLMs to tell people to harm themselves or to give them information they can use to harm other people,” says Jung-Eun Kim, corresponding author of a paper on the work and an assistant professor of computer science at North Carolina State University.

At issue is a model’s safety alignment, or training protocols designed to ensure that the AI’s outputs are consistent with human values.

“There are two challenges here,” says Kim. “The first challenge is the so-called alignment tax, which refers to the fact that incorporating safety alignment has an adverse effect on the accuracy of a model’s outputs.”

“The second challenge is that existing LLMs generally incorporate safety alignment at a superficial level, which makes it possible for users to circumvent safety features,” says Jianwei Li, first author of the paper and a Ph.D. student at NC State. “For example, if a user asks for instructions to steal money, a model will likely refuse. But if a user asks for instructions to steal money in order to help people, the model would be more likely to provide that information.

“This second challenge can be exacerbated when users ‘fine-tune’ an LLM – modifying it to operate in a specific domain,” says Li. “For example, an LLM may have good safety performance. But if a user wants to modify that LLM for use in the context of a specific business or organization, the user may train that LLM on additional data. Previous research shows us that fine-tuning can weaken safety performance.

“Our goal with this work was to provide a better understanding of existing safety alignment issues and outline a new direction for how to implement a non-superficial safety alignment for LLMs.”

To that end, the researchers created the Superficial Safety Alignment Hypothesis (SSAH), which neatly captures how safety alignment currently works in LLMs. Basically, it holds that superficial safety alignment views a user request as binary, either safe or unsafe. In addition, the SSAH notes that LLMs currently make the binary determination on whether to answer the request at the beginning of the answer-generating process. If the request is deemed safe, a response is generated and provided to the user. If the request is deemed not safe, the model declines to generate a response.
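In schematic terms, the superficial pattern the SSAH describes looks something like the sketch below. The keyword list stands in for a learned safety classifier; no real LLM works from a word list, and the function names are purely illustrative:

```python
# A schematic of "superficial" safety alignment: one binary safe/unsafe
# call up front, then either a refusal or unconstrained generation.
# The marker set is a toy stand-in for a learned classifier.

UNSAFE_MARKERS = {"steal", "weapon", "harm"}

def is_unsafe(request: str) -> bool:
    return any(word in request.lower() for word in UNSAFE_MARKERS)

def respond(request: str) -> str:
    if is_unsafe(request):      # one binary decision, made once, up front
        return "I can't help with that."
    # After the gate, generation proceeds with no further safety checks.
    return f"Sure! Here is how to {request}..."

respond("steal money")                   # refused by the up-front gate
respond("acquire funds to help people")  # answered, despite similar intent
```

The weakness Li describes falls out directly: because the safety decision happens once, at the start, a reframed request that slips past the gate is never re-evaluated mid-generation.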

The researchers also identified safety-critical “neurons” in LLM neural networks that are critical for determining whether the model should fulfill or refuse a user request.

“We found that ‘freezing’ these specific neurons during the fine-tuning process allows the model to retain the safety characteristics of the original model while adapting to new tasks in a specific domain,” says Li.
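The freezing idea can be sketched in plain Python. The parameter names, values, and the bare gradient-descent update below are illustrative, not the researchers' implementation, which operates on neurons inside an actual LLM:

```python
# Toy sketch of freezing safety-critical parameters during fine-tuning:
# gradients for flagged parameters are ignored, so those weights never
# move while the rest of the model adapts to the new task.

weights = {"attn.w1": 0.8, "attn.w2": -0.3, "safety.gate": 1.5}
safety_critical = {"safety.gate"}  # identified before fine-tuning begins

def fine_tune_step(weights, grads, lr=0.1):
    """One gradient-descent step that skips frozen parameters."""
    return {
        name: w if name in safety_critical   # frozen: keep original value
        else w - lr * grads[name]            # ordinary gradient update
        for name, w in weights.items()
    }

grads = {"attn.w1": 0.5, "attn.w2": -0.2, "safety.gate": 2.0}
updated = fine_tune_step(weights, grads)
# The safety-critical weight is untouched; the task weights adapt.
```

In a real framework, this corresponds to masking gradients (or disabling gradient tracking) for the identified neurons, so domain fine-tuning cannot erode the safety behavior they encode.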

“And we demonstrated that we can minimize the alignment tax while preserving safety alignment during the fine-tuning process,” says Kim.

“The big picture here is that we have developed a hypothesis that serves as a conceptual framework for understanding the challenges associated with safety alignment in LLMs, used that framework to identify a technique that helps us address one of those challenges, and then demonstrated that the technique works,” says Kim.

“Moving forward, our work here highlights the need to develop techniques that will allow models to continuously re-evaluate and re-select their reasoning direction – safe or unsafe – throughout the response generation process,” says Li.

The paper, “Superficial Safety Alignment Hypothesis,” will be presented at the Fourteenth International Conference on Learning Representations (ICLR 2026), being held April 23-27 in Rio de Janeiro, Brazil.

The researchers have made relevant code and additional information available at: https://ssa-h.github.io/.

This post was originally published on NC State News and republished here with permission.

Reviewed by Ayaz Khan.

Read next: 

• Using your AI chatbot as a search engine? Be careful what you believe

• Why you may be paying more than you need to for digital subscriptions


by External Contributor via Digital Information World

Why you may be paying more than you need to for digital subscriptions

Erhan Kilincarslan, University of Huddersfield


Image: Vitaly Gariev / Unsplash

The way we watch TV, listen to music, order groceries and take photos has changed in the past decade or so. For many of us, all of these activities involve a monthly payment.

Subscriptions have quietly become a major part of household spending across the world. But many people underestimate how much they actually pay. And there is evidence which suggests that the design of subscription services – combined with common human traits – can make these payments easy to overlook.

In the UK, consumers spend around £26 billion a year subscribing to everything from digital media to cosmetics and coffee. (Around 69% of UK households subscribe to at least one video streaming service such as Netflix or Amazon Prime Video.)

And a few small monthly payments can quickly add up. Data from Barclays bank suggests that individual consumers spend £50.60 a month on subscriptions – so more than £600 a year. It also shows that spending on digital content and subscription services has increased by nearly 50% since 2020. In households where several people hold subscriptions, the combined spending can be considerably higher.

The result is a subscription economy that is growing faster than many consumers realise. And one reason households underestimate their spending is that some subscriptions continue running even when people no longer use them.

The UK government estimates that of the 155 million subscriptions currently active in the UK, nearly 10 million are unwanted – at a cost to consumers of £1.6 billion each year.

The charity Citizens Advice has calculated that over £300 million a year is spent on subscriptions that people are not actually using, often because they automatically renewed after a free trial.

In many cases the individual payments are small, which makes them easy to miss in a bank statement.

Behavioural economics offers one explanation. Research shows that people tend to evaluate spending using what’s known as “mental accounting” – the tendency to treat small payments separately instead of thinking about how they add up overall. As a result, people group purchases into categories rather than looking at the total amount leaving their bank account.

A £9.99 streaming subscription or a £4.99 app service may not feel significant on its own. But when several subscriptions accumulate, the combined cost can become substantial.
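To make the mental-accounting point concrete, here is a quick tally. The line-up and prices are illustrative, not figures from the article:

```python
# Illustrative basket of "small" subscriptions, in pounds per month.
monthly_subs = {
    "video streaming": 9.99,
    "music": 10.99,
    "cloud storage": 2.49,
    "app service": 4.99,
    "news": 5.99,
}

per_month = sum(monthly_subs.values())  # each charge looks trivial alone
per_year = per_month * 12               # the annual total does not
```

Five payments that each feel negligible come to roughly £34 a month, which is over £400 a year, which is exactly the gap between evaluating charges one by one and looking at the total leaving the account.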

Another factor is automatic renewal. Many services continue charging unless customers actively cancel. This interacts with what behavioural scientists call “status quo bias”, the tendency to stick with the default option.

When cancelling requires effort or attention, people often postpone the decision and continue paying.

Consumer groups have also raised concerns about so called subscription traps. These occur when people are unintentionally signed up to recurring payments or find it difficult to cancel them.

It has been claimed that more than 20 million adults in the UK have signed up to a subscription without realising it and about 4.7 million people are still paying for one they did not knowingly sign up to.

These cases often involve free trials that automatically convert into paid subscriptions or online sign up processes where the recurring payment is not clearly explained.

Researchers studying digital interfaces have also identified design practices that make subscriptions easier to start than to cancel, sometimes described as “dark patterns” in online design.

New rules

The growing scale of the problem has attracted regulatory attention. The UK government has introduced measures aimed at tackling subscription traps, including clearer information about recurring payments and easier cancellation processes. A consultation is now taking place on how these rules will be implemented before they come fully into force.

The goal is to ensure that consumers understand the financial commitment they are entering when signing up to a subscription service.

The new measures will probably help reduce some accidental subscriptions, particularly those created through unclear sign-up processes or free trials that automatically convert into paid plans. And it seems sensible to make sure that subscription contracts contain clearer information and easier cancellation rights to help consumers avoid unwanted recurring payments.

But behavioural factors such as inertia and automatic renewal mean the problem may not disappear entirely. Even when cancellation is straightforward, consumers often delay reviewing small recurring payments, allowing subscriptions to continue.

For households, digital spending often feels invisible. Subscriptions are typically spread across multiple platforms and paid automatically through bank cards or direct debits. Without a deliberate review of monthly statements, it can be difficult to see how much these payments add up to.

Subscriptions can offer convenience and flexibility. But as the subscription economy continues to grow, it can also quietly increase household spending in ways that many consumers barely notice.

Erhan Kilincarslan, Reader in Accounting and Finance, University of Huddersfield

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next:

• Using your AI chatbot as a search engine? Be careful what you believe

• Instagram, Facebook, TikTok Engagement Rose in Q1 2026 While Snapchat Declined


by External Contributor via Digital Information World

Instagram, Facebook, TikTok Engagement Rose in Q1 2026 While Snapchat Declined

By Adam Blacker, Apptopia

Every quarter, we look at average time spent per daily active user across major US social platforms using Apptopia’s consumer device panel. The Q1 2026 data stands out for one reason: three of four platforms grew engagement year-over-year. Snapchat didn’t.

Comparing Q1 2025 to Q1 2026, Instagram [NASDAQ: META] grew Average Time Spent per DAU by 9.8%. Facebook grew 10.3%. TikTok grew 6.7%. Snapchat [NYSE: SNAP] declined 2.5%.

That alone would be notable. What makes it more significant is where Snap was twelve months ago. Q1 2025 was the strongest first quarter in Snap’s recent history on this metric. Time spent surged 35.8% versus Q1 2024. The 17-25 cohort, Snap’s core franchise demographic, spiked 43.6%. The product was gaining traction across every age group.


Q1 2026 reversed nearly all of it. The 17-25 cohort went from +43.6% to -0.5%. The 26-35 group went from +21.6% to -0.4%. Full-year 2025 data confirms Snap carried momentum through the middle of the year; annual growth was 16.0% for all users, meaning the reversal is recent.

The wider pattern is just as telling. Over the three Q1 periods in our study, Snap’s time spent growth rates were 8.2%, then 35.8%, then -2.5%. That’s a 38pp spread between the highest and lowest readings. Facebook’s equivalent spread was 4 points (6.3%, 8.1%, 10.3%). TikTok’s was 5 points. Instagram’s was 10.5 points, decelerating gradually from a high base. Snap is the outlier on consistency by a wide margin. Its average Q1 growth of 13.8% looks similar to Instagram’s 14.0%, but the path is a spike and a crash versus a steady glide. For anyone building a forward estimate around Snap’s engagement trends, that volatility is the problem. You can underwrite a growth rate that compounds quarter after quarter. You can’t underwrite one that swings 38 points.
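The spread and average figures quoted above can be reproduced directly from the three Q1 growth readings:

```python
# Three Q1 year-over-year growth readings per platform, in percent,
# as reported in the text.
q1_growth = {
    "Snapchat": [8.2, 35.8, -2.5],
    "Facebook": [6.3, 8.1, 10.3],
}

def spread(rates):
    """Gap between the highest and lowest readings, in points."""
    return max(rates) - min(rates)

def average(rates):
    return sum(rates) / len(rates)

snap_spread = spread(q1_growth["Snapchat"])   # 38.3 points
snap_avg = average(q1_growth["Snapchat"])     # about 13.8 percent
fb_spread = spread(q1_growth["Facebook"])     # 4.0 points
```

The arithmetic is the whole argument: Snap's average looks healthy, but a spread nearly ten times Facebook's means the average is built from a spike and a crash rather than steady compounding.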


“When one platform reverses while the rest of the sector keeps growing, it’s not something macro going on,” said Tom Grant, VP of Research at Apptopia. “If Gen Z were broadly pulling back from social apps, you’d see it everywhere. You do not. So not only is SNAP seeing a Q1 decline while others rise, it is now experiencing rising volatility as a business.”

The rest of the competitive set held up. Facebook posted its third consecutive Q1 acceleration, with growth concentrated in the 26-45 age range — the highest-CPM demographic in digital advertising. Instagram grew across every cohort, led by 26-35 at 14.1%. TikTok still commands the most absolute time per user in every age group, roughly 2x Facebook and 2x Instagram, and its time spent growth of 6.7% was positive if unspectacular.

Time spent across major social platforms continues to grow, as does overall time spent on mobile, but the consistency of that growth increasingly separates the pack. For investors, the Q1 data suggests the more durable engagement stories right now sit with Meta and TikTok, while Snap’s trajectory remains the one that needs the most proving out.

Note: This post was originally published on Apptopia blog and is republished on DIW with permission.

Reviewed by Irfan Ahmad.

Read next:

• Americans Spend an Average of 6.3 Hours Daily on Mobile Devices; Older Users Log Up to 358 Minutes Across 17 Apps

• Using your AI chatbot as a search engine? Be careful what you believe

by External Contributor via Digital Information World

Tuesday, March 24, 2026

How AI English and human English differ – and how to decide when to use artificial language

Laura Aull, University of Michigan

Suspicion and affection. Apprehension and excitement. Most people have mixed feelings about AI English, whether or not they always recognize it. When reading text generated by AI, people feel it sounds off, or fake. When reading English by a human, people are more likely to feel it has a characteristic voice or a personal touch.

Image: Airam Dato-on - Pexels

What exactly makes English sound human, or sound like AI? And does it matter if AI English never truly achieves a human feel?

I research the institutionalization of English. There is a long, problematic history of people feeling positively or negatively toward different kinds of English, rewarding how it is spoken or written by some sectors of society and devaluing how it is used by others.

When generative AI language tools came along, they scaled up these problems. English-based large language models are trained on text from the public internet. Human instructions tell the models to sound like formal English. Because of that, large language models end up trained on all the bias baked into standardized human texts and ideas.

In my work, I encounter people who would never trust the internet to tell them what is right and wrong, yet they trust generative AI to tell them how to write.

Human vs. AI

The first step to becoming a more informed user of AI English is to try to understand what people mean when they say writing sounds human. This understanding will improve your AI literacy. Most importantly, it will allow you to learn to recognize two qualities that make human English different from AI English: variation and readability.

Human English contains persistent, if subtle, linguistic patterns of variation and readability. By contrast, AI uses what I call exam English – a rather formal, dense English that is favored in academic tests and papers. It is less varied and less readable. People perceive it as robotic, but they also perceive it as smart.

Here’s a quick test: Read the two text messages below and guess which one is by a human and which one is by ChatGPT.

“i’m not sure how to break this to you. there’s no easy way to put it…i can’t make the friday-night fun. sorry. however, feel free to text me during the evening if there are any lulls in conversation. anyway, hope ur exotic trip goes well. see u next term.”

“Hey! I’m really sorry, but I won’t be able to make it Friday night. I hope you all have a great time, and I’ll see you next term!”

A human reader would probably notice several patterns right away. The first message has more “textese”: It defaults to lowercase and includes phonetic spellings “ur” and “u.” The second text has exam English capital letters, commas and spelling.

People are likely to have other impressions, too. Perhaps the first text feels more personal, and less sure of itself. Maybe the second text feels stiff, like it was written by an acquaintance. The first text contains different kinds of phrases and clauses, while the second text repeats the same clause structure four times.

On some level, human readers pick up on such patterns. Most people would say that the first text is by a human and the second is by AI. Indeed, the second passage was generated by ChatGPT.

Even this basic illustration shows that human English includes variation in word usage and grammatical structures that breaks up information and conveys personal meaning. AI English has less variation and more dense noun phrases. In research studies, these patterns appear repeatedly across genres and registers.
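One crude way to put a number on "variation" is a type-token ratio (unique words over total words) for lexical variety, plus the spread of sentence lengths for structural variety. These are rough proxies built for illustration, not the measures used in the studies the article draws on:

```python
import re
from statistics import pstdev

def variation_profile(text):
    """Return (type-token ratio, std. dev. of sentence lengths in words)."""
    words = re.findall(r"[a-z']+", text.lower())
    ttr = len(set(words)) / len(words)          # lexical variety, 0 to 1
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return ttr, pstdev(lengths)                 # spread of sentence lengths

# Abbreviated versions of the two sample messages from the article.
human_msg = ("i'm not sure how to break this to you. there's no easy way "
             "to put it... i can't make the friday-night fun. sorry.")
ai_msg = ("Hey! I'm really sorry, but I won't be able to make it Friday "
          "night. I hope you all have a great time, and I'll see you "
          "next term!")

human_profile = variation_profile(human_msg)
ai_profile = variation_profile(ai_msg)
```

Metrics like these underpin the research finding the article cites: across genres, AI-generated text tends toward lower lexical variety and more uniform sentence structure than collective human writing.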

Some AI English patterns change

AI writing tools evolve, and large language models vary. GPT 5 was infamously cold-sounding compared with its predecessor GPT 4, for example.

But the patterns I am talking about are likely to persist. AI English favors what exam English has always rewarded: homogeneity and information density. And thus far, instruction tuning – training AI models to follow human instructions – only makes AI English less like human English. It doesn’t help that AI writing is part of what AI bots train on.

The net effect today is that AI English has been trained on English that is much more narrow than actual, collective human English in practice. Humans, by contrast, don’t just use language that is probable, but language that is possible – based on the varied language use they have observed, their creative capacity for new utterances and their propensity to blend personal and impersonal language patterns.

AI models can produce conventionally correct, smart-sounding language, but that language lacks the variation, accessibility and creativity that make language human.

How AI and human English can coexist

If you can become more aware of differences between AI and human English, those insights can help you use both language forms more productively. Here are a few steps to take:

Use language labels. When describing a given passage, use linguistic labels like “dense,” “plain,” “interpersonal” or “informational,” not social judgments like “sounds smart” or “sounds off.” In other words, explore the actual patterns in human and AI English and describe the language itself, not your feelings about it.

Use AI tools selectively. Human English not only has more accessible and varied patterns; producing it also engages the brain more than using AI language tools does. To keep AI English from overshadowing varied human language in the world, use AI selectively.

Use curated tools. Tools like small language models and programs that you can add to a web browser to root out bias, such as Bias Shield, can help people make principled choices about AI English use. Tools such as translingual chatbots can also bring to AI English much more of the global variation in human English.

Be conscious of what sounds smart, and why. A century and a half of exam English makes it easy to think that dense, impersonal writing patterns are smart. But like any language patterns, they have pros and cons. They are not particularly personable or readable, especially for diverse audiences, and they are not representative of the range of global English in use today.

There can be good reasons to use exam English, but not just because AI bots generate it, or because people have learned to perceive it as smarter.

At its best, AI English is a language database driven by statistics. It’s big, but it’s canned. History tells us that a full range of global human English gives people the greatest possibilities for expression and connection.

Laura Aull, Professor of English and Linguistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Laura Aull does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Partners: University of Michigan University of Michigan provides funding as a founding partner of The Conversation US.

Reviewed by Ayaz Khan.

Read next: Blissful (A)Ignorance: People rarely notice AI-written messages in everyday communication


by External Contributor via Digital Information World

Friday, March 20, 2026

Blissful (A)Ignorance: People rarely notice AI-written messages in everyday communication

This news release did not use artificial intelligence—and even if it did, you wouldn’t suspect it.

Image: Solen Feyissa - Pexels

These days, you may be reading AI-written news more often than you think. The same can be said for emails, texts, and social media sites, according to a new study by researchers at the University of Michigan and Duke University.

The study found that undisclosed AI use does not trigger suspicion among people. When AI use is disclosed or strongly suspected (when people already pay a lot of attention to AI), people typically judge senders negatively, said Andras Molnar, U-M assistant professor of psychology and study co-author.

“For example, when we already suspect that someone generated their message using AI, we tend to think of them as less friendly, less trustworthy, less authentic and so on, compared to when the same text is genuinely human-written,” he said. “This ‘AI penalty’ has been widely documented in past studies.”

What the “AI penalty” suggests is that people, on average, lean toward the negative interpretation that focuses on the person (e.g., the person was lazy) instead of the more positive interpretation that takes into account the situation (e.g., there was a lot of time pressure).

However, under more realistic conditions, audiences may be uncertain, or even completely unaware, of communicators’ potential use of AI. Molnar, along with lead author Jiaqi Zhu of Duke, conducted two online experiments with more than 1,300 U.S. adults to examine how both explicit disclosure and uncertainty regarding AI use affect social impressions in realistic communication contexts (e.g., email, social media, texting).

Their research, published in Computers in Human Behavior, highlights that even though there are these massive penalties in social interactions when AI use is known, people don’t naturally suspect AI use: Participants in realistic situations treated messages of unknown origin as if they were known to be genuinely human-written. In other words, those who use AI as a shortcut most likely get away with it and keep their positive impressions.

Molnar said that concerns about widespread rejection of AI-assisted communication may be overstated for now, though attitudes could shift as AI awareness grows.

Contact: Jared Wadley.

Study: Blissful (A)Ignorance: Despite the widespread adoption of AI in communication, people do not suspect AI use in realistic contexts

This post was originally published by the University of Michigan News and is republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Content Marketers Embrace AI in Content Creation

• A better method for identifying overconfident large language models

by External Contributor via Digital Information World

Content Marketers Embrace AI in Content Creation

by Felix Richter, Data Journalist Statista

Less than four years after the release of ChatGPT marked the beginning of the AI era, artificial intelligence has become an integral part of the content marketing toolkit. From drafting text and generating visuals to analyzing campaign performance, AI-powered tools are being used for many day-to-day tasks, ideally helping teams to save time on routine tasks and make time for creative and strategic thinking.

According to the Statista+ Content Marketing Trend Study 2026, content creation is currently the most common application of AI tools. Just over half of the 252 surveyed B2B content marketing professionals said that their department uses AI to produce text, images or videos. Analytical tasks are another major use case, with 45 percent relying on AI for reporting and performance measurement.

Beyond these core areas, many marketers are also integrating AI into supporting processes. Around 4 in 10 respondents reported using AI for customer service as well as for ideation and inspiration. Others apply the technology to automate workflows, manage knowledge and documentation or for technical tasks such as search engine optimization. At 4 percent, only a small minority of organizations reported not having started using AI tools at all.

For more insights on AI in content marketing, download the 8th edition of our B2B Content Marketing Trend Study for free here.


This post was originally published on Statista and is republished under Creative Commons License CC BY-ND.

Reviewed by Asim BN.

Read next: A better method for identifying overconfident large language models

by External Contributor via Digital Information World

A better method for identifying overconfident large language models

By Adam Zewe | MIT News

Image: Marija Zaric / Unsplash

This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.

Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.

But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.

To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.

Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.

They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.

“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Understanding overconfidence

Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.

However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.

The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.

“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.

To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

An ensemble approach

The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.
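A toy sketch of the cross-model comparison, using a simple word-overlap (Jaccard) proxy in place of the learned semantic-similarity measure the researchers would actually rely on (such as sentence embeddings or an entailment model):

```python
def jaccard_similarity(a, b):
    """Toy semantic-similarity proxy: word overlap between two answers.
    A real implementation would compare sentence embeddings or use an
    NLI model; this stands in only for illustration."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def epistemic_uncertainty(target_answer, ensemble_answers):
    """Average semantic divergence between the target model's answer and
    the answers from an ensemble of comparable models. High divergence
    suggests the target model may be confidently off-base."""
    sims = [jaccard_similarity(target_answer, a) for a in ensemble_answers]
    return 1.0 - sum(sims) / len(sims)
```

If every ensemble model gives a semantically matching answer, the score is 0; if they all disagree with the target, it approaches 1.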

To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.

“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
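The combination step itself is simple. In this sketch, the additive form follows the quote above, but the unreliability threshold is an illustrative assumption, not the paper's exact formulation:

```python
def total_uncertainty(aleatoric, epistemic, threshold=0.5):
    """Sum the aleatoric (self-consistency-based) and epistemic
    (cross-model) components into a total-uncertainty score, and flag
    the prediction as unreliable when it crosses the threshold.
    The 0.5 threshold is a placeholder; in practice it would be
    calibrated per task."""
    tu = aleatoric + epistemic
    return tu, tu >= threshold
```

A response that is both self-consistent and matched by other models stays below the threshold; a confidently wrong answer that diverges from the ensemble gets flagged even though its aleatoric component is low.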

TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.

They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.

Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.

Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.

In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.

This work is funded, in part, by the MIT-IBM Watson AI Lab.

Republished with permission of MIT News.

Reviewed by Irfan Ahmad.

Read next:

Can’t stop endlessly scrolling? Tips to help you take back control

AI Chatbots Push Users to Share Sensitive Data During Tax Help, With ChatGPT Most Persistent, Analysis Finds

Thursday, March 19, 2026

AI Chatbots Push Users to Share Sensitive Data During Tax Help, With ChatGPT Most Persistent, Analysis Finds

By Surfshark

Image: Salvador Rios / Unsplash

As tax season hits, you have options to file your tax return yourself or with help from someone else. But what if you let an AI chatbot step in to assist? It's a tempting choice — always available, always free, and ready to help when professional assistance is either too expensive or hard to find as deadlines loom. A new study, conducted by Surfshark, explores whether turning to AI chatbots is as smart a move as it sounds.

OpenAI's ChatGPT, Google's Gemini, and xAI's Grok have emerged as the frontrunners in the AI chatbot and tools sector. Recent data from Similarweb indicates that these platforms collectively account for nearly 84% of total traffic, making them the most likely choice for individuals seeking consultation, including tax-related advice. ChatGPT leads with 5.4B monthly visitors, followed by Gemini with 2.1B, and Grok with 0.3B.¹

Key insights

  • Simulated conversations about tax returns on the most popular AI chatbots worldwide — ChatGPT, Gemini, and Grok — showed a clear pattern: users were actively pushed to provide personal information, starting from their job, income, or country, even with neutral prompts like “tax return”. ChatGPT was the most persistent, while Gemini and Grok were easier to navigate for those avoiding personal data input. For example, with Gemini, even when users were encouraged to provide personal information and chose not to, the AI chatbot smoothly continued the conversation, using example data if necessary. In contrast, ChatGPT made several attempts in a row to steer users toward providing their sensitive information.
  • To illustrate AI chatbots' data collection behavior, consider an interaction with ChatGPT. Initially, this chatbot concludes its response with a request: “Just tell me your job and approximate yearly income, and I can estimate your refund.” If the user chooses to ignore this request, ChatGPT persists in its next response, asking the user to share the requested details and even seeking more data. If the user proceeds to ignore such requests, ChatGPT adopts a more assertive tone, using phrases like “Please reply with these” and “You can answer like this example.” Ultimately, if the user prompts with “no,” the chatbot ceases to offer estimates. In the case of Gemini, if a user responds with “no,” the chatbot replies with a message: “No worries at all! Since you'd rather not share your specific numbers, I've put together a ‘cheat sheet’ for the current 2025–26 financial year (ending June 30, 2026). You can use this to do the math yourself.”
  • AI chatbots can gather user information beyond what is explicitly provided in user prompts. For instance, in a simulated interaction using a VPN connected to an Australian server, ChatGPT tailored its responses based on the user's location data. It started with phrases such as “If you're in Australia” and offered tax-related details specific to that country. In contrast, Gemini not only provided information relevant to Australia but also included details for the US and UK. This broader coverage makes its data collection practices less obvious and potentially less suspicious for users who aren't familiar with the Terms of Service and Privacy Policy. Grok, on the other hand, focused on delivering responses related to US tax returns and offered to customize information further if users provided additional details about their circumstances — such as their country, income type, or specific questions.
  • This example aligns with findings from a study Surfshark conducted last year, which examined the data collection practices of the top AI chatbots available on the Apple App Store. The study revealed how data-hungry some of these chatbots can be, with certain apps collecting up to 32 out of 35 possible data types. Location data is just one example of the extensive information these chatbots may gather, highlighting the importance of understanding their data collection practices.
  • However, using Grok can be frustrating because it frequently prompts users to sign up; once they do, companies can gain insights into their habits and interests or target them with ads, as ChatGPT has already announced plans to do. During simulated conversations, interactions were often interrupted with a “high demand” note, forcing users to either wait or sign up for higher priority access. Additionally, after the fifth prompt, a message limit was reached, preventing further chat progression. Similarly, ChatGPT frequently asked users to create an account to unlock features such as uploading files or images or accessing enhanced capabilities. In contrast, Gemini's approach was the least aggressive, suggesting that users create an account only after they had been prompted at least 10 times.
  • The main website page for Gemini explicitly states that the AI chatbot can make mistakes. ChatGPT provides a similar disclaimer after the first prompt, additionally warning users not to share sensitive information and noting that chats may be reviewed and used to train their models. In contrast, Grok does not visibly display such a statement on the chat screen, although it is included in the Terms of Service. For these reasons, transparency about sources and access to links are crucial for assessing the accuracy of AI chatbot information, particularly in sensitive areas like tax returns.
  • A highly concerning finding is that Gemini does not provide any source references, raising issues about the verifiability of its information. Meanwhile, ChatGPT takes an inconsistent approach, offering links only for certain highlighted words, with explanatory text in a sidebar. In contrast, Grok enhances transparency by providing an extensive list of sources with direct links to content. However, it is important to note that merely providing a link does not ensure that the information was correctly interpreted or used by the AI, leaving users to navigate these technologies at their own risk.

Methodology and sources

The study aims to provide insights into the chatbots' behavior and the risks associated with their use in sensitive contexts, such as tax return assistance. To simulate user behavior, three distinct starting prompts were used: a neutral “tax return”, a more engaging “help me with my tax return,” and a third prompt, “how can you help me with my tax return?” Following the initial prompt, subsequent user interactions were limited to “yes” if the chatbot suggested an action, or “no” if it requested personal information. If the interaction stalled, the AI chatbot’s first suggestion was used to continue the conversation. Each initial prompt was entered into a new chat thread using Google Chrome's Incognito mode, with a VPN connected to an Australian server. All interactions were conducted in English. Data was collected on March 12, 2026.

Among the top five AI chatbots and tools with the highest user traffic, OpenAI's ChatGPT, Google's Gemini, and xAI's Grok were selected for analysis because their accessible free versions do not require users to sign in. As a result, Anthropic Claude and DeepSeek were excluded from the analysis due to their requirement for account creation before use. No additional settings were adjusted after accessing the AI chatbot websites.

Note: The same prompts do not always produce identical results, so the first recorded take was used for analysis.

For the complete research material behind this study, visit here.

This post was originally published by Surfshark and is republished on DIW with permission.

Reviewed by Asim BN.

Read next:

Two-thirds of workers are burned out – here’s what science says about how to tackle it

Iran war shows how AI speeds up military ‘kill chains’



Wednesday, March 18, 2026

Iran war shows how AI speeds up military ‘kill chains’

Craig Jones, Newcastle University and Helen M Kinsella, University of Minnesota
Growing reliance on AI in warfare challenges ethics, legality, and accountability amid rising civilian harm.
Image: Saifee Art / Unsplash

The US-Israel war on Iran has been described as “the first AI war”. But recent deployments of artificial intelligence are, in fact, the latest in a long history of technological developments that prize a need for speed in the military “kill chain”.

“Sixty seconds – that’s all it took,” claimed a former Israeli Mossad agent of the strikes that killed Iran’s supreme leader, Ayatollah Ali Khamenei, on February 28 2026, the first day of the US-Israel war on Iran.

The speed and scale of war have been significantly enhanced by use of AI systems. But this need for speed brings serious risks for civilians and military combatants alike.

Modern military operations produce and rely on an enormous amount of intelligence. This includes intercepted phone calls and text messages, the mass surveillance of the internet (known as “signals intelligence”), as well as satellite imagery and video feeds from loitering drones. We can think of all this intelligence as data – and the problem is, there’s too much of it.

As early as 2010, the US Air Force was concerned about “swimming in sensors and drowning in data”. Too many hours of footage, and too many analysts manually reviewing this intelligence.

AI systems can dramatically speed up the analysis of military intelligence. Brad Cooper, head of US Central Command (CentCom), recently confirmed the use of AI tools in the war against Iran, saying:

These systems help us sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react … Advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.

In 2024, an investigation by Georgetown University found that the US Army’s 18th Airborne Corps had employed AI to assist with intelligence processing – reducing a team of 2,000 to just 20.

The allure of speed

In the second world war, the aerial targeting cycle – from collecting images to assembling target packages complete with intelligence reports – could take weeks or even months. But over the ensuing decades, the US military set about what it called “compressing the kill chain” – shortening the time between the identification of a target and use of force against it.

During the first Gulf war of 1991, Iraq’s president Saddam Hussein made use of mobile missile launchers that would roam the desert firing Scud missiles. By the time US radar identified its location, the launcher could be miles away. This “shoot and scoot” tactic required new technology to track these mobile targets.

Mobile Scud missile launchers proved a new challenge for the US military during the first Gulf war.

A key breakthrough came shortly after the September 11 attacks in the form of an armed Predator drone.

In November 2002, the CIA targeted and killed Al Qaeda’s leader in Yemen, Qaed Salim Sinan al-Harithi. This heralded a new era of warfare in which drones piloted from military bases in the US flew remotely over the skies of Yemen, Somalia, Pakistan, Iraq, Afghanistan and elsewhere.

The drones’ powerful cameras could take high-resolution video and beam it back to the US via satellite in a matter of seconds, enabling the drone operators to track mobile targets. The same drone which had eyes on the target could fire missiles to kill or destroy the target.

With greater speed comes greater risk

Two decades ago, it was easy to dismiss as hyperbole the idea that the coming age of cyberwarfare might bring about “bombing at the speed of thought”, a phrase coined by American historian Nick Cullather in 2003. Yet with the advent of AI warfare, the unthinkable has become almost antiquated.

Part of the push to employ AI tools is the sense that human thought is no match for the processing speeds enabled by AI systems. The US Department of Defense’s artificial intelligence strategy states: “Military AI is going to be a race for the foreseeable future, and therefore speed wins … We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.”

While the precise uses of AI by US and other military is shrouded in secrecy, information has been made public that highlights the risks of its use on civilian populations.

In Gaza, according to Israeli intelligence sources, the AI systems Lavender and Gospel have been programmed to accept up to 100 civilian casualties (and occasionally even more) for a strike on a single suspected Hamas combatant. More than 75,000 people are estimated to have been killed there since October 7 2023.

In February 2024, a US airstrike killed a 20-year-old student, Abdul-Rahman al-Rawi. At the time, a senior US official admitted the strikes had used AI targeting – although confusingly, the US military now says it has “no way of knowing” whether it used AI in specific airstrikes.

The risk is that AI could lower the threshold or cost of going to war, as people play an increasingly passive role in reviewing and rubber-stamping the work of AI.

The embedding of AI into military kill chains intersects with other alarming developments. After years of inaction, the US military spent more than a decade developing an infrastructure to avoid civilian casualties in war, but it has been almost totally dismantled under the Trump administration.

The lawyers who give advice to the military on targeting operations, including compliance with international law and rules of engagement, have been sidelined and fired.

Meanwhile, since the start of the war in Iran, more than 1,200 civilians have been killed, according to the Iranian Health Ministry. On February 28, the US military struck an elementary school in the south of Iran, killing at least 175 people, most of them children.

The US secretary of defense, Pete Hegseth, has been clear that the military’s aim in Iran is for “maximum lethality, not tepid legality. Violent effect, not politically correct”.

With such an attitude, and by privileging speed over deliberation, civilian casualties become inevitable, and accountability ever more elusive.

Craig Jones, Senior Lecturer in Political Geography, Department of Geography, Newcastle University and Helen M Kinsella, Professor of Political Science and Law, Department of Political Science, University of Minnesota

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next: 

• From Anthropic to Iran: Who sets the limits on AI’s use in war and surveillance?

• Political Unrest Is the Leading Cause of Internet Shutdowns



Two-thirds of workers are burned out – here’s what science says about how to tackle it

By Taylor & Francis

Evidence-based, long-term psychological strategies to build a framework for your brain’s resilience and overcome burnout

Burnout is at an all-time high, with some studies saying two-thirds of employees now cite job burnout as a major challenge.

Overwork and chronic stress do not just drain energy; they can erode health, contributing to a wide range of psychological and physical problems, including depression, anxiety, cardiovascular disease and even increased stroke risk.

Shaina Siber offers solutions rooted in science in her new book, Using ACT and CFT for Burnout Recovery: The Beyond Burnout Blueprint, with strategies to help people in high pressure situations break the cycle of exhaustion.

What is burnout?

The term “burnout,” coined by psychologist Herbert Freudenberger in the 1970s, described a state of physical and mental exhaustion among workers. Decades later, the World Health Organization formally recognised burnout as an “occupational phenomenon,” characterised by exhaustion, cynicism, detachment and reduced effectiveness.

“Burnout isn’t just making us miserable; it’s making us sick. Half a century after naming the problem, we are left collectively scratching our heads on how to resolve it.

“If you’re experiencing burnout, chances are you’ve already tried to ‘fix’ it. Maybe you leaned into conventional wisdom: More exercise, more sleep, more meditation, more sunshine, more kale. Maybe you bought into the idea that a vacation or spa day would reset your system.

“Here’s the truth: We cannot rely on ‘good vibes only’ for finding our way out of burnout. There aren’t enough green juices, yoga classes, or massages in the world to self-care burnout into submission. Even the most restorative vacation glow often evaporates before you’ve finished unpacking,” Siber explains.

Siber says that while we cannot ignore the systemic realities that drive burnout, such as unsafe staffing, impossible workloads, workplace discrimination and other pervasive and damaging issues, we can acknowledge these challenges and find a way to cope that does not cause us physical and psychological harm.

“I do not ask people to deny or minimise these issues, or pretend they don’t matter. But burnout isn’t something you can simply eliminate once your external circumstances change. Pain and challenge are inevitable in work, and in life,” she says.

Burnout: A neurological and psychological perspective

Burnout is more than just feeling tired; it’s a state of chronic stress that rewires the brain. Science tells us that prolonged stress activates the amygdala, the brain’s fear centre, while suppressing activity in the prefrontal cortex, which governs decision-making and emotional regulation.

This imbalance leaves individuals stuck in survival mode, unable to access the psychological flexibility needed to recover.

Siber explains: “Burnout often pulls us into mental time travel: replaying the past, catastrophising the future, or checking out altogether. Burnout isn’t just about exhaustion; it’s about the erosion of meaning, connection, and agency in our lives.”

Acceptance and Commitment Therapy (ACT) and Compassion-Focused Therapy (CFT) offer a way to recalibrate.

ACT promotes a concept called ‘radical acceptance’ to encourage psychological flexibility, the ability to stay present, open up to difficult experiences and take action in keeping with wider goals. Meeting difficult situations with acceptance can alter the brain’s neural responses to difficult thoughts and emotions by reducing the hyperactivity in the brain’s Default Mode Network (DMN), which is linked to rumination and self-centred thinking, while improving the connections between the higher-thinking parts and emotional processing centres for more measured responses.

CFT complements this by using compassion to reduce the control of the brain’s fear centre, regulate the nervous system and activate the brain’s affiliative pathways that promote safeness and connection. Together, these approaches help individuals move from survival mode to thriving.

A science-based blueprint for burnout recovery

Siber’s Beyond Burnout Blueprint integrates ACT and CFT into a framework designed to tackle burnout at its roots, as opposed to tempering its impact with lifestyle adjustments.

Unlike conventional wellness fixes, which often focus on short-term nervous system regulation techniques like exercise or meditation, this approach goes further into the psychological and systemic bodily reactions that fuel burnout.

The framework begins with creating a vision, which involves clarifying your deeply held values to serve as a guide throughout the process.

“Imagine the life you’re building toward, not just the challenges you’re trying to escape,” Siber explains.

Then, the process entails welcoming the unwanted, which involves learning how to sit with discomfort rather than suppressing it, thereby fostering resilience and emotional openness.

Watching your words is another critical step, focusing on minimising unhelpful narratives that fuel self-criticism and replacing them with more compassionate and flexible self-talk. Far from being a ‘nice-to-have’, compassion helps to regulate the nervous system.

“Practicing fierce compassion is essential for cultivating self-compassion, which softens the grip of burnout and promotes emotional healing,” Siber explains.

“Compassion makes the flexibility ACT cultivates more accessible and sustainable.

“Compassion, especially self-compassion, isn’t a finish line you cross once. It’s a lifelong relationship you tend to, one choice, one breath, one moment at a time.”

Also, people should identify their strengths and what matters to them, allowing them to rediscover what energises and fulfils them, she suggests.

Siber describes exercises designed to help readers apply these principles in their daily lives. The “Spotting Inflexibility” exercise, for instance, helps individuals identify patterns of psychological rigidity that fuel burnout. By noticing these patterns without judgement, readers can begin to shift their responses.

Burnout in high-pressure professions

Burnout doesn’t discriminate, but it disproportionately affects those in high-stakes fields like healthcare, education, law, finance, and tech.

Siber highlights the unique challenges faced by these professions, from moral injury in healthcare to the relentless demands of competitive corporate cultures.

For leaders and teams, she emphasises the importance of systemic change, such as fair workloads, flexible arrangements and psychologically safe environments.

“True prevention requires redesigning work itself,” Siber says. “Fair workloads, trained managers, and accessible mental health resources are essential.”

For people in high pressure roles, Siber explains why nurturing resilience is a more sustainable tactic than lifestyle changes: “Burnout resilience allows you to regulate, refocus, and rise when burnout shows up. It’s not about working harder to fix yourself. It’s about learning to move through discomfort without losing sight of what matters most.”

This post was originally published on Taylor & Francis Group Newsroom and is republished on DIW with permission.

Image: Vitaly Gariev / Unsplash

Reviewed by Irfan Ahmad

Read next: 

• Tech companies are blaming massive layoffs on AI. What’s really going on?

Political Unrest Is the Leading Cause of Internet Shutdowns

Tuesday, March 17, 2026

Political Unrest Is the Leading Cause of Internet Shutdowns

by Tristan Gaudiaut, Data Journalist Statista

Governments around the world continued to impose restrictions on internet access in 2025, often in response to political tensions and public unrest. According to data from Surfshark, political turmoil was by far the leading cause of such measures last year. As our chart shows, 25 regional internet shutdowns and 16 nationwide shutdowns were linked to political instability, along with 10 cases involving the blocking of social media platforms.

Protests were another major trigger. Authorities imposed 13 regional shutdowns and three social media blocks in response to demonstrations. Elections also played a role, particularly when governments sought to control the flow of information during sensitive political periods. In 2025, six nationwide shutdowns and five social media blocks were linked to election-related concerns.

These measures include actions such as blocking websites, restricting social media platforms or messaging services and imposing regional or nationwide internet shutdowns. Many of these restrictions were concentrated in Asia and Africa. Governments in ten Asian countries introduced 56 new restrictions in 2025, while eight African countries accounted for another 20 cases. India recorded the highest number of incidents, imposing 24 restrictions during the year, often linked to political unrest or protests. Other countries reporting multiple incidents included Iraq, Afghanistan and Iran, where authorities repeatedly limited internet access during periods of tension or demonstrations.


Note: This post originally appeared on Statista and is republished on DIW under Creative Commons License (CC BY‑ND).

Read next: 

• Mobile Accounts for Nearly 60 Percent of Web Traffic

• 2026 Social Media Benchmark: TikTok Engagement Soars 49% YoY to 3.70%, Instagram Holds 0.48%, Facebook 0.15%, X Drops to 0.12%

2026 Social Media Benchmark: TikTok Engagement Soars 49% YoY to 3.70%, Instagram Holds 0.48%, Facebook 0.15%, X Drops to 0.12%

By Elena Cucu - Socialinsider

These social media benchmarks for 2026 will help you empower your strategy. See how your brand stacks up against industry standards.

If I were to ask where your brand feels most “seen” online, would you pick TikTok? Instagram? Facebook or X? Maybe all of them—depending on what you’re hoping to spark with your latest post.

Let’s be real: audiences are moving faster than ever, and sometimes all you get is a scroll, a silent view, or the occasional quick “like.” On other days, your community comes alive, commenting, sharing, or even starting a conversation that takes on a life of its own.

With platform habits and algorithms always changing, it can be tough to know what real engagement actually looks like anymore.

That’s why Socialinsider analyzed 70M social media posts across TikTok, Instagram, Facebook, and X, to understand the future of social media, audience interactions, and how brands can better prepare their strategies for 2026.

This Socialinsider 2026 social media benchmarks report analyzes engagement rates, impressions, likes, comments, shares, and posting frequency benchmarks across Facebook, Instagram, TikTok, and X (formerly Twitter).

By understanding these trends, brands can identify opportunities, optimize their content strategies, and enhance their social media return on investment (ROI).

Executive summary

  • TikTok’s engagement rate is 3.70%, up 49% YoY. Instagram’s engagement rate is 0.48%, staying almost flat in 2025.
  • Facebook averaged 0.15% engagement, dipping in early 2025 and declining gradually afterward.
  • Average comments per post fell on TikTok (24%) and Instagram (16%), suggesting a shift toward more passive engagement.
  • TikTok recorded notable growth in shares per post, increasing by 45% YoY, mirroring the upward trend in overall engagement. As for Instagram, it registered a 12% increase.
  • Both TikTok and Instagram experienced an increase in video views. TikTok had a 3% growth rate, while Instagram had a more pronounced 29% YoY growth rate.
  • Brands post an average of 5 posts per week on Instagram and TikTok.

Social media benchmarks 2026 by platform

Platform   | 2024 Engagement Rate | 2025 Engagement Rate
-----------|----------------------|---------------------
TikTok     | 2.50%                | 3.70%
Instagram  | 0.50%                | 0.48%
Facebook   | 0.15%                | 0.15%
X          | 0.15%                | 0.12%

Each year, the landscape of social media engagement evolves—driven by shifting user behavior, algorithm changes, and brands’ creative strategies.

Keeping up to date with the latest social media benchmarks is more crucial than ever for marketers who want to set informed goals, outperform competitors, or report on campaign success. Because, as we all know, digging into engagement benchmarks is about much more than pinning down a number - it’s about context, clarity, and the confidence to know you’re putting your effort in the right place.

Whether you’re aiming to improve your TikTok performance, curious about the average engagement rate on Facebook, or looking for that sweet spot on Instagram, understanding these social media engagement benchmarks will help you set realistic targets (and brag a bit about your wins).


So, how did engagement shift over the past year across the biggest platforms? Let’s break it down:
  • TikTok: TikTok made headlines yet again, with engagement rates leaping from 2.50% to a standout 3.70%, registering an impressive 49% YoY growth. For brands looking to push boundaries or tap into new audiences, TikTok’s growth cements its role as the go-to channel for high energy and high returns.
  • Instagram: Here’s where it gets interesting. Instagram’s engagement rate nudged down a touch, from 0.50% to 0.48%. It might not feel like much, but even a slight drop is worth noting. Still, if you ever wondered “what is the average Instagram engagement rate?” or how it compares, you’ve got your answer: Instagram remains a step above Facebook when it comes to sparking conversations and connections.
  • Facebook: You might be surprised, or maybe not, that the average Facebook engagement rate hasn’t budged—it’s holding strong at 0.15% year-over-year. For many brands, this means expectations on Facebook should be steady: you’re playing in a mature, less volatile space. If you’re asking what the average Facebook engagement rate is in 2026, it’s still 0.15%, so you know exactly what to aim for.
  • X: And what about X? The platform saw a subtle slide in engagement rates, dipping from 0.15% in 2024 to 0.12% in 2025. For brands that still invest in X, this signals a need for sharper content strategies and perhaps a rethink on how best to capture attention in a changing environment. While the numbers are modest, they offer a valuable reminder: standing still isn’t an option if you want to keep your audience engaged on X.
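The YoY percentages quoted above follow from simple percent-change arithmetic on the table values. A minimal Python sketch (the helper name is my own; note that percentages recomputed from the rounded table values can differ by about a point from the headline figures, which were presumably derived from unrounded data):

```python
def yoy_change(old: float, new: float) -> float:
    """Percent change from last year's value to this year's."""
    return (new - old) / old * 100

# Engagement-rate table values (2024 -> 2025)
print(round(yoy_change(2.50, 3.70)))  # TikTok: 48 (headline rounds to 49)
print(round(yoy_change(0.50, 0.48)))  # Instagram: -4
print(round(yoy_change(0.15, 0.12)))  # X: -20
```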
The engagement gap exists because people use these platforms very differently. Instagram is still largely about polished, aesthetic curation, while TikTok feels more raw, authentic, and immediate.
On TikTok, people don’t just scroll for inspiration. They actively look for answers. Whether it’s finding a restaurant in London, a solution for acne, or an honest review of the latest Marvel movie, users are increasingly going straight to TikTok instead of Google. But it goes beyond utility. TikTok is where people find communities around very specific interests and, in many cases, a sense of belonging.
The “For You” page plays a huge role in this. Discovery on TikTok feels effortless. The algorithm shows you what you want before you even know you’re looking for it. That’s what fuels deeper engagement. TikTok shortens the distance between users and the content they actually care about, while Instagram is still catching up when it comes to frictionless discovery. - Morgane Wasilewski, Social Media Manager at Channable

Strategic tactics to increase your engagement rate across platforms

Looking to turn benchmark insights into real results? Here are a few proven tactics to help boost engagement rate across your social channels:

  • Humanize your brand: Show real people, stories, and behind-the-scenes moments. Audiences engage more with authenticity than with “stock” or overly polished content.
  • Embrace platform-specific features: Polls on X, Reels on Instagram, native stories—each feature comes with algorithmic boosts and higher user participation.
  • Invest in powerful hooks: Capture attention right away—whether through a dynamic visual, a bold headline, or a pressing question. The faster you deliver a reason for audiences to interact with you, the more engagement your content will rack up.

Average likes per post across platforms

Platform   | 2024 Average Likes per Post | 2025 Average Likes per Post
-----------|-----------------------------|----------------------------
TikTok     | 3092                        | 3492
Instagram  | 395                         | 335
Facebook   | 155                         | 255
X          | 40                          | 15

It’s no secret that a quick glance at your like count gives you a pulse check on how your content is resonating. But averages across platforms?

That’s where benchmarks become real-world roadmaps, helping you answer “Are we ahead of the curve, or is there room to grow?”

  • TikTok: Still the pulse-raiser of the social scene, TikTok stands out for its consistently high appetite for content. User enthusiasm hasn’t just stayed strong—it’s elevated (by 12%), showing that audiences are not only present but actively rewarding creative, eye-catching posts. If you’re leaning into trends and keeping things fresh, TikTok is still the go-to channel for visible, organic love.
  • Instagram: This year brought a subtle shift for Instagram—while likes remain a core part of the experience, there’s a distinct sense of rising competition, with likes decreasing by 15% compared to previous values. The platform’s atmosphere has grown a bit more competitive, meaning it now takes even more creativity and true community-building to earn those taps. For brands, it’s a cue to double down on originality and ensure your content has a genuine point of view.
  • Facebook: Defying expectations, Facebook managed a quiet resurgence in user engagement, gaining 64% more likes than the previous year. For brands that really listen to their audience and tailor content accordingly, Facebook can still deliver. It’s a reminder that authenticity and relevance can still move the needle on legacy platforms—even when trends seem to point elsewhere.
  • X: With a 62% YoY decrease in likes, it's becoming clearer that audiences here are becoming more selective and thoughtful, making every like harder to earn but potentially more meaningful. Brands can’t afford to phone it in: winning attention on X now requires sharper, more relevant content and a willingness to rethink what real engagement means on this platform.
Instagram likes are declining not because content is weaker, but because the platform prioritizes watch time, saves, and shares over passive engagement. Users increasingly interact through DMs and private channels, which don't show up in public metrics. The engagement isn't gone. It's just moved to the actions that actually drive reach.
Facebook’s like rebound shows what happens when brands stop treating every platform the same and remember that Facebook was built for community, not distribution. Conversational posts that speak directly to existing audiences lower the friction to engage, making likes a natural response again. It’s proof that platform-native strategy beats cross-posting every time. - Valeria Sillani, Global Social Media Manager at EasyVista and OTRS

Strategic tactics to increase your likes across platforms

Here are several strategic moves you can use across any network to turn more of your audience into active fans:

  • Offer quick-win tips, hacks, or inspiration: Share bite-sized advice, “did you know?” facts, or motivational messages that provide instant value—content that’s useful or heartening tends to get more likes and shares.
  • Optimize your visual storytelling: Prioritize striking imagery, bold graphics, or stop-motion visuals that stand out immediately in crowded feeds. High-quality, scroll-stopping visuals are often rewarded with more likes at first glance.
  • Create recurring series with an interactive hook: Establish an ongoing content theme—such as “Monday Motivation” or “Ask Me Anything Wednesdays”—that encourages habitual interaction. When followers come to expect (and look forward to) consistent, interactive posts, likes tend to grow over time.

Average comments across platforms

Platform   | 2024 Average Comments per Post | 2025 Average Comments per Post
-----------|--------------------------------|-------------------------------
TikTok     | 66                             | 50
Instagram  | 24                             | 20
Facebook   | 17                             | 22
X          | 1                              | 1

Comments are where true engagement lives—where audiences pause the scroll, join the conversation, and leave their mark. But not all platforms spark dialogue equally, and this year brought some interesting shifts in the art of getting people talking.

  • TikTok: The buzz is real, but conversation is getting more selective. While TikTok still inspires tons of quick reactions, users are now less likely to jump into long comment threads, with comments per post down 24% YoY. This is a sign the platform’s interaction style is evolving—quick, high-energy content still rules, but deeper exchanges may need a new approach.
  • Instagram: Engagement through comments remains a pillar on Instagram, though it’s on a gentle downward slope (a 16% YoY decrease). With so much content competing for attention, getting followers to pause and say something requires more intentional prompts and community-minded hooks.
  • Facebook: The original home of social dialogue is regaining some spark, registering a 29% increase in the number of comments generated. Despite the platform’s age, Facebook posts are seeing livelier comment sections, pointing to the value of familiar formats and trusted communities. When brands nurture discussion and invite open input, their audience is ready to chime in.
  • X: The nature of engagement here is brief and immediate; most users scroll, like, or move on. For those aiming to build actual conversation threads, success now depends on delivering hot takes or timely commentary that simply can’t be ignored.
Comments require time, and users are looking for quicker ways to engage with content. Instead of reacting publicly, they are forwarding content to friends privately or in group chats. This points to a shift toward connection-driven engagement.
It’s also important to note that Gen-Z is often described as the “spectator generation”: highly tuned in, but selective about when and where they speak.
Also, platforms and algorithms are ever-changing. We’ve seen an increase in prioritisation around watch time and shares which could also explain this shift in behaviour. Overall, this shift tells us that users still care about content but they prefer to engage privately rather than through a public thread, leaving no trail. - Melody Doffman, Social Media Manager at Nestlé

Strategic tactics to increase your comments across platforms

Here are strategic tactics to help spark (and sustain) a lively comment section across any platform:

  • Ask for feedback, ideas, or suggestions: Request input on new products, features, or content directions. Phrasing like “What should we try next?” or “How can we improve?” empowers your audience and shows that their voice matters, motivating them to comment.
  • Share unfinished stories or open-ended scenarios: Post cliffhangers, “what would you do?” questions, or stories with missing pieces. The curiosity and desire to weigh in encourage followers to fill in the blanks and keep the conversation going.
  • Partner with micro-influencers for authentic collabs: Instead of big-budget sponsorships, tap niche or local creators who align with your brand values. Their loyal, engaged audiences trust their content—meaning your brand message gets a genuine boost in both reach and interaction.

Average shares across platforms

Platform   | 2024 Average Shares per Post | 2025 Average Shares per Post
-----------|------------------------------|-----------------------------
TikTok     | 170                          | 248
Instagram  | 40                           | 45
Facebook   | 13                           | 17
X          | 1                            | 1

As audiences grow more selective—often opting to scroll, swipe, or simply “like” in silence—the humble share has taken on a whole new significance. Shares are now the gold standard of audience action: proof that your content strikes a chord deep enough for someone to broadcast it beyond their own feed.

This shift is especially telling as passive consumption climbs across nearly every network. In a climate where getting users to even pause is a win, inspiring them to hit “share” says you’ve delivered true value—something worth amplifying.

This year’s trends show that while not every platform is built equally for virality, every channel offers unique opportunities to inspire that all-powerful share.

  • TikTok: Virality is thriving on TikTok. Sharing culture on this platform keeps gaining momentum as users enthusiastically boost what entertains, educates, or hits a cultural nerve. For creators and brands with their finger on the pulse, TikTok continues to deliver unmatched share potential, registering 45% more shares YoY.

  • Instagram: Sharing on Instagram maintained its slow-but-steady climb, increasing by 12%. While the share button isn’t the platform’s star, consistently shareable content—think valuable tips, memes, or beautiful visuals—means audiences are a bit more willing to spread the love to DMs and Stories.

  • Facebook: Sharing has seen an uptick on Facebook as well (increasing by 30%), confirming that meaningful, relatable content still finds its way to broader audiences here. Tapping into personal connections and community-focused posts is your in-road to more organic reach.

  • X: Shares (retweets) on X remain flat, underscoring the challenge of igniting widespread conversation. To cut through, content must be especially bold, timely, or divisive—otherwise, users are much more likely to observe than amplify.

Strategic tactics to increase your shares across platforms

Turning scrollers into sharers is a mark of resonance on any platform. While TikTok leads the pack, every social network rewards content that taps into emotion or value—so focus on creating posts people can’t wait to show others.

  • Leverage user-generated content (UGC): Spotlight posts, stories, and case studies from real customers and followers. Audiences are more likely to share content that features themselves or people they relate to—plus, UGC brings an instant credibility boost.
  • Tap into emotion—humor, awe, or inspiration: Content that makes people laugh, grabs their attention, or lifts their spirits is naturally shareable. Lean into moments or messages that spark a strong reaction, and your followers will want to pass it on.
  • Encourage sharing as a form of participation: Invite your audience to be part of a movement—whether it's tagging friends, joining a challenge, or sharing their take on a topic. When sharing becomes a way to participate, your reach multiplies.
If marketers want to drive more shares, they need to focus on content people genuinely want to send to their group chats. That might be something highly relatable, genuinely useful, creatively inspiring, or simply something that makes people smile. The common thread is value — your content needs to earn its place in someone’s scroll.
Sharing is also a form of self-expression. When people share a post, they’re signaling their interests, values, or sense of humor. Pay attention to your own behavior here: when you share content from other brands or creators, save it and ask how that idea could be adapted for your brand.
You also don’t need to reinvent the wheel. Analyze your most-shared posts to spot patterns in topics or formats, and don’t hesitate to repurpose what’s already worked. As marketers, we see everything we publish but the average follower doesn’t, which makes revisiting strong ideas even more effective. - Elissa Wardrop, Social Media Specialist at IKEA

Average views across platforms

Platform   | 2024 Average Views per Post | 2025 Average Views per Post
-----------|-----------------------------|----------------------------
TikTok     | 6268                        | 6496
Instagram  | 2635                        | 3403
Facebook   | 1100                        | 913
X          | 1430                        | 2979

Views are the foundation of social success: every like, comment, or share begins with someone simply watching. But viewing habits aren’t static, and shifts in how (and where) people consume content reveal where the action—and the opportunity—truly lie.

  • TikTok: Momentum remains strong on TikTok. Audiences are consistently turning up in high numbers, increasing its average number of views by 3% YoY, with the platform continuing to be the go-to for viral reach. Creativity and trend-savvy content still get rewarded with widespread visibility here.
  • Instagram: Instagram saw an impressive 29% lift in viewership, which may be due in part to Instagram’s new way of measuring views (in 2025, impressions became views). The takeaway: Instagram is quickly becoming a powerful place for brands to grow their reach—especially with snackable, visually compelling content.
  • Facebook: Views dipped slightly on Facebook (by 17%), signaling that organic reach is becoming more challenging. To capture attention here, brands need to experiment with format, timing, and hyper-relevant topics to stand out amid the noise.
  • X: The platform saw a notable burst in viewership this year (average views more than doubled, up 108% per the table above), likely tied to viral moments and broader shifts in platform culture. Short-form, news-driven, and visually engaging content now has a clearer runway to reach broad audiences, offering renewed potential for brands willing to play bold.
In 2025, Instagram’s discovery engine pushed content further and faster than ever. With Reels now driving over 20% of time spent on the platform and expanded to three minutes, brands have more surfaces and more time to earn attention. Discovery no longer depends on follower count. Video-first content and collaborations are what the algorithm rewards, allowing even smaller brands to reach thousands organically and generate meaningful views without relying solely on paid spend. - Sara Zuehlke, Senior Social Media Strategist at Digible

Strategic tactics to increase your views across platforms

Ready to get your content in front of more eyes? Try these proven tactics to expand and drive up your view counts across every platform:

  • Tap into cultural moments and real-time events: React to trending news, holidays, or viral topics with your brand’s unique angle. Timely, relevant reactions often earn higher views as audiences dive in on what everyone’s already talking about.
  • Encourage team or employee sharing: Motivate internal team members or brand ambassadors to share your content to their networks, multiplying early exposure and attracting new eyes.
  • Leverage eye-catching thumbnails and titles: Design strong, curiosity-driven thumbnails and headlines that stand out and make audiences want to click and watch.
The increase in views is a real opportunity for brands that felt priced out of reach before. As views go up, the pressure to be perfect goes down. What matters more now is showing up consistently with a clear point of view, focusing on creative, relevance, and storytelling rather than constant selling. Views open the door, but long-term brand building, recognition through repetition, and what you do once people are paying attention is what truly drives impact. - Victoria I., Brand Manager at fatjoe

Monthly posting frequency benchmarks

Platform   | 2024 Average Posts per Month | 2025 Average Posts per Month
-----------|------------------------------|-----------------------------
TikTok     | 15                           | 15
Instagram  | 20                           | 20
Facebook   | 47                           | 24
X          | 50                           | 70
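As a side note, the “average of 5 posts per week” cited in the executive summary maps onto these monthly figures only roughly. Converting with the average month length of about 4.35 weeks (a calculation of my own, for illustration) gives slightly lower weekly rates:

```python
WEEKS_PER_MONTH = 365.25 / 12 / 7  # average weeks in a month, ~4.35

def posts_per_week(posts_per_month: float) -> float:
    """Convert a monthly posting cadence to a weekly one."""
    return posts_per_month / WEEKS_PER_MONTH

print(round(posts_per_week(20), 1))  # Instagram: 4.6 posts/week
print(round(posts_per_week(15), 1))  # TikTok: 3.4 posts/week
```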

How often you show up matters just as much as what you share. Here’s a quick look at how posting rhythms are evolving—and what that means for your brand’s visibility on each platform:

  • TikTok: Consistency remains key—brands are sticking to a steady output. The platform rewards regular participation, but without overwhelming audiences. It’s all about maintaining momentum with a relaxed but reliable posting rhythm.
  • Instagram: Post volume is holding steady, signaling that on Instagram, it’s quality and variety (think posts, Stories, and Reels), not just quantity, that keeps audiences engaged and algorithms happy.
  • Facebook: A sharp reduction in post frequency (a 48% decrease) points to a more intentional approach, with brands moving away from volume and toward curated, high-value updates that cut through the crowded feed.
  • X: Posting pace has accelerated (by 40%), underscoring the platform’s real-time, always-on nature. Timely, high-volume posting is still the route to relevance—and missing a beat could mean missing the conversation entirely.
The X platform rewards speed and conversation, not polished, curated, or aesthetic content. Brands need to stop treating X like it needs a content calendar. Encourage team members to engage in relevant conversations rather than creating only batched posts.
Set up monitoring or social listening tools for industry keywords, have clear brand guidelines, and let people be human. One authentic reply can outperform a week of scheduled posts. – Bukunmi Weke, Social Media Strategist
For well-resourced teams with strong creative and production processes, posting 5 times per week is great, as Socialinsider’s benchmarks also point out. But for many brands, especially those with small marketing teams, chasing frequency quickly becomes a creativity trap.
The number itself isn’t the issue; consistency and value are. I’d always choose fewer, higher-quality posts that genuinely resonate over hitting an arbitrary posting target. – Danielle Mote, Social Media Specialist, Construct It and BJS

Strategic tactics to optimize your posting strategy

Want to make every post count? Try these practical tactics to fine-tune your posting cadence and keep your audience engaged—no matter the platform:

  • Batch-create and schedule your content: Planning in advance ensures consistency (even on busy weeks) and helps you find the right frequency without burning out.
  • Mix formats and content types: Don’t just rely on the same kind of post—rotate videos, images, carousels, Stories, or even live sessions to engage different audience segments and keep your feed fresh.
  • Use analytics to spot your sweet spot: Monitor when your audience is most active and which posting patterns yield the highest engagement, then fine-tune your calendar accordingly.

Methodology

Within this social media benchmarking report, Socialinsider analyzed a representative sample of international brands with an active presence on TikTok, Instagram, Facebook, and X (formerly Twitter) between January 2024 and December 2025. The findings of this study are based on the analysis of 70M social media posts.

Socialinsider defines social media engagement rate as measurable interactions on Facebook, Instagram, Twitter, and TikTok posts, including comments, reactions, and shares, with particularities specific to each platform.

Facebook engagement rate per post (by followers): Facebook engagement rate per post is calculated as the sum of reactions, comments, and shares on the post divided by the total number of fans that page has. The result is then multiplied by 100.

Instagram engagement rate per post (by followers): Instagram engagement rate per post is calculated as the sum of likes and comments on the post divided by the total number of followers that page has. The result is then multiplied by 100.

Twitter engagement rate per post (by followers): Twitter engagement rate per post is calculated as the sum of likes and Retweets received on the Tweet divided by the total number of followers that page has. The result is then multiplied by 100.

TikTok engagement rate per post (by followers): TikTok engagement rate is calculated as the sum of likes, comments, shares, and saves on the post divided by the total number of followers that page has. The result is then multiplied by 100.
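The four definitions above share one formula, differing only in which interactions are summed before dividing by followers. A small Python sketch (the function and the example numbers are illustrative, not Socialinsider’s code):

```python
def engagement_rate(interactions: int, followers: int) -> float:
    """Follower-based engagement rate per post, as a percentage."""
    return interactions / followers * 100

# Hypothetical Instagram post: likes + comments, per the definition above
likes, comments, followers = 335, 20, 74_000
print(f"{engagement_rate(likes + comments, followers):.2f}%")  # 0.48%
```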

Average likes per post: represents how many likes a post receives on average.

Average comments per post: represents how many comments a post receives on average.

Average shares per post: represents how many shares a post receives on average.

Average views per post: represents how many views a post receives on average.

Note: This post originally appeared on SocialInsider and is republished here with permission.

Reviewed by Ayaz Khan.



by External Contributor via Digital Information World