Wednesday, March 25, 2026

Online ad fraud is a feature, not a bug

By Benjamin Kessler

Image: Erik Mclean / Unsplash

Technological advancements and the dynamics of the platform economy make rooting out fraud more complicated than it may seem.

With print media circulation and broadcast television viewership in free fall, a lot is riding on the online advertising space being able to take up the slack. The good news is, digital ad spend is booming.

The bad news? A good chunk of that money is chasing a mirage.

Online ad fraud—where ad publishers falsely inflate engagement metrics (impressions, clicks, etc.) to boost revenues—is a growing problem that eats upwards of 20 percent of global ad spend.

Min Chen and Abhishek Ray, both professors in the information systems and operations management area at Costello College of Business at George Mason University, are researching how online ad networks, such as Google Ads, can improve upon existing anti-fraud methods. Their recently published paper in Management Science explores deep-rooted dynamics of the online ad ecosystem that make eliminating fraud even more complicated than it may seem at first glance. The paper was co-authored by Subodha Kumar of Temple University.

The researchers used a game-theoretic model to replicate the interconnected decision-making of the three players involved: advertisers, publishers, and the networks that serve as go-betweens.

“The way the ecosystem works is that the platforms in the middle, the ad networks, share the benefit from the transaction,” Chen explains. “People have been arguing whether the network is incentivized to put its best efforts behind deterring fraud, since the fraudulent traffic benefits the networks too. So we tried to create a model to capture this.”

“If the advertisers rely solely on the reports from the ad networks, they may be at risk. They should use third-party tools to audit the performance better.” — Min Chen, information systems and operations management professor at the Costello College of Business at George Mason University

In addition, the model incorporates the two main fraud deterrents that networks routinely use. One is technological—platforms can adopt tougher standards for fraud detection, widening the scope of suspicious activity that gets flagged. The other is economic—lowering payments to all publishers so as to disincentivize large-scale fraud.
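The interplay between these two levers can be illustrated with a toy calculation. This is a hand-rolled sketch, not the authors' actual game-theoretic model: it simply assumes a publisher chooses how much fraudulent traffic to inject, that stricter detection flags (and forfeits) a larger share of that traffic, and that the payment level scales whatever gets through.

```python
# Toy illustration (not the paper's model): a publisher's payoff as a
# function of injected fraud, under lax vs. strict detection.
def publisher_payoff(fraud, payment_per_click, detection_strictness,
                     real_traffic=100):
    """Expected revenue: payment times the traffic that survives detection."""
    caught = fraud * detection_strictness         # expected fraud flagged
    paid_traffic = real_traffic + fraud - caught  # traffic that gets paid
    return payment_per_click * paid_traffic

# With lax detection, more fraud always pays handsomely;
# strict detection blunts the incentive.
lax = [publisher_payoff(f, 1.0, 0.1) for f in (0, 50, 100)]
strict = [publisher_payoff(f, 1.0, 0.9) for f in (0, 50, 100)]
print(lax)     # payoff rises steeply with fraud
print(strict)  # payoff rises only slightly
```

The sketch shows why the economic lever matters alongside the technological one: under strict detection, the marginal gain from fraud shrinks, but honest publishers also bear false-positive losses unless payments compensate them.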

Surprisingly, the researchers find that the online ad economy works best when the two approaches seem to be working at cross-purposes. A tightening in fraud detection technology, paired with high payments for publishers, may sometimes produce the best outcomes for advertisers, publishers, and networks, as the market evolves.

The reason is rooted in the imperfect nature of fraud detection. To be sure, detection systems are improving all the time, especially with the advent of AI. But fraudsters do their best to blend in and adapt, using technological tools that often outpace those of their pursuers. “You cannot catch all the fraud, and if you try, you are going to mis-detect a lot of non-fraud,” Chen says.

Tougher fraud detection, then, will always mean more false positives, no matter how good the technology gets. To counter this inherent unfairness, which penalizes good and bad actors alike, the ad network’s payments to publishers need to go up. Otherwise, publishers may take their business elsewhere—especially those most valuable to the system, i.e., the trustworthy ones—thereby decreasing advertisers’ valuation of ad traffic.

“These ad networks are kind of a unique system where you can be monetarily rewarded for being honest, or punished for being dishonest,” Ray says. “What we discover for this system is there can be a way in which we can give carrots to people, not just sticks.”

On a similar note, the researchers find that an attempt to purge “bad apple” publishers from the system can backfire due to false positives. In fact, fraud can sharply increase if networks, believing they have solved the problem, relax their fraud detection standards and raise incentives for the remaining publishers. “Since the publishers who produce the fraudulent traffic are fewer now, the ad network may no longer need to maintain a strict detection policy. This can encourage the remaining ones to commit much more fraud,” Chen explains.

To Ray and Chen, online ad fraud is, in at least one sense, no different from older forms of malfeasance that are found in all free societies. “We need to have some kind of mechanism for managing the level of fraud, because the fraud detection method is never going to be perfect, whether it’s financial fraud, accounting fraud, etc.,” Chen says.

But as an example of the contemporary platform economy, the online advertising ecosystem is also distinctive, in that its de facto regulatory authority has skin in the game. The ad networks’ mixed incentives—as both beneficiaries and inhibitors of fraud—can undermine integrity and trust within an already-compromised system.

“If the advertisers rely solely on the reports from the ad networks, they may be at risk,” Chen says. “They should use third-party tools to audit the performance better.”

Editor’s Note: This post was originally published on George Mason University News and republished on DIW with permission.

Reviewed by Asim BN.

Read next: 

• Why you may be paying more than you need to for digital subscriptions

• Researchers Pioneer New Technique to Stop LLMs from Giving Users Unsafe Responses


by External Contributor via Digital Information World

Researchers Pioneer New Technique to Stop LLMs from Giving Users Unsafe Responses

By Matt Shipman, NC State News

Image: Nahrizul Kadri / Unsplash

Researchers have identified key components in large language models (LLMs) that play a critical role in ensuring these AI systems provide safe responses to user queries. The researchers used these insights to develop and demonstrate AI training techniques that improve LLM safety while minimizing the “alignment tax,” meaning the AI becomes safer without significantly affecting performance.

LLMs, such as ChatGPT, are being used for an increasing number of applications – including people asking for advice or instructions on how to perform a variety of tasks. The nature of some of these applications means that it is important for LLMs to generate safe responses to user queries.

“We don’t want LLMs to tell people to harm themselves or to give them information they can use to harm other people,” says Jung-Eun Kim, corresponding author of a paper on the work and an assistant professor of computer science at North Carolina State University.

At issue is a model’s safety alignment, or training protocols designed to ensure that the AI’s outputs are consistent with human values.

“There are two challenges here,” says Kim. “The first challenge is the so-called alignment tax, which refers to the fact that incorporating safety alignment has an adverse effect on the accuracy of a model’s outputs.”

“The second challenge is that existing LLMs generally incorporate safety alignment at a superficial level, which makes it possible for users to circumvent safety features,” says Jianwei Li, first author of the paper and a Ph.D. student at NC State. “For example, if a user asks for instructions to steal money, a model will likely refuse. But if a user asks for instructions to steal money in order to help people, the model would be more likely to provide that information.

“This second challenge can be exacerbated when users ‘fine-tune’ an LLM – modifying it to operate in a specific domain,” says Li. “For example, an LLM may have good safety performance. But if a user wants to modify that LLM for use in the context of a specific business or organization, the user may train that LLM on additional data. Previous research shows us that fine-tuning can weaken safety performance.

“Our goal with this work was to provide a better understanding of existing safety alignment issues and outline a new direction for how to implement a non-superficial safety alignment for LLMs.”

To that end, the researchers created the Superficial Safety Alignment Hypothesis (SSAH), which neatly captures how safety alignment currently works in LLMs. Basically, it holds that superficial safety alignment views a user request as binary, either safe or unsafe. In addition, the SSAH notes that LLMs currently make the binary determination on whether to answer the request at the beginning of the answer-generating process. If the request is deemed safe, a response is generated and provided to the user. If the request is deemed not safe, the model declines to generate a response.

The researchers also identified safety-critical “neurons” in LLM neural networks that are critical for determining whether the model should fulfill or refuse a user request.

“We found that ‘freezing’ these specific neurons during the fine-tuning process allows the model to retain the safety characteristics of the original model while adapting to new tasks in a specific domain,” says Li.

“And we demonstrated that we can minimize the alignment tax while preserving safety alignment during the fine-tuning process,” says Kim.

“The big picture here is that we have developed a hypothesis that serves as a conceptual framework for understanding the challenges associated with safety alignment in LLMs, used that framework to identify a technique that helps us address one of those challenges, and then demonstrated that the technique works,” says Kim.

“Moving forward, our work here highlights the need to develop techniques that will allow models to continuously re-evaluate and re-select their reasoning direction – safe or unsafe – throughout the response generation process,” says Li.

The paper, “Superficial Safety Alignment Hypothesis,” will be presented at the Fourteenth International Conference on Learning Representations (ICLR 2026), being held April 23-27 in Rio de Janeiro, Brazil.

The researchers have made relevant code and additional information available at: https://ssa-h.github.io/.

This post was originally published on NC State News and republished here with permission.

Reviewed by Ayaz Khan.

Read next: 

• Using your AI chatbot as a search engine? Be careful what you believe

• Why you may be paying more than you need to for digital subscriptions


by External Contributor via Digital Information World

Why you may be paying more than you need to for digital subscriptions

Erhan Kilincarslan, University of Huddersfield


Image: Vitaly Gariev / Unsplash

The way we watch TV, listen to music, order groceries and take photos has changed in the past decade or so. For many of us, all of these activities involve a monthly payment.

Subscriptions have quietly become a major part of household spending across the world. But many people underestimate how much they actually pay. And there is evidence which suggests that the design of subscription services – combined with common human traits – can make these payments easy to overlook.

In the UK, consumers spend around £26 billion a year subscribing to everything from digital media to cosmetics and coffee. (Around 69% of UK households subscribe to at least one video streaming service such as Netflix or Amazon Prime Video.)

And a few small monthly payments can quickly add up. Data from Barclays bank suggests that individual consumers spend £50.60 a month on subscriptions – so more than £600 a year. It also shows that spending on digital content and subscription services has increased by nearly 50% since 2020. In households where several people hold subscriptions, the combined spending can be considerably higher.

The result is a subscription economy that is growing faster than many consumers realise. And one reason households underestimate their spending is that some subscriptions continue running even when people no longer use them.

The UK government estimates that of the 155 million subscriptions currently active in the UK, nearly 10 million are unwanted – at a cost to consumers of £1.6 billion each year.

The charity Citizens Advice has calculated that over £300 million a year is spent on subscriptions that people are not actually using, often because they automatically renewed after a free trial.

In many cases the individual payments are small, which makes them easy to miss in a bank statement.

Behavioural economics offers one explanation. Research shows that people tend to evaluate spending using what’s known as “mental accounting” – the tendency to treat small payments separately instead of thinking about how they add up overall. As a result, people group purchases into categories rather than looking at the total amount leaving their bank account.

A £9.99 streaming subscription or a £4.99 app service may not feel significant on its own. But when several subscriptions accumulate, the combined cost can become substantial.
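The arithmetic is easy to check. The services and prices below are made up for illustration:

```python
# Toy illustration of how "mental accounting" hides the total: each fee
# looks small on its own, but the annual sum is substantial.
subscriptions = {"streaming": 9.99, "music": 10.99, "app": 4.99, "cloud": 2.49}
monthly_total = sum(subscriptions.values())
annual_total = monthly_total * 12
print(f"£{monthly_total:.2f} per month -> £{annual_total:.2f} per year")
```

Four payments that each feel trivial add up to well over £300 a year, which is exactly the gap between how the payments are perceived and what leaves the bank account.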

Another factor is automatic renewal. Many services continue charging unless customers actively cancel. This interacts with what behavioural scientists call “status quo bias”, the tendency to stick with the default option.

When cancelling requires effort or attention, people often postpone the decision and continue paying.

Consumer groups have also raised concerns about so called subscription traps. These occur when people are unintentionally signed up to recurring payments or find it difficult to cancel them.

It has been claimed that more than 20 million adults in the UK have signed up to a subscription without realising it and about 4.7 million people are still paying for one they did not knowingly sign up to.

These cases often involve free trials that automatically convert into paid subscriptions or online sign up processes where the recurring payment is not clearly explained.

Researchers studying digital interfaces have also identified design practices that make subscriptions easier to start than to cancel, sometimes described as “dark patterns” in online design.

New rules

The growing scale of the problem has attracted regulatory attention. The UK government has introduced measures aimed at tackling subscription traps, including clearer information about recurring payments and easier cancellation processes. A consultation is now taking place on how these rules will be implemented before they come fully into force.

The goal is to ensure that consumers understand the financial commitment they are entering when signing up to a subscription service.

The new measures will probably help reduce some accidental subscriptions, particularly those created through unclear sign-up processes or free trials that automatically convert into paid plans. And it seems sensible to make sure that subscription contracts contain clearer information and easier cancellation rights to help consumers avoid unwanted recurring payments.

But behavioural factors such as inertia and automatic renewal mean the problem may not disappear entirely. Even when cancellation is straightforward, consumers often delay reviewing small recurring payments, allowing subscriptions to continue.

For households, digital spending often feels invisible. Subscriptions are typically spread across multiple platforms and paid automatically through bank cards or direct debits. Without a deliberate review of monthly statements, it can be difficult to see how much these payments add up to.

Subscriptions can offer convenience and flexibility. But as the subscription economy continues to grow, it can also quietly increase household spending in ways that many consumers barely notice.

Erhan Kilincarslan, Reader in Accounting and Finance, University of Huddersfield

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next:

• Using your AI chatbot as a search engine? Be careful what you believe

• Instagram, Facebook, TikTok Engagement Rose in Q1 2026 While Snapchat Declined


by External Contributor via Digital Information World

Instagram, Facebook, TikTok Engagement Rose in Q1 2026 While Snapchat Declined

By Adam Blacker, Apptopia

Every quarter, we look at average time spent per daily active user across major US social platforms using Apptopia’s consumer device panel. The Q1 2026 data stands out for one reason: three of four platforms grew engagement year-over-year. Snapchat didn’t.

Comparing Q1 2025 to Q1 2026, Instagram [NASDAQ: META] grew Average Time Spent per DAU by 9.8%. Facebook grew 10.3%. TikTok grew 6.7%. Snapchat [NYSE: SNAP] declined 2.5%.

That alone would be notable. What makes it more significant is where Snap was twelve months ago. Q1 2025 was the strongest first quarter in Snap’s recent history on this metric. Time spent surged 35.8% versus Q1 2024. The 17-25 cohort, Snap’s core franchise demographic, spiked 43.6%. The product was gaining traction across every age group.


Q1 2026 reversed nearly all of it. The 17-25 cohort went from +43.6% to -0.5%. The 26-35 group went from +21.6% to -0.4%. Full-year 2025 data confirms Snap carried momentum through the middle of the year; annual growth was 16.0% for all users, meaning the reversal is recent.

The wider pattern is just as telling. Over the three Q1 periods in our study, Snap’s time spent growth rates were 8.2%, then 35.8%, then -2.5%. That’s a 38pp spread between the highest and lowest readings. Facebook’s equivalent spread was 4 points (6.3%, 8.1%, 10.3%). TikTok’s was 5 points. Instagram’s was 10.5 points, decelerating gradually from a high base. Snap is the outlier on consistency by a wide margin. Its average Q1 growth of 13.8% looks similar to Instagram’s 14.0%, but the path is a spike and a crash versus a steady glide. For anyone building a forward estimate around Snap’s engagement trends, that volatility is the problem. You can underwrite a growth rate that compounds quarter after quarter. You can’t underwrite one that swings 38 points.
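The spread figures quoted above are easy to reproduce (highest minus lowest reading, in percentage points). Instagram and TikTok are omitted below because the text does not give all three of their readings:

```python
# Recompute the volatility figures from the three Q1 growth readings
# quoted in the text (percent, year-over-year).
snap = [8.2, 35.8, -2.5]
facebook = [6.3, 8.1, 10.3]

def spread(readings):
    """Highest minus lowest reading, in percentage points."""
    return max(readings) - min(readings)

print(round(spread(snap), 1))           # Snap's ~38pp spread
print(round(spread(facebook), 1))       # Facebook's 4pp spread
print(round(sum(snap) / len(snap), 1))  # Snap's 13.8% average Q1 growth
```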


“When one platform reverses while the rest of the sector keeps growing, it’s not something macro going on,” said Tom Grant, VP of Research at Apptopia. “If Gen Z were broadly pulling back from social apps, you’d see it everywhere. You do not. So not only is SNAP seeing a Q1 decline while others rise, it is now experiencing rising volatility as a business.”

The rest of the competitive set held up. Facebook posted its third consecutive Q1 acceleration, with growth concentrated in the 26-45 age range — the highest-CPM demographic in digital advertising. Instagram grew across every cohort, led by 26-35 at 14.1%. TikTok still commands the most absolute time per user in every age group, roughly 2x Facebook and 2x Instagram, and its time spent growth of 6.7% was positive if unspectacular.

Time spent across major social platforms continues to grow (time spent on mobile as well), but the consistency of that growth increasingly separates the pack. For investors, the Q1 data suggests the more durable engagement stories right now sit with Meta and TikTok, while Snap’s trajectory remains the one that needs the most proving out.

Note: This post was originally published on Apptopia blog and is republished on DIW with permission.

Reviewed by Irfan Ahmad.

Read next:

• Americans Spend an Average of 6.3 Hours Daily on Mobile Devices; Older Users Log Up to 358 Minutes Across 17 Apps

• Using your AI chatbot as a search engine? Be careful what you believe

by External Contributor via Digital Information World

Tuesday, March 24, 2026

How AI English and human English differ – and how to decide when to use artificial language

Laura Aull, University of Michigan

Suspicion and affection. Apprehension and excitement. Most people have mixed feelings about AI English, whether or not they always recognize it. When reading text generated by AI, people feel it sounds off, or fake. When reading English by a human, people are more likely to feel it has a characteristic voice or a personal touch.

Image: Airam Dato-on - Pexels

What exactly makes English sound human, or sound like AI? And does it matter if AI English never truly achieves a human feel?

I research the institutionalization of English. There is a long, problematic history of people feeling positively or negatively toward different kinds of English, rewarding how it is spoken or written by some sectors of society and devaluing how it is used by others.

When generative AI language tools came along, they scaled up these problems. English-based large language models are trained on text from the public internet. Human instructions tell the models to sound like formal English. Because of that, large language models end up trained on all the bias baked into standardized human texts and ideas.

In my work, I encounter people who would never trust the internet to tell them what is right and wrong, yet they trust generative AI to tell them how to write.

Human vs. AI

The first step to becoming a more informed user of AI English is to try to understand what people mean when they say writing sounds human. This understanding will improve your AI literacy. Most importantly, it will allow you to learn to recognize two qualities that make human English different from AI English: variation and readability.

Human English contains persistent, if subtle, linguistic patterns of variation and readability. By contrast, AI uses what I call exam English – a rather formal, dense English that is favored in academic tests and papers. It is less varied and less readable. People perceive it as robotic, but they also perceive it as smart.

Here’s a quick test: Read the two text messages below and guess which one is by a human and which one is by ChatGPT.

“i’m not sure how to break this to you. there’s no easy way to put it…i can’t make the friday-night fun. sorry. however, feel free to text me during the evening if there are any lulls in conversation. anyway, hope ur exotic trip goes well. see u next term.”

“Hey! I’m really sorry, but I won’t be able to make it Friday night. I hope you all have a great time, and I’ll see you next term!”

A human reader would probably notice several patterns right away. The first message has more “textese”: It defaults to lowercase and includes phonetic spellings “ur” and “u.” The second text has exam English capital letters, commas and spelling.

People are likely to have other impressions, too. Perhaps the first text feels more personal, and less sure of itself. Maybe the second text feels stiff, like it was written by an acquaintance. The first text contains different kinds of phrases and clauses, while the second text repeats the same clause structure four times.

On some level, human readers pick up on such patterns. Most people would say that the first text is by a human and the second is by AI. Indeed, the second passage was generated by ChatGPT.

Even this basic illustration shows that human English includes variation in word usage and grammatical structures that breaks up information and conveys personal meaning. AI English has less variation and more dense noun phrases. In research studies, these patterns appear repeatedly across genres and registers.

Some AI English patterns change

AI writing tools evolve, and large language models vary. GPT 5 was infamously cold-sounding compared with its predecessor GPT 4, for example.

But the patterns I am talking about are likely to persist. AI English favors what exam English has always rewarded: homogeneity and information density. And thus far, instruction tuning – training AI models to follow human instructions – only makes AI English less like human English. It doesn’t help that AI writing is part of what AI bots train on.

The net effect today is that AI English has been trained on English that is much narrower than actual, collective human English in practice. Humans, by contrast, don’t just use language that is probable, but language that is possible – based on the varied language use they have observed, their creative capacity for new utterances and their propensity to blend personal and impersonal language patterns.

AI models can produce conventionally correct, smart-sounding language, but that language lacks the variation, accessibility and creativity that make language human.

How AI and human English can coexist

If you can become more aware of differences between AI and human English, those insights can help you use both language forms more productively. Here are a few steps to take:

Use language labels. When describing a given passage, use labels like “dense,” “plain,” “interpersonal” or “informational,” not social labels like “sounds smart” or “sounds off.” In other words, explore the actual patterns in human and AI English and describe the language itself, not your feelings about it.

Use AI tools selectively. Not only does human English have more accessible and varied patterns, it also engages the brain more than using AI language tools. To help prevent AI English from overshadowing human varied language in the world, use AI selectively.

Use curated tools. Tools like small language models and programs that you can add to a web browser to root out bias, such as Bias Shield, can help people make principled choices about AI English use. Tools such as translingual chatbots can also bring to AI English much more of the global variation in human English.

Be conscious of what sounds smart, and why. A century and a half of exam English makes it easy to think that dense, impersonal writing patterns are smart. But like any language patterns, they have pros and cons. They are not particularly personable or readable, especially for diverse audiences, and they are not representative of the range of global English in use today.

There can be good reasons to use exam English, but not just because AI bots generate it, or because people have learned to perceive it as smarter.

At its best, AI English is a language database driven by statistics. It’s big, but it’s canned. History tells us that a full range of global human English gives people the greatest possibilities for expression and connection.

Laura Aull, Professor of English and Linguistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Laura Aull does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Partners: University of Michigan University of Michigan provides funding as a founding partner of The Conversation US.

Reviewed by Ayaz Khan.

Read next: Blissful (A)Ignorance: People rarely notice AI-written messages in everyday communication


by External Contributor via Digital Information World

Friday, March 20, 2026

Blissful (A)Ignorance: People rarely notice AI-written messages in everyday communication

This news release did not use artificial intelligence—and even if it did, you wouldn’t suspect it.

Image: Solen Feyissa - Pexels

These days, you may be reading AI-written news more often than you think. The same can be said for emails, texts, and social media sites, according to a new study by researchers at the University of Michigan and Duke University.

The study found that undisclosed AI use does not trigger suspicion among people. When AI use is disclosed or strongly suspected (when people already pay a lot of attention to AI), people typically judge senders negatively, said Andras Molnar, U-M assistant professor of psychology and study co-author.

“For example, when we already suspect that someone generated their message using AI, we tend to think of them as less friendly, less trustworthy, less authentic and so on, compared to when the same text is genuinely human-written,” he said. “This ‘AI penalty’ has been widely documented in past studies.”

What the “AI penalty” suggests is that people, on average, lean toward the negative interpretation that focuses on the person (e.g., the person was lazy) instead of the more positive interpretation that takes into account the situation (e.g., there was a lot of time pressure).

However, under more realistic conditions, audiences may be uncertain, or even completely unaware, of communicators’ potential use of AI. Molnar, along with lead author Jiaqi Zhu of Duke, conducted two online experiments of more than 1,300 U.S. adults to examine how both explicit disclosure and uncertainty regarding AI use affect social impressions in realistic communication contexts (e.g., email, social media, texting).

Their research, published in Computers in Human Behavior, highlights that even though there are these massive penalties in social interactions when AI use is known, people don’t naturally suspect AI use: Participants in realistic situations treated messages of unknown origin as if they were known to be genuinely human-written. In other words, those who use AI as a shortcut most likely get away with it and keep their positive impressions.

Molnar said that concerns about widespread rejection of AI-assisted communication may be overstated for now, though attitudes could shift as AI awareness grows.

Contact: Jared Wadley.

Study: Blissful (A)Ignorance: Despite the widespread adoption of AI in communication, people do not suspect AI use in realistic contexts

This post was originally published by the University of Michigan News and is republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Content Marketers Embrace AI in Content Creation

• A better method for identifying overconfident large language models

by External Contributor via Digital Information World

Content Marketers Embrace AI in Content Creation

by Felix Richter, Data Journalist Statista

Less than four years after the release of ChatGPT marked the beginning of the AI era, artificial intelligence has become an integral part of the content marketing toolkit. From drafting text and generating visuals to analyzing campaign performance, AI-powered tools are being used for many day-to-day tasks, ideally helping teams to save time on routine tasks and make time for creative and strategic thinking.

According to the Statista+ Content Marketing Trend Study 2026, content creation is currently the most common application of AI tools. Just over half of the 252 surveyed B2B content marketing professionals said that their department uses AI to produce text, images or videos. Analytical tasks are another major use case, with 45 percent relying on AI for reporting and performance measurement.

Beyond these core areas, many marketers are also integrating AI into supporting processes. Around 4 in 10 respondents reported using AI for customer service as well as for ideation and inspiration. Others apply the technology to automate workflows, manage knowledge and documentation or for technical tasks such as search engine optimization. At 4 percent, only a small minority of organizations reported not having started using AI tools at all.

For more insights on AI in content marketing, download the 8th edition of our B2B Content Marketing Trend Study for free here.

In short, AI now dominates content marketing, aiding creation, analytics, customer service, ideation, workflow and SEO, according to the Statista study.

This post was originally published on Statista and is republished under Creative Commons License CC BY-ND.

Reviewed by Asim BN.

Read next: A better method for identifying overconfident large language models

by External Contributor via Digital Information World