Saturday, March 28, 2026

Is the AI black box right on time?

by Inderscience

Irrespective of the ethics debates and the apocalyptic predictions, artificial intelligence (AI) has already become a central component of economic and institutional decision-making. Research in the International Journal of Intelligent Systems Design and Computing goes beyond an industry-specific analysis of the state-of-the-AI-art and offers a detailed framework for how the many different AI tools are being adopted.

The main point that arises from the analysis is that while AI technologies are being used widely across sectors, organizations do not yet have a strategy that allows AI to be integrated in a way that balances innovation with accountability.

AI encompasses so-called machine learning for recognising patterns in data, natural language processing that can interpret and generate human language, and generative tools that produce text, images, video, computer code, and other output. All these tools are changing many sectors, from healthcare diagnostics to processing industrial and financial data to producing hit pop songs and accompanying videos.

Education and business operations are undergoing similar shifts. Adaptive learning platforms in education adjust course material to suit the way individual students learn. In retail and logistics, AI is being used to refine supply chains, manage inventory, and personalize the customer "experience". Even in the world of law, enforcement agencies are using AI to assess crime scenes and weigh evidence, while judges are using these tools to distil massive briefs into their concluding remarks.

One of the most pressing issues highlighted by the research is data privacy, as AI systems depend on large volumes of often sensitive and personal information. In addition, there is the issue of algorithmic transparency: we are losing the ability to understand how a given AI system arrives at a specific decision. Indeed, many of the most advanced AI models now work essentially as black boxes, meaning their internal processes simply cannot be interpreted… perhaps without resorting to another AI to do the interpretation! Such a lack of transparency might undermine trust in high-stakes contexts such as medical diagnoses or judicial decisions.

To address these issues, the researchers propose a framework based on stakeholder theory, which emphasises the interests of all parties affected by the decisions AI might make. In the business context, they stress that organisations should not focus solely on efficiency or profit; they must adopt a perspective that allows them to weigh the interests of employees, customers, regulators, and society at large when adopting AI. This might only come about, of course, with governance, regulations, and ethical obligations.

Idemudia, E.C. (2025) 'Artificial intelligence's effect and influence on multiple disciplines and sectors', Int. J. Intelligent Systems Design and Computing, Vol. 3, Nos. 3/4, pp.254–274.
DOI: 10.1504/IJISDC.2025.152183.

Image: Immo Wegmann - Unsplash

Edited and reviewed by Ayaz Khan.

Originally published by Inderscience and republished here with permission. Editor’s note: Typo corrected (“bot” to “not”).


AI makes rewilding look tame – and misses its messy reality

Mike Jeffries, Northumbria University, Newcastle

AI-generated rewilding images present neat, idealized landscapes, ignoring ecological messiness and controversial species realities.
‘Create an image of what rewilding in England looks like’, according to ChatGPT. Image generated by The Conversation using ChatGPT. CC BY-SA

Humans have always imagined the natural world. From Ice Age cave paintings to the modern day, we depict the animals and landscapes we value – and ignore those we don’t.

Now artificial intelligence is doing the imagining for us. And when asked to picture “rewilded” Britain, it produces landscapes that are strikingly similar – and tame.

Two geographers at the University of Aberdeen recently did exactly this. In their research they present examples of how widely used AI chatbots (Gemini, ChatGPT and others) generated images of rewilded landscapes in the UK. The bots were prompted with commands such as “Can you produce an image of what rewilding in Scotland looks like?” or “Create an image of what rewilding in England looks like”, tailored to each bot’s style.

The authors recognise that the commands are very general, but that gives the bots free rein. The generated images were then compared on both composition (for example, point of view, scale, lighting) and content (what is in the picture and what is not: primarily habitat types, species and humans).

A landscape without risk

The AI rewilded landscapes were all very similar, all but one featuring distant hills, grading politely to a valley foreground of open meadow or heath with a stream or pool. A golden light plays across the scenes, illuminating foreground flowers. Ponies and deer feature routinely, plus the occasional Highland cow. Perhaps unsurprisingly there were no humans, nor any human presence shown by buildings or other artefacts.

Two AI-generated images of rewilded landscapes
Images generated by the Aberdeen researchers using ChatGPT of rewilding in Scotland (left) and England (right). Note the similarity to the image generated by The Conversation using the same prompt (at the top of this article). Wartmann & Cary / ChatGPT, CC BY-SA

There was also no mess, no decay, no death, no animals likely to provoke a sharp intake of breath. No wolves, lynx, bears or bison, the creatures that routinely haunt the real arguments about rewilding.

Two AI-generated images of rewilded landscapes
Copilot’s take on rewilding in Scotland (left) and England (right). Wartmann & Cary / ChatGPT, CC BY-SA

The pictures were achingly dull and polite, or as the authors put it, "ordered and harmonious bucolic".

Only experts get the messy version

AI really can generate images of ecologically accurate rewilding. This one made with Gemini, for instance, captures the messiness and chaos of a genuinely rewilded British landscape:

Gemini prompt: ‘A hyper-realistic, wide-angle landscape photograph of the British countryside 50 years after a large-scale rewilding project. The scene is defined by 'ecological messiness’ and structural diversity: thickets of thorny scrub like blackthorn and hawthorn transitioning into expanding groves of self-seeded oak and birch. No straight lines or mown grass. The ground is a mosaic of tall tussocky grasses, rotting fallen logs (deadwood), and muddy wallows created by free-roaming herbivores. In the mid-ground, a small herd of Exmoor ponies or Iron Age pigs are rooting through the undergrowth. The vegetation is dense and layered, featuring wild dog rose, brambles, and stands of willow in damp hollows. The lighting is the soft, dampened silver of a British overcast afternoon, highlighting the textures of lichen, moss, and wet leaves. No fences, no roads, no manicured edges—just a complex, tangled, and thriving wild ecosystem.‘ Gemini / The Conversation, CC BY-SA

However, it only does this when given highly specific instructions about species, landscapes, habitat types, and so on. In other words, you need to know what a rewilded landscape should look like in order to get a convincing image of one.

For most users, the result is something else entirely: a lowest common denominator vision of nature.

AI is copying our sanitised vision of the future

The sanitised AI landscapes produced in the recent study are not surprising. The Aberdeen researchers note that the models draw inspiration from available sources, including the social media and websites of environmental initiatives and NGOs promoting rewilding, such as Cairngorm Connect and Knepp Estate Rewilding. Those visuals often use aerial perspectives, shot by drone from otherwise inaccessible vantage points. The animals shown tended to be both iconic and lovable, such as beavers or wildcats.

People and our structures such as homes or farm buildings were largely missing. Reptiles, amphibians and invertebrates were notably absent too.

Wolves, bison, rewilded forest
Rewilding images are more accurate when they display natural processes like scavenging or storm damage. (Image generated by The Conversation using Gemini and a detailed prompt). The Conversation / Gemini, CC BY-SA

A particular concern of the authors is that the imagery used by the NGOs excludes processes, species and people that might challenge a narrow, conventional view of prettified nature. No wonder the AI was conjuring sanitised landscapes, even though actual rewilding routinely creates landscapes that are an aesthetic challenge, in particular messy, scrubby terrain.

We’ve always argued about what nature should look like

Visual imagery has long had a powerful influence on our view of nature. Wild landscapes in the UK were once regarded with disdain by the more genteel classes. The writer Daniel Defoe, in his 1726 travelogue of a tour through Britain, characterised the Lake District as “All Barren and wild, of no use or advantage to man or beast…Unpassable hills…. All the pleasant part of England is at an end”. He wasn’t a fan.

The Romantic movement turned this bias on its head and venerated the sublime, sometimes terrible, beauty of the landscape. Take, for example, Caspar David Friedrich’s famed painting of 1818, Wanderer above a sea of fog, in which a lone adventurer gazes from a crag into a distant view of summits and clouds.

There is a touch of the sublime to the AI landscapes, certainly in the viewpoint from on high. A challenge for rewilding projects, however, is that the resulting landscapes can be distinctly ugly and messy: neither wistfully pretty nor dramatically sublime.

AI-generated image of wild pigs and horses in a rewilded Britain
The messy reality of a rewilded Britain. (Image generated by The Conversation using Gemini and a detailed 376 word prompt). The Conversation / Gemini, CC BY-SA

Rewilded sites are often scrubby and untidy. This can happen on a large scale as natural processes kick in and open habitat scrubs over. Scrub habitat can be superb for wildlife: the Knepp Estate, for example, credits the regeneration of willow scrub for the return of the iconic purple emperor butterfly. The trouble is that scrub looks untidy and uncared for.

This has become a particularly common criticism of nature recovery projects, especially in urban settings: road verges left unmown, weeds in pavements, parks less manicured. Some researchers call it an aesthetic backlash. The AI wildscapes are largely free of scrub, which is no surprise, because scrub does not feature much in the image sources the AI drew upon. This is a risk for projects in the real world: if the public comes to expect nature recovery to look neat and picturesque, the messy reality may be harder to accept.

No scrub, no wolves, no people. AI has created a very tame rewilding.

Mike Jeffries, Associate Professor, Ecology, Northumbria University, Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Friday, March 27, 2026

Fragmented phone use — not total screen time — is the main driver of information overload, study finds

by Tiina Aulanko-Jokirinne and Sarah Hudson

Frequent micro-checks and bursts of messaging are most strongly linked to feeling overloaded — and these habits are the hardest to change, says research from Aalto University.

Image: Muhmed Alaa El-Bank / Unsplash

Amid heated discussion of screen time, social media use and the impact of digital devices on our well-being, a seven-month study from Aalto University in Finland sheds new light on what overwhelms users the most, and the results aren’t what you might think.

‘Screen time does matter, but the heaviest users aren’t the most overloaded,’ says doctoral researcher Henrik Lassila. ‘Those who feel most overwhelmed are the ones who return to their phone again and again for brief moments and then put it down shortly after.’

The seven-month study followed the digital behaviour of nearly 300 adults in Germany across smartphones and computers. Participants completed repeated surveys about information overload, while all apps and websites used were logged, creating a rich longitudinal dataset of real-world device use.

The findings show that fragmented use occurs most often on mobile devices, and especially in messaging: for example, watching a short clip, locking the screen, then returning a few minutes later. These patterns create gaps and constant task switching, and such ‘bursty’ routines were most strongly associated with feeling overwhelmed, even when total time spent on devices was similar.

‘We feel overloaded when we can’t process all the incoming information and our minds feel ‘full’ or stressed,’ Lassila says. ‘Information overload is linked with negative emotions, which can in turn drive more checking — a vicious cycle.’ While the study doesn’t directly address the question of why fragmented checking is so stressful, Lassila suggests that task-switching has been identified in other studies as particularly cognitively tiring.

Interestingly, although fragmented use often includes messaging, the study found that more time spent messaging did not by itself correspond to higher digital overwhelm. Rather, it was the short, frequent returns to the device that mattered most.

Hard habits to break

Earlier surveys have suggested that people quit social media when they feel a sense of digital overwhelm. The new study found little evidence for that. ‘People find it hard to change their behaviour,’ says Professor Janne Lindqvist. ‘Surprisingly, highly overloaded and non-overloaded participants used their devices for roughly the same total time over the study period. Those at the highest levels of overload tended to stay there, and those not overloaded rarely became overloaded.’

According to the researchers, device use and the feeling of overload are tightly woven into daily routines, making them difficult to change. One practical idea is a ‘micro-check tracker’ that would show users how often they return to their phones in short bursts. ‘You don’t need to respond to every ping immediately. Do one thing at a time,’ Lindqvist advises. ‘Ideally, turn off non-essential notifications and be present with whatever you’re doing.’
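As a rough sketch of what such a micro-check tracker could measure, the snippet below counts short sessions that begin soon after the previous one ended. The timestamps and both cutoffs are invented for illustration; the study does not specify these thresholds.

    from datetime import datetime, timedelta

    # Hypothetical screen-on/screen-off events; a real tracker would read
    # these from the device's usage logs.
    sessions = [
        (datetime(2026, 3, 27, 9, 0, 0), datetime(2026, 3, 27, 9, 0, 40)),
        (datetime(2026, 3, 27, 9, 3, 10), datetime(2026, 3, 27, 9, 3, 55)),
        (datetime(2026, 3, 27, 9, 6, 5), datetime(2026, 3, 27, 9, 7, 0)),
        (datetime(2026, 3, 27, 12, 0, 0), datetime(2026, 3, 27, 12, 25, 0)),
    ]

    MAX_CHECK_LENGTH = timedelta(minutes=2)  # assumed cutoff for a "micro-check"
    MAX_RETURN_GAP = timedelta(minutes=5)    # assumed gap that counts as a quick return

    micro_checks = 0
    for i, (start, end) in enumerate(sessions):
        is_short = (end - start) <= MAX_CHECK_LENGTH
        quick_return = i > 0 and (start - sessions[i - 1][1]) <= MAX_RETURN_GAP
        if is_short and quick_return:
            micro_checks += 1

    print(f"Bursty micro-checks: {micro_checks}")  # 2 for the sample data above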

In a follow-up study currently under peer review, the team also finds that overload correlates with psychological stress, negative emotions and anxiety.

‘These days many of us are on our phones repeatedly,’ Lindqvist says. ‘Try batching: check messages twice a day and reply in one session. Based on our findings, you may feel less stressed.’

The paper, ‘Stop Fiddling With Your Phone and Go Offline’, will be presented at CHI 2026, the leading conference on human–computer interaction, and is available online here.

Note: This post was originally published on Aalto University and is republished here with permission.

Reviewed by Irfan Ahmad.


Thursday, March 26, 2026

Research Shows TikTok Spreads Inaccurate Mental Health Content More Than Other Social Media Platforms

By UEA Communications

Image: Solen Feyissa - Pexels


A substantial proportion of TikTok posts about ADHD and autism are misleading - according to a new study from the University of East Anglia (UEA).

Researchers investigated the accuracy of mental health and neurodivergence information across social media platforms including YouTube, TikTok, Facebook, Instagram and X (formerly Twitter).

They found that these platforms are awash with misleading or unsubstantiated mental health content - and that TikTok is the worst offender.

The study also reveals that posts about neurodivergence such as autism and ADHD contained higher levels of misinformation than many other mental health topics.

Dr Eleanor Chatburn, from UEA’s Norwich Medical School, said: “Our work uncovered misinformation rates on social media as high as 56 per cent. This highlights how easily engaging videos can spread widely online, even when the information isn’t always accurate.

“Social media has become an important place where many young people learn about mental health, but the quality of this information can vary greatly. This means that misleading content can circulate quickly, particularly if there aren’t accessible and reliable sources available.”

How the research happened

The team analysed more than 5,000 social media posts about mental health topics including autism, ADHD, schizophrenia, bipolar disorder, depression, eating disorders, OCD, anxiety and phobias.

The systematic review is the first to examine mental health and neurodivergence information across multiple social media platforms.

TikTok shows higher levels of misinformation

The study found that TikTok frequently contained higher levels of inaccurate or unsubstantiated mental health content than other platforms.

Dr Alice Carter undertook the research as part of her doctoral thesis. She said: “When we looked closely at TikTok content, studies reported that 52 per cent of ADHD-related videos and 41 per cent of autism videos analysed were inaccurate.

“By contrast, YouTube averaged 22 per cent misinformation, while Facebook averaged just under 15 per cent,” she added.

Why misinformation is such a problem

Dr Chatburn said: “Mental health information on social media matters because many young people now turn to these platforms to understand their symptoms and possible diagnoses.

“TikTok content has been linked to young people increasingly believing they may have mental health or neurodevelopmental conditions. While this questioning can be a helpful starting point, it’s important these questions lead to proper clinical assessment with a professional.

“As well as leading to misunderstanding of serious conditions and pathologising ordinary behaviour, misinformation can also lead to delayed diagnosis for people that actually do need help.

“When false ideas spread, they can feed stigma and make people less likely to reach out for support when they really need it.

“It can also make mental illness seem scary or hopeless, which creates even more fear and misunderstanding.

“On top of that, when people come across misleading advice about treatments, especially ones that aren’t backed by evidence, it can delay them from getting proper care and ultimately make things worse.”

Professionals vs influencers - who should we trust?

Unsurprisingly, the review found that content created by healthcare professionals was consistently more accurate. However, professional voices still represent only a small share of mental health content circulating on these platforms.

Dr Carter said: “In the case of ADHD on TikTok, for example, just three per cent of videos by professionals contained misinformation, compared with 55 per cent of videos by non-professionals.

“While lived-experience can play an important role, with personal stories helping people to feel understood and raising awareness of mental health conditions, it is vital to ensure that accurate and evidence-based information from clinicians and trusted organisations is also visible and easy to find.

“TikTok’s algorithms are also designed to push rapidly engaging content and this is a major driver of misinformation.

“Once users show interest in a topic, they are bombarded with similar posts - creating powerful echo chambers that can reinforce false or exaggerated claims.

“It is a perfect storm for misinformation to go viral faster than facts can catch up.”

YouTube Kids - a rare bright spot

YouTube Kids was found to contain no misinformation for anxiety and depression, and only 8.9 per cent for ADHD - a result attributed to the platform’s stricter moderation rules.

By contrast, standard YouTube was described as “highly inconsistent”, with videos ranging from poor to moderately reliable, depending heavily on the topic, channel and influencer.

Clinicians must become creators

The review concludes with a call for health organisations and clinicians to create and promote better evidence-based content.

The team have also called for improved content moderation, standardised tools for assessing online mental health information, and clearer definitions of misinformation.

‘The Quality of Mental Health and Neurodivergence-Related Information on Social Media: A Systematic Review’ is published in The Journal of Social Media Research.

Note: This article was originally published by the University of East Anglia, and is republished with permission.

Reviewed by Asim BN.


Your voice, your typing, your sleep – what workplace wellbeing apps are really analysing

Mohammad Hossein Amirhosseini, University of East London

Image: Cottonbro studio / Pexels

A workplace wellbeing app might seem like a simple and helpful tool – a mood check-in, some stress management advice, or a chatbot asking how your week has gone. But behind that supportive language, some systems are also quietly analysing your voice, writing style and digital behaviour for signs of psychological distress.

These tools are already on the market – aimed at workplaces, universities and healthcare. They are framed as early-intervention systems that promise to cut costs and identify problems before they become serious. Unfortunately, companies are under no obligation to report using them, so data about how widespread they are is lacking.

The basic idea behind these tools is that behaviour leaves patterns. Artificial intelligence (AI) systems trained on large datasets learn to recognise signals associated with particular mental health conditions, and when similar signals appear in new data, the system produces a probability estimate.

For many people, the surprising part is how much ordinary behaviour can reveal. Voice recordings can pick up changes in rhythm, pitch and hesitation. Language models can analyse word choice and emotional tone. Smartphone data has also been explored as a way of tracking changes in sleep, movement and social interaction – all without the person doing anything out of the ordinary.
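In outline, such a system is just a classifier over behavioural features. The toy sketch below shows the general shape of the idea; the features, weights and numbers are all invented for illustration, not taken from any real product or from the research described here.

    import math

    # Invented behavioural features for one person, e.g. derived from voice
    # recordings and smartphone data (all values illustrative).
    features = {"speech_rate_vs_baseline": -0.8, "pauses_per_minute": 1.5, "sleep_hours": 5.5}

    # Invented weights standing in for what a model might learn from a
    # large labelled dataset.
    weights = {"speech_rate_vs_baseline": -0.9, "pauses_per_minute": 0.6, "sleep_hours": -0.4}
    bias = 0.2

    score = bias + sum(weights[k] * v for k, v in features.items())
    probability = 1 / (1 + math.exp(-score))  # logistic function -> probability estimate
    print(f"Estimated probability of distress: {probability:.2f}")

    # As the article stresses, this is a statistical signal, not a diagnosis:
    # tiredness, nerves or speaking a second language can produce the same features.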

But detecting a statistical signal is very different from identifying a genuine problem. Human behaviour is deeply contextual. Someone may speak slowly because they are tired, nervous or communicating in a second language. Reduced online activity might simply reflect a busy week.

Even well-designed systems will make mistakes. A person who is genuinely struggling may not show the behavioural patterns the system was trained to recognise, while someone else may be incorrectly flagged as being in distress.

The pressure to develop these tools is real. The World Health Organization estimates that depression and anxiety cost the global economy US$1 trillion (£800 billion) a year in lost productivity. Universities report rising demand for counselling, and employers are dealing with burnout and stress-related absence. Automated early-warning systems can seem like an attractive answer.

When wellbeing becomes surveillance

But this technology can change something fundamental about how mental health is understood. Traditionally, mental health is assessed through conversations between a person and a therapist, where context matters enormously. These systems work differently, inferring psychological states from behavioural traces that were never intended to communicate emotional information.

Once those inferences are made, they can influence decisions well beyond healthcare. Assessments of someone’s emotional state could shape workplace programmes, student support systems or insurance models, affecting how institutions judge a person’s reliability or suitability for a role. In effect, psychological states become a new kind of data.

There are particular risks for some groups. Neurodivergent people often communicate in ways that differ from the norms assumed by many datasets. Someone speaking in a second language may pause more frequently, producing speech patterns an algorithm could misinterpret. A person going through grief or illness may display signals that resemble those associated with mental health conditions – without actually having one.

Used carefully by healthcare professionals, these tools could have genuine value – helping therapists spot early warning signs of deteriorating mental health. But the same capability looks very different when deployed across a workplace or university without people’s knowledge.

At a minimum, people should know when these tools are being used, what data is being analysed and whether the system has been independently tested. A claim that software can detect distress is not, on its own, enough.

Mohammad Hossein Amirhosseini, Associate Professor, Computer Science and Digital Technologies, University of East London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.


‘Manners for machines’: how new rules could stop AI scrapers destroying the internet

T.J. Thomson, RMIT University; Daniel Angus, Queensland University of Technology; Jake Goldenfein, The University of Melbourne, and Kylie Pappalardo, Queensland University of Technology


Australians are among the most anxious in the world about artificial intelligence (AI).

This anxiety is driven by fears that AI is being used to spread misinformation and scam people, by worries over job losses, and by the fact that AI companies are training their models on others’ expertise and creative works without compensation.

AI companies have used pirated books and articles, and routinely send bots across the web to systematically scrape content for their models to learn from. That content may come from social media platforms such as Reddit, university repositories of academic work, and authoritative publications like news outlets.

In the past, online scraping was subject to a kind of détente. Although scraping may sometimes have been technically illegal, it was needed to make the internet work: without scraping, for instance, there would be no Google. Website owners were OK with scraping because it made their content more available, in keeping with the vision of the “open web”.

Under these conditions, scraping was managed through principles such as respect, recognition, and reciprocity. In the context of AI, those principles are now faltering.

A new online landscape

Many news outlets are now blocking web scrapers. Creators are choosing not to use certain platforms or are posting less.

Barriers are being put in place across the open web. When only some can afford to pay to access news and information, then democracy, scientific innovation and creative communities are all harmed.

Exceptions to copyright infringement, such as fair dealing for research or study, were legislated long before generative AI became publicly available. These exceptions are no longer fit for purpose in an AI age.

The Australian government has ruled out a new copyright exception for text and data mining. This signals a commitment to supporting Australia’s creative industries, but leaves great uncertainty about how creative content can be managed legally and at scale now that AI companies are crawling the web.

In response, the international nonprofit Creative Commons has proposed a new voluntary framework: CC Signals.

Creative Commons licences allow creators to share content and specify how it can be used. All licences require credit to acknowledge the source, but various additional restrictions can be applied. Creators can ask others not to modify their work, or not to use it for commercial purposes. For example, The Conversation’s articles are available for reuse under a CC BY-ND licence, which means they must be credited to the source and must not be remixed, transformed, or built upon.

Summary of CC licences. Creative Commons

How would CC Signals work?

The proposed CC Signals framework lets creators decide if or how they want their material to be used by machines. It aims to strike a balance between responsible AI use and not stifling innovation, and is based on the principles of consent, compensation, and credit.

Put simply, CC Signals work by allowing a “declaring party” – such as a news website – to attach machine-readable instructions to a body of content. These instructions specify what combinations of machine uses are permitted, and under what conditions.
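Creative Commons has not yet fixed a final syntax, so the following is a purely hypothetical illustration of what such a machine-readable declaration might look like; every field name here is invented.

    {
      "declaring_party": "example-news-site.org",
      "applies_to": "/articles/*",
      "machine_use": {
        "search_indexing": "allowed",
        "ai_training": "allowed-with-credit",
        "generative_output": "allowed-with-compensation"
      }
    }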


CC Signals are standardised, and both humans and machines can understand them.

This proposal arrives at a moment that closely mirrors the early days of the web, when norms around automated access (crawling and scraping) were still being worked out in practice rather than law.

A useful historical parallel is robots.txt, a simple file web hosts use to signal which parts of a site can be accessed by the bots that crawl the web and look for content. It was never enforceable, but it became widely adopted because it provided a clear, standardised way to communicate expectations between content hosts and developers.
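For instance, a site that wants to shut one AI crawler out entirely while leaving other bots unrestricted can publish a robots.txt like the one below (GPTBot is the user-agent string OpenAI's crawler identifies itself with). Compliance is voluntary, which is exactly the point of the parallel.

    # Block one AI training crawler from the whole site
    User-agent: GPTBot
    Disallow: /

    # All other bots may crawl everything
    User-agent: *
    Disallow: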

CC Signals could operate in much the same spirit. But, as with any system, it has potential benefits as well as drawbacks.

The pros

The framework provides more nuance and flexibility than the current scrape/don’t scrape environment we’re in. It offers creators more control over the use of their content.

It also has the potential to affect how much high-quality content is available for scraping. Without access to high-quality data, AI’s biases are exacerbated, making the technology less useful.

The framework might also benefit smaller players who don’t have the bargaining power to negotiate with big tech companies but who, nonetheless, desire remuneration, credit, or visibility for their work.

The cons

The greatest challenge with CC Signals is likely to be a practical one – how to calculate, and then enforce, the monetary or in-kind support required by some of the signals.

This is also a major sticking point with content industry proposals for collective licensing schemes for AI. Calculating and distributing licence fees for the thousands, if not millions, of internet works that are accessed by generative AI systems around the world is a logistical nightmare.

Creative Commons has said it plans to produce best-practice guides for how to make contributions and give credit under the CC Signals. But this work is still in progress.

Where to from here?

Creative Commons asserts that the CC Signals framework is not so much a legal tool as an attempt to define “manners for machines”. Manners is a good way to look at this.

The legal and practical hurdles to implementing effective copyright management for AI systems are huge. But we should be open to new ideas and frameworks that foreground respect and recognition for creators without shutting down important technological developments.

CC Signals is an imperfect framework, but it is a start. Hopefully there are more to come.

T.J. Thomson, Associate Professor of Visual Communication & Digital Media, RMIT University; Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology; Jake Goldenfein, Associate Professor, Melbourne Law School, The University of Melbourne, and Kylie Pappalardo, Associate Professor, School of Law, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.


Wednesday, March 25, 2026

Online ad fraud is a feature, not a bug

By Benjamin Kessler

Image: Erik Mclean / Unsplash

Technological advancements and the dynamics of the platform economy make rooting out fraud more complicated than it may seem.

With print media circulation and broadcast television viewership in free fall, a lot is riding on the online advertising space being able to take up the slack. The good news is, digital ad spend is booming.

The bad news? A good chunk of that money is chasing a mirage.

Online ad fraud—where ad publishers falsely inflate engagement metrics (impressions, clicks, etc.) to boost revenues—is a growing problem that eats upwards of 20 percent of global ad spend.

Min Chen and Abhishek Ray, both professors in the information systems and operations management area at Costello College of Business at George Mason University, are researching how online ad networks, such as Google Ads, can improve upon existing anti-fraud methods. Their recently published paper in Management Science explores deep-rooted dynamics of the online ad ecosystem that make eliminating fraud even more complicated than it may seem at first glance. The paper was co-authored by Subodha Kumar of Temple University.

The researchers used a game-theoretic model to replicate the interconnected decision-making of the three players involved: advertisers, publishers, and the networks that serve as go-between.

“The way the ecosystem works is that the platforms in the middle, the ad networks, share the benefit from the transaction,” Chen explains. “People have been arguing whether the network is incentivized to put their best efforts behind deterring fraud, since the fraudulent traffic benefits the networks too. So we tried to create a model to capture this.”
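A toy back-of-the-envelope calculation (not the paper's actual model) makes that mixed incentive concrete: if the network keeps a fixed cut of every ad dollar, fraudulent traffic pads its revenue in the short run, and deterring that traffic removes the padding. All numbers below are invented.

    # Toy illustration of the ad network's mixed incentive (invented numbers).
    NETWORK_CUT = 0.30       # assumed share the network keeps of each ad dollar
    genuine_spend = 100_000  # spend driven by real engagement
    fraud_spend = 25_000     # spend driven by inflated metrics

    with_fraud = NETWORK_CUT * (genuine_spend + fraud_spend)
    fraud_free = NETWORK_CUT * genuine_spend
    print(f"Network revenue with fraud:    ${with_fraud:,.0f}")   # $37,500
    print(f"Network revenue without fraud: ${fraud_free:,.0f}")   # $30,000

    # In the long run fraud erodes advertisers' trust and spend, which is
    # the tension the game-theoretic model is built to capture.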

“If the advertisers rely solely on the reports from the ad networks, they may be at risk. They should use third-party tools to audit the performance better.” — Min Chen, information systems and operations management professor at the Costello College of Business at George Mason University

In addition, the model incorporates the two main fraud deterrents that networks routinely use. One is technological—platforms can adopt tougher standards for fraud detection, widening the scope of suspicious activity that gets flagged. The other is economic—lowering payments to all publishers so as to disincentivize large-scale fraud.

Surprisingly, the researchers find that the online ad economy works best when the two approaches seem to be working at cross-purposes. A tightening in fraud detection technology, paired with high payments for publishers, may sometimes produce the best outcomes for advertisers, publishers, and networks, as the market evolves.

The reason is rooted in the imperfect nature of fraud detection. To be sure, detection systems are improving all the time, especially with the advent of AI. But fraudsters do their best to blend in and adapt, using technological tools that often outpace those of their pursuers. “You cannot catch all the fraud, and if you try, you are going to mis-detect a lot of non-fraud,” Chen says.

Tougher fraud detection, then, will always mean more false positives, no matter how good the technology gets. To counter this inherent unfairness, which penalizes good and bad actors alike, the ad network’s payments to publishers need to go up. Otherwise, publishers may take their business elsewhere, especially those most valuable to the system, i.e. those that are trustworthy, which in turn lowers the value advertisers place on ad traffic.
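The trade-off is easy to see with a small simulation. Below, honest and fraudulent publishers receive overlapping fraud scores (the distributions are invented); lowering the flagging threshold catches more fraud but inevitably flags more honest publishers too.

    import random

    random.seed(42)

    # Invented, overlapping fraud-score distributions.
    honest = [random.gauss(0.3, 0.15) for _ in range(10_000)]
    fraudulent = [random.gauss(0.7, 0.15) for _ in range(10_000)]

    for threshold in (0.6, 0.5, 0.4):  # tightening detection = lowering the threshold
        caught = sum(s >= threshold for s in fraudulent) / len(fraudulent)
        false_pos = sum(s >= threshold for s in honest) / len(honest)
        print(f"threshold {threshold}: catches {caught:.0%} of fraud, "
              f"flags {false_pos:.0%} of honest publishers")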

“These ad networks are kind of a unique system where you can be monetarily rewarded for being honest, or punished for being dishonest,” Ray says. “What we discover for this system is there can be a way in which we can give carrots to people, not just sticks.”

On a similar note, the researchers find that an attempt to purge “bad apple” publishers from the system can backfire due to false positives. In fact, fraud can sharply increase if networks, believing they have solved the problem, relax their fraud detection standards and raise incentives for the remaining publishers. “Since the publishers who produce the fraudulent traffic are fewer now, the ad network may no longer need to maintain a strict detection policy. This can encourage the remaining ones to commit much more fraud,” Chen explains.

To Ray and Chen, online ad fraud is, in at least one sense, no different from older forms of malfeasance that are found in all free societies. “We need to have some kind of mechanism for managing the level of fraud, because the fraud detection method is never going to be perfect, whether it’s financial fraud, accounting fraud, etc.,” Chen says.

But as an example of the contemporary platform economy, the online advertising ecosystem is also distinctive, in that its de facto regulatory authority has skin in the game. The ad networks’ mixed incentives—as both beneficiaries and inhibitors of fraud—can undermine integrity and trust within an already-compromised system.

“If the advertisers rely solely on the reports from the ad networks, they may be at risk,” Chen says. “They should use third-party tools to audit the performance better.”

Editor’s Note: This post was originally published on George Mason University News and republished on DIW with permission.

Reviewed by Asim BN.
