Friday, March 13, 2026

Why exposing young children to AI content could have irreversible consequences

Sarah Whitcombe-Dobbs, University of Canterbury

Artificial intelligence (AI) already affects many areas of daily life, including the lives of young children.

Image: Ron Sinda / Unsplash

Many families give screens to children younger than two, and AI-generated content is increasing on the popular YouTube Kids platform – and it plays automatically.

Most parents are not able to monitor everything their child sees online. Some AI-generated content can be both frightening and attractive to young children, including violent and sexual content featuring appealing animals and characters.

Early childhood education centres are also using AI to support learning, particularly for children with developmental differences. This includes those who do not learn to speak easily or who have other communication problems related to autism or intellectual disability.

In the US, many parents report their children are using AI for school work. The encouragement for early childhood centres, schools and parents to use AI with children is based on short-term studies, but the long-term impacts are unknown.

The only way to know how AI may affect young children would be through well-designed longitudinal studies. But by the time robust evidence emerged, a whole generation would have grown up exposed – and if there are indeed harmful effects, these may be irreversible.

There are already some alarm bells ringing over AI’s potential impact.

New Zealand research shows high use of screens during early childhood is associated with poor language, social and relational functioning.

Many children love to use screens, and AI is likely to be similarly rewarding: AI models are endlessly patient, instantly responsive to topics of the child’s choosing, and seemingly undemanding.

Human development during early childhood

Like all mammals, human infants are bound by biological processes and have evolved to develop in social groups in close physical connection with others. Everything we know about child development highlights the importance of face-to-face connection.

Children learn about themselves and the world through all their senses. They learn to communicate through “serve-and-return” interactions – responsive, back-and-forth exchanges between them and their caregiver. This includes physical touch, emotion and play. Collectively, these interactions help shape brain architecture.

Based on their experiences during the first few years of life, children form models, or templates, of how intimate relationships work. These relational templates endure throughout their lives and influence close relationships in adulthood.

Children also learn about emotional regulation, seeking and receiving comfort and conflict resolution during the preschool years. All the while, their brains are forming, with foundational structures that require good experiences to function well throughout life.

We do not yet know what the impact will be on children’s capacity for human relationships if they are exposed to AI while their physiological, neurological and emotional regulatory systems are developing. It is unclear how longer-term AI exposure may affect children’s understanding of other people and their development of empathy.

Normal social interactions in childhood include conflict, negotiation, resolution and play with other children. These interactions involve non-verbal communication, risk estimation, relational repair and decision making.

It’s unclear how AI that is instantly responsive and engaging will affect these aspects of childhood. It is possible that children who experience many AI-mediated social interactions may find it more difficult to navigate real-world relationships, especially when there is conflict.

It is also possible that children will develop a preference for AI engagement over real-life engagement with family or friends.

Young children find it harder to distinguish fantasy from reality. This quality is delightful for adults and children alike, involving imaginary play, silliness and amusement. Yet AI-generated fantasy may be persuasive to an overwhelming degree, potentially leading to children being confused about reality and the consciousness of others.

Potential for both harm and help

If infants and children don’t have sufficient real-world experiences, their emerging cognitive capacities for detecting reality and interpreting sensory inputs may be affected.

There is much excitement about the potential for AI-assisted tools to aid children with disabilities in their development of social communication. This seems likely to have benefits such as earlier detection of neuro-developmental differences. There may also be risks if these interventions replace real-life interactions with other children and adults.

What will be the daily experiences for children with extra learning needs? Parents may be happy with AI-enhanced learning, but less happy if this is provided in lieu of a real teacher aide.

The introduction of AI seems inevitable and it is already affecting our children. We know that connection, touch, reciprocal and language-rich environments, and unstructured play are important during early childhood development.

To adopt AI into our children’s spaces without knowing the consequences is an experiment with outcomes that may not be reversible. Given the uncertainty, families should at least have the freedom to choose an AI-free environment for their children.

Sarah Whitcombe-Dobbs, Senior Lecturer in Child and Family Psychology, University of Canterbury

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Sarah Whitcombe-Dobbs is a member of the New Zealand Labour Party. She receives funding from the Lottery Grants Board and Oranga Tamariki. University of Canterbury provides funding as a member of The Conversation NZ.


Reviewed by Asim BN.



by External Contributor via Digital Information World

Thursday, March 12, 2026

AI doesn’t ‘see’ the way that you do, and that could be a problem when it categorizes objects and scenes

Arryn Robbins, University of Richmond; Eben W. Daggett, New Mexico State University, and Michael Hout, New Mexico State University

Image: Anya Chernykh / Unsplash

Even with no fur in frame, you can easily see that a photo of a hairless Sphynx cat depicts a cat. You wouldn’t mistake it for an elephant.

But many artificial intelligence vision systems would. Why? Because when AI systems learn to categorize objects, they often rely on visual cues – like surface texture or simple patterns in pixels. This tendency makes them vulnerable to getting confused by small changes that have little effect on human perception.

A vision system aligned more closely with human perception – one that emphasizes shape, for instance – might still confuse the cat with another similarly shaped mammal, like a tiger, but it is unlikely to mistake it for an elephant.

The kinds of mistakes an AI makes reveal how it organizes visual information, with potential limitations that become concerning in higher-stakes settings.

Stickers and graffiti on a stop sign could serve as an adversarial attack, confusing AI in autonomous vehicles. Image: rick / Flickr, CC BY

Imagine an autonomous vehicle approaching a vandalized stop sign. While a human driver recognizes the sign from its shape and context, an AI that relies on pixel patterns may misclassify it, pushing the altered sign out of the category “sign” altogether and into a different group of images that it identifies as similar, such as a billboard, advertisement or other roadside object.

Together, these problems point to a misalignment between how humans perceive the visual world and how AI represents it.

We are experts in visual perception, and we work at the intersection of human and machine perception. People organize visual input into objects, meaning and relationships shaped by experience and context. AI models don’t organize visual information the same way. This key difference explains why AI sometimes fails in surprising ways.

Seeing objects, not features

Imagine that in front of you is a small, opaque object with both straight and curved edges. But you don’t see those features; you just see your coffee mug.

Vision isn’t a camera, passively recording the world. Instead, your brain rapidly turns the light your eyes absorb into objects you recognize and understand, organizing experience into structured mental representations.

Researchers can understand how these representations are structured by examining how people judge similarity. Your coffee mug is not like your computer, but it’s similar to a glass of water despite differences in appearance. That judgment reflects how the mug is mentally represented: not just in terms of appearance, but also what the mug is used for and how it fits into everyday activities.

Importantly, the mental organization of representations is flexible. Which aspects of an object stand out change with context and goals. If packing a moving box, shape and size matter most, so your mug might be placed anywhere it fits. But when putting it away in a cupboard, it goes next to other drinkware. The mug hasn’t changed, only the way it is organized in your mind.

Human visual perception is adaptive, driven by meaning and tied to how we interact with the world.

Aligning AI with humans

AI systems, however, organize visual input in fundamentally different ways than people – not because they are machines, but because of how narrowly they are trained. When an AI is trained to categorize a cat or an elephant, it only needs to learn which visual patterns lead to the correct label, not how those animals relate to each other or fit into the broader world.

In contrast, humans learn within a broader context. When we learn what an elephant is, we weave that representation into the tapestry of everything else we have learned: animals, size, habitats and more. Because AI is graded only on label accuracy, it can rely on shortcuts that work in training but sometimes fail in the real world.

The issue of representational alignment refers to whether AI organizes information in ways that resemble how people do. It’s not to be confused with value alignment, which refers to the challenge of making sure AI systems pursue outcomes and goals that humans intend.

Because human learning embeds new information into a web of prior knowledge, the relationships between new and existing concepts can be studied and measured. This means that representational alignment may be a solvable problem and a step toward addressing broader alignment challenges.

One approach to representational alignment focuses on building AI systems that behave like humans on psychological tasks, allowing researchers to compare representations directly. For example, if people judge a cat as more similar to a dog than to an elephant, the goal is to build AI models that arrive at those same judgments.

One promising technique involves training AI on human similarity judgments collected in the lab. In these studies, human participants might be shown three images and asked which two objects are more similar; for example, whether a mug is more like a glass or a bowl. Including this data during training encourages AI systems to learn how objects relate to one another, producing representations that better reflect how people understand the world.
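The idea behind such triplet judgments can be sketched with a standard triplet-style hinge loss. Everything below is a hypothetical toy illustration – the objects, 2-D embeddings and margin are invented for clarity, not taken from the researchers’ actual models or data:

```python
from math import dist

def triplet_alignment_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss that is zero when the human-chosen pair (anchor, positive)
    already sits closer together than (anchor, negative) by at least `margin`."""
    d_pos = dist(anchor, positive)  # distance for the pair people judged similar
    d_neg = dist(anchor, negative)  # distance for the rejected pairing
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings; in a real system these would come from the vision model.
mug, glass, bowl = (1.0, 0.0), (1.2, 0.1), (0.0, 2.0)

# Humans judged the mug more similar to the glass than to the bowl.
aligned = triplet_alignment_loss(mug, glass, bowl)     # 0.0: embedding agrees
misaligned = triplet_alignment_loss(mug, bowl, glass)  # positive: embedding disagrees
```

Minimizing a loss like this over many human triplet judgments nudges the model’s embedding space toward the relational structure people report, which is the spirit of the approach described above.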

Alignment beyond vision

Representational alignment matters beyond vision systems, and AI researchers are taking notice. As AI increasingly supports high-stakes decisions, differences between how machines and humans represent the world will have real consequences, even when an AI system appears highly accurate. For example, if an AI analyzing medical images learns to associate the source of an image or repeated image artifacts with disease rather than the real visual signs of the disease itself, that is obviously problematic.

AI doesn’t necessarily need to process information exactly the way people think, but training AI using principles drawn from human perception and cognition – such as similarity, context and relational structure – can lead to safer, more accurate and more ethical systems.The Conversation

Arryn Robbins, Assistant Professor of Psychology, University of Richmond; Eben W. Daggett, Affiliated Faculty of Psychology, New Mexico State University, and Michael Hout, Associate Dean of Research and Professor of Psychology, New Mexico State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.



by External Contributor via Digital Information World

Wednesday, March 11, 2026

The End of the Landline Era: Mobile Phones Redefine Global Communication

Landlines once defined modern communication, but their role is gradually disappearing. Worldwide data shows mobile subscriptions now vastly outnumber fixed-line connections, while in the United States the share of households with a landline has plunged over the past two decades.

Mobile Phones Connect Far More People Than Landlines Ever Did

By Tristan Gaudiaut - Data Journalist, Statista

Today marks the 150th anniversary of the first telephone call, a historic moment on March 10, 1876, when Alexander Graham Bell successfully transmitted the famous words, “Mr. Watson, come here, I want to see you.” Since then, telephony has undergone several technological revolutions, reshaping how people connect and communicate. The beginning of the 21st century, in particular, has brought sweeping change, with the decline of landlines and the rapid rise of mobile phones.

For most of the 20th century, landline telephones formed the backbone of global communications. However, the turn of the millennium marked the beginning of the decline phase for this technology. According to the International Telecommunication Union (via World Bank), in 1990, there were 9.8 landline subscriptions per 100 people worldwide, a figure that nearly doubled to 19.2 by 2006. Yet this dominance did not last. While landline subscriptions peaked in the early 2000s, mobile subscriptions began to rise rapidly. From fewer than 10 subscriptions per 100 people before 2000, mobile penetration reached 50 per 100 people by 2007 and 100 per 100 people by 2017.

Today, the numbers tell a clear story. In 2025, there are 111.5 mobile subscriptions per 100 people worldwide, compared with just 9.9 landline subscriptions (a figure that has fallen back to roughly 1990 levels). Mobile phones have not only replaced landlines but have also connected far more people than fixed-line networks ever did. While significant disparities remain in terms of network technology and coverage, mobile phones have leapfrogged traditional landline infrastructure in many regions, particularly in developing countries, providing billions of people with access to the internet, financial services and important information.

This chart shows the number of mobile and landline phone subscriptions per 100 people worldwide from 1990 to 2025.

Source: Statista / ITU / World Bank

Landline Phones Are a Dying Breed

Felix Richter - Data Journalist, Statista

As smartphones have become a constant companion for most people in the United States, landline phones are rapidly losing their relevance. In 2004, more than 90 percent of U.S. adults lived in households that had an operational landline phone – now it’s little more than 20 percent. That’s according to data provided by the Centers for Disease Control and Prevention, which has been tracking phone ownership in the U.S. as a by-product of its biannual National Health Interview Survey since 2004.

If the trend continues, and there’s little reason to believe it won’t, landline phones could soon become an endangered species, much like the VCR and other technological relics before them.

Source: Statista / CDC National Health Interview Survey

Reviewed by Ayaz Khan.


by External Contributor via Digital Information World

Behind the feed: New research explores how social media algorithms shape our digital lives

By Lindsey Massimiani Pepe

New research from the University of Miami examines how platform algorithms govern the relationship between creators, consumers, and advertisers, and what that means for everyday users.

Image: Mariia Shalabaieva / Unsplash

Every time you scroll, like or share on a social media platform, an algorithm is watching, learning and deciding what you see next. But how many of us stop to think about what’s actually driving those decisions, and what’s at stake when we don’t?

That question sits at the center of new research co-authored by Robert W. Gregory, associate professor of business technology, and Ola Henfridsson, professor of business technology and associate dean, both at the University of Miami Patti and Allan Herbert Business School, and Mareike Möhlmann of Bentley University.

Published in the Journal of Management Information Systems, the study examines how platforms like YouTube use algorithms to police, recommend, and monetize content, and what that means for the millions of people who use them every day. The researchers introduce the concept of “algorithmic stakeholder governance” to describe how platforms use automated systems to manage and balance the competing interests of creators, consumers and advertisers.

Many people turn to social media because it feels more direct and personal than traditional media. In practice, though, every piece of content a user encounters has already been filtered, ranked and shaped by algorithms designed primarily to maximize engagement on the platform. “The algorithm is sitting in the middle of every human interaction on these platforms,” Gregory said. “At the end of the day, everything you see on social media is being shaped by it.”

The study examines the relationship among three groups that make platforms like YouTube function: creators who produce content, consumers who watch it and advertisers who fund it. Each group has its own interests, and those interests don’t always align. YouTube’s algorithms are constantly working to balance all three, deciding what gets promoted, what gets restricted and who gets paid, in a way that keeps the entire ecosystem running at scale. The research draws on 66 in-depth interviews with creators, consumers, advertisers and YouTube executives, as well as nearly 3,000 user forum posts and 35 official YouTube press releases.

What the research makes clear, however, is that algorithms alone can only go so far. These are sophisticated systems, but they learn and improve based on the input they receive. The feedback loop only gets stronger when users engage actively and deliberately.

Whether that human involvement actually helps depends entirely on how people choose to engage. Some engage passively, scrolling without much reflection and quietly conforming to the platform’s norms without realizing they are doing so. The researchers call this “unreflective endorsing,” and it matters because those passive behaviors feed directly back into the algorithm, reinforcing whatever patterns are already in place.

Users who engage more deliberately tell a different story. When people flag content, request human reviews of automated decisions or provide intentional feedback to the platform, they are actively shaping how the algorithm learns and evolves. For entrepreneurs and content creators, this is particularly relevant. “If you understand how the actions you choose on the platform are shaped by these algorithmic systems, you can shape these network effects to your advantage,” Gregory said. For example, a business owner who systematically manages their channel, reporting spam and understanding which content the algorithm rewards, is working with the system rather than being carried along by it.

Just as earlier generations gradually learned to evaluate different news sources and media institutions, users today can learn to do the same with social media. For Gregory, it is both a personal responsibility and a cultural moment still taking shape. “We have to grow up as a society and ask questions,” he said. The most important first step, he argues, is recognizing that what appears in a feed is the result of deliberate design, not a neutral window onto the world — and that understanding how these systems work is ultimately what gives users the agency to make more informed choices about where and how they participate online.

This work arrives at a moment of significant momentum for Miami Herbert’s Business Technology Department, which recently earned the No. 1 national ranking for research productivity in information systems from the Association of Information Systems Research Rankings Service, the first time the University of Miami has achieved that distinction. Gregory ranked No. 106 among information systems scholars worldwide, reflecting the department’s strength in producing work that is academically rigorous and relevant beyond the classroom.

The paper, “Algorithmic Stakeholder Governance on Content Platforms: A Lead Role Perspective,” is published in the Journal of Management Information Systems.

Note: This post was originally published on the University of Miami Patti and Allan Herbert Business School and republished here with permission.

Reviewed by Irfan Ahmad.



by External Contributor via Digital Information World

Tuesday, March 10, 2026

78% of Workers “Voluntold” to Take Extra Tasks, 53% Get No Raise, 41% Report Burnout, AI Integration Often Increases Workload

Workers said they’re doing three jobs at once in their current roles, but over half haven’t had a raise or promotion for their hard work, according to a new study.

Image: Vitaly Gariev / Unsplash

A recent survey of 2,000 employed Americans investigated the many contributing factors behind this increase in workload and what workers need to work sustainably.

Each year, respondents said, an average of nine new tasks are added to their plates, and the pace at which responsibilities pile up keeps accelerating.

According to the study findings, the majority of workers (78%) have been “voluntold” to do something in the last year, having been assigned new work that they didn’t apply for or agree to, but were expected to tackle anyway.

More than one in 10 (12%) have even been “voluntold” to do extra work in the last day.

Conducted by Talker Research and commissioned by Office Beacon, the study also uncovered how workplace data differs by age group and industry.

And the study found that Gen Z workers (17%) and logistics or field-based workers (15%) were the groups most likely to have been handed new tasks as recently as in the last day.

The most common reason behind these new and involuntary responsibilities? A simple lack of staffing, cited across all industries (37%).

Twenty-eight percent of workers also said this increase in work happened without a discussion with their management, and nearly one in five (17%) said the new responsibilities were framed as temporary but became permanent.

Yet of those who’ve involuntarily received new work responsibilities, 53% never received a raise or promotion, with service (56%) and healthcare workers (55%) being the least likely to receive these things in light of new duties.

Zooming in, nearly all of those who’ve been “voluntold” to do additional work in recent years (91%) said these new tasks fall outside their original job description, and most (55%) do not feel very qualified to do them.

The trend of being “voluntold” to do extra work has also had a negative impact on workers’ preexisting responsibilities, with nearly three-quarters (74%) saying their new assignments have hurt their ability to do their job to the best of their abilities.

Four in ten employees (40%) even agreed, “I love my job, but I don’t feel like I can keep up with it anymore,” with Gen Z (55%) and healthcare workers (47%) being the most likely to feel this way.

“AI is now a permanent fixture in the workplace, and inevitably a part of the workplace wellness conversation,” said Pranav Dalal, chief executive officer and founder of Office Beacon. “What’s missing from the AI/workplace picture, though, is a healthy awareness that AI should be used as a tool to support and empower workers, enabling them to do their jobs better. Workplace leaders need to be aware that burnout has a very real impact on their workers’ wellbeing, and AI is a support tool that should be helping with burnout, not creating it.”

The survey revealed that 41% of workers suffer from burnout at work, resulting in job dissatisfaction (54%), worsened mental health (46%) and workers questioning their abilities to even do their jobs well (32%).

Healthcare (49%) and service workers (41%) have struggled the most with burnout, and baby boomers are the unhappiest in their roles due to work fatigue (69%).

Forty percent of employees even admitted that in the last three years, they’ve considered leaving their jobs because responsibilities were added to their workload without giving them the proper support.

Artificial intelligence (AI) is a big part of this discussion, and 39% of the workers polled said their companies have introduced AI tools or automations into their workflow in the last three years.

Of those, only a small handful (7%) said AI tools have decreased their workload. In contrast, 43% said that with AI integrated at their company, their responsibilities have multiplied. And less than a third (31%) said AI has made their work much more efficient.

With AI in the picture, training on how to use it is more important than ever. And of those using AI in their workflow, most (72%) did receive training on how to use it.

“This increase in workload due to AI indicates a leadership issue,” continued Dalal. “This study found that most workers using AI received training for it. So why do workers still feel burnt out, and why are many still not feeling much more efficient in their roles? This indicates a larger toxic corporate culture issue, where leaders are heaping more and more on their employees’ plates. With AI as a tool, the opposite should be happening.”

When asked about AI training effectiveness, most of the workers who received AI training (87%) felt it was adequate — pointing to a leadership issue at the root of burnout, rather than AI.

Many surveyed (39%) also said they would save more time at work if they were taught to use AI tools by a human rather than a self-guided program, course, or automated training.

Along with better, more effective AI training, millennials (40%), Gen X (37%) and baby boomers (42%) said additional pay or recognition for all the work they do would be the most helpful thing.

And the thing that Gen Z said would improve their work the most? They simply want better communication from their management (33%), according to the data.

Note: This post was originally published by Talker Research and republished here with permission.

Reviewed by Ayaz Khan.

by External Contributor via Digital Information World

Social Media’s Annual Great Purge: Facebook and X Remove More Fake Accounts Than Their Active Users, TikTok Deletes Half Its Fake Accounts

By Surfshark

Communicating with fake users and encountering scam content on social media has become a daily reality. For major platforms such as Facebook and TikTok, the situation is frustrating: they have to remove billions of fake accounts and pieces of spam content annually. The number of fake accounts removed significantly surpasses the active user bases of Facebook and X.

Facebook removes more fake accounts annually than it has active users

Facebook, with 3 billion active users, removes 4.5 billion fake accounts annually, a volume 1.5 times its user base.

Image: Surfshark

On dark web marketplaces, fake social media account prices start from as little as $0.08. The trend is worrying: exposure to this much deception on social media makes it easy for a regular user to get scammed.

Key insights

  • In an effort to protect users from scams and misinformation and to keep the experience authentic, social media platforms are constantly removing massive numbers of fake accounts. An analysis of transparency reports reveals the staggering scale of this cleanup: Facebook alone removes an average of 4.5 billion fake accounts annually, followed by TikTok at 1 billion, X (formerly Twitter) at 671 million, and LinkedIn at 112 million. Additionally, YouTube terminates an average of 25 million channels annually, resulting in the removal of 311 million spam-related videos hosted on those channels.
  • Social media platforms must also aggressively combat spam generated by these fake accounts, with moderation efforts often reaching into the billions. Facebook leads in content removals, purging an average of 4.7 billion pieces of spam content annually. YouTube follows by removing 3.6 billion spam-related comments from videos each year. TikTok also reports high volumes, deleting an average of 1.4 billion comments and 671 million videos attributed to fake accounts. In comparison, Instagram removes 271 million pieces of spam content, while LinkedIn eliminates 200 million instances of spam and scam content annually.
  • Comparing removal volumes to active users reveals the enormous scale of platform moderation. On some platforms, the number of annual removals rivals or even exceeds the entire active user base. Facebook, with 3 billion active users, removes 4.5 billion fake accounts annually — a volume 1.5 times its user count. Similarly, X reports removing 671 million accounts for platform manipulation and spam each year, a figure that surpasses its 570 million active users.
  • Other platforms demonstrate a significant, though less extreme, ratio. TikTok, with 1.9 billion active users, removes roughly 1 billion fake accounts annually, equivalent to over half its user base. LinkedIn, serving 310 million active users, removes 112 million fake accounts per year, representing more than a third of its active user base. Finally, YouTube's moderation is also notable. With an estimated 60 million active channels, its annual termination of 25 million spam channels means it removes a quantity equivalent to over 40% of its active channel base each year.
  • Fake accounts are often used by bad actors to artificially boost engagement — writing fake comments, giving likes, or following other accounts. The business model typically works in two steps: first, hackers acquire as many fake accounts as they can, either by creating them or purchasing them in bulk. Once enough accounts are gathered, hackers begin using them for their own ends or selling services to promote questionable products, increase engagement, or push political agendas. Fake social media account prices start from as little as $0.08, and the price increases depending on the account's age, number of friends or followers, country of origin, and platform. Of course, such services violate the terms of service of social media platforms, which are constantly working to reduce this behavior.
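The removal-to-user comparisons above are simple ratios of the reported figures. As a quick sanity check (numbers taken directly from the bullet points; illustrative arithmetic only):

```python
# Annual fake-account removals vs. active users, per the report above.
platforms = {
    "Facebook": (4_500_000_000, 3_000_000_000),
    "X":        (671_000_000,   570_000_000),
    "TikTok":   (1_000_000_000, 1_900_000_000),
    "LinkedIn": (112_000_000,   310_000_000),
    "YouTube":  (25_000_000,    60_000_000),   # channels terminated vs. active channels
}

ratios = {name: removed / base for name, (removed, base) in platforms.items()}
# Facebook's removals come to 1.5x its user base, X's exceed its user base,
# TikTok's equal over half, LinkedIn's over a third, and YouTube's over
# 40% of its active channel base.
```

The same two-line pattern reproduces each ratio quoted in the analysis, which makes the cited "1.5 times" and "surpasses its 570 million active users" figures easy to verify.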

Methodology and sources

This study collected data from the official transparency reports of Meta, Google, TikTok, X (formerly Twitter), and LinkedIn. The data focused on removed spam content and fake account numbers across these social media platforms. Specifically, Surfshark gathered information for Facebook, YouTube, X, Instagram, TikTok, and LinkedIn.

Data collection began in 2021, as this was the earliest year for which Facebook, YouTube, TikTok, and LinkedIn all had available data. X (formerly Twitter) and Instagram, however, only had data available from the second half of 2024.

Since some platforms reported data quarterly and others semi-annually, Surfshark adjusted the figures by calculating average annual numbers for spam and fake content. Additionally, Surfshark collected data on monthly active users and cross-referenced these figures with the number of removed fake accounts.
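The annualisation step described above can be sketched in a few lines. This is a hypothetical illustration of the approach, not Surfshark's actual code, and the report figures below are invented for demonstration:

```python
# Hypothetical sketch of the annualisation step: platforms report at
# different cadences (quarterly vs. semi-annually), so per-report
# figures are averaged and scaled to a common annual rate.
# The numbers below are illustrative only, not real platform data.

def annualize(report_values, reports_per_year):
    """Average the per-report figures, then scale to one year."""
    avg_per_report = sum(report_values) / len(report_values)
    return avg_per_report * reports_per_year

quarterly = [260, 240, 250, 255]  # millions removed per quarter
semiannual = [510, 495]           # millions removed per half-year

print(annualize(quarterly, 4))    # both cadences yield a comparable
print(annualize(semiannual, 2))   # annual rate (~1,005 million/year)
```

Averaging before scaling means a platform with a missing or partial reporting period does not distort the annual comparison.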

For the complete research material behind this study, visit here.

Data was collected from:

Meta. Community Standards Enforcement Report.
Google. YouTube Community Guidelines enforcement.
X. Global Transparency Report.
TikTok. Community Guidelines Enforcement Report.
LinkedIn. Community Report.

Note: This post was originally published on Surfshark Research and republished with permission on Digital Information World (DIW).

Reviewed by Ayaz Khan.

by External Contributor via Digital Information World

AI and work: an expert assesses how far this revolution still has to run

Vivek Soundararajan, University of Bath
Image: Matheus Bertelli / Pexels

Every week brings fresh claims about AI transforming the workplace. A CEO declares a revolution. A think piece predicts millions of jobs vanishing overnight. The noise is relentless.

But strip away the hype and there is a simpler question. In developed economies, what has AI actually changed about work so far? The answer turns out to be more interesting, and more uneven, than either side suggests.

What’s real

Let’s start with what the evidence supports. AI is delivering genuine productivity gains in specific kinds of knowledge-based and service work. An experimental study found that professionals using ChatGPT for writing tasks took 40% less time to complete them, with an 18% improvement in quality (as evaluated by their colleagues in blind testing).

And another study of more than 5,000 customer service agents found a 15% increase in issues resolved per hour. An industry experiment in which management consultants tackled realistic, complex tasks found they completed the work 25% faster and produced results deemed 40% higher in quality (again, judged by experts in blind tests). Randomised trials involving nearly 5,000 software developers documented a 26% increase in completed tasks.

These are not small numbers. And adoption is moving fast. A US survey found that nearly four in ten workers were using generative AI at work by mid-2025. This pace of adoption outstrips the early years of both the personal computer and the internet. Across countries in the Organisation for Economic Co-operation and Development (OECD), firms report that AI integration into business functions is accelerating.

So the productivity story is real, particularly in text-heavy, codifiable tasks across legal, finance, marketing and customer service. That much is not hype.

What’s overstated

But the apocalyptic predictions have not yet materialised. Employment across OECD countries remains historically robust. A review of the US research evidence published in early 2026 found that, despite rapid adoption, AI has so far caused little in the way of widespread job losses or pay cuts. And a study (as yet unpublished) that tracked AI chatbot use in Danish workplaces found essentially zero effects on earnings or recorded hours, even among heavy users and early adopters.

Why? Because many jobs still require tacit knowledge, physical presence, sound judgement and the kind of contextual awareness that AI cannot yet replicate. And adoption is far more uneven than the headline numbers suggest. While AI use among firms in the US soared between 2023 and 2025, a report found that far fewer firms had actually embedded it into their operations. The information sector, for example, adopted it at roughly ten times the rate of hospitality.

One economic modelling exercise estimates that AI might add somewhere between 1% and 1.6% to US GDP over the next decade. This is significant, but it is far short of the transformative claims.

The gap between productivity gains in controlled studies and real transformation inside organisations remains enormous. The revolution, for most workplaces, has not yet arrived.

What’s under-reported

Here is where the story gets more consequential and where the commentary falls short. The distributional effects of AI within developed economies deserve far more attention. Not everyone is experiencing this transformation the same way.

The evidence on who benefits is strikingly consistent. Less experienced workers see the biggest gains from AI tools. A study found that AI narrowed the gap between the most and least productive staff, with the largest improvements among lower-ability workers.

In customer service, novice agents benefited most. The most experienced staff saw little improvement and, in some cases, slight quality declines. The industry experiment mentioned above found below-average performers improved by 43%, while top performers gained 17%. So the biggest gains go to the least experienced workers, narrowing the gap between top and bottom performers within firms.
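The arithmetic behind this "compression" effect is worth making concrete. Using hypothetical baseline scores (not figures from the studies cited), the 43% and 17% improvements mentioned above narrow the within-firm gap:

```python
# Illustrative arithmetic, with hypothetical baseline performance scores:
# below-average performers improve 43%, top performers improve 17%.
bottom_before, top_before = 60.0, 100.0

bottom_after = bottom_before * 1.43   # 85.8
top_after = top_before * 1.17         # 117.0

gap_before = top_before - bottom_before   # 40.0
gap_after = top_after - bottom_after      # ~31.2

print(f"Gap shrinks from {gap_before:.1f} to {gap_after:.1f}")
```

Both groups improve in absolute terms, but the unequal percentage gains mean the distance between them shrinks, which is the sense in which AI "compresses" skills inside firms.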

That sounds like good news. But there’s a catch.

While AI may compress skills inside firms, the broader labour market is telling a different story. Entry-level roles are shrinking in AI-exposed occupations. The routine tasks that once justified hiring juniors – jobs which provided learning opportunities for those on the bottom rung – are the first to be automated.

Economic theory has long warned that automation displaces workers from tasks, and the creation of new tasks to counterbalance this is neither automatic nor guaranteed. An estimated 60% of jobs in advanced economies face some AI exposure.

In most realistic scenarios, inequality worsens without deliberate intervention – partly because higher-income workers hold more capital assets and stand to gain from rising returns on AI-related investments.

The pattern that is emerging is this: AI helps those already inside the door while quietly narrowing the door for those trying to get in.

Paying attention to the right question

Sector matters. Firm size matters. Job type matters. The AI transition is not one story. It is many – unfolding at different speeds, with different consequences, depending on where you sit in the economy.

The debate has been stuck between breathless optimism and existential dread. Neither is useful. The evidence points somewhere more uncomfortable: a transformation that is real but partial, fast in some corners and stalled in others – and distributing its costs and benefits in ways that are shaped by existing inequalities.

If the productivity gains are genuine, the question is: who captures them? If entry-level work is disappearing, what replaces it? And if the gap between firms that adopt and those that cannot is widening, the focus should be on what we are building in response. Just talking about it won’t be enough.

Vivek Soundararajan, Professor of Work and Equality, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

by External Contributor via Digital Information World