Wednesday, March 11, 2026

Behind the feed: New research explores how social media algorithms shape our digital lives

By Lindsey Massimiani Pepe

New research from the University of Miami examines how platform algorithms govern the relationship between creators, consumers, and advertisers, and what that means for everyday users.

Image: Mariia Shalabaieva / Unsplash

Every time you scroll, like or share on a social media platform, an algorithm is watching, learning and deciding what you see next. But how many of us stop to think about what’s actually driving those decisions, and what’s at stake when we don’t?

That question sits at the center of new research co-authored by Robert W. Gregory, associate professor of business technology, and Ola Henfridsson, professor of business technology and associate dean, both at the University of Miami Patti and Allan Herbert Business School, and Mareike Möhlmann of Bentley University.

Published in the Journal of Management Information Systems, the study examines how platforms like YouTube use algorithms to police, recommend, and monetize content, and what that means for the millions of people who use them every day. The researchers introduce the concept of “algorithmic stakeholder governance” to describe how platforms use automated systems to manage and balance the competing interests of creators, consumers and advertisers.

Many people turn to social media because it feels more direct and personal than traditional media. In practice, though, every piece of content a user encounters has already been filtered, ranked and shaped by algorithms designed primarily to maximize engagement on the platform. “The algorithm is sitting in the middle of every human interaction on these platforms,” Gregory said. “At the end of the day, everything you see on social media is being shaped by it.”
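To make that concrete, here is a minimal, purely hypothetical sketch of the kind of engagement-maximizing ranking the study describes. The fields, weights, and scoring formula are illustrative assumptions, not YouTube’s actual system.

```python
# Purely illustrative: a toy engagement-maximizing feed ranker.
# Fields, weights and formula are hypothetical, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_seconds: float  # model's estimate of watch time
    predicted_like_rate: float      # estimated probability of a like
    predicted_share_rate: float     # estimated probability of a share

def engagement_score(post: Post) -> float:
    # A hypothetical weighted blend of predicted engagement signals.
    return (0.6 * post.predicted_watch_seconds
            + 25.0 * post.predicted_like_rate
            + 40.0 * post.predicted_share_rate)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The "feed" is simply the candidates sorted by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm explainer", 90.0, 0.02, 0.001),
    Post("Outrage clip", 45.0, 0.10, 0.050),
])
print([p.title for p in feed])  # titles in descending engagement-score order
```

Real systems use machine-learned models over thousands of signals, but the core loop is the same: predict engagement, sort, serve.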

The study examines the relationship among three groups that make platforms like YouTube function: creators who produce content, consumers who watch it and advertisers who fund it. Each group has its own interests, and those interests don’t always align. YouTube’s algorithms are constantly working to balance all three, deciding what gets promoted, what gets restricted and who gets paid, in a way that keeps the entire ecosystem running at scale. The research draws on 66 in-depth interviews with creators, consumers, advertisers and YouTube executives, as well as nearly 3,000 user forum posts and 35 official YouTube press releases.

What the research makes clear, however, is that algorithms alone can only go so far. These are sophisticated systems, but they learn and improve based on the input they receive. The feedback loop only gets stronger when users engage actively and deliberately.

Whether that human involvement actually helps depends entirely on how people choose to engage. Some engage passively, scrolling without much reflection and quietly conforming to the platform’s norms without realizing they are doing so. The researchers call this “unreflective endorsing,” and it matters because those passive behaviors feed directly back into the algorithm, reinforcing whatever patterns are already in place.

Users who engage more deliberately tell a different story. When people flag content, request human reviews of automated decisions or provide intentional feedback to the platform, they are actively shaping how the algorithm learns and evolves. For entrepreneurs and content creators, this is particularly relevant. “If you understand how the actions you choose on the platform are shaped by these algorithmic systems, you can shape these network effects to your advantage,” Gregory said. For example, a business owner who systematically manages their channel, reporting spam and understanding which content the algorithm rewards, is working with the system rather than being carried along by it.

Just as earlier generations gradually learned to evaluate different news sources and media institutions, users today can learn to do the same with social media. For Gregory, it is both a personal responsibility and a cultural moment still taking shape. “We have to grow up as a society and ask questions,” he said. The most important first step, he argues, is recognizing that what appears in a feed is the result of deliberate design, not a neutral window onto the world — and that understanding how these systems work is ultimately what gives users the agency to make more informed choices about where and how they participate online.

This work arrives at a moment of significant momentum for Miami Herbert’s Business Technology Department, which recently earned the No. 1 national ranking for research productivity in information systems from the Association of Information Systems Research Rankings Service, the first time the University of Miami has achieved that distinction. Gregory ranked No. 106 among information systems scholars worldwide, reflecting the department’s strength in producing work that is academically rigorous and relevant beyond the classroom.

The paper, “Algorithmic Stakeholder Governance on Content Platforms: A Lead Role Perspective,” is published in the Journal of Management Information Systems.

Note: This post was originally published by the University of Miami Patti and Allan Herbert Business School and is republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• It’s tempting to offload your thinking to AI. Cognitive science shows why that’s a bad idea

• 78% of Workers “Voluntold” to Take Extra Tasks, 53% Get No Raise, 41% Report Burnout, AI Integration Often Increases Workload


by External Contributor via Digital Information World

Tuesday, March 10, 2026

78% of Workers “Voluntold” to Take Extra Tasks, 53% Get No Raise, 41% Report Burnout, AI Integration Often Increases Workload

Workers said they’re doing three jobs at once in their current roles, but over half haven’t had a raise or promotion for their hard work, according to a new study.

Image: Vitaly Gariev / Unsplash

A recent survey of 2,000 employed Americans investigated the many contributing factors behind this increase in workload and what workers need to work sustainably.

Respondents said that, on average, nine new tasks are added to their plates each year, and the pace at which responsibilities pile up keeps accelerating.

According to the study findings, the majority of workers (78%) have been “voluntold” to do something in the last year, having been assigned new work that they didn’t apply for or agree to, but were expected to tackle anyway.

More than one in 10 (12%) have even been “voluntold” to do extra work in the last day.

Conducted by Talker Research and commissioned by Office Beacon, the study also uncovered how workplace data differs by age group and industry.

And the study found that Gen Z workers (17%) and logistics or field-based workers (15%) were the groups most likely to have been handed new tasks within the past day.

The most common reason behind these new and involuntary responsibilities? A simple lack of staffing, the top answer cited across all industries (37%).

Twenty-eight percent of workers also said this increase in work happened without a discussion with their management, and nearly one in five (17%) said the new responsibilities were framed as temporary but became permanent.

Yet of those who’ve involuntarily received new work responsibilities, 53% never received a raise or promotion, with service (56%) and healthcare workers (55%) the least likely to be compensated for their new duties.

Zooming in, nearly all of those who’ve been “voluntold” to do additional work in recent years (91%) said these new tasks fall outside their original job description, and most (55%) do not feel very qualified to do them.

The trend of being “voluntold” to do extra work has also had a negative impact on workers’ preexisting responsibilities, with nearly three-quarters (74%) saying the new assignments have hurt their ability to do their original job well.

Four in ten employees (40%) even agreed, “I love my job, but I don’t feel like I can keep up with it anymore,” with Gen Z (55%) and healthcare workers (47%) being the most likely to feel this way.

“AI is now a permanent fixture in the workplace, and inevitably a part of the workplace wellness conversation,” said Pranav Dalal, chief executive officer and founder of Office Beacon. “What’s missing from the AI/workplace picture, though, is a healthy awareness that AI should be used as a tool to support and empower workers, enabling them to do their jobs better. Workplace leaders need to be aware that burnout has a very real impact on their workers’ wellbeing, and AI is a support tool that should be helping with burnout, not creating it.”

The survey revealed that 41% of workers suffer from burnout at work, resulting in job dissatisfaction (54%), worsened mental health (46%) and workers questioning their abilities to even do their jobs well (32%).

Healthcare (49%) and service workers (41%) have struggled the most with burnout, and baby boomers are the unhappiest in their roles due to work fatigue (69%).

Forty percent of employees even admitted that in the last three years, they’ve considered leaving their jobs because responsibilities were added to their workload without proper support.

Artificial intelligence (AI) is a big part of this discussion, and 39% of the workers polled said their companies have introduced AI tools or automations into their workflow in the last three years.

Of those, only a small handful (7%) said AI tools have decreased their workload. In contrast, 43% said that with AI integrated at their company, their responsibilities have multiplied. And less than a third (31%) said AI has made their work much more efficient.

With AI in the picture, training on how to use it is more important than ever. And of those using AI in their workflow, most (72%) did receive training.

“This increase in workload due to AI indicates a leadership issue,” continued Dalal. “This study found that most workers using AI received training for it. So why do workers still feel burnt out, and why are many still not feeling much more efficient in their roles? This indicates a larger toxic corporate culture issue, where leaders are heaping more and more on their employees’ plates. With AI as a tool, the opposite should be happening.”

When asked about AI training effectiveness, most of the workers who received AI training (87%) felt it was adequate — pointing to a leadership issue at the root of burnout, rather than AI.

Many surveyed (39%) also said they would save more time at work if they were taught to use AI tools by a human rather than a self-guided program, course, or automated training.

Along with better, more effective AI training, millennials (40%), Gen X (37%) and baby boomers (42%) said additional pay or recognition for all the work they do would be the most helpful thing.

And the thing that Gen Z said would improve their work the most? They simply want better communication from their management (33%), according to the data.

Note: This post was originally published by Talker Research and republished here with permission.

Reviewed by Ayaz Khan.

Read next: It’s tempting to offload your thinking to AI. Cognitive science shows why that’s a bad idea
by External Contributor via Digital Information World

Social Media’s Annual Great Purge: Facebook and X Remove More Fake Accounts Than Their Active Users, TikTok Deletes Half Its Fake Accounts

By Surfshark

Communicating with fake users and running into scam content on social media has become a daily reality. For major platforms such as Facebook and TikTok, the situation is frustrating: they have to remove billions of fake accounts and pieces of spam content annually. On Facebook and X, the number of fake accounts removed significantly surpasses the active user base.
Facebook removes more fake accounts annually than it has active users: with 3 billion active users, it removes 4.5 billion fake accounts each year, a volume 1.5 times its user base.

Image: Surfshark

On the dark market, fake social media account prices start from as little as $0.08. The trend is worrying: being exposed to huge amounts of lies on social media makes it easy for a regular user to get scammed.

Key insights

  • In an effort to protect users from scams and misinformation and to keep the experience authentic, social media platforms are constantly removing massive numbers of fake accounts. An analysis of transparency reports reveals the staggering scale of this cleanup: Facebook alone removes an average of 4.5 billion fake accounts annually, followed by TikTok at 1 billion, X (formerly Twitter) at 671 million, and LinkedIn at 112 million. Additionally, YouTube terminates an average of 25 million channels annually, resulting in the removal of 311 million spam-related videos hosted on those channels.
  • Social media platforms must also aggressively combat spam generated by these fake accounts, with moderation efforts often reaching into the billions. Facebook leads in content removals, purging an average of 4.7 billion pieces of spam content annually. YouTube follows by removing 3.6 billion spam-related comments from videos each year. TikTok also reports high volumes, deleting an average of 1.4 billion comments and 671 million videos attributed to fake accounts. In comparison, Instagram removes 271 million pieces of spam content, while LinkedIn eliminates 200 million instances of spam and scam content annually.
  • Comparing removal volumes to active users reveals the enormous scale of platform moderation. On some platforms, the number of annual removals rivals or even exceeds the entire active user base. Facebook, with 3 billion active users, removes 4.5 billion fake accounts annually — a volume 1.5 times its user count. Similarly, X reports removing 671 million accounts for platform manipulation and spam each year, a figure that surpasses its 570 million active users.
  • Other platforms demonstrate a significant, though less extreme, ratio. TikTok, with 1.9 billion active users, removes roughly 1 billion fake accounts annually, equivalent to over half its user base. LinkedIn, serving 310 million active users, removes 112 million fake accounts per year, representing more than a third of its active user base. Finally, YouTube’s moderation is also notable. With an estimated 60 million active channels, its annual termination of 25 million spam channels means it removes a quantity equivalent to over 40% of its active channel base each year. (The arithmetic behind these ratios is reproduced in the short calculation after this list.)
  • Fake accounts are often used by bad actors to artificially boost engagement — writing fake comments, giving likes, or following other accounts. The business model typically works in two steps: first, hackers acquire as many fake accounts as they can, either by creating them or purchasing them in bulk. Once enough accounts are gathered, hackers begin using them for their own ends or selling services to promote questionable products, increase engagement, or push political agendas. Fake social media account prices start from as little as $0.08, and the price increases depending on the account's age, number of friends or followers, country of origin, and platform. Of course, such services violate the terms of service of social media platforms, which are constantly working to reduce this behavior.
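As a quick check, the removal-to-user ratios quoted above can be reproduced directly from the report’s headline figures. The short script below is purely illustrative; the numbers are the ones cited in this article.

```python
# Reproducing the removal-to-active-user ratios cited above.
platforms = {
    # platform: (annual fake-account removals, active users)
    "Facebook": (4_500_000_000, 3_000_000_000),
    "X":        (671_000_000,     570_000_000),
    "TikTok":   (1_000_000_000, 1_900_000_000),
    "LinkedIn": (112_000_000,     310_000_000),
    "YouTube":  (25_000_000,       60_000_000),  # channels, not accounts
}

for name, (removed, active) in platforms.items():
    print(f"{name}: removals are {removed / active:.2f}x the active base")
# Facebook 1.50x; X ~1.18x; TikTok ~0.53x; LinkedIn ~0.36x; YouTube ~0.42x
```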

Methodology and sources

This study collected data from the official transparency reports of Meta, Google, TikTok, X (formerly Twitter), and LinkedIn. The data focused on removed spam content and fake account numbers across these social media platforms. Specifically, Surfshark gathered information for Facebook, YouTube, X, Instagram, TikTok, and LinkedIn.

Data collection began in 2021, as this was the earliest year for which Facebook, YouTube, TikTok, and LinkedIn all had available data. X (formerly Twitter) and Instagram, however, only had data available from the second half of 2024.

Since some platforms reported data quarterly and others semi-annually, Surfshark adjusted the figures by calculating average annual numbers for spam and fake content. Additionally, Surfshark collected data on monthly active users and cross-referenced these figures with the number of removed fake accounts.
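That normalization step can be sketched in a few lines. The figures below are hypothetical placeholders, and Surfshark’s exact pipeline is not public; this just shows how mixed reporting cadences reduce to comparable annual averages.

```python
# Illustrative sketch: normalizing mixed reporting cadences to annual averages.
# Input numbers are hypothetical; real values come from transparency reports.

def average_annual(period_totals: list[float], periods_per_year: int) -> float:
    """Turn per-period totals (quarterly = 4, semi-annual = 2) into an
    average annual figure over however many years the series covers."""
    years_covered = len(period_totals) / periods_per_year
    return sum(period_totals) / years_covered

# Eight hypothetical quarterly removal counts covering two years:
quarterly = [1.10e9, 1.00e9, 1.20e9, 1.15e9, 1.05e9, 1.10e9, 1.20e9, 1.10e9]
print(average_annual(quarterly, periods_per_year=4))    # 4.45e9 per year

# Four hypothetical semi-annual counts covering the same two years:
semi_annual = [2.30e9, 2.15e9, 2.25e9, 2.20e9]
print(average_annual(semi_annual, periods_per_year=2))  # 4.45e9 per year
```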

For the complete research material behind this study, visit here.

Data was collected from:

Meta. Community Standards Enforcement Report.
Google. YouTube Community Guidelines enforcement.
X. Global Transparency Report.
TikTok. Community Guidelines Enforcement Report.
LinkedIn. Community Report.

Note: This post was originally published on Surfshark Research and republished with permission on Digital Information World (DIW).

Reviewed by Ayaz Khan.

Read next:

• Americans Spend an Average of 6.3 Hours Daily on Mobile Devices; Older Users Log Up to 358 Minutes Across 17 Apps

• New research warns charities against ‘AI shortcut’ to empathy


by External Contributor via Digital Information World

AI and work: an expert assesses how far this revolution still has to run

Vivek Soundararajan, University of Bath
Image: Matheus Bertelli / Pexels

Every week brings fresh claims about AI transforming the workplace. A CEO declares a revolution. A think piece predicts millions of jobs vanishing overnight. The noise is relentless.

But strip away the hype and there is a simpler question. In developed economies, what has AI actually changed about work so far? The answer turns out to be more interesting, and more uneven, than either side suggests.

What’s real

Let’s start with what the evidence supports. AI is delivering genuine productivity gains in specific kinds of knowledge-based and service work. An experimental study found that professionals using ChatGPT for writing tasks took 40% less time to complete them, with an 18% improvement in quality (as evaluated by their colleagues in blind testing).

And another study of more than 5,000 customer service agents found a 15% increase in issues resolved per hour. An industry experiment involving realistic, complex tasks done with management consultants found they completed the work 25% faster and produced results that were deemed to be 40% higher in quality (again, judged by experts in blind tests). Randomised trials involving nearly 5,000 software developers documented a 26% increase in completed tasks.

These are not small numbers. And adoption is moving fast. A US survey found that nearly four in ten workers were using generative AI at work by mid-2025. This pace of adoption outstrips the early years of both the personal computer and the internet. Across countries in the Organisation for Economic Co-operation and Development (OECD), firms report that AI integration into business functions is accelerating.

So the productivity story is real, particularly in text-heavy, codifiable tasks across legal, finance, marketing and customer service. That much is not hype.

What’s overstated

But the apocalyptic predictions have not yet materialised. Employment across OECD countries remains historically robust. A review of the research-based evidence produced in the US in early 2026 found that despite rapid adoption, AI has so far caused little in the way of widespread job losses or pay cuts. And a study (as yet unpublished) that tracked AI chatbot use in Danish workplaces found essentially zero effects on earnings or recorded hours, even among heavy users and early adopters.

Why? Because many jobs still require tacit knowledge, physical presence, sound judgement and the kind of contextual awareness that AI cannot yet replicate. And adoption is far more uneven than the headline numbers suggest. While AI use among firms in the US soared between 2023 and 2025, a report found fewer firms had actually embedded it into their operations. The information sector, for example, adopted it at roughly ten times the rate of hospitality.

One economic modelling exercise estimates that AI might add somewhere between 1% and 1.6% to US GDP over the next decade. This is significant, but it is far short of the transformative claims.

The gap between productivity gains in controlled studies and real transformation inside organisations remains enormous. The revolution, for most workplaces, has not yet arrived.

What’s under-reported

Here is where the story gets more consequential and where the commentary falls short. The distributional effects of AI within developed economies deserve far more attention. Not everyone is experiencing this transformation the same way.

The evidence on who benefits is strikingly consistent. Less experienced workers see the biggest gains from AI tools. A study found that AI narrowed the gap between the most and least productive staff, with the largest improvements among lower-ability workers.

In customer service, novice agents benefited most. The most experienced staff experienced little improvement and, in some cases, slight quality declines. The industry experiment mentioned above found below-average performers improved by 43%, while top performers gained 17%. So the biggest gains go to the least experienced workers, narrowing the gap between top and bottom performers within firms.

That sounds like good news. But there’s a catch.

While AI may compress skills inside firms, the broader labour market is telling a different story. Entry-level roles are shrinking in AI-exposed occupations. The routine tasks that once justified hiring juniors – jobs which provided learning opportunities for those on the bottom rung – are the first to be automated.

Economic theory has long warned that automation displaces workers from tasks, and the creation of new tasks to counterbalance this is neither automatic nor guaranteed. An estimated 60% of jobs in advanced economies face some AI exposure.

In most realistic scenarios, inequality worsens without deliberate intervention – partly because higher-income workers hold more capital assets and stand to gain from rising returns on AI-related investments.

The pattern that is emerging is this: AI helps those already inside the door while quietly narrowing the door for those trying to get in.

Paying attention to the right question

Sector matters. Firm size matters. Job type matters. The AI transition is not one story. It is many – unfolding at different speeds, with different consequences, depending on where you sit in the economy.

The debate has been stuck between breathless optimism and existential dread. Neither is useful. The evidence points somewhere more uncomfortable: a transformation that is real but partial, fast in some corners and stalled in others – and distributing its costs and benefits in ways that are shaped by existing inequalities.

If the productivity gains are genuine, the question is: who captures them? If entry-level work is disappearing, what replaces it? And if the gap between firms that adopt and those that cannot is widening, the focus should be on what we are building in response. Just talking about it won’t be enough.

Vivek Soundararajan, Professor of Work and Equality, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next:

• New research warns charities against ‘AI shortcut’ to empathy

• Teaching teens critical thinking could be key to challenging fake news, AI slop and toxic social media


by External Contributor via Digital Information World

Monday, March 9, 2026

New research warns charities against ‘AI shortcut’ to empathy

By UEA Communications

A new report from the University of East Anglia (UEA) warns that the potential reputational damage of charities using AI-generated images in their campaigns is more complex than many organisations realise.

It comes as humanitarian budgets tighten and production pressures increase, with many charities and NGOs turning to AI, tempted by its promise of speed, cost efficiency and creative flexibility.

High-tech shortcut backfiring

The study suggests the charity and development sector’s "high-tech shortcut" to empathy is backfiring. While AI offers a cheaper, faster way to produce campaign visuals, it risks breaking the fundamental bond of trust between charities and the public, say the authors.

The report, Artificial Authenticity, analysed 171 AI-generated images and more than 400 public comments surrounding campaigns from 17 organisations, including Amnesty International, Plan International, the World Health Organization (WHO) and WWF.

The findings reveal a worrying shift: when AI images are used, the humanitarian cause effectively disappears from the conversation. The researchers found the introduction of AI fundamentally reshapes how the public engages with charity.

Co-author David Girling, from UEA’s School of Global Development, said: “Charities exist because people care about other people. The moment when audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk. 

“The debate about the ethics of AI is increasingly polarised. AI is not inherently wrong, but if it begins to overshadow the human story at the heart of charitable work, organisations could lose far more in trust than they gain in efficiency.” 

What the research shows

Key findings from the study, published today, include:

  • Nearly 70 per cent of the AI images analysed were designed to appear photorealistic. Poverty was the dominant theme, accounting for around a third of the images (51 of 171), and often featuring children, followed by environment (35) and human rights (32) themed images. 
  • While 85 per cent of images were appropriately captioned as AI generated, this transparent labelling did not protect the cause or the organisations from backlash.
  • In undisclosed campaigns, the audience adopted an "investigative tone." Instead of evaluating the charity's work, commenters focused entirely on whether the images were artificial or not. 
  • The report also found significant public backlash against "message-medium misalignment". For example, environmental organisations like WWF Denmark faced criticism for using energy-intensive AI tools to promote sustainability, an irony not lost on a climate-conscious public who labelled the move "ecocidal".
  • For some organisations, mock visuals are seen as a way to balance storytelling with safeguarding and dignity. Using AI-generated imagery could reduce the number of people who would otherwise be re-traumatised by being photographed or filmed for campaign purposes. However, the study shows that donors often reject these "fake" images, prioritising their own need for an "authentic witness" over the beneficiary’s right to privacy.

The researchers found the public response was far from simple. In some cases, people welcomed AI as a way to protect vulnerable individuals from exploitation. In others, they criticised it as a distraction from real solutions, particularly in emotionally sensitive campaigns such as cancer or famine.

When AI is used, discussion often shifts away from the cause and towards debates about technology and trust. Of the comments analysed: 141 focused on AI ethics and authenticity concerns, not the charitable cause; 122 critiqued technical execution and visual quality; only 80 (less than 20 per cent) actually engaged with the humanitarian issue itself.

Audiences increasingly sceptical

Co-author Deborah Adesina, a former Master’s student in the School of Global Development and now a media, communications and development consultant, said: “Ultimately, the future of charity storytelling will not hinge on technological capability alone. It will depend on whether organisations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly sceptical.

“For communications teams who opt to include generative AI in their workflow, proper training in ethical prompt engineering will be crucial to avoid reputational harm and unintended bias.”

The study, Artificial Authenticity: The Rise of Images Generated by Artificial Intelligence in Charity and Development Communications, maps current practice and offers practical recommendations for charities, fundraisers and sector leaders navigating this rapidly evolving digital landscape.

These include working with technology providers and AI companies to develop charity-sector-specific AI tools with built-in bias detection, stereotype alerts, and ethical guardrails tailored to humanitarian representation.

In addition, if choosing to use AI-generated imagery, organisations should co-create it with local communities by involving them in the creative process, including generating AI prompts and approving final imagery to ensure they are accurate and culturally appropriate.

The full report and the database of AI-generated charity images are available at www.charity-advertising.co.uk. 


Image: Masjid Pogung Dalangan / Unsplash

This post was originally published by the University of East Anglia Norwich and is republished here with permission.

Reviewed by Ayaz Khan.

Read next:

• Teaching teens critical thinking could be key to challenging fake news, AI slop and toxic social media
by External Contributor via Digital Information World

Saturday, March 7, 2026

Teaching teens critical thinking could be key to challenging fake news, AI slop and toxic social media

By Taylor & Francis

How critical thinking skills could empower teens to navigate the digital world safely

Social media is where teenagers spend most of their time, either scrolling and sharing, or sometimes falling into the traps of fake news, toxic content and online drama. But what if we could equip our young people to challenge harmful narratives and protect themselves from the darker side of the internet?

In a world where everyone documents their lives online, algorithms dictate what people see, apps mine personal data and misinformation spreads, teenagers are at the epicentre of this digital storm.

So how can we help them to navigate this complex landscape? Dr Maree Davies, Senior Lecturer at the University of Auckland, believes the answer lies in critical thinking.

In her new book, Teaching Critical Thinking to Teenagers: How Kids Can Be Street Smart about AI, Algorithms, Fake News, and Social Media, she suggests parents and educators can equip teens with the tools to navigate the digital world safely and responsibly.

Critical thinking involves being able to objectively – without emotion – analyse and assess something, and make a reasoned judgement on its value or purpose. Skills include logical reasoning, evaluating different forms of evidence and unbiased analysis.

Critical thinking skills are challenging for many, but particularly for teenagers, whose prefrontal cortex (the part of the brain responsible for logical processing) is still developing. However, Dr Davies argues it is not only possible to teach young people to begin building and honing these vital skills, but it is also a crucial time to do so.

As well as being able to spot fake news and conspiracy theories, Dr Davies suggests equipping teens with critical thinking skills can also protect them against the addictive nature of social media and profound online harms such as sextortion, revenge porn, and online bullying.

Why critical thinking matters

Whether it’s TikTok, Instagram, or Snapchat, young people are constantly consuming huge amounts of content tailored to their likes and interests. But what they might not realise is how algorithms shape what they see, often reinforcing biases and pushing them into echo chambers.

Often young people are exposed to this content without a developed understanding of how algorithms work.

“Teenagers today are not just passive consumers of content; they are active participants in a digital ecosystem that can empower or exploit them,” Dr Davies argues. “Critical thinking is the key to breaking free from this cycle.”

Dr Davies argues that critical thinking is more vital than ever, and can help teens make informed decisions.

“Teenagers need to understand that the digital world is not neutral,” she explains. “It’s shaped by societal forces, commercial interests, and algorithms designed to influence their behaviour. By teaching them to think critically, we give them the tools to discern truth from falsehood, resist manipulation, and engage ethically online.”

Teaching critical thinking: the role of parents and educators

Dr Davies says shielding teenagers from the internet is not the solution. Instead, educators and parents must take an active role in preparing teens to navigate the digital world wisely.

“We can’t control the internet, but we can empower teenagers to challenge harmful narratives, engage in respectful dialogue, and become informed citizens,” she states. “By fostering these abilities, we can help teenagers thrive in a world where information is abundant but truth is often elusive.”

Dr Davies advocates taking a hands-on approach to teaching critical thinking.

She recommends parents and guardians speak often to their teens about fake news, and how it is designed to provoke emotional reactions and avoid scrutiny, so it can spread fast. She encourages adults to advise teens to slow down and think before sharing, and demonstrate this behaviour when talking about things seen online.

Additionally, she suggests showing teens how to evaluate sources, seek multiple perspectives and trace information back to its original context – such as checking sources, finding credible academic papers and using trusted news sites. By developing these skills, teens can identify misinformation and resist the urge to share it.

“Critical thinking isn’t just about analysing information, it’s about connecting ideas to personal experiences, respecting diverse perspectives, and remaining open to change,” she explains. “We need to encourage teens to approach the digital world with empathy, resilience, and a willingness to adapt their views based on evidence.

“It’s not about lecturing them, it’s about giving them practical skills they can use every day, in the same way you help your child to learn to read, write or tie their shoelaces.”

Building resilience

The psychological toll of the digital age is undeniable. From the addictive nature of social media to the harmful effects of online bullying, teenagers face unique challenges that can impact their mental health and wellbeing.

Dr Davies draws on renowned psychologist Albert Bandura’s theories of self-efficacy and moral disengagement to explain why some individuals behave unethically online and how teens can protect themselves.

Being open and honest with your teen about online dangers can help build trust and foster open conversations about sensitive issues, she explains, so that teens feel comfortable seeking help when navigating challenges like sextortion and online bullying.

“Teaching self-regulation and critical thinking can help teens build resilience against these challenges,” she explains. “It equips them to recognize manipulative tactics, resist harmful behaviours, and maintain their mental health in an increasingly digital world.”

Image: Cullen Jones / Unsplash

Further information:

Teaching Critical Thinking to Teenagers: How Kids Can Be Street Smart about AI, Algorithms, Fake News and Social Media, by Maree Davies (Routledge, 2026)

ISBN: Paperback: 9781032944906 | Hardback: 9781032944913 | eBook: 9781003570998

Maree Davies is a Senior Lecturer at the University of Auckland, New Zealand. Her research focuses on how to ensure critical thinking is accessible to all teenagers.

Taylor & Francis contact: Becky Parker-Ellis, Media Relations Manager, Email: newsroom@taylorandfrancis.com, Tel.: +(44) 7818 911310.

Note: This post was originally published on the Taylor & Francis Group newsroom and is republished here with permission.

Reviewed by Ayaz Khan.

Read next:

• Americans Spend an Average of 6.3 Hours Daily on Mobile Devices; Older Users Log Up to 358 Minutes Across 17 Apps

• New Survey Debunks Digital Detox Myth: 60% Never Switch Off, 45% Can’t Last 12 Hours Offline

by External Contributor via Digital Information World

Americans Spend an Average of 6.3 Hours Daily on Mobile Devices; Older Users Log Up to 358 Minutes Across 17 Apps

By Adam Blacker | Apptopia

We took a look at Apptopia’s U.S. consumer panel data spanning January 2023 through December 2025 to understand high-level trends in mobile time spent, app usage and engagement depth. The big number is 6.3 hours per day. That is the average amount of time people are spending using their mobile phones each day. If you assume 8 hours of sleep each day, that’s 39% of your waking hours devoted to your mobile device.

US Mobile Screen Time Climbs to 6.3 Hours Daily; Older Users Lead With 17 Apps Per Day

The average U.S. mobile user spent 5.5 hours per day on their phone in January 2023. By December 2025, that figure climbed to 6.3 hours, an increase of nearly 51 minutes per day, or about 15.6%. To put it differently, Americans are now spending roughly 190 hours per month on mobile. That’s more time each month than a full-time job demands, just on your phone. Although, it does feel weird to still be calling it a phone, doesn’t it?
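The back-of-envelope arithmetic is easy to verify. Here is a quick check using the rounded hours quoted above; the small gap between this result and the quoted 51 minutes and 15.6% suggests the underlying Apptopia figures are slightly less rounded.

```python
# Back-of-envelope check of the screen-time figures, using the article's rounded numbers.
daily_hours_2023 = 5.5
daily_hours_2025 = 6.3

waking_hours = 24 - 8  # assuming 8 hours of sleep, as the article does
print(f"{daily_hours_2025 / waking_hours:.0%} of waking hours")  # ~39%

print(f"~{daily_hours_2025 * 30:.0f} hours per month")  # ~189, i.e. roughly 190

growth_hours = daily_hours_2025 - daily_hours_2023
# With these rounded inputs: ~48 min/day and ~14.5% growth; the quoted
# "nearly 51 minutes / about 15.6%" implies less rounded underlying data.
print(f"+{growth_hours * 60:.0f} min/day ({growth_hours / daily_hours_2023:.1%})")
```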


To have a moment of self-promotion, I have to say that for investors in consumer businesses, this makes mobile consumer activity data even more valuable to your investment theses.

Older users flipped the script

Now here’s the finding I didn’t expect. In January 2023, younger users averaged 288 minutes per day versus 281 for older users, a small but expected gap. Everyone assumes the kids are glued to their phones. But starting around mid-2024, older users overtook younger users in daily screen time and never looked back.

Younger users are defined as those aged 17-25; older users as those aged 36+.

By the end of 2025, older users were consistently logging 340 to 358 minutes per day, often outpacing younger users by 10 to 17 minutes. In July 2025, the gap hit its widest: older users at 341 minutes versus 325 for younger. Older users surpassed younger users for time spent in 14 of the last 17 months of the dataset.


Over the full period, older users grew their daily time by 27.5%, compared to 24.9% for younger users. It’s hard to know exactly why this is happening but I have two theories. The first is that younger Americans are actively trying to disengage from technology. The other is that there is increasingly an app for everything mundane in life, which tend to be things older users would be leveraging. These are called companion apps. Think of dishwashers, house lights, hearing aids, grills, toothbrushes, etc.

But younger users go deeper

While older users spend more total time, they spread it across more apps. Older users went from opening about 15 apps per day in early 2023 to more than 17 by the end of 2025, a 13.4% increase.

Younger users are opening fewer apps, roughly 14.6 per day, but spend 25.4 minutes per app, which is 59% more per-app time than the overall average. They’re locked into their sessions. For platforms competing for younger eyeballs, this means you either win big or you’re invisible. There’s less room in the middle.

The 90+ minute apps are on the rise

Looking at the distribution of how much time apps on a person’s device receive each month, the 90+ minute bucket grew the fastest, from about 8 apps per device in January 2023 to nearly 10 by December 2025. That’s a 24.6% increase. Meanwhile, light-touch apps receiving under 5 minutes per month barely budged, growing just 1.4%.


This means consumers are adding apps that command real time. The share of apps in that 90+ minute tier rose from 27.5% of all used apps to 30.4%. Almost one in three apps on the average device is now getting an hour and a half or more of attention per month. These are typically apps like Netflix, YouTube, TikTok, Google Maps, mobile games, etc.

Mobile data’s growing importance for investors

This data reframes how you should think about the mobile consumer wallet and attention pool. The total addressable time grew 15.6% in three years with no signs of flattening. The older user surge has real revenue implications. Older consumers tend to have higher disposable income and higher average order values. The fact that this cohort grew mobile time by 27.5% and now uses 17+ apps per day means they are increasingly reachable and transactable through mobile.

Reviewed by Asim BN.

Note: This post was originally published on Apptopia blog and is republished on DIW with permission.

Read next: 

• Smartphone Brands: Shifting Loyalty?

• Most workers embrace AI, but 84% worry about the risks, study says

by External Contributor via Digital Information World