Saturday, March 14, 2026

Social media influencers increase the toxicity and power of misinformation, research shows

By Cardiff University

Social media influencers (SMIs) can perpetuate the flow of misinformation online because of the unique relationship they have with their followers, research led by Cardiff Business School finds.

Study finds influencers spread brand misinformation more effectively than regular users, triggering toxic engagement and online hostility.
Image: dlxmedia.hu / Pexels

In the study, published in the journal Psychology & Marketing, academics analysed brand-related misinformation and associated user comments spanning 47 brands, across nine industries, over a three-year period. It is the first study to measure the extent and types of toxicity generated by influencers versus regular users.

Brands increasingly rely on SMIs to reach and engage with their target audiences, investing a record $33bn in influencer marketing in 2025.

Influencers have become integral to product endorsement in recent years, but the communities they create can also rally behind inaccurate posts, blindly attacking brands, researchers show.

Regular social media users are usually confronted and attacked for spreading misinformation, the study found. They are therefore motivated to steer the conversation towards more civil tones and correct falsities as engagement grows. Influencers have the exact opposite incentive because their profits increase with engagement.

The analysis shows toxicity peaks when influencers discuss socio-political issues, where public stakes are higher.

Lead author Dr Giandomenico Di Domenico said: “We know that social media influencers often have huge followings that can be extremely useful for brands looking to increase sales. This research shows the negative impact of what happens when influencers decide to endorse or amplify misinformation. Our findings show influencers generate more toxicity than regular users, amplifying content under the same conditions that enhance their visibility and influence.

“While regular users might see inaccurate posts called out and critiqued, the unique parasocial bond influencers have with their communities means these groups are much more likely to get behind an idea without interrogating its veracity.

“This means these posts do not simply attract more attention; they actually transform dispersed individual reactions into collective, belief-driven antagonism. Misinformation introduced within these relationships therefore has much more traction and potential for harm.”

"It could be a challenge to persuade some influencers to be more responsible in their posts – as this greater engagement actually leads to higher profits. As they have an incentive to maintain high levels of engagement, sharing divisive, polarizing, and arousing contents such as misinformation might represent a clear business strategy." — Giandomenico Di Domenico, Lecturer in Marketing and Strategy

Researchers identified two influencer-specific mechanisms that can boost the reach and power of misinformation: legitimation, where the influencer adds weight to a theory; and community enmeshment, where their community rallies behind it.

When these two elements combine, researchers say they sustain “toxic echo chambers”, converting credibility and parasocial bonds into “collective antagonism”. As engagement increases, it produces a “self-reinforcing toxicity–engagement spiral”.

In early 2025, several social media influencers on TikTok shared viral videos alleging that luxury brands such as Hermès, Louis Vuitton, and Chanel secretly manufacture their goods in Chinese factories while falsely marketing them as “Made in France” or “Made in Italy.”

The influencers presented their claims as exposés of industry deceit, despite offering no verifiable evidence to support them. The videos amassed millions of views and stimulated widespread debate among users concerning authenticity, ethical conduct, and transparency within the luxury sector, positioning the implicated brands at the centre of online criticism and misinformation.

Dr Giandomenico Di Domenico said: “This case highlights a growing paradox in influencer culture. Despite the positive impact of SMIs on marketing outcomes, their prominence also introduces new risks, particularly when controversial or misleading content sparks toxic reactions directed at brands.

“Understanding whether toxicity unfolds differently when misinformation originates from regular users versus SMIs is vital, given the distinct levels of influence, credibility, and audience engagement they command.”

The study, Don't You Know That You're Toxic? How Influencer-Driven Misinformation Fuels Online Toxicity, is published in the journal Psychology & Marketing.

Reviewed by Irfan Ahmad.

Note: This post was originally published on Cardiff University and is republished here with permission.

Read next: Gen Z holds companies to account for greenwashing


by External Contributor via Digital Information World

Gen Z holds companies to account for greenwashing

By Juan F. Samaniego / Sònia Armengou

Informing people about sustainability is more important than ever, but it is only effective if done honestly, according to a study led by the UOC.

Companies increasingly want to talk about sustainability, but not everyone believes equally in their commitments. The focus of corporate communication has shifted towards sustainability in response to increasingly serious environmental issues, international campaigns such as the UN's 2030 Agenda, regulatory pressures in certain markets, interest in more environmentally friendly investments and a growing number of environmentally aware consumers. As a result, environmental matters have become a key part of corporate reputation in recent years.

However, not all organizations that claim to be sustainable are seen as such, and not everyone shares the same views of these corporate commitments. Members of Gen Z, those born between 1995 and 2009, are especially sensitive to greenwashing and seem prepared to shun companies that are not consistent with their message. This is the conclusion of a new study led by Elisenda Estanyol, a researcher in the Learning, Media and Entertainment Research Group (GAME) and a member of the Faculty of Information and Communication Sciences Studies at the Universitat Oberta de Catalunya (UOC). Researchers from Pompeu Fabra University and the MERCO Corporate Reputation Business Monitor also participated in the study.

"The most striking thing is that Gen Z isn't indifferent or complacent: they actively observe, assess and judge the companies' behaviour in terms of the environment. They don't just consume; they construct a brand's reputation, based on what it does or doesn't do for the environment," said Estanyol, who is also the academic director of the University Master's Degree in Corporate Communication, Protocol and Events at the UOC. "The study shows a generation that is especially sensitive to greenwashing and ready to hold companies accountable when they say one thing and do another."

“Generation Z actively observes, evaluates and judges companies’ environmental behaviour”

Stigmatized sectors and more demanding consumers

One of the key findings of the study, based on the opinions of 8,980 people in three European countries (Spain, Italy and Portugal) and three Latin American countries (Chile, Colombia and Mexico), is that, when it comes to sustainability, not all companies start from the same point. The reputation and environmental commitment of organizations in socially stigmatized sectors, such as tobacco, gambling, fossil fuels or sugary drinks, are generally perceived more negatively. However, these negative perceptions, like the positive ones, vary between countries and population groups.

Europeans tend to be more critical than Latin Americans when assessing companies' environmental commitment. Spain stands out as the most demanding country. According to Estanyol, "this is due to several factors. First, there is greater social and media awareness of the climate crisis. Second, there is a tradition of distrust towards institutions and large corporations, which leads young people to adopt a more sceptical and demanding perspective. Gen Z in Spain does not take environmental commitment for granted: it demands evidence, transparency and tangible results."

Mexico and Colombia lead in positive ratings. This does not necessarily mean that companies in these countries are more sustainable, but that social expectations may be different. "In countries where environmental regulation is less strict or there is less institutional pressure, any visible effort is perceived as a significant advance," said Estanyol, who is also attached to the UOC-TRÀNSIC research centre. "In Europe there is an increasingly demanding regulatory framework, which raises people's expectations. Sustainability is no longer a bonus, it has become a minimum expected standard, which explains a more critical and less forgiving public attitude to corporate behaviour."

Besides location, another factor that makes a difference is gender. In all the countries and generations analysed in the study, women tend to value environmental commitment and corporate reputation more highly than men. This difference is especially clear in Generation X and Millennials, but it is also evident in Gen Z. This highlights the importance of incorporating a gender perspective in the analysis of corporate social responsibility.

Gen Z is demanding, but also recognizes effort

The results of the study confirm that Gen Z is more critical, demanding and active than previous generations. The data, however, suggest certain nuances. Far from showing a systematic distrust of companies, Gen Z rates environmental commitment and corporate reputation the most positively of all generations, especially when it perceives this commitment as credible and consistent. In other words, members of this generation are demanding, but they are also able to recognize companies' hard work and real commitment.

This behaviour points to a key characteristic: Generation Z acts not so much from distrust as from discernment. Their expectations are high, but not indiscriminate. They expect transparency, consistency between discourse and practice, and measurable results. When these conditions are met, the response is positive. "The message for companies is clear: Gen Z is watching and will not forgive inconsistency. Companies that incorporate sustainability in a real and verifiable way can gain reputation and legitimacy; those that merely feign commitment risk losing credibility," Estanyol said.

Towards credible sustainability: key points for companies

Communicating sustainability is more important than ever, but it is only effective if it is done honestly and connects with the realities of different audiences. The study not only diagnoses the demanding circumstances in which companies operate, but also proposes a series of changes to how they communicate to respond to the demands of an increasingly engaged public:

  • Real and verifiable transparency. Environmental commitment must be communicated with clear data and measurable objectives that allow real progress to be verified, beyond marketing messages.
  • Targeted messages. Expectations regarding sustainability vary between countries and socio-demographic groups, so environmental communication must adapt the content and form of messages accordingly.
  • Channels open to participation. Generation Z expects to be able to interact with companies and question corporate discourse, so businesses should focus on digital channels that encourage dialogue and not just one-way communication.
  • Consistency in words and action. Environmental commitment is only credible if it is aligned with real, sustained action. Any contradiction is quickly detected and called out, especially by younger audiences.

"The implication for companies is clear: neutrality is no longer an option," said Estanyol. "For Gen Z, doing things right has a reputational reward, but doing them badly has an immediate cost. Brands face a logic of reward or punishment in which coherence in terms of the environment directly influences trust, reputation and social legitimacy. It's not enough to talk about sustainability: it must be demonstrated constantly."

For more information:

Estanyol, Elisenda, Mas-Manchón, Lluís, Fernández-Cavia, José & Van-Bergen, Pablo. (2025). Raising the bar? How Generation Z perceives corporate reputation and environmental commitment. Young Consumers: Insight and Ideas for Responsible Marketers. https://doi.org/10.1108/YC-06-2025-2596

This research is aligned with the UOC's Digital transition and sustainability research mission and contributes to the following UN Sustainable Development Goals: SDG 11, Sustainable Cities and Communities, SDG 12, Responsible Consumption and Production, and SDG 13, Climate Action.

Image: Markus Spiske / Unsplash

Note: This study was originally published by the Universitat Oberta de Catalunya (UOC) and republished here with permission. The findings are based on a three‑wave cross‑national survey conducted in 2023 with 8,980 participants.

Reviewed by Asim BN.

Read next: AI may be making us think and write more alike


by External Contributor via Digital Information World

Friday, March 13, 2026

AI may be making us think and write more alike

By Julia Grimmett - University of Southern California Dana and David Dornsife College of Letters, Arts and Sciences.

Large language models may be standardizing human expression — and subtly influencing how we think, say computer science and psychology researchers at USC Dornsife.

Artificial intelligence chatbots are standardizing how people speak, write and think. If this homogenization continues unchecked, it risks reducing humanity’s collective wisdom and ability to adapt, argue USC computer scientists and psychologists in an opinion paper published March 11 in the Cell Press journal Trends in Cognitive Sciences.

The researchers — led by Morteza Dehghani, professor of psychology and computer science at the USC Dornsife College of Letters, Arts and Sciences — say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets, not only to help preserve human cognitive diversity, but also to improve chatbots’ reasoning abilities.

“Individuals differ in how they write, reason and view the world,” says study first author Zhivar Sourati, a PhD student at the USC Viterbi School of Engineering. “When these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenized, producing standardized expressions and thoughts across users.”

Large language models dampen individuality

Within groups and societies, cognitive diversity bolsters creativity and problem-solving, say the researchers. However, cognitive diversity is shrinking worldwide as billions of people are using the same handful of AI chatbots for an increasing number of tasks, they add. When people use chatbots to help them polish their writing, for example, the writing ends up losing its stylistic individuality, and people feel less creative ownership over what they produce.

“The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning,” says Sourati, a member of Dehghani’s Morality and Language Lab.

The team points to multiple studies showing that LLM outputs are less varied than human-generated writing and that LLM outputs tend to reflect the language, values and reasoning styles of Western, educated, industrialized, rich and democratic societies.

“Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience,” says Sourati.

Though studies show that individuals often generate more ideas, in more detail, when they use LLMs, groups of people produce fewer and less creative ideas when they use LLMs than when they simply pool their collective abilities, the researchers note.

“Even if people are not the firsthand users of LLMs, LLMs are still going to affect them indirectly,” says Sourati. “If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them because it would seem like a more credible or socially acceptable way of expressing my ideas.”

LLMs can reduce the variety of reasoning styles

Beyond language, studies have shown that after interacting with biased LLMs, people’s opinions become more like those of the LLM they used.

LLMs also favor linear modes of reasoning such as “chain-of-thought reasoning,” which requires models to show step-by-step reasoning. This emphasis reduces the use of intuitive or abstract reasoning styles, which are sometimes more efficient than linear reasoning, the researchers say.
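To make the term concrete, here is a minimal illustration of the prompting style the researchers are referring to; the question and wording are generic examples written for this article, not material from the USC paper.

```python
# A generic illustration of "chain-of-thought" prompting: the model is asked
# to externalize linear, step-by-step reasoning rather than answer directly.
# These prompts are illustrative examples, not taken from the USC study.

direct_prompt = (
    "Q: A shirt costs $20 after a 20% discount. What was the original price?\n"
    "A:"
)

chain_of_thought_prompt = (
    "Q: A shirt costs $20 after a 20% discount. What was the original price?\n"
    "A: Let's think step by step."
)

# With the second prompt, a typical model spells out intermediate steps, e.g.:
# "The sale price is 80% of the original, so the original is 20 / 0.8 = $25."
# It is this linear, stepwise style that, the researchers argue, gets favored
# over intuitive or abstract reasoning.
```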

They also note that LLMs can alter people’s expectations, which can subtly change the direction of a person’s work.

“Rather than actively steering generation, users often defer to model-suggested continuations, selecting options that seem ‘good enough’ instead of crafting their own, which gradually shifts agency from the user to the model,” says Sourati.

The researchers say that AI developers should intentionally incorporate diversity in language, perspectives and reasoning into their models. They emphasize that this diversity should be grounded in the diversity that exists within humans globally, rather than introducing random variation.

“If LLMs had more diverse ways of approaching ideas and problems, they would better support the collective intelligence and problem-solving capabilities of our societies,” said Sourati. “We need to diversify the AI models themselves while also adjusting how we interact with them, especially given their widespread use across tasks and contexts, to protect the cognitive diversity and ideation potential of future generations.”

About the study: USC Viterbi PhD student Alireza Ziabari also contributed to the research, which was supported by funding from the Air Force Office of Scientific Research.


Image: Tara Winstead / Pexels

This post was originally published by the USC Dornsife College of Letters, Arts and Sciences and is republished here with permission.

Reviewed by Ayaz Khan.

Read next:

• New Research Challenges Idea That Humans Can Achieve True Multitasking

• It’s tempting to offload your thinking to AI. Cognitive science shows why that’s a bad idea

by External Contributor via Digital Information World

Why exposing young children to AI content could have irreversible consequences

Sarah Whitcombe-Dobbs, University of Canterbury

Artificial intelligence (AI) already affects many areas of daily life, including the lives of young children.

Image: Ron Sinda / Unsplash

Many families give screens to children younger than two, and AI-generated content is increasing on the popular YouTube Kids app, where it plays automatically.

Most parents are not able to monitor everything their child sees online. Some AI-generated content can be both frightening and attractive to young children, including violent and sexual content that uses engaging animals and characters.

Early childhood education centres are also using AI to support learning, particularly for children with developmental differences. This includes those who do not learn to speak easily or who have other communication problems related to autism or intellectual disability.

In the US, many parents report their children are using AI for school work. The encouragement for early childhood centres, schools and parents to use AI with children is based on short-term studies, but the long-term impacts are unknown.

The only way to know how AI may affect young children would be through well-designed longitudinal studies. But by the time robust evidence emerged, a whole generation would have grown up exposed – and if there are indeed harmful effects, these may be irreversible.

There are already some alarm bells ringing over AI’s potential impact.

New Zealand research shows high use of screens during early childhood is associated with poor language, social and relational functioning.

Many children love to use screens, and AI is likely to be similarly rewarding because AI models are endlessly patient, instantly responsive to the topics of a child's choosing, and do not seem to demand anything.

Human development during early childhood

Like all mammals, human infants are bound by biological processes and have evolved to develop in social groups in close physical connection with others. Everything we know about child development highlights the importance of face-to-face connection.

Children learn about themselves and the world through all their senses. They learn to communicate through “serve-and-return” interactions – responsive, back-and-forth exchanges between them and their caregiver. This includes physical touch, emotion and play. Collectively, these interactions help shape brain architecture.

Based on their experiences during the first few years of life, children form models, or templates, of how intimate relationships work. These relational templates endure throughout their lives and influence close relationships in adulthood.

Children also learn about emotional regulation, seeking and receiving comfort and conflict resolution during the preschool years. All the while, their brains are forming, with foundational structures that require good experiences to function well throughout life.

We do not yet know what the impact will be on children’s capacity for human relationships if they are exposed to AI while their physiological, neurological and emotional regulatory systems are developing. It is unclear how longer-term AI exposure may affect children’s understanding of other people and their development of empathy.

Normal social interactions in childhood include conflict, negotiation, resolution and play with other children. These interactions involve non-verbal communication, risk estimation, relational repair and decision making.

It’s unclear how instantly responsive and engaging AI will affect these aspects of childhood. It is possible that children experiencing many AI-mediated social interactions may find it more difficult to navigate real-world relationships, especially when there is conflict.

It is also possible that children will develop a preference for AI engagement over real-life engagement with family or friends.

Young children find it harder to distinguish fantasy from reality. This quality is delightful for adults and children alike, involving imaginary play, silliness and amusement. Yet AI-generated fantasy may be persuasive to an overwhelming degree, potentially leading to children being confused about reality and the consciousness of others.

Potential for both harm and help

If infants and children don’t have sufficient real-world experiences, their emerging cognitive capacities for detecting reality and interpreting sensory inputs may be affected.

There is much excitement about the potential for AI-assisted tools to aid children with disabilities in their development of social communication. This seems likely to have benefits such as earlier detection of neuro-developmental differences. There may also be risks if these interventions replace real-life interactions with other children and adults.

What will be the daily experiences for children with extra learning needs? Parents may be happy with AI-enhanced learning, but less happy if this is provided in lieu of a real teacher aide.

The introduction of AI seems inevitable and it is already affecting our children. We know that connection, touch, reciprocal and language-rich environments, and unstructured play are important during early childhood development.

To adopt AI into our children’s spaces without knowing the consequences is an experiment with outcomes that may not be reversible. Given the uncertainty, families should at least have the freedom to choose an AI-free environment for their children.

Sarah Whitcombe-Dobbs, Senior Lecturer in Child and Family Psychology, University of Canterbury

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Sarah Whitcombe-Dobbs is a member of the New Zealand Labour Party. She receives funding from the Lottery Grants Board and Oranga Tamariki. The University of Canterbury provides funding as a member of The Conversation NZ.


Reviewed by Asim BN.

Read next: Mobile Accounts for Nearly 60 Percent of Web Traffic


by External Contributor via Digital Information World

Thursday, March 12, 2026

AI doesn’t ‘see’ the way that you do, and that could be a problem when it categorizes objects and scenes

Arryn Robbins, University of Richmond; Eben W. Daggett, New Mexico State University, and Michael Hout, New Mexico State University

Image: Anya Chernykh / Unsplash

Even with no fur in frame, you can easily see that a photo of a hairless Sphynx cat depicts a cat. You wouldn’t mistake it for an elephant.

But many artificial intelligence vision systems would. Why? Because when AI systems learn to categorize objects, they often rely on visual cues – like surface texture or simple patterns in pixels. This tendency makes them vulnerable to getting confused by small changes that have little effect on human perception.

A vision system aligned more closely with human perception – one that perhaps emphasizes shape, for instance – might still mistake the cat for another similarly shaped mammal, like a tiger, but it is unlikely to suggest an elephant.

The kinds of mistakes an AI makes reveal how it organizes visual information, with potential limitations that become concerning in higher-stakes settings.

Stickers and graffiti on a stop sign could serve as an adversarial attack, confusing AI in autonomous vehicles. Image: rick / Flickr, CC BY

Imagine an autonomous vehicle approaching a vandalized stop sign. While a human driver recognizes the sign from its shape and context, an AI that relies on pixel patterns may misclassify it, pushing the altered sign out of the category “sign” altogether and into a different group of images that it identifies as similar, such as a billboard, advertisement or other roadside object.

Together, these problems point to a misalignment between how humans perceive the visual world and how AI represents it.

We are experts in visual perception, and we work at the intersection of human and machine perception. People organize visual input into objects, meaning and relationships shaped by experience and context. AI models don’t organize visual information the same way. This key difference explains why AI sometimes fails in surprising ways.

Seeing objects, not features

Imagine that in front of you is a small, opaque object with both straight and curved edges. But you don’t see those features; you just see your coffee mug.

Vision isn’t a camera, passively recording the world. Instead, your brain rapidly turns the light your eyes absorb into objects you recognize and understand, organizing experience into structured mental representations.

Researchers can understand how these representations are structured by examining how people judge similarity. Your coffee mug is not like your computer, but it’s similar to a glass of water despite differences in appearance. That judgment reflects how the mug is mentally represented: not just in terms of appearance, but also what the mug is used for and how it fits into everyday activities.

Importantly, the mental organization of representations is flexible. Which aspects of an object stand out change with context and goals. If packing a moving box, shape and size matter most, so your mug might be placed anywhere it fits. But when putting it away in a cupboard, it goes next to other drinkware. The mug hasn’t changed, only the way it is organized in your mind.

Human visual perception is adaptive, driven by meaning and tied to how we interact with the world.

Aligning AI with humans

AI systems, however, organize visual input in fundamentally different ways than people – not because they are machines, but because of how narrowly they are trained. When an AI is trained to categorize a cat or an elephant, it only needs to learn which visual patterns lead to the correct label, not how those animals relate to each other or fit into the broader world.

In contrast, humans learn within a broader context. When we learn what an elephant is, we weave that representation into the tapestry of everything else we have learned: animals, size, habitats and more. Because AI is graded only on label accuracy, it can rely on shortcuts that work in training but sometimes fail in the real world.

The issue of representational alignment refers to whether AI organizes information in ways that resemble how people do. It’s not to be confused with value alignment, which refers to the challenge of making sure AI systems pursue outcomes and goals that humans intend.

Because human learning embeds new information into a web of prior knowledge, the relationships between new and existing concepts can be studied and measured. This means that representational alignment may be a solvable problem and a step toward addressing broader alignment challenges.

One approach to representational alignment focuses on building AI systems that behave like humans on psychological tasks, allowing researchers to compare representations directly. For example, if people judge a cat as more similar to a dog than to an elephant, the goal is to build AI models that arrive at those same judgments.

One promising technique involves training AI on human similarity judgments collected in the lab. In these studies, human participants might be shown three images and asked which two objects are more similar; for example, whether a mug is more like a glass or a bowl. Including this data during training encourages AI systems to learn how objects relate to one another, producing representations that better reflect how people understand the world.
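As a rough sketch of what that could look like in practice, the snippet below adds a triplet-based alignment term to an image encoder's training, assuming a PyTorch setup; the encoder, the triplet data, and the margin value are illustrative placeholders, not the specific method used in the studies described.

```python
# Minimal sketch: nudging an image encoder toward human similarity judgments
# gathered with the triplet task described above ("which two are most alike?").
# Assumptions: PyTorch; `encoder` is any model mapping image tensors to
# embeddings; the margin and weighting are hypothetical placeholders.
import torch
import torch.nn.functional as F

def similarity_alignment_loss(encoder, anchor_img, similar_img, odd_one_out,
                              margin=0.2):
    """Pull the two images humans judged similar together in embedding space,
    and push the odd-one-out away by at least `margin`."""
    a = F.normalize(encoder(anchor_img), dim=-1)
    p = F.normalize(encoder(similar_img), dim=-1)
    n = F.normalize(encoder(odd_one_out), dim=-1)
    return F.triplet_margin_loss(a, p, n, margin=margin)

# In training, this term would typically be added to the usual objective,
# e.g. total = classification_loss + alpha * similarity_alignment_loss(...),
# so the model stays accurate while its representation drifts toward
# human-like similarity structure.
```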

Alignment beyond vision

Representational alignment matters beyond vision systems, and AI researchers are taking notice. As AI increasingly supports high-stakes decisions, differences between how machines and humans represent the world will have real consequences, even when an AI system appears highly accurate. For example, if an AI analyzing medical images learns to associate the source of an image or repeated image artifacts with disease rather than the real visual signs of the disease itself, that is obviously problematic.

AI doesn’t necessarily need to process information exactly the way people think, but training AI using principles drawn from human perception and cognition – such as similarity, context and relational structure – can lead to safer, more accurate and more ethical systems.

Arryn Robbins, Assistant Professor of Psychology, University of Richmond; Eben W. Daggett, Affiliated Faculty of Psychology, New Mexico State University, and Michael Hout, Associate Dean of Research and Professor of Psychology, New Mexico State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next: The End of the Landline Era: Mobile Phones Redefine Global Communication


by External Contributor via Digital Information World

Wednesday, March 11, 2026

The End of the Landline Era: Mobile Phones Redefine Global Communication

Landlines once defined modern communication, but their role is gradually disappearing. Worldwide data shows mobile subscriptions now vastly outnumber fixed-line connections, while in the United States the share of households with a landline has plunged over the past two decades.

Mobile Phones Connect Far More People Than Landlines Ever Did

By Tristan Gaudiaut - Data Journalist, Statista

March 10 marked the 150th anniversary of the first telephone call, the historic moment in 1876 when Alexander Graham Bell successfully transmitted the famous words, “Mr. Watson, come here, I want to see you.” Since then, telephony has undergone several technological revolutions, reshaping how people connect and communicate. The beginning of the 21st century, in particular, has brought sweeping change, with the decline of landlines and the rapid rise of mobile phones.

For most of the 20th century, landline telephones formed the backbone of global communications. However, the turn of the millennium marked the beginning of the decline phase for this technology. According to the International Telecommunication Union (via World Bank), in 1990, there were 9.8 landline subscriptions per 100 people worldwide, a figure that nearly doubled to 19.2 by 2006. Yet this dominance did not last. While landline subscriptions peaked in the mid-2000s, mobile subscriptions began to rise rapidly. From fewer than 10 subscriptions per 100 people before 2000, mobile penetration reached 50 per 100 people by 2007 and 100 per 100 people by 2017.

Today, the numbers tell a clear story. In 2025, there are 111.5 mobile subscriptions per 100 people worldwide, compared with just 9.9 landline subscriptions (a figure that has fallen back to roughly 1990 levels). Mobile phones have not only replaced landlines but have also connected far more people than fixed-line networks ever did. While significant disparities remain in terms of network technology and coverage, mobile phones have leapfrogged traditional landline infrastructure in many regions, particularly in developing countries, providing billions of people with access to the internet, financial services and important information.

This chart shows the number of mobile and landline phone subscriptions per 100 people worldwide from 1990 to 2025.

Source: Statista / ITU / World Bank
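For readers who want to reproduce the broad shape of that trend, here is a minimal matplotlib sketch using only the data points cited in this article; the full ITU/World Bank series would be needed for the actual chart, and is not reproduced here.

```python
# Sketch of the mobile-vs-landline crossover using only the figures cited
# above (subscriptions per 100 people); intermediate years are just the
# straight lines drawn between cited points, not real ITU data.
import matplotlib.pyplot as plt

landline = {1990: 9.8, 2006: 19.2, 2025: 9.9}          # rise, peak, fall back
mobile = {2000: 10, 2007: 50, 2017: 100, 2025: 111.5}  # ~10 around 2000 per
                                                        # the article, then
                                                        # cited milestones

plt.plot(list(landline), list(landline.values()), marker="o", label="Landline")
plt.plot(list(mobile), list(mobile.values()), marker="o", label="Mobile")
plt.xlabel("Year")
plt.ylabel("Subscriptions per 100 people")
plt.title("Mobile vs. landline subscriptions worldwide (cited points only)")
plt.legend()
plt.show()
```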

Landline Phones Are a Dying Breed

By Felix Richter - Data Journalist, Statista

As smartphones have become a constant companion for most people in the United States, landline phones are rapidly losing their relevance. In 2004, more than 90 percent of U.S. adults lived in households that had an operational landline phone - now it’s little more than 20 percent. That’s according to data provided by the Centers for Disease Control and Prevention, which has been tracking phone ownership in the U.S. as a by-product of its biannual National Health Interview Survey since 2004.

If the trend continues, and there’s little reason to believe it won’t, landline phones could soon become an endangered species, much like the VCR and other technological relics before them.

This chart shows the share of U.S. adults living in households with an operational landline phone since 2004.
Source: Statista / CDC National Health Interview Survey

Reviewed by Ayaz Khan.

Read next:

• Behind the feed: New research explores how social media algorithms shape our digital lives

• Social Media’s Annual Great Purge: Facebook and X Remove More Fake Accounts Than Their Active Users, TikTok Deletes Half Its Fake Accounts

by External Contributor via Digital Information World

Behind the feed: New research explores how social media algorithms shape our digital lives

By Lindsey Massimiani Pepe

New research from the University of Miami examines how platform algorithms govern the relationship between creators, consumers, and advertisers, and what that means for everyday users.

Image: Mariia Shalabaieva / Unsplash

Every time you scroll, like or share on a social media platform, an algorithm is watching, learning and deciding what you see next. But how many of us stop to think about what’s actually driving those decisions, and what’s at stake when we don’t?

That question sits at the center of new research co-authored by Robert W. Gregory, associate professor of business technology, and Ola Henfridsson, professor of business technology and associate dean, both at the University of Miami Patti and Allan Herbert Business School, and Mareike Möhlmann of Bentley University.

Published in the Journal of Management Information Systems, the study examines how platforms like YouTube use algorithms to police, recommend, and monetize content, and what that means for the millions of people who use them every day. The researchers introduce the concept of “algorithmic stakeholder governance” to describe how platforms use automated systems to manage and balance the competing interests of creators, consumers and advertisers.

Many people turn to social media because it feels more direct and personal than traditional media. In practice, though, every piece of content a user encounters has already been filtered, ranked and shaped by algorithms designed primarily to maximize engagement on the platform. “The algorithm is sitting in the middle of every human interaction on these platforms,” Gregory said. “At the end of the day, everything you see on social media is being shaped by it.”

The study examines the relationship among three groups that make platforms like YouTube function: creators who produce content, consumers who watch it and advertisers who fund it. Each group has its own interests, and those interests don’t always align. YouTube’s algorithms are constantly working to balance all three, deciding what gets promoted, what gets restricted and who gets paid, in a way that keeps the entire ecosystem running at scale. The research draws on 66 in-depth interviews with creators, consumers, advertisers and YouTube executives, as well as nearly 3,000 user forum posts and 35 official YouTube press releases.

What the research makes clear, however, is that algorithms alone can only go so far. These are sophisticated systems, but they learn and improve based on the input they receive. The feedback loop only gets stronger when users engage actively and deliberately.

Whether that human involvement actually helps depends entirely on how people choose to engage. Some engage passively, scrolling without much reflection and quietly conforming to the platform’s norms without realizing they are doing so. The researchers call this “unreflective endorsing,” and it matters because those passive behaviors feed directly back into the algorithm, reinforcing whatever patterns are already in place.

Users who engage more deliberately tell a different story. When people flag content, request human reviews of automated decisions or provide intentional feedback to the platform, they are actively shaping how the algorithm learns and evolves. For entrepreneurs and content creators, this is particularly relevant. “If you understand how the actions you choose on the platform are shaped by these algorithmic systems, you can shape these network effects to your advantage,” Gregory said. For example, a business owner who systematically manages their channel, reporting spam and understanding which content the algorithm rewards, is working with the system rather than being carried along by it.

Just as earlier generations gradually learned to evaluate different news sources and media institutions, users today can learn to do the same with social media. For Gregory, it is both a personal responsibility and a cultural moment still taking shape. “We have to grow up as a society and ask questions,” he said. The most important first step, he argues, is recognizing that what appears in a feed is the result of deliberate design, not a neutral window onto the world — and that understanding how these systems work is ultimately what gives users the agency to make more informed choices about where and how they participate online.

This work arrives at a moment of significant momentum for Miami Herbert’s Business Technology Department, which recently earned the No. 1 national ranking for research productivity in information systems from the Association of Information Systems Research Rankings Service, the first time the University of Miami has achieved that distinction. Gregory ranked No. 106 among information systems scholars worldwide, reflecting the department’s strength in producing work that is academically rigorous and relevant beyond the classroom.

The paper, “Algorithmic Stakeholder Governance on Content Platforms: A Lead Role Perspective,” is published in the Journal of Management Information Systems.

Note: This post was originally published on the University of Miami Patti and Allan Herbert Business School and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• It’s tempting to offload your thinking to AI. Cognitive science shows why that’s a bad idea

• 78% of Workers “Voluntold” to Take Extra Tasks, 53% Get No Raise, 41% Report Burnout, AI Integration Often Increases Workload


by External Contributor via Digital Information World