Friday, January 23, 2026

AI-induced cultural stagnation is no longer speculation – it’s already happening

Ahmed Elgammal, Rutgers University

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.
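
To make the setup concrete, here is a toy simulation of such a loop in Python. Everything in it is invented for illustration – the two stand-in “models” simply discard whatever they don’t recognize – and it is not the researchers’ actual pipeline:

```python
# Toy simulation of the closed text-image loop described above. The two
# "models" are invented stand-ins: the point is that each translation
# step keeps only the most generic, easily described elements.
GENERIC_THEMES = {"building", "city", "landscape", "room", "sky", "garden"}

def to_image(prompt: str) -> set:
    # Pretend the generated image only reliably depicts the words the
    # model "knows" best; everything else is lost in rendering.
    recognized = set(prompt.lower().split()) & GENERIC_THEMES
    return recognized or {"room"}

def to_caption(image: set) -> str:
    # Pretend the captioner describes only what is visually obvious.
    return "a scene with " + " and ".join(sorted(image))

prompt = ("The Prime Minister pored over strategy documents "
          "in a grand room of a city building")
for step in range(5):
    prompt = to_caption(to_image(prompt))
    print(step, prompt)
# Within a step or two the text settles on generic scenery words;
# the political drama never survives the round trip.
```

The real study used actual generative models, of course, but the fixed point is the same in spirit: whatever a translation step cannot express reliably simply disappears.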

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

Researchers find repeated text-image loops compress meaning, producing polished but empty visuals dubbed “visual elevator music.”
A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. The convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.

But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norm. Without such incentives, systems optimize for familiarity, because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.
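
One way to picture such an incentive is a scoring rule that trades fidelity off against distance from the average of past outputs. The sketch below is a hypothetical illustration of the principle – it is not the objective from my research or from the study:

```python
import numpy as np

def novelty_adjusted_score(candidate: np.ndarray,
                           corpus_mean: np.ndarray,
                           fidelity: float,
                           novelty_weight: float = 0.1) -> float:
    """Hypothetical scoring rule: reward quality (fidelity) plus a bonus
    for distance from the average of past outputs. With novelty_weight
    set to 0, the rule optimizes for familiarity alone."""
    distance_from_norm = float(np.linalg.norm(candidate - corpus_mean))
    return fidelity + novelty_weight * distance_from_norm

# Two candidates with equal fidelity: the one farther from the corpus
# average wins only if the novelty bonus is switched on.
mean = np.zeros(4)
safe, bold = np.array([0.1, 0.0, 0.0, 0.0]), np.array([3.0, 1.0, 0.0, 0.0])
print(novelty_adjusted_score(safe, mean, fidelity=0.9))  # ~0.91
print(novelty_adjusted_score(bold, mean, fidelity=0.9))  # ~1.22
```

Set the novelty weight to zero and the scoring collapses back to pure familiarity – which is roughly the default behavior the study observed.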

This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

Lost in translation

Whenever you write a caption for an image, details are lost. Likewise when you generate an image from text. And this happens whether the translation is performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors show that generative systems process meaning with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. That could mean rewarding deviation and supporting less mainstream forms of expression.

The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening

• Why AI has not led to mass unemployment


by External Contributor via Digital Information World

Thursday, January 22, 2026

Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening

Nine in 10 Americans use AI on their phone — but only 38% actually realize that they do.

Image: DIW-Aigen

A survey of 2,000 adults explored how AI is used every day, finding that many were unaware of its presence in their everyday lives — like weather alerts (42%), call screening (35%), autocorrect (34%), voice assistants (26%) and auto brightness (25%).

For many, AI-powered camera features like night mode (19%) and photo memory slideshows (20%) are essential to capturing and enjoying their photos.

Conducted by Talker Research for Samsung, the survey found that half of respondents (51%) don’t think they use AI on their phone, yet 86% reported using common AI tools daily when prompted with a list of features.

When it comes to their phone in general, one in six use their phone for at least 10 different career-related tasks in a day, and more than twice that percentage get a similar number of personal tasks done daily on their device (38%).

More than half of Americans primarily use their phone for tasks related to their job more than any other device (55%) — especially Gen Z (74%).

As a result, 47% think their phone is essential for their career, with younger respondents in millennial (65%) and Gen Z (62%) age ranges being the most likely to agree.

Similarly, six in 10 use their phone for staying organized more than they use other devices.

While the average person only uses half of the apps on their phone regularly, 57% are confident that they’d be able to describe what every feature on their phone does, whether or not they use it.

Even with their phones always on deck, there’s much to learn. A third of respondents discover new features on their phone at least once a month (34%).

When it comes to AI usage, some of the more specific uses include practical assistance. One respondent said that they use AI to get help with ideas, while another said they use it for organizing tasks better. A third of respondents use AI for job applications.

Some respondents are using AI for more creative purposes, like teaching them how to cook, helping them write lyrics or asking random questions for entertaining conversations.

Of those who didn’t initially think they used AI regularly, but then learned that they do, a quarter said learning that it’s already a part of their everyday life made their opinion of AI more favorable.

Americans are mostly interested in using AI for helping them save time on tasks (28%), while others want it to help make tasks easier (27%), provide instant solutions (23%) and to improve their skills or learn new things (22%).

As tech continues to evolve, the average respondent thinks we have about three years left of traditional phone use before AI changes how we interact with our devices; one in five think we have less than a year.

When asked about features that they’d like to see from their phone in the next decade, some want even more advanced AI capabilities like “health monitoring, detecting vital signs and providing personalized wellness insights and alerts” or “anticipating my thoughts and auto-inserting them without me having to type.”

Others have even greater aspirations for their phone, with one wanting it to drive their car and another hoping it can charge itself without needing electricity.

New Features People Want Their Phone To Be Capable Of In The Next Decade

  • “Knowing its owner by sense of touch and emotion and alerts to things we deem necessary.”
  • “Anticipating my thoughts and auto-inserting them without me having to type.”
  • “Understand your long-term preferences and goals.”
  • “I hope phones in the next 10 years will be able to fully project and interact with 3D holograms, allowing me to have virtual meetings, watch movies or even manipulate objects in 3D space without needing any extra devices.”
  • “I hope my phone will be able to last an entire week on a single charge while staying just as fast and powerful.”
  • “I’m hoping my phone will have advanced AI-powered health monitoring, detecting vital signs and providing personalized wellness insights and alerts.”
  • “I hope my phone will be capable of real time language translation during phone calls within the next 10 years.”
  • “I’m imagining something like: my phone can listen contextually to conversations (with privacy safeguards, of course) and instantly give me helpful suggestions.”
  • “To charge without needing electricity.”
  • “To take control of my finances and monthly bill paying.”
  • “Use eye controls to control the movement of the screen.”
  • “Calling for help in certain emergencies with a certain safe word.”
  • “Drive my car.”
Originally published on Talkerresearch.

Read next: Why AI has not led to mass unemployment
by External Contributor via Digital Information World

Wednesday, January 21, 2026

Why AI has not led to mass unemployment

Renaud Foucart, Lancaster University
Image: DIW-Aigen

People have become used to living with AI fairly quickly. ChatGPT is barely three years old, but has changed the way many of us communicate or deal with large amounts of information.

It has also led to serious concerns about jobs. For if machines become better than people at reading complex legal texts, translating languages or presenting arguments, won’t those old-fashioned human employees become irrelevant? Surely mass unemployment is on the horizon?

Yet, when we look at the big numbers of the economy, this is not what’s happening.

Unemployment in the EU is at a historic low of around 6%, half the level of ten years ago. In the UK, it is even lower, at 5.1%, roughly the level of the booming early 2000s, and it is lower still (4.4%) in the US.

The reason why there are still so many jobs is that while technology does make some human enterprise obsolete, it also creates new kinds of work to be done.

It’s happened before. In 1800 for example, around a third of British workers were farmers. Now the proportion working in agriculture is around 1%.

The automation of agriculture allowed the country to be a leader in the industrial revolution.

Or more recently, after the first ATM in the world was unveiled by Barclays in London in 1967, there were fears that staff at high street bank branches would disappear.

The opposite turned out to be the case. In the US, over the 30-year period of ATM growth, the number of bank tellers actually increased by 10%. ATMs made it cheaper to open bank branches (because they needed fewer tellers) and more communities gained access to financial services.

Only now, with a bank on every phone, is the number of high street bank staff in steep decline.

An imposition?

But yes, AI will take away some jobs. A third of Americans worry they will lose theirs to AI, and many of them will be right.

But since the industrial revolution, the world has seen a steady flow of innovations, sustaining unprecedented exponential economic growth.

AI, like the computer, the internet, the railways, or electric appliances, is a slow revolution. It will gradually change habits, but in doing so, provide opportunities for new businesses to emerge.

And just as there has been no immediate AI boom when it comes to economic growth, there is no immediate shift in employment. What we see instead are largely firms using AI as an excuse for standard job cutting exercises. This then leads to a different question about how AI will change how meaningful our jobs are and how much money we earn.

With technology, it can go either way.

Bank tellers became more valuable with the arrival of ATMs because instead of just counting money, they could offer advice. And in 2016, Geoff Hinton, a major figure in the development of AI, recommended that the world “should stop training radiologists” because robots were getting better than humans at analysing images.

Ten years later, demand for radiologists in the US is at a record high. Using AI to analyse images has made the job more valuable, not less, because radiologists can treat more patients (most of whom probably want to deal with a human).

So as a worker, what you want to find is a job where the machines make you more productive – not one where you become a servant to the machines.

Any inequality?

Another question raised by AI is whether it will reduce or increase the inequality between workers.

At first, many thought that allowing everyone to access an AI assistant with skills in processing information or clear communication would decrease earning inequality. But other recent research found the opposite, with highly skilled entrepreneurs gaining the most from having access to AI support.

One reason for this is that taking advice is itself a skill. In my own research with colleagues, we found that giving chess players top-quality advice does little to close the gap between the best and the worst – because lower-ability players were less likely to follow high-quality advice.

And perhaps that’s the biggest risk AI brings. That some people benefit from it much more than others.

In that situation, there might be one group that uses AI to manage their everyday lives but finds itself stuck in low-productivity jobs with no prospect of a decent salary – and another, smaller group of privileged, well-educated workers who thrive by controlling the machines and the wealth they create.

Every technological revolution in history has made the world richer, healthier and more comfortable. But transitions are always hard. What matters next is how societies can help everyone to be the boss of the machines – not their servants.

Renaud Foucart, Senior Lecturer in Economics, Lancaster University Management School, Lancaster University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• What air pollution does to the human body

• Want to know what the future holds for your brand?

• WhatsApp Develops Group Calling for Web and Releases iOS Update With Clearer Link Previews


by External Contributor via Digital Information World

Want to know what the future holds for your brand?

By Zeke Hughes, Managing Director – QuestBrand, The Harris Poll

Brand Momentum is the one metric that’s the clearest indicator of your brand’s future success. This is why and how to use it.

For the past decade, marketers have looked to the usual suspects of brand equity, customer familiarity, consideration, and awareness to give us a clear picture of a brand’s health. While these are still solid metrics that tell you where your brand is positioned in the market, to fully understand where it’s headed, you need to track its momentum.

Why is tracking Brand Momentum so important?

Brand Momentum is critical because it tells you in real-time if customers find your brand relevant, on the rise, stagnating, or slipping. According to our HarrisQuest research, this is one of your earliest indicators of brand health. It tells you what’s working right now, where you can afford to double down to accelerate growth, or where you need to course correct. It predicts not only loyalty, but revenue, share of wallet, and even market valuation.

How is Brand Momentum different from Brand Equity?

Brand Equity tells you how durable and resilient your reputation is. There’s no denying it is a critical metric to track. But your Brand Momentum score tells you if your reputation is losing or gaining energy. This distinction is important as your brand may have high awareness and strong legacy equity, while quietly losing relevance.

“At its core Brand Momentum answers a simple question: based on what people have seen, read, or heard, is this brand on the rise, holding steady, or slipping?”

Because Brand Momentum is rooted in lived experience – media coverage, product launches, cultural moments, social conversation – it tends to respond faster than lagging indicators like purchase intent or loyalty.

Why does Brand Momentum matter now more than ever before?

Firstly, Brand Momentum has taken center stage because brand perception is now shaped in real time rather than by quarterly cycles: your brand is subject to the whims of social media, where sentiment can turn on a dime.

“Reports tell you what’s happened; Brand Momentum tells you what’s on the horizon.”

Secondly, younger audiences have shortened the brand-forgiveness window. Gen Z and Gen Alpha are reassessing brands constantly, rewarding those that seem authentic and aligned with their values and punishing those that don’t.

Brands that fall out of favor don’t just plateau; they fall behind. Brand decisions no longer live in the boardroom and industry papers; they spill out onto social media and into public conversation. Layoffs, supply issues or substandard service are amplified, with direct repercussions for momentum.

Brand Momentum is emotional before it’s rational

Across industries, the brands gaining momentum aren’t necessarily the cheapest, biggest or most innovative, though some are. The brands with the highest uptick in Brand Momentum are those that consistently trigger the right emotional responses for their category.

For example, in automotive, trust and dependability still dominate. As EVs continue to disrupt the market, brands that signal reliability are rewarded with momentum. Toyota has mastered the art of trust: its brand narrative has remained focused on reliability for decades, reinforced by engineering quality, low recall rates and long vehicle lifespans.



Images: The Harris Poll

In streaming, value and content relevance matter more than content quantity. Apple TV’s momentum with millennials in 2025 was driven by a clear shift from shows that were “prestige but niche” to shows that were “reliably worth paying for” – anchored in content consistency and cultural relevance. This reinforces the platform as one that consistently delivers quality. And for millennials, Brand Momentum builds when subscriptions feel justified month after month, not just for one tentpole release.

When it comes to e-commerce, where competition is fierce, momentum hinges on trust. Ultra-low pricing drives short-term attention, but momentum doesn’t follow if confidence in quality and safety isn’t strong enough.

This is why performance marketing alone rarely sustains momentum. Visibility without that emotional connection spikes sales but doesn’t build a trajectory.

How leaders should use Brand Momentum

Tracking Brand Momentum is only useful if it serves to change behavior. The most effective teams use it in three ways:

1. An early warning system

Sudden dips in momentum flag reputational risks before they become a full-blown crisis. Equally, sudden upticks flag what’s working and should be amplified.

2. A prioritization tool

By linking changes in momentum to specific actions – partnerships, campaigns, messaging, launches – teams can immediately see what’s landing and what isn’t.

3. A strategic compass

Momentum clarifies whether a brand is culturally aligned or drifting. It forces leadership to confront not just performance, but relevance. Good performance in one quarter shouldn’t be celebrated in isolation.

Reports tell you what’s happened; Brand Momentum tells you what’s on the horizon.

Want to understand how to fully utilize Brand Momentum? Download the HarrisQuest 2026 Guide to Brand Momentum Playbook.

Disclaimer: Views and opinions expressed are the author's own.

Read next: Remote Work Is Evolving: Researchers Reveal Key Benefits, Challenges and the Future Workplace

by Guest Contributor via Digital Information World

What air pollution does to the human body

Jenni Shearston, University of Colorado Boulder
Image: For illustrative purposes. Credit: Kristen Morith / Unsplash

I grew up in rural Colorado, deep in the mountains, and I can still remember the first time I visited Denver in the early 2000s. The city sits on the plain, skyscrapers rising and buildings extending far into the distance. Except, as we drove out of the mountains, I could barely see the city – the entire plain was covered in a brown, hazy cloud.

That brown, hazy cloud was mostly made of ozone, a lung-irritating gas that decreases lung function, causes inflammation and respiratory symptoms such as coughing, and can trigger asthma attacks.

Denver still has air pollution problems, due in part to its geography, which creates temperature inversions that can hold pollution near the ground. But since 1990, ozone has decreased 18% across the U.S., reducing the smog that choked many cities in the 1960s and 1970s. The concentration of tiny dustlike particles of air pollution called PM2.5 has also decreased, by 37% since 2000.

These decreases occurred largely because of one of the most successful public health policies ever implemented by the United States: the Clean Air Act, first passed in 1970. The Clean Air Act regulates air pollution emissions and authorizes the Environmental Protection Agency to set air quality standards for the nation.

For years, when the Environmental Protection Agency assessed the economic impact of new regulations, it weighed both the health costs for Americans and the compliance costs for businesses. The Trump administration is now planning to drop half of that calculation – the monetary health benefits of reducing both ozone and PM2.5 – when weighing the economic impact of regulating sources of air pollution.

I am an environmental epidemiologist, and one of the things I study is people’s exposure to air pollution and how it affects health. Measuring the impact of air quality policies – including quantifying how much money is saved in health care costs when people are exposed to less air pollution – is important because it helps policymakers determine if the benefits of a regulation are worth the costs.

What air pollution does to your body

Breathing in air pollution like ozone and PM2.5 harms nearly every major system in the human body.

It is particularly hard on the cardiovascular, respiratory and neurological systems. Numerous studies have found that PM2.5 exposure is associated with increased death from cardiovascular diseases like coronary heart disease. Even short-term exposure to either PM2.5 or ozone can increase hospitalizations for heart attacks and strokes.

In the respiratory system, PM2.5 exposure is associated with a 10% increased risk for respiratory diseases and symptoms such as wheezing and bronchitis in children. More recent evidence suggests that PM2.5 exposure can increase the risk of Alzheimer’s disease and other cognitive disorders. In addition, the International Agency for Research on Cancer has designated PM2.5 as a carcinogen, or cancer-causing agent.

Reducing air pollution has been proven to save lives, reduce health care costs and improve quality of life.

For example, a study led by scientists at the EPA estimated that a 39% nationwide decrease in airborne PM2.5 from 1990 to 2010 corresponded to a 54% drop in deaths from ischemic heart disease, chronic obstructive pulmonary disease, lung cancer and stroke.

In the same period, the study found that a 9% decline in ozone corresponded to a 13% drop in deaths from chronic respiratory disease. All of these illnesses are costly for the patients and the public, both in the treatment costs that raise insurance prices and the economic losses when people are too ill to work.

Yet another study found that nationally, an increase of 1 microgram per cubic meter in weekly PM2.5 exposure was associated with a 0.82% increase in asthma inhaler use. The authors calculated that decreasing PM2.5 by that amount would mean US$350 million in annual economic benefits.

Especially for people with lung diseases like asthma or sarcoidosis, increased PM2.5 concentrations can reduce quality of life by worsening lung function.

Uncertainty doesn’t mean ignore it

The process of calculating precisely how much money a policy saves involves uncertainty. That was one reason the Trump administration gave for not including health benefits in its 2026 cost-benefit analysis of a plan to change air pollution standards for power plant combustion turbines.

Uncertainty is something we all deal with on a daily basis. Think of the weather. Forecasts have varying degrees of accuracy. The high temperature might not get quite as high as the prediction, or might be a bit hotter. That is uncertainty.

The EPA wrote in a notice dated Jan. 9, 2026, that its historical practice of providing estimates of the monetized impact of reducing pollution leads the public to believe that the EPA has a clearer understanding of these monetary benefits than it actually does.

Therefore, the EPA wrote, the agency will stop estimating monetary benefits from reducing pollution until it is “confident enough in the modeling to properly monetize those impacts.”

This is like ignoring weather forecasts because they might not be perfect. Even though there is uncertainty, the estimate is still useful.

Estimates of the monetary costs and benefits of regulating pollution sources are used to understand if the regulation is worth its cost. Without considering the health costs and benefits, it may be easier for infrastructure that emits high levels of air pollution to be built and operated.

What the evidence shows

Several studies have shown the impact of pollution sources like power plants on health.

For example, the retirement of coal and oil power plants has been linked with a reduction in preterm births among mothers living near those plants. Scientists studied 57,000 births in California and found the percentage of babies born preterm to mothers living within 3.1 miles (5 kilometers) of a coal- or oil-fueled power plant fell from 7% to 5.1% after the plant was retired.

Another study in the Louisville, Kentucky, area found that four coal-fired power plants either retiring or installing pollution-reduction technologies such as flue-gas desulfurization systems coincided with a drop in hospitalizations and emergency department visits for asthma and reduced asthma-medication use.

Reducing preterm birth, hospitalizations, emergency department visits and medication use saves money by preventing expensive health care for treatment, hospital stays and medications. For example, researchers estimated that for children born in 2016, the lifetime cost of preterm birth, including medical and delivery care, special education interventions and lost productivity due to disability in adulthood, was in excess of $25.2 billion.

Circling back to Denver: The region is a fast-growing data center hub, and utilities are expecting power demand to skyrocket over the next 15 years. That means more power plants will be needed, and with the EPA’s changes, they may be held to lower pollution standards.

Jenni Shearston, Assistant Professor of Integrative Physiology, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: 

• Why people believe misinformation even when they’re told the facts

• WhatsApp Develops Group Calling for Web and Releases iOS Update With Clearer Link Previews


by External Contributor via Digital Information World

Tuesday, January 20, 2026

WhatsApp Develops Group Calling for Web and Releases iOS Update With Clearer Link Previews

According to WABetaInfo (WBI), WhatsApp is developing voice and video calling for group chats in its web client. The feature remains under development and is not yet available for beta testing, but it would allow users to place group calls directly from WhatsApp Web. This would bring the web client closer to the experience offered by the mobile and desktop apps, letting more users rely on WhatsApp Web for conference calls and other work-related communication. Currently, when users try to make a video call, WhatsApp Web prompts them to download the app; this may change once the feature rolls out more widely.

Image: Wabetainfo blog Jan 19, 2026.

WABetaInfo reported that WhatsApp is also exploring the ability to generate call links from group chats and to schedule voice or video calls with a name, description and approximate start and end times. Scheduled calls would create events shared with participants rather than launching automatically. Participant limits for group calls have not been officially confirmed, and no release date has been announced.

Separately, as per WBI, WhatsApp has released version 26.1.74 of its iOS app through the App Store. The update’s official changelog lists a feature that displays clearer link previews in chats to make links easier to read. The clearer display applies only when a rich preview is generated and previews are not disabled. Availability may vary by user.

Note: This post was drafted with AI assistance and reviewed / fact-checked by a human editor.

Read next: Why people believe misinformation even when they’re told the facts
by Ayaz Khan via Digital Information World

Why people believe misinformation even when they’re told the facts

Kelly Fincham, University of Galway

Image: Alex Ware / Unsplash

When you spot false or misleading information online, or in a family group chat, how do you respond? For many people, their first impulse is to factcheck – reply with statistics, make a debunking post on social media or point people towards trustworthy sources.

Factchecking is seen as a go-to method for tackling the spread of false information. But it is notoriously difficult to correct misinformation.

Evidence shows readers trust journalists less when they debunk, rather than confirm, claims. Factchecking can also result in repeating the original lie to a whole new audience, amplifying its reach.

The work of media scholar Alice Marwick can help explain why factchecking often fails when used in isolation. Her research suggests that misinformation is not just a content problem, but an emotional and structural one.

She argues that it thrives through three mutually reinforcing pillars: the content of the message, the personal context of those sharing it, and the technological infrastructure that amplifies it.

1. The message

People find it cognitively easier to accept information than to reject it, which helps explain why misleading content spreads so readily.

Misinformation, whether in the form of a fake video or misleading headline, is problematic only when it finds a receptive audience willing to believe, endorse or share it. It does so by invoking what American sociologist Arlie Hochschild calls “deep stories”. These are emotionally resonant narratives that can explain people’s political beliefs.

The most influential misinformation or disinformation plays into existing beliefs, emotions and social identities, often reducing complex issues to familiar emotional narratives. For example, disinformation about migration might use tropes of “the dangerous outsider”, “the overwhelmed state” or “the undeserving newcomer”.

2. Personal context

When fabricated claims align with a person’s existing values, beliefs and ideologies, they can quickly harden into a kind of “knowledge”. This makes them difficult to debunk.

Marwick researched the spread of fake news during the 2016 US presidential election. One source described how her strongly conservative mother continued to share false stories about Hillary Clinton, even after she (the daughter) repeatedly debunked the claims.

The mother eventually said: “I don’t care if it’s false, I care that I hate Hillary Clinton, and I want everyone to know that!” This neatly encapsulates how sharing or posting misinformation can be an identity-signalling mechanism.

People share false claims to signal in-group allegiance, a phenomenon researchers describe as “identity-based motivation”. The value of sharing lies not in providing accurate information, but in serving as social currency that reinforces group identity and cohesion.

The increasing availability of AI-generated images will escalate the spread further. We know that people are willing to share images they know are fake when they believe the images carry an “emotional truth”. Visual content carries an inherent credibility and emotional force – “a picture is worth a thousand words” – that can override scepticism.

3. Technical structures

All of the above is supported by the technical structures of social media platforms, which are engineered to reward engagement. These platforms create revenue by capturing and selling users’ attention to advertisers. The longer and more intensively people engage with content, the more valuable that engagement becomes for advertisers and platform revenue.

Metrics such as time spent, likes, shares and comments are central to this business model. Recommendation algorithms are therefore explicitly optimised to maximise user engagement. Research shows that emotionally charged content – especially content that evokes anger, fear or outrage – generates significantly more engagement than neutral or positive content.
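
As a hypothetical illustration of that optimisation, an engagement-ranked feed can be reduced to a weighted score. The field names and weights below are invented for illustration; real platforms use far more complex, proprietary models:

```python
# Toy engagement ranking: posts that provoke reactions rise to the top.
posts = [
    {"id": "calm-explainer", "likes": 120, "shares": 5,  "comments": 10},
    {"id": "outrage-bait",   "likes": 90,  "shares": 80, "comments": 150},
]

def engagement_score(post: dict) -> int:
    # Shares and comments are weighted most heavily: they keep users
    # on-platform and spread the post to new audiences.
    return post["likes"] + 4 * post["shares"] + 6 * post["comments"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
# The emotionally charged post outranks the calm one despite fewer likes.
```

Under any weighting like this, content that provokes reactions wins distribution, which is exactly the dynamic the research on anger and outrage describes.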

While misinformation clearly thrives in this environment, the sharing function of messaging and social media apps enables it to spread further. In 2020, the BBC reported that a single message sent to a WhatsApp group of 20 people could ultimately reach more than 3 million people, if each member shared it with another 20 people and the process was repeated five times.
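
The arithmetic behind that figure is simple geometric growth. A quick sketch (the 20-recipient fan-out and five rounds are the BBC scenario’s idealised assumptions, not measured behavior):

```python
# Idealised forwarding cascade from the BBC scenario: a message reaches
# a 20-person group, and each sharing round multiplies the audience by 20.
group_size = 20
rounds = 5
reach = group_size ** rounds  # 20 * 20 * 20 * 20 * 20
print(f"{reach:,}")  # 3,200,000 -- the "more than 3 million" figure
```

In practice forwarding chains overlap and audiences repeat, so real reach is smaller, but the exponent is what makes a single forward so powerful.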

By prioritising content likely to be shared and making sharing effortless, every like, comment or forward feeds the system. The platforms themselves act as a multiplier, enabling misinformation to spread faster, farther and more persistently than it could offline.

Factchecking fails not because it is inherently flawed, but because it is often deployed as a short-term solution to the structural problem of misinformation.

Meaningfully addressing it therefore requires a response that addresses all three of these pillars. It must involve long-term changes to incentives and accountability for tech platforms and publishers. And it requires shifts in social norms and awareness of our own motivations for sharing information.

If we continue to treat misinformation as a simple contest between truth and lies, we will keep losing. Disinformation thrives not just on falsehoods, but on the social and structural conditions that make them meaningful to share.

Kelly Fincham, Programme director, BA Global Media, Lecturer media and communications, University of Galway

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: 

• Your voice gives away valuable personal information, so how do you keep that data safe?

• “Bad behaviour” begets “bad behaviour” in AI – Expert Reaction


by External Contributor via Digital Information World