Friday, January 23, 2026

AI-Assisted Coding Reaches 29% of New US Software Code

Edited by Asim BN. Reviewed by Ayaz Khan.

Generative AI is reshaping software development – and fast. A new study published in Science shows that AI-assisted coding is spreading rapidly, though unevenly: in the U.S., the share of new code relying on AI rose from 5% in 2022 to 29% in early 2025, compared with just 12% in China. AI usage is highest among less experienced programmers, but productivity gains go to seasoned developers.

The Study In A Nutshell

  • AI-assisted coding is spreading rapidly: In the U.S., the share of AI-generated code rose from 5% in 2022 to nearly 30% by the end of 2024
  • Large regional gaps: Adoption was highest in the U.S. (29%), followed by France (24%), Germany (23%) and India (20%); China (12%) and Russia (15%) lag behind (as of early 2025)
  • Measured productivity gains: In the aggregate, generative AI increased programmers’ productivity by an estimated 3.6%
  • Substantial economic impact: AI-assisted coding adds at least $23 billion per year to the U.S. economy
  • Unequal effects: Less experienced programmers use AI more often, but productivity gains accrue almost exclusively to experienced developers

The software industry is enormous. In the U.S. economy alone, firms spend an estimated $600 billion a year in wages on coding-related work. Every day, billions of lines of code keep the global economy running. How is AI changing this backbone of modern life?

In a study published in Science, a research team led by the Complexity Science Hub (CSH) found that by the end of 2024, around one-third of all newly written software functions – self-contained subroutines in a computer program – in the United States were already being created with the support of AI systems.

“We analyzed more than 30 million Python contributions from roughly 160,000 developers on GitHub, the world’s largest collaborative programming platform,” says Simone Daniotti of CSH and Utrecht University. GitHub records every step of coding – additions, edits, improvements – allowing researchers to track programming work across the globe in real time. Python is one of the most widely used programming languages in the world.

Regional Gaps Are Large

The team used a specially trained AI model to identify whether blocks of code were AI-generated, for instance via ChatGPT or GitHub Copilot.
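The paper’s own detector is not reproduced here, but the general recipe – training a binary classifier to separate human-written from AI-written code on stylistic cues – can be sketched in a few lines of Python. Everything in this snippet (the toy examples, the character n-gram features, the model choice) is an invented placeholder, not the study’s actual method:

```python
# Minimal sketch of AI-vs-human code detection, assuming a supervised
# classifier over stylistic text features. Toy data; not the study's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (0 = human-written, 1 = AI-generated).
snippets = [
    "def add(a, b):\n    return a + b",
    'def calculate_sum(first_number, second_number):\n'
    '    """Calculate the sum of two numbers."""\n'
    '    return first_number + second_number',
]
labels = [0, 1]

# Character n-grams pick up stylistic regularities (naming conventions,
# docstring habits, comment density) that tend to differ between the two.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(snippets, labels)

new_function = "def mul(x, y):\n    return x * y"
print(detector.predict_proba([new_function]))  # [[P(human), P(AI)]]
```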

“The results show extremely rapid diffusion,” explains Frank Neffke, who leads the Transforming Economies group at CSH. “In the U.S., AI-assisted coding jumped from around 5% in 2022 to nearly 30% in the last quarter of 2024.”

At the same time, the study found wide differences across countries. “While the share of AI-supported code is highest in the U.S. at 29%, Germany reaches 23% and France 24%, followed by India at 20%, which has been catching up fast,” he says. “Russia (15%) and China (12%) still lagged behind at the end of our study period.”

“It’s no surprise the U.S. leads – that’s where the leading LLMs come from. Users in China and Russia have faced barriers to accessing these models, blocked by their own governments or by the providers themselves, though VPN workarounds exist. Recent domestic Chinese breakthroughs like DeepSeek, released after our data ends in early 2025, suggest this gap may close quickly,” says Johannes Wachs, a faculty member at CSH and associate professor at Corvinus University of Budapest.

Global diffusion of AI-assisted coding and its impact | Left: The share of AI-written Python functions (2019–2024) grows rapidly, but countries differ in their adoption rates. The U.S. leads the early adoption of generative AI, followed by European nations such as France and Germany. From 2023 onward, India rapidly catches up, whereas adoption in China and Russia progresses more slowly. Right: Comparing usage rates for the same programmers at different points in time, generative AI adoption is associated with increased productivity (commits), breadth of functionality (library use) and exploration of new functionality (library entry), but only for senior developers; early-career developers do not derive any statistically significant benefits from using generative AI. © Complexity Science Hub

Experienced Developers Benefit Most

The study shows that the use of generative AI increased programmers’ productivity by 3.6% by the end of 2024. “That may sound modest, but at the scale of the global software industry it represents a sizeable gain,” says Neffke, who is also a professor at Interdisciplinary Transformation University Austria (IT:U).

The study finds no differences in AI usage between women and men. By contrast, experience levels matter: less experienced programmers use generative AI in 37% of their code, compared to just 27% for experienced programmers. Despite this, the productivity gains the study documents are driven exclusively by experienced users. “Beginners hardly benefit at all,” says Daniotti. Generative AI therefore does not automatically level the playing field; it can widen existing gaps.

In addition, experienced software developers experiment more with new libraries and unusual combinations of existing software tools. “This suggests that AI does not only accelerate routine tasks, but also speeds up learning, helping experienced programmers widen their capabilities and more easily venture into new domains of software development,” says Wachs.

Economic Gains

What does all of this mean for the economy? “The U.S. spends an estimated $637 billion to $1.06 trillion annually in wages on programming tasks, according to an analysis of about 900 different occupations,” says co-author Xiangnan Feng from CSH. If 29% of code is AI-assisted and productivity rises by 3.6%, that adds between $23 billion and $38 billion in value each year. “This is likely a conservative estimate,” Neffke points out. “The economic impact of generative AI in software development was already substantial at the end of 2024 and is likely to have increased further since our analysis.”
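For readers who want to retrace that arithmetic: the quoted range is simply the aggregate 3.6% productivity gain applied to the two wage-bill estimates. A minimal sketch, using only the figures quoted above:

```python
# Back-of-envelope check on the figures quoted above: apply the aggregate
# 3.6% productivity gain to the two estimates of the U.S. programming wage bill.
wage_bill_low = 637e9      # $637 billion per year
wage_bill_high = 1.06e12   # $1.06 trillion per year
productivity_gain = 0.036  # 3.6%

low = wage_bill_low * productivity_gain    # ~$22.9 billion
high = wage_bill_high * productivity_gain  # ~$38.2 billion
print(f"~${low / 1e9:.0f}B to ~${high / 1e9:.0f}B per year")
```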

“When even a car has essentially become a software product, we need to understand the hurdles to AI adoption – at the company, regional, and national levels – as quickly as possible.” - Frank Neffke, CSH Faculty.

Looking Ahead

Software development is undergoing profound transformation. AI is becoming central to digital infrastructure, boosting productivity and fostering innovation – but mainly for people who already have substantial work experience.

“For businesses, policymakers, and educational institutes, the key question is not whether AI will be used, but how to make its benefits accessible without reinforcing inequalities,” says Wachs. “When even a car has essentially become a software product, we need to understand the hurdles to AI adoption – at the company, regional, and national levels – as quickly as possible,” Neffke adds.

About the study

The study “Who is using AI to code? Global diffusion and impact of Generative AI” by Simone Daniotti, Johannes Wachs, Xiangnan Feng, and Frank Neffke has been published in Science (DOI: 10.1126/science.adz9311).

Note: This post was originally published on the Complexity Science Hub and is republished on DIW with permission. No AI was used in writing this post.


by External Contributor via Digital Information World

Lack of coordination is leaving modern slavery victims and survivors vulnerable, say experts

Written by: Joe Stafford - The University of Manchester. Reviewed by Asim BN.
Image: Nano banana

Researchers at The University of Manchester are calling for stronger, coordinated partnerships to tackle modern slavery and human trafficking, warning that gaps between organisations risk leaving victims and survivors without consistent protection and support.

Their appeal comes in a new review commissioned by Greater Manchester Combined Authority (GMCA), which examines how organisations across the city region work together to identify, safeguard and support people affected by modern slavery and human trafficking. The review focuses on partnerships involving local authorities, statutory services, law enforcement, housing providers and voluntary and community sector organisations.

The authors argue that tackling modern slavery depends on robust, long-term collaboration rather than ad hoc arrangements. While organisations across Greater Manchester have developed innovative partnership approaches, the review finds that these are not always embedded consistently across the system. Among the review’s key recommendations, the authors are calling for:

- Clearer strategic governance to strengthen modern slavery and human trafficking partnerships at a Greater Manchester-wide level.

- More consistent roles and responsibilities across organisations, so victims/survivors do not fall through gaps between services.

- Improved information-sharing and referral pathways, ensuring concerns are acted on quickly and safely.

- Sustainable funding and resources to support partnership working, rather than reliance on short-term arrangements.

- Stronger links between safeguarding, housing, immigration advice and criminal justice responses, reflecting the needs of victims.

The review suggests that where partnerships are well established, outcomes for victims are more likely to be improved. Such embedded collaboration enables earlier identification of exploitation, better safeguarding responses and coordinated support to help individuals recover and rebuild their lives. Strong partnerships also support disruption of criminal activity by improving intelligence-sharing and joint working.

However, the authors highlight challenges which can weaken partnership arrangements including variations in local practice, capacity pressures and funding uncertainty. Frontline professionals reported that without clear structures and shared accountability, collaboration often relies on personal relationships, making it fragile and difficult to sustain.

The researchers also note that victims and survivors of modern slavery often face overlapping vulnerabilities including insecure housing, mental ill-health and immigration insecurity. Without joined-up working across sectors, these complexities can delay support and increase the risk of re-exploitation.

The authors stress that the findings have national relevance, given Greater Manchester’s relatively cohesive modern slavery partnership approach. As awareness of modern slavery grows, public bodies across the UK face pressure to demonstrate good quality partnership responses. The review positions Greater Manchester as a potential leader, but cautions that this requires investment in governance, coordination and shared learning.

“This review shows that partnership working is not optional when tackling modern slavery and human trafficking - it is essential. The needs of victims and survivors cut across organisational boundaries, and responses must do the same. Our recommendations set out how partners across Greater Manchester can strengthen their approach and provide protection and support.” - Dr Jon Davies.

This article was originally published by The University of Manchester and is republished with permission.


by External Contributor via Digital Information World

Over 8 in 10 Americans Trust AI for Financial Advice, and It Has Experts Worried

Written by Rachel Perez. Reviewed by Ayaz Khan.

Unless you’ve been off-grid in the last five years, chances are you’ve watched the meteoric rise of artificial intelligence and its usage by humans unfold online right in front of you. What’s more, you’ve probably gotten curious, logged on to an AI model, and asked it a question or two (yes, we know you did, so don’t deny it). Ever since ChatGPT’s debut in late 2022, people have started to depend on artificial intelligence to answer their questions and help navigate life, from putting together the perfect pasta recipe to more serious topics, such as creating a will or making investment decisions. And as more companies like Google and Meta have introduced their own AI models, this trend of human reliance on AI has only increased.

This isn’t necessarily a negative, as AI models make research incredibly easy and efficient compared to sorting through the various articles a search algorithm throws at you to find an answer. However, while students might use AI to cheat on exams and lawyers have used it to (incorrectly) cite case law, the larger concern isn’t the use of artificial intelligence, but people’s trust and reliance on this technology.

Although you might just want to use AI to figure out why the cat keeps digging in the litter box, things start to get a little dicey if you take the information AI offers without fact-checking the info or accounting for its biases. And when people start to use AI as a financial advisor instead of a human one, they run the risk of receiving poor advice that could lead them to financial hardship in the future.

More than eight out of ten Americans now trust artificial intelligence to help guide their financial decisions, according to a BestMoney survey.




Pros of Using AI with Finances

Artificial intelligence excels when it comes to sorting information and relaying that information to the user in a different way. In terms of general financial questions, ChatGPT and other models do a good job of breaking down those confusing finance concepts and explaining them in layman’s terms. For example, someone may not understand how budgeting helps track monthly expenses and control spending. AI can explain it clearly using practical, easy-to-follow examples.

AI can also help with straightforward financial input. If someone with little spreadsheet experience wants to use one for their personal budget, AI can take the numbers and format them into a ready-to-go spreadsheet. What’s more, AI could explain some functions and formulas that they might find useful in the future, expanding their knowledge base as well as their ability to use the program for their budgeting needs.
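For a concrete flavor of that spreadsheet step, here is the kind of snippet a chatbot might hand back when asked to turn a few budget numbers into a file that opens in Excel or Google Sheets. The categories and amounts are invented for illustration:

```python
import csv

# Hypothetical example: turn a few budget numbers into a ready-to-open CSV.
rows = [
    ("Category", "Monthly amount ($)"),
    ("Rent", 1400),
    ("Groceries", 450),
    ("Transport", 200),
    ("Subscriptions", 60),
]

with open("budget.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)  # opens in Excel or Google Sheets
```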

For encyclopedic and general financial information, AI is as good as Google. But when it comes to the important questions, the ones with real consequences for your finances, you’re better off taking those to a pro.

Cons of Using AI with Finances

The biggest issue with AI is that it cannot generate something new, as the algorithm picks up information from the internet and sorts through it to answer questions. This may not matter for questions about recipes, but it can pose an issue for questions that require critical thinking. No matter how real AI might seem, it isn’t human. So while it can give you options and advice, it doesn’t have the discernment to tell you which options are good or bad or best for you.

Another pitfall of using AI with finances is that you may not know what sources the information came from. For example, if AI is telling you that it's a good time to invest in a particular company, you should wonder why and what sources it used to come up with that information. If it’s pulling positive information from the company’s website, the bias can skew AI’s output. Even worse, AI could hallucinate and give a totally made-up answer. So checking sources and information is key. Even slightly incorrect or inaccurate information can wreak havoc on your financial future if you make the wrong decision.

When it comes to your finances, you usually have to share a lot of personal information to get an answer tailored to you, and you’re relying on AI companies to remove that data, with no way of checking or guaranteeing that they do. Uploading your personal information can put you at risk; yet without it, it’s difficult to get the specific answers you need to make sound financial decisions that account for your unique circumstances.

Best Practices of Using AI with Finances

Artificial intelligence is not evil; it’s a tool. And if you use AI, it’s important to develop good habits and avoid becoming overly dependent on it for every answer, especially when it comes to important life decisions. Below are some dos and don’ts to keep in mind when using AI for help.

  1. Do double-check… and then triple-check. AI is known to hallucinate, give out false information, and may be drawing on biased sources. By double-checking answers, information, and sources used, you can avoid operating on incorrect information. Don’t just ask for information; also ask AI for the sources it used, so you can check whether they’re reputable and reliable.
  2. Don’t use AI for personal matters. ChatGPT is good at explaining concepts from a bird’s-eye view, but things get tricky when you ask it to apply that reasoning to your personal life. You can ask AI to explain how the stock market works, but don’t ask it to take the current climate of the stock market and upload your personal financial information to get back advice on what you should invest in.
  3. Do use a human backup, especially on the financial questions. There’s a reason why AI hasn’t taken over the financial advisor sector yet. Artificial intelligence can’t think critically, but humans can! Take advantage of human advice and compare it to what AI says, remembering that both can make mistakes, but only one has original thoughts and ideas.

Conclusion

People talk about financial literacy and media literacy, but it’s clear that the world is going to need to develop AI literacy skills as well. Artificial intelligence is an amazing invention that has many possibilities with just as many limitations. By building good AI literacy habits, people can continue to use AI as a tool for many questions they might have, even those about finances. By remembering to double-check information, keep questions general, and ask a human financial professional about questions with big consequences, AI and humans might just develop a healthy relationship with each other.

Read next:

• AI-induced cultural stagnation is no longer speculation − it’s already happening

• Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening


by External Contributor via Digital Information World

AI-induced cultural stagnation is no longer speculation − it’s already happening

Ahmed Elgammal, Rutgers University

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.
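In outline, the loop is simple. The sketch below shows its structure under stated assumptions: generate_image() and caption_image() are placeholders for whatever text-to-image and image-to-text models are plugged in, not the researchers’ actual systems:

```python
# Outline of the image -> caption -> image loop described above. The two
# model calls are placeholders, not the study's actual systems.

def generate_image(prompt: str):
    """Placeholder: call any text-to-image model and return an image."""
    raise NotImplementedError

def caption_image(image) -> str:
    """Placeholder: call any image-to-text model and return a caption."""
    raise NotImplementedError

def run_loop(prompt: str, steps: int) -> list[str]:
    """Iterate image -> caption -> image, recording each caption."""
    captions = [prompt]
    for _ in range(steps):
        image = generate_image(captions[-1])   # text -> image
        captions.append(caption_image(image))  # image -> text
    return captions  # the study found these converge to generic themes
```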

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the prompt: “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

Researchers find repeated text-image loops compress meaning, producing polished but empty visuals dubbed “visual elevator music.”
A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. Yet the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.

But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without such incentives, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.
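One toy way to picture such an incentive (a sketch of the general idea, not the author’s actual method) is to score candidate outputs on quality minus their similarity to everything produced so far, so deviation is explicitly rewarded:

```python
# Toy illustration of an "incentive to deviate": rank candidate outputs by
# quality minus similarity to past outputs. quality() and similarity() are
# stand-ins for whatever scoring functions a real system would use.

def pick_novel(candidates, history, quality, similarity, penalty=1.0):
    def score(candidate):
        nearest = max((similarity(candidate, past) for past in history),
                      default=0.0)
        return quality(candidate) - penalty * nearest
    return max(candidates, key=score)

# Example with trivial stand-in functions: longer strings score as "better",
# and similarity is 1.0 only for exact repeats.
best = pick_novel(
    candidates=["sunset", "sunset", "neon mushroom city"],
    history=["sunset"],
    quality=len,
    similarity=lambda a, b: 1.0 if a == b else 0.0,
)
print(best)  # "neon mushroom city"
```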

This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.

The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening

• Why AI has not led to mass unemployment


by External Contributor via Digital Information World

Thursday, January 22, 2026

Many Americans Unaware AI Powers Everyday Phone Features Like Weather Alerts and Call Screening

Nine in 10 Americans use AI on their phone — but only 38% actually realize that they do.

Image: DIW-Aigen

A survey of 2,000 adults explored how AI is used every day, finding that many were unaware of its presence in their everyday lives — like weather alerts (42%), call screening (35%), autocorrect (34%), voice assistants (26%) and auto brightness (25%).

For many, AI-powered camera features like night mode (19%) and photo memory slideshows (20%) are essential to capturing and enjoying their photos.

Conducted by Talker Research for Samsung, the survey found that half of respondents (51%) don’t think they use AI on their phone, yet 86% reported using common AI tools daily when prompted with a list of features.

When it comes to their phone in general, one in six use their phone for at least 10 different career-related tasks in a day, and more than twice that percentage get a similar number of personal tasks done daily on their device (38%).

More than half of Americans primarily use their phone for tasks related to their job more than any other device (55%) — especially Gen Z (74%).

As a result, 47% think their phone is essential for their career, with younger respondents in millennial (65%) and Gen Z (62%) age ranges being the most likely to agree.

Similarly, six in 10 use their phone for staying organized more than they use other devices.

While the average person only uses half of the apps on their phone regularly, 57% are confident that they’d be able to describe what every feature on their phone does, whether or not they use it.

Even with their phones always on deck, there’s much to learn. A third of respondents discover new features on their phone at least once a month (34%).

When it comes to AI usage, some of the more specific uses include practical assistance. One respondent said that they use AI to get help with ideas, while another said they use it for organizing tasks better. A third of respondents use AI for job applications.

Some respondents are using AI for more creative purposes, like teaching them how to cook, helping them write lyrics or asking random questions for entertaining conversations.

Of those who didn’t initially think they used AI regularly, but then learned that they do, a quarter said learning that it’s already a part of their everyday life made their opinion of AI more favorable.

Americans are mostly interested in using AI for helping them save time on tasks (28%), while others want it to help make tasks easier (27%), provide instant solutions (23%) and to improve their skills or learn new things (22%).

As tech continues to evolve, the average respondent thinks we have about three years left of traditional phone use before AI changes how we interact with our devices; one in five think we have less than a year.

When asked about features that they’d like to see from their phone in the next decade, some want even more advanced AI capabilities like “health monitoring, detecting vital signs and providing personalized wellness insights and alerts” or “anticipating my thoughts and auto-inserting them without me having to type.”

Others have even greater aspirations for their phone, with one wanting it to drive their car and another hoping it can charge itself without needing electricity.

New Features People Want Their Phone To Be Capable Of In The Next Decade

  • “Knowing its owner by sense of touch and emotion and alerts to things we deem necessary.”
  • “Anticipating my thoughts and auto-inserting them without me having to type.”
  • “Understand your long-term preferences and goals.”
  • “I hope phones in the next 10 years will be able to fully project and interact with 3D holograms, allowing me to have virtual meetings, watch movies or even manipulate objects in 3D space without needing any extra devices.”
  • “I hope my phone will be able to last an entire week on a single charge while staying just as fast and powerful.”
  • “I’m hoping my phone will have advanced AI-powered health monitoring, detecting vital signs and providing personalized wellness insights and alerts.”
  • “I hope my phone will be capable of real time language translation during phone calls within the next 10 years.”
  • “I’m imagining something like: my phone can listen contextually to conversations (with privacy safeguards, of course) and instantly give me helpful suggestions.”
  • “To charge without needing electricity.”
  • “To take control of my finances and monthly bill paying.”
  • “Use eye controls to control the movement of the screen.”
  • “Calling for help in certain emergencies with a certain safe word.”
  • “Drive my car.”
Originally published on Talker Research.

Read next: Why AI has not led to mass unemployment
by External Contributor via Digital Information World

Wednesday, January 21, 2026

Why AI has not led to mass unemployment

Renaud Foucart, Lancaster University
Image: DIW-Aigen

People have become used to living with AI fairly quickly. ChatGPT is barely three years old, but has changed the way many of us communicate or deal with large amounts of information.

It has also led to serious concerns about jobs. For if machines become better than people at reading complex legal texts, or translating languages, or presenting arguments, won’t those old-fashioned human employees become irrelevant? Surely mass unemployment is on the horizon?

Yet, when we look at the big numbers of the economy, this is not what’s happening.

Unemployment in the EU is at a historic low of around 6%, half the level of ten years ago. In the UK, it is even lower, at 5.1%, roughly the level of the booming early 2000s, and it is lower still (4.4%) in the US.

The reason why there are still so many jobs is that while technology does make some human enterprise obsolete, it also creates new kinds of work to be done.

It’s happened before. In 1800 for example, around a third of British workers were farmers. Now the proportion working in agriculture is around 1%.

The automation of agriculture allowed the country to be a leader in the industrial revolution.

Or more recently, after the first ATM in the world was unveiled by Barclays in London in 1967, there were fears that staff at high street bank branches would disappear.

The opposite turned out to be the case. In the US, over the 30-year period of ATM growth, the number of bank tellers actually increased by 10%. ATMs made it cheaper to open bank branches (because they needed fewer tellers) and more communities gained access to financial services.

Only now, with a bank on every phone, is the number of high street bank staff in steep decline.

An imposition?

But yes, AI will take away some jobs. A third of Americans worry they will lose theirs to AI, and many of them will be right.

But since the industrial revolution, the world has seen a steady flow of innovations, sustaining unprecedented exponential economic growth.

AI, like the computer, the internet, the railways, or electric appliances, is a slow revolution. It will gradually change habits, but in doing so, provide opportunities for new businesses to emerge.

And just as there has been no immediate AI boom when it comes to economic growth, there is no immediate shift in employment. What we see instead are largely firms using AI as an excuse for standard job cutting exercises. This then leads to a different question about how AI will change how meaningful our jobs are and how much money we earn.

With technology, it can go either way.

Bank tellers became more valuable with the arrival of ATMs because instead of just counting money, they could offer advice. And in 2016, Geoffrey Hinton, a major figure in the development of AI, recommended that the world “should stop training radiologists” because robots were getting better than humans at analysing images.

Ten years later, demand for radiologists in the US is at a record high. Using AI to analyse images has made the job more valuable, not less, because radiologists can treat more patients (most of whom probably want to deal with a human).

So as a worker, what you want to find is a job where the machines make you more productive – not one where you become a servant to the machines.

Any inequality?

Another question raised by AI is whether it will reduce or increase the inequality between workers.

At first, many thought that allowing everyone to access an AI assistant with skills in processing information or clear communication would decrease earning inequality. But other recent research found the opposite, with highly skilled entrepreneurs gaining the most from having access to AI support.

One reason for this is that taking advice is itself a skill. In my own research with colleagues, we found that giving chess players top-quality advice does little to close the gap between the best and the worst – because lower-ability players were less likely to follow high-quality advice.

And perhaps that’s the biggest risk AI brings. That some people benefit from it much more than others.

In that situation, there might be one group that uses AI to manage their everyday lives but finds itself stuck in low-productivity jobs with no prospect of a decent salary, and another, smaller group of privileged, well-educated workers who thrive by controlling the machines and the wealth they create.

Every technological revolution in history has made the world richer, healthier and more comfortable. But transitions are always hard. What matters next is how societies can help everyone to be the boss of the machines – not their servants.

Renaud Foucart, Senior Lecturer in Economics, Lancaster University Management School, Lancaster University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• What air pollution does to the human body

• Want to know what the future holds for your brand?

• WhatsApp Develops Group Calling for Web and Releases iOS Update With Clearer Link Previews


by External Contributor via Digital Information World

Want to know what the future holds for your brand?

By Zeke Hughes, Managing Director – QuestBrand, The Harris Poll

Brand Momentum is the one metric that’s the clearest indicator of your brand’s future success. This is why and how to use it.

For the past decade, marketers have looked to the usual suspects of brand equity, customer familiarity, consideration, and awareness to give us a clear picture of a brand’s health. While these are still solid metrics that tell you where your brand is positioned in the market, to fully understand where it’s headed, you need to track its momentum.

Why is tracking Brand Momentum so important?

Brand Momentum is critical because it tells you in real-time if customers find your brand relevant, on the rise, stagnating, or slipping. According to our HarrisQuest research, this is one of your earliest indicators of brand health. It tells you what’s working right now, where you can afford to double down to accelerate growth, or where you need to course correct. It predicts not only loyalty, but revenue, share of wallet, and even market valuation.

How is Brand Momentum different from Brand Equity?

Brand Equity tells you how durable and resilient your reputation is. There’s no denying it is a critical metric to track. But your Brand Momentum score tells you if your reputation is losing or gaining energy. This distinction is important as your brand may have high awareness and strong legacy equity, while quietly losing relevance.

“At its core, Brand Momentum answers a simple question: based on what people have seen, read, or heard, is this brand on the rise, holding steady, or slipping?”

Because Brand Momentum is rooted in lived experience – media coverage, product launches, cultural moments, social conversation – it tends to respond faster than lagging indicators like purchase intent or loyalty.

Why does Brand Momentum matter now more than ever before?

Firstly, Brand Momentum has taken center stage because brand perception is now being shaped in real time rather than by quarterly cycles; your brand is subject to the whims of social media, where sentiment can turn on a dime.

“Reports tell you what’s happened; Brand Momentum tells you what’s on the horizon.”

Secondly, younger audiences have shortened the brand-forgiveness window. Gen Z and Gen Alpha are reassessing brands constantly, rewarding those that seem authentic and aligned with their values and punishing those that don’t.

Brands that lose that goodwill don’t just plateau; they fall behind. Brand decisions no longer live in the boardroom and industry papers; they spill out onto social media and public conversation. Layoffs, supply issues, or substandard service are amplified and have external momentum repercussions.

Brand Momentum is emotional before it’s rational

Across industries, the brands gaining momentum aren’t necessarily those that are the cheapest, biggest, or most innovative, though in some cases they are. The brands that see the highest uptick in Brand Momentum are those that consistently trigger the right emotional responses for their category.

For example, in automotive, trust and dependability still dominate. As EVs continue to disrupt the market, brands that signal reliability are rewarded with momentum. Toyota has mastered the art of trust. Its brand narrative has remained focused on reliability for decades, reinforced by engineering quality, low recall rates, and long vehicle lifespans.



Images: The Harris Poll

In streaming, value and content relevance matter more than content quantity. Apple TV’s momentum with millennials in 2025 was driven by a clear shift from shows that were “prestige but niche” to shows that were “reliably worth paying for” – anchored in content consistency and cultural relevance. This reinforces the platform as one that consistently delivers quality. And, for millennials, Brand Momentum builds when subscriptions feel justified month after month, not just for one tentpole release.

When it comes to e-commerce, where competition is fierce, momentum hinges on trust. Ultra-low pricing drives short-term attention, but momentum doesn’t follow when confidence in quality and safety isn’t strong enough.

This is why performance marketing alone rarely sustains momentum. Visibility without that emotional connection spikes sales but doesn’t build a trajectory.

How leaders should use Brand Momentum

Tracking Brand Momentum is only useful if it serves to change behavior. The most effective teams use it in three ways:

1. An early warning system

Sudden drops or dips in momentum flag reputational risks before they become a full-blown crisis. Equally, sudden upticks flag what’s working and should be amplified.
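In practice, the early-warning use can be as mechanical as alerting on period-over-period drops in the tracked score; a toy sketch with invented weekly numbers:

```python
# Toy early-warning check: flag any week where the tracked momentum score
# falls by more than a chosen threshold. The scores here are invented.
scores = [62, 63, 61, 64, 55, 54]  # hypothetical weekly Brand Momentum scores
threshold = 5

for week, (prev, curr) in enumerate(zip(scores, scores[1:]), start=2):
    if prev - curr >= threshold:
        print(f"Week {week}: momentum fell {prev - curr} points")
```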

2. A prioritization tool

By linking momentum changes to specific actions – partnerships, campaigns, messaging, and launches – teams can immediately see what’s landing and what isn’t.

3. A strategic compass

Momentum clarifies whether a brand is culturally aligned or drifting. It forces leadership to confront not just performance, but relevance. Good performance in one quarter shouldn’t be celebrated in isolation.

Reports tell you what’s happened; Brand Momentum tells you what’s on the horizon.

Want to understand how to fully utilize Brand Momentum? Download the HarrisQuest 2026 Guide to Brand Momentum Playbook.

Disclaimer: Views and opinions expressed are the author's own.

Read next: Remote Work Is Evolving: Researchers Reveal Key Benefits, Challenges and the Future Workplace

by Guest Contributor via Digital Information World