Friday, February 20, 2026

A few weeks of X’s algorithm can make you more right-wing – and it doesn’t wear off quickly

Timothy Graham, Queensland University of Technology

A new study published today in Nature has found that X’s algorithm – the hidden system or “recipe” that governs which posts appear in your feed and in which order – shifts users’ political opinions in a more conservative direction.

Image: BoliviaInteligente / unsplash

The study, led by Germain Gauthier from Bocconi University in Italy, is a rare example of a real-world randomised experiment on a major social media platform. It builds on a growing body of research showing how these platforms can shape people’s political attitudes.

Two different algorithms

The researchers randomly assigned 4,965 active US-based X users to one of two groups.

The first group used X’s default “For You” feed. This is driven by an algorithm that selects and ranks posts it predicts users are most likely to engage with, including posts from accounts they don’t necessarily follow.

The second group used a chronological feed. This only shows posts from accounts users follow, displayed in the order they were posted. The experiment ran for seven weeks during 2023.
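To make the difference between the two feeds concrete, here is a minimal sketch of the two ranking approaches. It is purely illustrative: the post fields and the engagement score are hypothetical stand-ins, and X’s actual ranking system is far more complex and not publicly documented.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # hypothetical model score in [0, 1]
    followed_by_user: bool       # does the viewer follow the author?

def chronological_feed(posts):
    """Only posts from followed accounts, newest first."""
    followed = [p for p in posts if p.followed_by_user]
    return sorted(followed, key=lambda p: p.created_at, reverse=True)

def engagement_ranked_feed(posts):
    """All candidate posts, followed or not, ordered by predicted engagement."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

The key structural difference the study exploits is visible even in this toy version: the ranked feed can surface unfollowed accounts and is ordered by a predicted-engagement signal rather than by time.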

Users who switched from the chronological feed to the “For You” feed were 4.7 percentage points more likely to prioritise policy issues favoured by US Republicans (for example, crime, inflation and immigration). They were also more likely to view the criminal investigation into US President Donald Trump as unacceptable.

They also shifted in a more pro-Russia direction with regard to the war in Ukraine. For example, these users became 7.4 percentage points less likely to view Ukrainian President Volodymyr Zelenskyy positively, and scored slightly higher on a pro-Russian attitude index overall.

The researchers also examined how the algorithm produced these effects.

They found evidence that the algorithm increased the share of right-leaning content by 2.9 percentage points overall (and 2.5 points among political posts), compared with the chronological feed.

It also significantly reduced the share of posts from traditional news organisations’ accounts, while boosting posts from political activists.

One of the most concerning findings of the study relates to the longer-term effects of X’s algorithmic feed. The algorithm nudged users towards following more right-leaning accounts, and these new following patterns endured even after users switched back to the chronological feed.

In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.

One piece of a much bigger picture

This new study supports the findings of similar research.

For example, a study in 2022, before Elon Musk had bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than the left in six of the seven countries examined.

An experimental study from 2025 re-ranked X feeds to reduce exposure to content expressing antidemocratic attitudes and partisan animosity. The researchers found this shifted participants’ feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer” – a shift the authors argued would normally have taken about three years to occur organically in the general population.

My own research adds another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analysed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.

We found a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13 – the day of the assassination attempt on Trump. Views on Musk’s posts surged by 138%, retweets by 238% and likes by 186%, far outstripping the increases on other accounts.

After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was tilted thereafter towards right-leaning accounts – a trend that continued for the remainder of the time period we analysed in that study.

Not a niche product

This matters because we are not talking about a niche product.

X has more than 400 million users globally. It has become embedded as infrastructure – a key source of political and social communication. And once technical systems become infrastructure, they can become invisible – like background objects that we barely think about, but which shape society at its foundations and can be exploited under our noses.

Think of the overpass bridges Robert Moses designed in New York in the 1930s. These seemed like inert objects. But they were built so low that buses could not pass beneath them – a design that excluded people of colour, who relied on buses, from reaching recreation areas on Long Island.

Similarly, the design and governance of social media platforms has real consequences.

The point is that X’s algorithms are not neutral tools. They are an editorial force, shaping what people know, whom they pay attention to, who the outgroup is and what “we” should do about or to them – and, as this new study shows, what people come to believe.

The age of taking platform companies at their word about the design and effects of their own algorithms must come to an end. Governments around the world – including in Australia where the eSafety Commissioner has powers to drive “algorithmic transparency and accountability” and require that platforms report on how their algorithms contribute to or reduce harms – need to mandate genuine transparency over how these systems work.

When infrastructure becomes harmful or unsafe, nobody bats an eye when governments step in to protect us. The same needs to happen urgently for social media infrastructure.

Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Timothy Graham receives funding from the Australian Research Council (ARC) for the Discovery Project, 'Understanding and Combatting "Dark Political Communication"'.

Read next: Generative AI has seven distinct roles in combating misinformation


by External Contributor via Digital Information World

Thursday, February 19, 2026

Generative AI has seven distinct roles in combating misinformation

Reviewed by Ayaz Khan.

Generative AI can be used to combat misinformation. However, it can also exacerbate the problem by producing convincing manipulations that are difficult to detect and can quickly be copied and disseminated on a wide scale. In a new study, researchers have defined seven distinct roles that AI can play in the information environment and analysed each role in terms of its strengths, weaknesses, opportunities and risks.

“One important point is that generative AI has not just one but several functions in combating misinformation. The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and we need to create more effective policies,” says Thomas Nygren, Professor at Uppsala University, who conducted the study together with colleagues at the University of Cambridge, UK, and the University of Western Australia.

From fact-checking to influence – same capacity has double-edged effects

The study is an overview in which researchers from a range of scholarly disciplines have reviewed the latest research on how generative AI can be used in various parts of the information environment. These uses range from providing information and supporting fact-checking to influencing opinion and designing educational interventions, and the study considers the strengths, weaknesses, opportunities and risks associated with each use.

The researchers chose to work with a SWOT framework because it provides a more practical basis for decisions than general assertions that ‘AI is good’ or ‘AI is dangerous’. A system can be helpful in one role but also harmful in that same role. Analysing each role using SWOT can help decision-makers, schools and platforms match the right measures to the right risk.

AI can serve several functions

“The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple ‘solution’ but a technology that can serve several functions at the same time. We identified recurrent patterns in the way AI is used to obtain information, to detect and manage problems, to influence people, to support collaboration and learning, and to design interactive training environments. These functions were summarised in seven roles,” Nygren explains.

The seven roles that emerged from the researchers’ analysis were informer, guardian, persuader, integrator, collaborator, teacher and playmaker (see the fact box below). The point of the roles is that they can serve as a checklist: they help us see how each role can strengthen society’s resilience to misinformation, but also how each role entails specific vulnerabilities and risks. The researchers therefore analysed each role using a SWOT approach: what strengths and opportunities it embodies, but also what weaknesses and threats need to be managed.
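As a rough illustration of that checklist idea, the sketch below encodes a role and its SWOT fields as a simple data structure. The role names come from the study, but the structure and the example entries are my own illustrative assumptions, not anything the authors built.

```python
from dataclasses import dataclass, field

@dataclass
class RoleSWOT:
    """One of the seven roles, with its SWOT checklist."""
    name: str
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)
    threats: list[str] = field(default_factory=list)

# Illustrative entries only; see the fact box for the study's full descriptions.
informer = RoleSWOT(
    name="Informer",
    strengths=["summarises large volumes of information quickly"],
    weaknesses=["may hallucinate or omit sources"],
    opportunities=["adapts language and complexity to the reader"],
    threats=["reproduces biases present in training data"],
)
```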

“AI must be implemented responsibly”

“We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale. However, risks such as hallucinations, in other words, that AI comes out with ‘facts’ that are wrong, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly. Clear policies are therefore needed on the permissible use of AI.”

The researchers particularly underline the need for:

  • Regulations and clear frameworks for the permissible use of AI in sensitive information environments;
  • Transparency about AI-generated content and systemic limitations;
  • Human oversight where AI is used for decisions, moderation or advice;
  • AI literacy to strengthen the ability of users to evaluate and question AI answers.

“The analysis shows that generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation, but having said that, there is a risk that excessive use could be detrimental for the development of knowledge and make us lazy and ignorant and therefore more easily fooled. Consequently, with the rapid pace of developments, it’s important to constantly scrutinise the roles of AI as ‘teacher’ and ‘collaborator’, like the other five roles, with a critical and constructive eye,” Nygren emphasises.

Article: Nygren, T., Spearing, E. R., Fay, N., Vega, D., Hardwick, I. I., Roozenbeek, J., & Ecker, U. K. H. (2026). The seven roles of generative AI: Potential & pitfalls in combatting misinformation. Behavioral Science & Policy, 0(0). DOI 10.1177/23794607261417815.

For more information: Thomas Nygren, Professor of Education at the Department of Education, Uppsala University, thomas.nygren@edu.uu.se, +46-73-646 86 49

FACT BOX:

The seven roles of generative AI: potential and pitfalls (Nygren et al. 2026).

1) Informer

  • Strengths/opportunities: Can make complex information easier to understand, translate and adapt language, can offer a quick overview of large quantities of information.
  • Problems/risks: Can give incorrect answers (‘hallucinations’), oversimplify and reproduce training data biases without clearly disclosing sources.

2) Guardian

  • Strengths/opportunities: Can detect and flag suspect content on a large scale, identify coordinated campaigns and contribute to a swifter response to misinformation waves.
  • Problems/risks: Risk of false positives/negatives (irony, context, legitimate controversies), distortions in moderation, and lack of clarity concerning responsibility and rule of law.

3) Persuader

  • Strengths/opportunities: Can support correction of misconceptions through dialogue, refutation and personalised explanations; can be used in pro-social campaigns and in educational interventions.
  • Problems/risks: The same capacity can be used for manipulation, microtargeted influence and large-scale production of persuasive yet misleading messages – often quickly and cheaply.

4) Integrator

  • Strengths/opportunities: Can structure discussions, summarise arguments, clarify distinctions, and support deliberation and joint problem-solving.
  • Problems/risks: Can create false balance, normalise errors through ‘neutral synthesis’, or indirectly control problem formulation and interpretation.

5) Collaborator

  • Strengths/opportunities: Can assist in analysis, writing, information processing and idea development; can support critical review by generating alternatives, counterarguments and questions.
  • Problems/risks: Risk of overconfidence and cognitive outsourcing; users can fail to realise that the answer is based on uncertain assumptions and that the system lacks real understanding.

6) Teacher

  • Strengths/opportunities: Can give swift, personalised feedback and create training tasks at scale; can foster progression in source criticism and digital skills.
  • Problems/risks: Incorrect or biased answers can be disseminated as ‘study resources’; risk that teaching becomes less investigative if students/teachers uncritically accept AI-generated content.

7) Playmaker

  • Strengths/opportunities: Can support design of interactive, gamified teaching environments and simulations that train resilience to manipulation and misinformation.
  • Problems/risks: Risk of simplifying stereotypes, ethical and copyright problems, and that gaming mechanisms can reward the wrong type of behaviour if the design is not well considered.

Note: This post was originally published by Uppsala University and republished on Digital Information World (DIW) with permission. The university team confirmed to DIW via email that no AI tools were used in creating the text.

Image: Mikhail Nilov / Pexels

Read next:

• Research Shows How Companies Can Gain Advantage by Prioritizing Customer Privacy

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out
by Press Releases via Digital Information World

Parents Turn to STEM and Hands-On Play to Limit Daily Screen Hours

Reviewed by Ayaz Khan.

More than four in 10 of America’s young parents are struggling to bond with their kids, and the culprit is nearly inescapable: screen time.

The poll of 2,000 U.S. millennial and Gen Z parents found 42% of them feel disconnected from their children due to technology, with kids spending an average of four hours in front of screens on a typical day.

As a result, parents said they notice their kids are easily distracted (42%), get less physical activity (42%), can be irritable (34%), have trouble sleeping (30%) and disengage with people around them (30%).

Commissioned by Lowe’s and conducted by Talker Research, the study revealed over half of parents (54%) try to encourage less screen time for their kids by providing them with more hands-on activities and outlets, like playing with toys (68%), helping around the home (66%) and coloring (66%).

Other activities, like crafts (63%), reading (60%), building (44%) and STEM-based activities (42%) were also popular ways parents get their kids away from screens.

This can be harder in the winter season, as more than half (56%) of parents say screen time increases when temperatures drop or the weather turns bad.

Parents spend an average of 10 hours per week looking for non-screen activities for their kids, and wish more free options were available nearby.

Those wished-for activities include things families can do together (58%), outdoor activities (56%), DIY workshops (48%), creative arts and crafts (48%) and educational activities (39%).

For many parents, the inspiration to encourage hands-on activities away from screens comes from their own childhood.

Nearly half (46%) recalled frequently participating in DIY projects with their own parents growing up, and they recall feelings of happiness (58%), creativity (56%), satisfaction (47%) and confidence (40%) from those experiences.

With those fond memories in mind, seven in 10 have tried to recreate those activities with their own children.

Eighty-seven percent of parents believe doing DIY projects with their kids would help strengthen their bond, in addition to teaching patience (63%), expressing creativity (59%) and learning how to work better with others (56%).

Image: Eren Li / Pexels

This post was originally published on TalkerResearch.

Read next: Not all gigs are equal: Informal self-employment linked to lower pay, poorer health and instability
by External Contributor via Digital Information World

Wednesday, February 18, 2026

Global collaboration to limit air pollution flowing across borders could save millions of lives

This story is adapted from a version published by Cardiff University. Read the original version here.

Ambitious climate action to improve global air quality could save up to 1.32 million lives per year by 2040, according to a new study.

Image: Tarikul Raana / Pexels

Researchers from CU Boulder and Cardiff University in the United Kingdom have found that developing countries, especially, rely on international action to improve air quality, because much of their pollution comes from outside their borders.

The new study, published in Nature Communications, analyzed cross-border pollution “exchanges” for 168 countries and revealed that if countries do not collaborate effectively on climate policy, it could lead to greater health inequality for poorer nations that have less control over their own air quality.

The team’s work focuses on the impact of exposure to fine particulate matter, what scientists call “PM2.5,” which is the leading environmental risk factor for premature deaths globally.

“Some climate policies could inadvertently make air pollution inequalities worse, specifically for developing nations that might rely heavily on their neighbors for clean air,” said Daven Henze, senior author of the new study and professor at the Paul M. Rady Department of Mechanical Engineering at CU Boulder.

“Holistic climate policy should therefore evaluate how dependent a nation is on others’ emissions reductions, how mitigation scenarios reshape air-pollution flows across borders, and whether global efforts are helping or harming equity.”

Lead author Omar Nawaz at the Cardiff University School of Earth and Environmental Sciences said: “While we know climate action can benefit public health, most research has ignored how this affects the air pollution that travels across international borders and creates inequalities between countries.

“Our analysis shows how climate mitigation decisions made in wealthy nations directly affect the health of people in the Global South, particularly in Africa and Asia.”

The research team used advanced atmospheric modeling and NASA satellite data to simulate different future emissions scenarios for the year 2040. The researchers combined these simulations with health burden estimates to understand how countries could make an impact through climate policy.
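For readers curious what a health burden estimate involves, the sketch below shows a highly simplified version of the standard attributable-mortality calculation used in PM2.5 studies: a concentration–response function converts exposure into a relative risk, and the attributable fraction of baseline deaths follows from that. The log-linear risk function and all parameter values here are illustrative assumptions, not the functions or numbers used in this study.

```python
import math

def relative_risk(pm25, baseline_pm25=2.4, beta=0.006):
    """Simplified log-linear concentration-response function.

    beta and the counterfactual (clean-air) concentration are illustrative;
    real studies use fitted, nonlinear functions.
    """
    excess = max(pm25 - baseline_pm25, 0.0)
    return math.exp(beta * excess)

def attributable_deaths(pm25, population, baseline_mortality_rate):
    """Premature deaths attributable to PM2.5 exposure above the counterfactual."""
    rr = relative_risk(pm25)
    attributable_fraction = (rr - 1.0) / rr
    return population * baseline_mortality_rate * attributable_fraction

# Example: 10 million people, baseline mortality of 0.8% per year,
# exposed to an annual mean of 35 micrograms per cubic metre of PM2.5.
print(round(attributable_deaths(35, 10_000_000, 0.008)))
```

In the paper’s cross-border analysis, the exposure term for each country is itself split by where the pollution originated, which is what makes it possible to attribute health benefits to emissions reductions abroad.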

“We were surprised to find that although Asia sees the most total benefits from climate action owing to its large share of the global population, African countries are often the most reliant on external action, with the amount of health benefits they get from climate mitigation abroad increasing in fragmented future scenarios,” said Nawaz.

According to the researchers’ projections, the balance of pollution flowing across borders could shift, even if total global air pollution declines.

These insights could inform policymaking and global aid work that seeks to address climate change.

In a sustainable socioeconomic development scenario, for example, pollution flowing across the U.S.-Mexico border would substantially decrease. Mexico would contribute much more to the health benefits that come from this shift than the United States.

The team plans to do further research exploring how climate change itself alters the weather patterns that transport this pollution, as well as looking at other pollutant types like ozone and organic aerosols.

“Ozone is transported even further in the atmosphere than PM2.5, contributes to significant health burdens, and shares common emission sources with PM2.5. We thus have follow-up studies in the works to investigate the interplay between climate policies and long-range health co-benefits associated with both species simultaneously,” said Henze.

Note: This post was originally published by University of Colorado Boulder Today and republished on Digital Information World with permission.

Edited by Asim BN.

Read next: Is social media addictive? How it keeps you clicking and the harms it can cause
by External Contributor via Digital Information World

Is social media addictive? How it keeps you clicking and the harms it can cause

By Quynh Hoang, University of Leicester

Reviewed by Ayaz Khan

For years, big tech companies have placed the burden of managing screen time squarely on individuals and parents, operating on the assumption that capturing human attention is fair game.

Image: Rapha Wilde / unsplash

But the social media sands may slowly be shifting. In a test-case jury trial in Los Angeles, big tech companies stand accused of creating “addiction machines”. While TikTok and Snapchat have already settled with the 20-year-old plaintiff, Meta’s CEO, Mark Zuckerberg, is due to give evidence in the courtroom this week.

The European Commission recently issued a preliminary ruling against TikTok, stating that the app’s design – with features such as infinite scroll and autoplay – breaches the EU Digital Services Act. One industry expert told the BBC that the problem is “no longer just about toxic content, it’s about toxic design”.

Meta and other defendants have historically argued that their platforms are communication tools, not traps, and that “addiction” is a mischaracterisation of high engagement.

“I think it’s important to differentiate between clinical addiction and problematic use,” Instagram chief Adam Mosseri testified in the LA court. He noted that the field of psychology does not classify social media addiction as an official diagnosis.

Tech giants maintain that users and parents have the agency and tools to manage screen time. However, a growing body of academic research suggests features like infinite scrolling, autoplay and push notifications are engineered to override human self-control.

Video: CBS News.

A state of ‘automated attachment’

My research with colleagues on digital consumption behaviour also challenges the idea that excessive social media use is a failure of personal willpower. Through interviews with 32 self-identified excessive users and an analysis of online discussions dedicated to heavy digital use, we found that consumers frequently enter a state of “automated attachment”.

This is when connection to the device becomes purely reflexive, as conscious decision-making is effectively suspended by the platform’s design.

We found that the impulse to use these platforms sometimes occurs before the user is even fully conscious. One participant admitted: “I’m waking up, I’m not even totally conscious, and I’m already doing things on the device.”

Another described this loss of agency vividly: “I found myself mindlessly opening the [TikTok] app every time I felt even the tiniest bit bored … My thumb was reaching to its old spot on reflex, without a conscious thought.”

Social media proponents argue that “screen addiction” isn’t the same as substance abuse. However, new neurophysiological evidence suggests that frequent engagement with these algorithms alters dopamine pathways, fostering a dependency that is “analogous to substance addiction”.

Strategies that keep users engaged

The argument that users should simply exercise willpower also needs to be understood in the context of the sophisticated strategies platforms employ to keep users engaged. These include:

1. Removing stopping cues

Features like infinite scroll, autoplay and push notifications create a continuous flow of content. By eliminating natural end-points, the design effectively shifts users into autopilot mode, making stopping a viewing session more difficult.

2. Variable rewards

Similar to a slot machine, algorithms deliver intermittent, unpredictable rewards such as likes and personalised videos. This unpredictability triggers the dopamine system, creating a compulsive cycle of seeking and anticipation (a toy simulation of such a reward schedule appears after point 3 below).

3. Social pressure

Features such as notifications and time-limited story posts have been found to exploit psychological vulnerabilities, inducing anxiety that for many users can only be relieved by checking the app. Strategies employing “emotional steering” can take advantage of psychological vulnerabilities, such as people’s fear of missing out, to instil a sense of social obligation and guilt if they attempt to disconnect.
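The “variable rewards” mechanism in point 2 is easy to mimic in a few lines of code. The toy simulation below is not any platform’s actual code: each scroll has a fixed probability of surfacing a “rewarding” post, so rewards arrive at irregular, unpredictable intervals – the pattern behavioural research associates with compulsive checking.

```python
import random

def simulate_scrolling(num_scrolls=50, reward_probability=0.15, seed=42):
    """Toy variable-ratio schedule: each scroll may or may not deliver a reward.

    The reward probability is an arbitrary illustrative value; the point is
    that the gaps between rewards are unpredictable, unlike a fixed schedule.
    """
    rng = random.Random(seed)
    gaps, since_last_reward = [], 0
    for _ in range(num_scrolls):
        since_last_reward += 1
        if rng.random() < reward_probability:  # intermittent, unpredictable reward
            gaps.append(since_last_reward)
            since_last_reward = 0
    return gaps

print(simulate_scrolling())  # prints the irregular gaps (in scrolls) between rewards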

Vulnerability in children

The issue of social media addiction is of particular concern when it comes to children, whose impulse control mechanisms are still developing. The US trial’s plaintiff says she began using social media at the age of six, and that her early exposure to these platforms led to a spiral into addiction.

A growing body of research suggests that “variable reward schedules” are especially potent for developing minds, which exhibit a heightened sensitivity to rewards. Children lack the cognitive brakes to resist these dopamine loops because their emotional regulation and impulsivity controls are still developing.

Lawyers in the US trial have pointed to internal documents, known as “Project Myst”, which allegedly show that Meta knew parental controls were ineffective against these engagement loops. Meta’s attorney, Paul Schmidt, countered that the plaintiff’s struggles stemmed from pre-existing childhood trauma rather than platform design.

The company has long argued that it provides parents with “robust tools at their fingertips”, and that the primary issue is “behavioural” – because many parents fail to use them.

Our study heard from many adults (mainly in their 20s) who described the near-impossibility of controlling levels of use, despite their best efforts. If these adults cannot stop opening apps on reflex, expecting a child to exercise restraint with apps that affect human neurophysiology seems even more unrealistic.

Potential harms of overuse

The consequences of social media overuse can be significant. Our research and recent studies have identified a wide range of potential harms.

These include “psychological entrapment”. Participants in our study described a “feedback loop of doom and despair”. Users can turn to platforms to escape anxiety, only to find that the scrolling deepens their feelings of emptiness and isolation.

Excessive exposure to rapidly changing, highly stimulating content can fracture the user’s attention span, making it harder to focus on complex real-world tasks.

And many users describe feeling “defeated” by the technology. Social media’s erosion of autonomy can leave people unable to align their online actions – such as overlong sessions – with their intentions.

A ruling against social media companies in the LA court case, or enforced redesign of their apps in the EU, could have profound implications for the way these platforms are operated in future.

But while big tech companies have grown at dizzying rates over the past two decades, attempts to rein in their products on both sides of the Atlantic remain slow and painstaking. In this era of “use first, legislate later”, people all over the world, of all ages, are the laboratory mice.

Quynh Hoang, Lecturer in Marketing and Consumption, Department of Marketing and Strategy, University of Leicester

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: AI could rebalance power between people and the services they use


by External Contributor via Digital Information World

Tuesday, February 17, 2026

Survey Finds 62% of Americans Concerned About Personalized Pricing; 48% More Likely to Shop Where Opt-Out Is Offered

Reviewed by Asim BN.

Is the age of “surveillance pricing” upon us? Most Americans hope not, according to new research.

The concept of retailers potentially using AI to set individual pricing for products based on a user’s data or purchasing history has naturally prompted concerns over privacy and fairness.

Six in 10 (62%) Americans polled by Talker Research said they are either somewhat (33%) or very concerned (29%) about the prospect of having personalized pricing based on factors like their browsing habits, location or other data points.

Just 10% of the 2,000 people surveyed said they were unconcerned about the prospect that this may one day come into practice.

California’s attorney general is currently examining how businesses use data to individualize prices, while New York officials enacted a law last year requiring retailers to have a clear disclaimer if setting prices based on personal data, Forbes reports.

Introducing pricing models in this way may have very real implications.

If they discovered they were charged more for a product or service than someone else as a result of their personal data or purchase history being considered, two-thirds (66%) of Americans would stop shopping at that particular retailer, according to results.

One in six (17%) said they would continue to shop regardless and the same number (17%) were unsure as to how they’d react should they be charged more for something based on their personal information.

Is there an argument that such models could actually be fairer for consumers? Overall, respondents were more inclined to view personalized (or algorithmic) pricing as less fair (37%) than fixed pricing.

However, results were not unanimous, with 30% feeling it could actually be more fair and 33% feeling it’s about the same fairness either way.

Perhaps tellingly, it seems choice is key to Americans in the matter of personalized pricing. Close to half (48%) said they’d be more likely to shop at a retailer that allowed them to opt out of data-based pricing, even if it meant missing out on personalized discounts and deals.

Many are not interested either way, with 42% saying the ability to opt out makes no difference, while just 10% say the ability to opt out of personalized pricing would make them less likely to buy from the retailer.

How concerned or unconcerned are you about online retailers using your personal data (purchase history, browsing, location, etc.) to set different prices for different shoppers?

Very concerned – 29%
Somewhat concerned – 33%
Neither concerned nor unconcerned – 28%
Somewhat unconcerned – 6%
Very unconcerned – 4%

Image: MART PRODUCTION / Pexels

This post was originally published on Talker Research and is republished here on DIW in accordance with their republishing guidelines.

Read next: AI threatens to eat business software – and it could change the way we work
by External Contributor via Digital Information World

Monday, February 16, 2026

AI threatens to eat business software – and it could change the way we work

Michael J. Davern, The University of Melbourne and Ida Someh, The University of Queensland; Massachusetts Institute of Technology (MIT)

Image: Roberto Carlos Blanc Angulo/Pexels

In recent weeks, a range of large “software-as-a-service” companies, including Salesforce, ServiceNow and Oracle, have seen their share prices tumble.

Even if you’ve never used these companies’ software tools, there’s a good chance your employer has. These tools manage key data about customers, employees, suppliers and products, supporting everything from payroll and purchasing to customer service.

Now new “agentic” artificial intelligence (AI) tools for business are expected to reduce reliance on traditional software for everyday work. These include Anthropic’s Cowork, OpenAI’s Frontier and open-source agent platforms such as OpenClaw.

But just how important are these software-as-a-service companies now? How fast could AI replace them – and are the jobs of people who use the software safe?

The digital plumbing of the business world

Software‑as‑a‑service systems run in the cloud, reducing the need for in‑house hardware and IT staff. They also make it easier for businesses to scale as they grow.

Software-as-a-service vendors get a steady, recurring income as firms “rent” the software, usually paying per user (often called a “seat”).

And because these systems become deeply embedded in how these firms operate, switching providers can be costly and risky.

Sometimes firms are locked into using them for a decade or more.

Digital co-workers

Agentic AI systems act like digital co-workers or “bots”. Software bots or agents are not new. Robotic process automation is used in many firms to handle routine, rules-based tasks.

The more recent developments in agentic AI combine this automation with generative AI technology to pursue more complex goals.

This can include selecting tools, making decisions and completing multi-step tasks. These agents can replace human effort in everything from handling expense reports to managing social media and customer correspondence.
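To give a flavour of what “selecting tools and completing multi-step tasks” means in practice, here is a deliberately minimal agent-loop sketch. It uses a stubbed-out model and made-up tool names purely for illustration; it is not the architecture of any product mentioned in this article.

```python
# Minimal agent loop: a model (stubbed out here) repeatedly chooses the next
# tool to call; the program executes it and feeds the result back, until the
# model decides the multi-step task is complete.

def choose_next_step(goal, history):
    """Stand-in for a generative model's decision. Purely illustrative logic."""
    if not history:
        return ("fetch_expense_reports", {"month": "2026-01"})
    if len(history) == 1:
        reports = history[-1][1]
        return ("flag_out_of_policy", {"reports": reports, "limit": 500})
    return ("finish", {"summary": f"{len(history[-1][1])} report(s) flagged for review"})

def fetch_expense_reports(month):
    # Hypothetical data source; a real agent would query an internal system here.
    return [{"id": 1, "total": 1200}, {"id": 2, "total": 90}]

def flag_out_of_policy(reports, limit):
    return [r for r in reports if r["total"] > limit]

TOOLS = {"fetch_expense_reports": fetch_expense_reports,
         "flag_out_of_policy": flag_out_of_policy}

def run_agent(goal):
    history = []
    while True:
        tool, args = choose_next_step(goal, history)  # model picks the next tool
        if tool == "finish":
            return args["summary"]
        result = TOOLS[tool](**args)                  # program executes the tool
        history.append((tool, result))                # result goes back to the model

print(run_agent("Check January expense reports against policy"))
```

The loop structure, rather than any one tool, is what distinguishes agentic systems from earlier rules-based automation: the model decides what to do next based on what it has already seen.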

What AI can now do

Recent advances, however, are even more ambitious. These tools are reportedly now writing usable software code. Soaring productivity in software development has been attributed to the use of AI agents like Anthropic’s “Claude Code”. Anthropic’s Cowork tool extends this from coding to other knowledge work tasks.

In principle, a user describes a business problem in plain language. Then agentic AI delivers a code solution that works with existing organisational systems.

If this becomes reliable, AI agents will resemble junior software engineers and process designers. AI agents like Cowork expand this to other entry-level work.

These advances are what recently spooked the market (though many affected stocks have since recovered slightly). Time will tell how much of this fall is a temporary overreaction and how much reflects a real long-term shift.

How will it affect jobs and costs?

Since the arrival of OpenAI’s ChatGPT in November 2022, AI tools have raised deep questions about the future of work. Some predict many white-collar roles, including those of software engineers and lawyers, will be transformed or even replaced.

Agentic AI appears to accelerate this trend. It promises to let many knowledge workers build workflows and tools without knowing how to code.

Software-as-a-service providers will also feel pressure to change their pricing models. The traditional model of charging per human user may make less sense when much of the work is done by AI agents. Vendors may have to move to pricing based on actual usage or value created.

Hype, reality and limits

Several forces are likely to moderate or limit the pace of change.

First, the promised potential of AI has not yet been fully realised. For some tasks, using AI can even worsen performance. The biggest gains are still likely to be in routine work that can be readily automated, not work that requires complex judgement.

Where AI replaces, rather than augments, human labour is where work practices will change the most. The nearly 20% decline in junior software engineering jobs over three years highlights the effects of AI automation. As AI agents improve at higher-level reasoning, more senior roles will similarly be threatened.

Second, to benefit from AI, firms must invest in redesigning jobs, processes and control systems. We’ve long known that organisational change is slower and messier than technology change.

Third, we have to consider risks and regulation. Heavy reliance on AI can erode human knowledge and skills. Short-term efficiency gains could be offset by long-term loss of expertise and creativity.

Ironically, the loss of knowledge and expertise could make it harder for companies to assure AI systems comply with company policies and government regulations. The checks and balances that help an organisation run safely and honestly do not disappear when AI arrives. In many ways, they become more complex.

Technology is evolving quickly

What is clear is that significant change is already under way. Technology is evolving quickly. Work practices and business models are starting to adjust. Laws and social norms will change more slowly.

Software companies won’t disappear overnight, and neither will the jobs of people using that software. But agentic AI will change what they sell, how they charge and how visible they are to end users.

Michael J. Davern, Professor of Accounting & Business Information Systems, The University of Melbourne and Ida Someh, Associate Professor, The University of Queensland; Massachusetts Institute of Technology (MIT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Michael J. Davern has received funding from CPA Australia and the Chartered Institute of Management Accountants (CIMA) for research on the impacts of AI. Ida Someh receives research funding from the Australian Research Council and the software company SAP. Ida is a Research Fellow with the MIT Sloan Center for Information Systems Research.

Reviewed by Asim BN.

Read next: Your social media feed is built to agree with you. What if it didn’t?


by External Contributor via Digital Information World