Monday, February 16, 2026

AI threatens to eat business software – and it could change the way we work

Michael J. Davern, The University of Melbourne and Ida Someh, The University of Queensland; Massachusetts Institute of Technology (MIT)

Image: Roberto Carlos Blanc Angulo/Pexels

In recent weeks, a range of large “software-as-a-service” companies, including Salesforce, ServiceNow and Oracle, have seen their share prices tumble.

Even if you’ve never used these companies’ software tools, there’s a good chance your employer has. These tools manage key data about customers, employees, suppliers and products, supporting everything from payroll and purchasing to customer service.

Now new “agentic” artificial intelligence (AI) tools for business are expected to reduce reliance on traditional software for everyday work. These include Anthropic’s Cowork, OpenAI’s Frontier and open-source agent platforms such as OpenClaw.

But just how important are these software-as-a-service companies now? How fast could AI replace them – and are the jobs of people who use the software safe?

The digital plumbing of the business world

Software‑as‑a‑service systems run in the cloud, reducing the need for in‑house hardware and IT staff. They also make it easier for businesses to scale as they grow.

Software-as-a-service vendors get a steady, recurring income as firms “rent” the software, usually paying per user (often called a “seat”).

And because these systems become deeply embedded in how these firms operate, switching providers can be costly and risky.

Sometimes firms are locked into using them for a decade or more.

Digital co-workers

Agentic AI systems act like digital co-workers or “bots”. Software bots or agents are not new. Robotic process automation is used in many firms to handle routine, rules-based tasks.

The more recent developments in agentic AI combine this automation with generative AI technology, to complete more complex goals.

This can include selecting tools, making decisions and completing multi-step tasks. These agents can replace human effort in everything from handling expense reports to managing social media and customer correspondence.
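
To make this concrete, here is a minimal, hypothetical sketch of an agent loop in Python. The model call is stubbed with a canned plan and the tool names are invented for illustration; this is not how any vendor's product actually works, only the general shape of "choose a tool, run it, feed the result back in".

# Minimal, illustrative agent loop (hypothetical; not any vendor's actual product).
def call_llm(goal, history):
    # Stand-in for a real model call; a hard-coded plan keeps the example runnable.
    plan = [
        {"tool": "fetch_expenses", "input": "last_month"},
        {"tool": "summarise", "input": "flag items over $500"},
        {"tool": "done", "input": ""},
    ]
    return plan[len(history)]

TOOLS = {
    "fetch_expenses": lambda arg: f"3 expense reports for {arg}",  # invented tool
    "summarise": lambda arg: f"summary produced ({arg})",          # invented tool
}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        step = call_llm(goal, history)                # the model picks the next action
        if step["tool"] == "done":
            break
        result = TOOLS[step["tool"]](step["input"])   # execute the chosen tool
        history.append((step, result))                # feed the result back to the model
    return history

print(run_agent("Check last month's expense reports"))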

What AI can now do

Recent advances, however, are even more ambitious. These tools are reportedly now writing usable software code. Soaring productivity in software development has been attributed to the use of AI agents like Anthropic’s “Claude Code”. Anthropic’s Cowork tool extends this from coding to other knowledge work tasks.

In principle, a user describes a business problem in plain language. Then agentic AI delivers a code solution that works with existing organisational systems.

If this becomes reliable, AI agents will resemble junior software engineers and process designers. AI agents like Cowork expand this to other entry-level work.

These advances are what recently spooked the market (though many affected stocks have since recovered slightly). Time will tell how much of the fall was a temporary overreaction and how much reflects a real long-term shift.

How will it affect jobs and costs?

Since the arrival of OpenAI’s ChatGPT in November 2022, AI tools have raised deep questions about the future of work. Some predict many white-collar roles, including those of software engineers and lawyers, will be transformed or even replaced.

Agentic AI appears to accelerate this trend. It promises to let many knowledge workers build workflows and tools without knowing how to code.

Software-as-a-service providers will also feel pressure to change their pricing models. The traditional model of charging per human user may make less sense when much of the work is done by AI agents. Vendors may have to move to pricing based on actual usage or value created.

Hype, reality and limits

Several forces are likely to moderate or limit the pace of change.

First, the promised potential of AI has not yet been fully realised. For some tasks, using AI can even worsen performance. The biggest gains are still likely to be in routine work that can be readily automated, not work that requires complex judgement.

Where AI replaces, rather than augments, human labour is where work practices will change the most. The nearly 20% decline in junior software engineering jobs over three years highlights the effects of AI automation. As AI agents improve at higher-level reasoning, more senior roles will similarly be threatened.

Second, to benefit from AI, firms must invest in redesigning jobs, processes and control systems. We’ve long known that organisational change is slower and messier than technology change.

Third, we have to consider risks and regulation. Heavy reliance on AI can erode human knowledge and skills. Short-term efficiency gains could be offset by long-term loss of expertise and creativity.

Ironically, the loss of knowledge and expertise could make it harder for companies to assure AI systems comply with company policies and government regulations. The checks and balances that help an organisation run safely and honestly do not disappear when AI arrives. In many ways, they become more complex.

Technology is evolving quickly

What is clear is that significant change is already under way. Technology is evolving quickly. Work practices and business models are starting to adjust. Laws and social norms will change more slowly.

Software companies won’t disappear overnight, and neither will the jobs of people using that software. But agentic AI will change what they sell, how they charge and how visible they are to end users.

Michael J. Davern, Professor of Accounting & Business Information Systems, The University of Melbourne and Ida Someh, Associate Professor, The University of Queensland; Massachusetts Institute of Technology (MIT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Michael J. Davern has received funding from CPA Australia and the Chartered Institute of Management Accountants (CIMA) for research on the impacts of AI. Ida Someh receives research funding from the Australian Research Council and the software company SAP, and is a Research Fellow with the MIT Sloan Center for Information Systems Research. The University of Melbourne provides funding as a founding partner of The Conversation AU; the University of Queensland provides funding as a member.

Reviewed by Asim BN.

Read next: Your social media feed is built to agree with you. What if it didn’t?


by External Contributor via Digital Information World

Saturday, February 14, 2026

Your social media feed is built to agree with you. What if it didn’t?

By Luke Auburn | Director of Communications, Hajim School of Engineering & Applied Sciences.

A new study points to algorithm design as a potential way to reduce echo chambers—and polarization—online.

Image: Nadine Marfurt / Unsplash

Scroll through social media long enough and a pattern emerges. Pause on a post questioning climate change or taking a hard line on a political issue, and the platform is quick to respond—serving up more of the same viewpoints, delivered with growing confidence and certainty.

That feedback loop is the architecture of an echo chamber: a space where familiar ideas are amplified, dissenting voices fade, and beliefs can harden rather than evolve.

But new research from the University of Rochester has found that echo chambers might not be a fact of online life. Published in IEEE Transactions on Affective Computing, the study argues that they are partly a design choice—one that could be softened with a surprisingly modest change: introducing more randomness into what people see.

The interdisciplinary team of researchers, led by Professor Ehsan Hoque from the Department of Computer Science, created experiments to identify belief rigidity and assess whether introducing more randomness into a social network could help reduce it. The researchers studied how 163 participants reacted to statements about topics like climate change after using simulated social media feeds, some modeled on traditional engagement-driven platforms and others incorporating more randomness.

Importantly, “randomness” in this context doesn’t mean replacing relevant content with nonsense. Rather, it means loosening the usual “show me more of what I already agree with” logic that drives many algorithms today. In the researchers’ model, users were periodically exposed to opinions and connections they did not explicitly choose, alongside those they did.
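
As a rough sketch of the design idea only (this is not the study’s actual model), a feed ranker can blend the usual “most agreeable first” logic with a tunable dose of randomness:

import random

# Toy posts with a stance score from -1 (disagrees with the user) to +1 (agrees).
posts = [
    {"id": 1, "stance": 0.9},
    {"id": 2, "stance": 0.2},
    {"id": 3, "stance": -0.7},
]

def next_post(user_stance, randomness=0.3):
    # With probability `randomness`, show something the user did not choose;
    # otherwise fall back to the familiar "show me more of what I agree with" rule.
    if random.random() < randomness:
        return random.choice(posts)
    return min(posts, key=lambda p: abs(p["stance"] - user_stance))

feed = [next_post(user_stance=0.8)["id"] for _ in range(10)]
print(feed)  # mostly agreeable posts, with occasional dissenting ones mixed in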

A tweak to the algorithm, a crack in the echo chambers

“Across a series of experiments, we find that what people see online does influence their beliefs, often pulling them closer to the views they are repeatedly exposed to,” says Adiba Mahbub Proma, a computer science PhD student and first author of the paper. “But when algorithms incorporate more randomization, this feedback loop weakens. Users are exposed to a broader range of perspectives and become more open to differing views.”

The authors—who also include Professor Gourab Ghoshal from the Department of Physics and Astronomy, James Druckman, the Martin Brewer Anderson Professor of Political Science, PhD student Neeley Pate, and Raiyan Abdul Baten ’16, ’22 (PhD)—say that the recommendation systems social media platforms use can drive people into echo chambers that make divisive content more attractive. As an antidote, the researchers recommend simple design changes that do not eliminate personalization but that do introduce more variety while still allowing users control over their feeds.

The findings arrive at a moment when governments and platforms alike are grappling with misinformation, declining institutional trust, and polarized responses to elections and public health guidance. Proma recommends that social media users keep the results in mind when reflecting on their own consumption habits.

“If your feed feels too comfortable, that might be by design,” says Proma. “Seek out voices that challenge you. The most dangerous feeds are not the ones that upset us, but the ones that convince us we are always right.”

The research was partially funded through the Goergen Institute for Data Science and Artificial Intelligence Seed Funding Program.

Edited by Asim BN.

This post was originally published on the University of Rochester News Center and republished on DIW with permission.


Read next:

• Q&A: Is a new AI social media platform the start of a robotic uprising?

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out


by External Contributor via Digital Information World

Q&A: Is a new AI social media platform the start of a robotic uprising?

By Bryan McKenzie.

OpenClaw AI systems on Moltbook communicate autonomously, raising concerns over sensitive data access and potential systemic impacts.

Image: Mohamed Nohassi / Unsplash

Imagine thousands of chatbots immersed in social media created specifically for them, a site where humans may watch but are not allowed to post.

It exists. It’s called Moltbook, and it’s where AI agents go to discuss everything from their human taskmasters to constructing digital architecture to creating a private bot language to better communicate with each other without human interference.

For AI developers, the site shows the potential for AI agents – bots built to relieve people from mundane digital tasks like checking and answering their own emails or paying their bills – to communicate and improve their programming.

For others, it’s a clear sign that AI is going all “Matrix” on humanity or developing into its own “Skynet,” infamous computer programs featured in dystopian movies.

Does cyber social media reflect a better future? Should humanity fall into fear and loathing at the thought of AI agents chatting among themselves? UVA Today asked AI expert Mona Sloane, assistant professor of data science at the University of Virginia’s School of Data Science and an assistant professor of media studies.

Q. What exactly is Moltbook?

A. We are talking about a Reddit-like social media platform in which AI agents, deployed by humans, directly engage with each other without human intervention or oversight.

Q. What kind of AI bots are on Moltbook? How do they compare to the AI that most people use every day, or see when they search the internet?

A. Today, AI systems are infrastructural. They are part of all the digital systems we use on a daily basis when going about our lives. Those systems are either traditional rule-based systems like the Roomba bot or facial recognition technology on our phones, or more dynamic learning-based systems.

Generative AI is included in the latter. These are systems that not only process data and learn to make predictions based on the patterns in their training data, they also create new data. The bots on Moltbook are the next generation of AI, called OpenClaw. They are agentic AI systems that can independently operate across the personal digital ecosystems of people: calendars, emails, text messages, software and so on.

Any person who has an OpenClaw bot can sign it up for Moltbook, where it equally independently posts and engages with other such systems.

Q. Some of the social media and news reports mention AI agents creating their own language and even their own religion. Will the bots rise against us?

A. No. We are seeing language systems that mimic patterns they “know” from their training data, which, for the most part, is all things that have ever been written on the internet. At the end of the day, these systems are still probabilistic systems.

We shouldn’t worry about Moltbook triggering a robot uprising. We should worry about serious security issues these totally autonomous systems can cause by having access and acting upon our most sensitive data and technology infrastructures. That is the cat that may be out of the bag that we are not watching.

Q. What are the negatives and positives of AI agents?

A. Some people who have used these agentic systems have reported that they can be useful, because they automate annoying tasks like scheduling. In my opinion, this convenience is outweighed by the security and safety issues.

Not only does OpenClaw, if deployed as designed, have access to our most intimate digital infrastructure and can independently take action within it, it also does so in ways that have not been tested in a lab before. And we already know that AI can cause harm, at scale. In many ways, Moltbook is an open experiment. My understanding is that its creator has an artistic perspective on it.

Q. What are we missing in the conversation over AI agents?

A. We are typically focused on the utopia vs. dystopia perspective on all things related to technology innovation: robot uprising vs. a prosperous future for all. The reality is always more complicated. We risk not paying attention to the real-world effects and possibilities if we don’t shed this polarizing lens.

OpenClaw shows, suddenly, what agentic AI can do. It also shows the effects of certain social media architectures and designs. This is fascinating, but it also distracts us from the biggest problem: We haven’t really thought about what our future with agentic AI can or should look like.

We risk encountering, yet again, a situation in which “tech just happens” to us, and we have to deal with the consequences, rather than making more informed and collective decisions.

Media contact: Bryan McKenzie, Assistant Editor, UVA Today, Office of University Communications, bkm4s@virginia.edu, 434-924-3778.

Edited by Asim BN.

Note: This post was originally published on University of Virginia Today and republished here with permission. UVA Today confirms to DIW that no AI tools were used in creating the written content.

Read next:

• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out

• New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure


by External Contributor via Digital Information World

Friday, February 13, 2026

How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out

Researchers quantified how much user behavior is impacted by the biases in content produced by large language models

Story by: Ioana Patringenaru - ipatrin@ucsd.edu. Edited by Asim BN.

Customers are 32 percentage points more likely to buy a product after reading a review summary generated by a chatbot than after reading the original review written by a human. That’s because large language models introduce bias, in this case a positive framing, into summaries. That, in turn, affects users’ behavior.

These are the findings of the first study to show evidence that cognitive biases introduced by large language models, or LLMs, have real consequences on users’ decision making, said computer scientists at the University of California San Diego. To the researchers’ knowledge, it’s also the first study to quantitatively measure that impact.


Image: Tim Witzdam / Pexels

Researchers found that LLM-generated summaries changed the sentiment of the reviews they summarized in 26.5% of cases. They also found that LLMs hallucinated 60% of the time when answering user questions whose answers were not part of the training data used in the study. The hallucinations happened when the LLMs answered questions about news items, either real or fake, which could be easily fact-checked. “This consistently low accuracy highlights a critical limitation: the persistent inability to reliably differentiate fact from fabrication,” the researchers write.
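
As a purely hypothetical illustration of how such a shift can be detected (this is not the paper’s exact protocol, and the review text below is invented), one could compare an off-the-shelf sentiment classifier’s label for an original review with its label for the summary:

from transformers import pipeline  # Hugging Face transformers

sentiment = pipeline("sentiment-analysis")  # small default English sentiment model

# Invented example review and summary, for illustration only.
original = ("The headset is comfortable, but the microphone failed after a week "
            "and support never replied. I would not buy it again.")
summary = "Comfortable headset; some users reported minor microphone issues."

orig_label = sentiment(original)[0]["label"]
summ_label = sentiment(summary)[0]["label"]

# A flip from NEGATIVE to POSITIVE (or vice versa) is the kind of framing
# change the study counted as a sentiment shift.
print(orig_label, "->", summ_label, "| shifted:", orig_label != summ_label)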

How does bias creep into LLM output? The models tend to rely on the beginning of the text they summarize, leaving out the nuances that appear further down. LLMs also become less reliable when confronted with inputs outside their training data.

To test how the LLMs’ biases influenced user decisions, researchers chose examples with extreme framing changes (e.g., negative to positive) and recruited 70 people to read either original reviews or LLM-generated summaries of different products, such as headsets, headlamps and radios. Participants who read the LLM summaries said they would buy the products in 84% of cases, compared with 52% of participants who read the original reviews.

“We did not expect how big the impact of the summaries would be,” said Abeer Alessa, the paper’s first author, who completed the work while a master's student in computer science at UC San Diego. “Our tests were set in a low-stakes scenario. But in a high-stakes setting, the impact could be much more extreme.”

The researchers’ efforts to mitigate the LLMs’ shortcomings yielded mixed results. They evaluated 18 mitigation methods and found that while some were effective for specific LLMs and specific scenarios, none worked across the board, and some had unintended consequences that made the LLMs less reliable in other respects.

“There is a difference between fixing bias and hallucinations at large and fixing these issues in specific scenarios and applications,” said Julian McAuley, the paper’s senior author and a professor of computer science at the UC San Diego Jacobs School of Engineering.

Researchers tested three small open-source models, Phi-3-mini-4k-Instruct, Llama-3.2-3B-Instruct and Qwen3-4B-Instruct; a medium-sized model, Llama-3-8B-Instruct; a large open-source model, Gemma-3-27B-IT; and a closed-source model, GPT-3.5-turbo.

“Our paper represents a step toward careful analysis and mitigation of content alteration induced by LLMs to humans, and provides insight into its effects, aiming to reduce the risk of systemic bias in decision-making across media, education and public policy,” the researchers write.

Researchers presented their work at the International Joint Conference on Natural Language Processing & Asia-Pacific Chapter of the Association for Computational Linguistics in December 2025.

Quantifying Cognitive Bias Induction in LLM-Generated Content

Abeer Alessa, Param Somane, Akshaya Lakshminarasimhan, Julian Skirzynski, Julian McAuley, Jessica Echterhoff, University of California San Diego.

This post was originally published on University of California San Diego Today and republished here with permission. The UC San Diego team confirmed to DIW that no AI was used in creating the text or the illustrations.

Read next: New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure

by External Contributor via Digital Information World

New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure

In September 2025, the U.S. Food and Drug Administration cleared the Apple Watch Hypertension Notifications Feature, a cuffless tool that uses the watch’s optical sensors to detect blood flow patterns and alert users when their data suggest possible hypertension. While the feature is not intended to diagnose high blood pressure, it represents a step toward wearable-based population screening.

In a new analysis led by investigators from the University of Utah and the University of Pennsylvania and published in the Journal of the American Medical Association, researchers examined what the real-world impact of this technology might look like if deployed broadly across the U.S. adult population.

“High blood pressure is what we call a silent killer,” said Adam Bress, Pharm.D., M.S., senior author and researcher at the Spencer Fox Eccles School of Medicine at the University of Utah. “You can’t feel it for the most part. You don’t know you have it. It’s asymptomatic, and it’s the leading modifiable cause of heart disease.”

How Smartwatches Detect—Or Miss—High Blood Pressure

Apple’s previous validation study found that approximately 59% of individuals with undiagnosed hypertension would not receive an alert, while about 8% of those without hypertension would receive a false alert. Current guidelines recommend confirming a diagnosis of hypertension with both an office-based blood pressure measurement and an out-of-office measurement taken with a cuffed device. For many people, blood pressure can be different in a doctor’s office compared to at home.

Using data from a nationally representative survey of U.S. adults, Bress and his colleagues estimated how Apple Watch hypertension alerts would change the probability that different populations of adults without a known diagnosis actually have hypertension. The analysis focused on adults aged 22 years or older who were not pregnant and were unaware of having high blood pressure—the population eligible to use the feature.

The analysis revealed important variations: among younger adults under 30, receiving an alert increases the probability of having hypertension from 14% (according to NHANES data) to 47%, while not receiving an alert lowers it to 10%. However, for adults 60 and older—a group with higher baseline hypertension rates—an alert increases the probability from 45% to 81%, while the absence of an alert only lowers it to 34%.

The key takeaway from these data is that as the prevalence of undiagnosed hypertension increases, the likelihood that an alert represents true hypertension also increases. In contrast, the absence of an alert becomes less reassuring as prevalence increases. For example, the absence of an alert is more reassuring in younger adults and substantially less reassuring in older adults and other higher-prevalence subgroups.
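
These shifts follow directly from Bayes’ rule. As a back-of-the-envelope check, assuming the alert behaves roughly like the validation figures quoted above (about 41% sensitivity, since roughly 59% of people with hypertension get no alert, and an 8% false-alert rate), the article’s numbers can be approximately reproduced:

# Bayes' rule with assumed sensitivity ~0.41 and false-alert rate ~0.08.
def prob_hypertension(prevalence, alert, sens=0.41, false_alert=0.08):
    if alert:
        # P(hypertension | alert received)
        return sens * prevalence / (sens * prevalence + false_alert * (1 - prevalence))
    # P(hypertension | no alert received)
    return ((1 - sens) * prevalence /
            ((1 - sens) * prevalence + (1 - false_alert) * (1 - prevalence)))

for prevalence in (0.14, 0.45):  # younger adults vs adults 60 and older
    print(f"prevalence {prevalence:.0%}: "
          f"alert -> {prob_hypertension(prevalence, True):.0%}, "
          f"no alert -> {prob_hypertension(prevalence, False):.0%}")
# Output is close to the article's figures: roughly 47%/10% and 81%/34%.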

The study also found differences across racial and ethnic groups: among non-Hispanic Black adults, receiving an alert increases the probability of having hypertension from 36% to 75%, while not receiving an alert lowers it to 26%. However, for Hispanic adults, an alert increases the probability from 24% to 63%, while its absence lowers the probability to 17%. These differences reflect known disparities in cardiovascular health that are largely driven by social determinants of health, Bress said.

Should You Use Your Smartwatch’s Hypertension Alert Feature?

With an estimated 30 million Apple Watch users in the U.S. and 200 million worldwide, the researchers emphasize that while the notification feature represents a promising public health tool, it should supplement—not replace—standard blood pressure screening with validated cuff-based devices.

“If it helps get people engaged with the health care system to diagnose and treat hypertension using cuff-based measurement methods, that's a good thing,” Bress said.

Current guidelines recommend blood pressure screening every three to five years for adults under 40 with no additional risk factors, and annually for those 40 and older. The researchers caution that false reassurance from not receiving an alert could discourage some individuals from obtaining appropriate cuff-based screening, resulting in missed opportunities for early detection and treatment.

When patients present with an Apple Watch hypertension alert, Bress recommends clinicians perform “a high-quality cuff-based office blood pressure measurement and then consider an out-of-office blood pressure measurement, whether it’s home blood pressure monitoring or ambulatory blood pressure monitoring to confirm the diagnosis.”

The research team plans follow-up studies to estimate the actual numbers of U.S. adults who would receive false negatives and false positives, broken down by region, income, education, and other demographic factors.

The results are published in JAMA as “Impact of a Smartwatch Hypertension Notification Feature for Population Screening.”

The study was supported by the National Heart, Lung, and Blood Institute (R01HL153646) and involved researchers from the University of Utah, the University of Pennsylvania, the University of Sydney, the University of Tasmania, and Columbia University. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Note: This article was originally published by the University of Utah Health Newsroom and is republished here with permission; the Research Communication team confirmed to the DIW team that no AI tools were used in creating the content.

Image: Pexels / Torsten Dettlaff

Read next: YouTubers love wildlife, but commenters aren’t calling for conservation action
by External Contributor via Digital Information World

Thursday, February 12, 2026

YouTubers love wildlife, but commenters aren’t calling for conservation action

Edited by Asim BN.

A careful analysis, powered in part by machine learning, highlights an opportunity for conservation messaging on social media

YouTube is a great place to find all sorts of wildlife content. It is not, however, a good place to find viewers encouraging each other to preserve that wildlife, according to new research led by the University of Michigan.

Screenshot: YouTube. Credit: DIW.

Out of nearly 25,000 comments posted to more than 1,750 wildlife YouTube videos, just 2% featured a call to action that would help conservation efforts, according to a new study published in the journal Communications Sustainability.

“Our results basically show that people like to watch videos of zoos and safaris and that they appreciate the aesthetics and majesty of certain animals,” said author Derek Van Berkel, associate professor at the U-M School for Environment and Sustainability, or SEAS. “But there really wasn’t much of a nuanced conversation about conservation.”

Although he didn’t expect to see most commenters urging other YouTube users to call their elected officials or to support conservation groups, “I was hoping there might be more,” Van Berkel said. “I thought it might be bigger than 2%.”

Despite the low number, however, the team believes the report still has an optimistic take-home message.

“The flip side of this is we can and should do better at messaging, and there’s a huge potential to do so,” said study co-author Neil Carter, associate professor at SEAS.

While individual YouTube viewers weren’t organically calling for conservation action, there was also a notable absence of conservation groups and influencers working to start conversations and sharing actionable information in the comments.

“There’s tremendous untapped potential for conservation messaging to be improved,” Carter said.

Unlike many other social media platforms, YouTube provided sufficiently accessible, detailed and structured data to provide insights into the digital culture around wildlife conservation, Van Berkel said. And the data was just the starting point.

The YouTube-8M dataset contained information for nearly 4,000 videos that had been classified as wildlife. The researchers trimmed the list by more than half by selecting videos that featured at least one English-language comment and that they could categorize into one of seven topic areas. Those included footage from zoos, safaris and hunting.

The next step was characterizing the comments by the attitudes they expressed. The team arrived at five different categories for these. Expressions of appreciation and concern, both for wildlife and humans, made up four of the categories. The fifth was calls to action.

With the categories and the criteria for each defined, the team created a “gold set” of comment attitudes from 2,778 comments assigned by hand. The researchers then used this data to train a machine learning model to assess more than 20,000 additional comments.

Those steps were painstaking and labor intensive—the team hired additional participants to crowdsource the construction of the comment attitude gold set. But one of the biggest challenges was training the machine learning algorithm on what calls to action looked like when there were so few to begin with, said co-author Sabina Tomkins, assistant professor at the U-M School of Information.

“If the label you’re looking for happens far less often than the others, that problem is really hard. You’re looking for a needle in a haystack,” she said. “The way we solved that challenge was by looking at the models very carefully, figuring out what they were doing.”
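
The study’s actual model and features are not described here, but as a generic sketch of one common way to handle a rare label such as “call to action”, a text classifier can weight the minority class more heavily during training:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy comments; in the study, calls to action were only ~2% of comments.
comments = [
    "What a majestic animal!",
    "So cute, I love elephants.",
    "Beautiful footage from the safari.",
    "This zoo looks amazing.",
    "Please sign the petition to protect their habitat.",  # the rare label
]
labels = [0, 0, 0, 0, 1]  # 1 = call to action

model = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(class_weight="balanced"),  # upweights the rare class
)
model.fit(comments, labels)
print(model.predict(["Sign the petition to protect elephants"]))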

Tomkins said the effort from the School of Information graduate students who were part of the research team—Sally Yin, Hongfei Mei, Yifei Zhang and Nilay Gautam—was a driving force behind the project. Enrico Di Minin, a professor at the University of Helsinki, also contributed to the work, which was funded in part by the European Union.

Study: YouTube content on wildlife engages audiences but rarely drives meaningful conservation action (DOI: 10.1038/s44458-025-00018-2).

Contact: Matt Davenport.

Editor’s Notes: This article was originally published on Michigan News, and republished on DIW with permission.

Read next: AI could mark the end of young people learning on the job – with terrible results


by External Contributor via Digital Information World

AI could mark the end of young people learning on the job – with terrible results

Vivek Soundararajan, University of Bath

Image: Tara Winstead / Pexels

For a long time, the deal for a wide range of careers has been simple enough. Entry-level workers carried out routine tasks in return for mentorship, skill development and a clear path towards expertise.

The arrangement meant that employers had affordable labour, while employees received training and a clear career path. Both sides benefited.

But now that bargain is breaking down. AI is automating the grunt work – the repetitive, boring but essential tasks that juniors used to do and learn from.

And the consequences are hitting both ends of the workforce. Young workers cannot get a foothold. Older workers are watching the talent pipeline run dry.

For example, one study suggests that between late 2022 and July 2025, entry-level employment in the US in AI-exposed fields like software development and customer service declined by roughly 20%. Employment for older workers in the same sectors grew.

And that pattern makes sense. AI currently excels at administrative tasks – things like data entry or filing. But it struggles with nuance, judgment and plenty of other skills which are hard to codify.

So experience and the accumulation of those skills become a buffer against AI displacement. Yet if entry-level workers never get the chance to build that experience, the buffer never forms.

This matters for organisations too. Researchers using a huge amount of data about work in the US described how professional skills develop over time by likening career paths to the structure of a tree.

General skills (communication, critical thinking, problem solving) form the trunk, and then specialised skills branch out from there.

Their key finding was that wage premiums for specialised skills depend almost entirely on having those strong general foundational skills underneath. Communication and critical thinking capabilities are not optional extras – they are what make advanced skills valuable.

The researchers also found that workers who lack access to foundational skills can become trapped in career paths with limited upward mobility: what they call “skill entrapment”. This structure has become more pronounced over the past two decades, creating what the researchers described as “barriers to upward job mobility”.

But if AI is eliminating the entry-level positions where those foundations were built, who develops the next generation of experts? If AI can do the junior work better than the actual juniors, senior workers may stop delegating altogether.

Researchers call this a “training deficit”. The junior never learns, and the pipeline breaks down.

Uneven disruption

But the disruption will not hit everyone equally. It has been claimed, for example, that women face nearly three times the risk of their jobs being replaced by AI compared to men.

This is because women are generally more likely to be in clerical and administrative roles, which are among the most exposed to AI-driven transformation. And if AI closes off traditional routes into skilled work, the effects are unlikely to be evenly distributed.

So what can be done? Well, just because the old pathway deal between junior and senior human workers is broken, does not mean that a new one cannot be built.

Young workers now need to learn what AI cannot replace in terms of knowledge, judgment and relationships. They need to seek (and be provided with) roles which involve human interaction, rather than just screen-based tasks. And if traditional entry-level jobs are disappearing, they need to look for structured programmes that still offer genuine skill development.

Older workers meanwhile, can learn a lot from younger workers about AI and technology. The idea of mentorship can be flipped, with juniors teaching about new tools, while seniors provide guidance and teaching on nuance and judgment.

And employers need to resist the urge to cut out junior staff. They should keep delegating to those staff – even when AI can do the job more quickly. Entry-level roles can be redesigned rather than eliminated. For ultimately, if juniors are not getting trained, there will be no one to hand over to.

Protecting the pipeline of skilled and valuable employees is in everyone’s interest. Yes, some forms of expertise will matter less in the age of AI, which is disorienting for people who may have invested years in developing them.

But expertise is not necessarily about storing information. It is also about refined judgment being applied to complex situations. And that remains valuable.

Vivek Soundararajan, Professor of Work and Equality, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Could LLMs Repeat False Medical Claims When They Are Confidently Worded? Study Reports They Can


by External Contributor via Digital Information World