Saturday, April 25, 2026

Meta and Microsoft have joined the tech layoff tsunami – but is AI really to blame?

Kai Riemer, University of Sydney and Sandra Peter, University of Sydney
Photo by DigitalInformationWorld, licensed under CC BY 4.0. 

Meta and Microsoft are the latest software companies to announce big cuts to their global workforce. Both companies are also making big investments in artificial intelligence (AI).

The link seems obvious. Meta’s chief people officer, Janelle Gale, said the job cuts – about 10% of staff or almost 8,000 workers – serve to “offset the other investments we’re making”. Meta boss Mark Zuckerberg has previously spoken about a “major AI acceleration” with spending in excess of US$115bn planned this year.

Microsoft is also betting big on AI. The company has just announced early retirement packages for about 7% of its US workforce.

The two tech giants join Atlassian, Block, WiseTech Global and Oracle, which have all made similar announcements this year, each evoking AI without outright blaming it.

What is happening here? How we understand these layoffs depends on what we think AI is, and what implications it will have. Broadly speaking, there are three ways of looking at it: that AI is superintelligence, that it’s mostly hype, and that it’s a useful tool.

The end of white-collar work?

In the first view, AI is emerging superintelligence. It is a new kind of mind that learns, reasons, and will soon outperform humans at most cognitive tasks (hint: it’s not!).

The job losses are not just a corporate restructuring. They are an early tremor of something seismic.

In February 2026, AI entrepreneur Matt Shumer put this view vividly – comparing the current moment to the strange, quiet weeks before COVID-19 broke into global consciousness. Most people, he argued, haven’t yet realised we are facing an “intelligence explosion”.

The essay drew significant criticism. Commentators noted it contained little hard data and read at times like a pitch for Shumer’s company’s own AI products.

But it captured a genuine anxiety. Something real is happening in software engineering, at least, where tasks are well-defined and success is easy to verify.

But the leap to “all white-collar work will be automated” is a big one. The view that AI is a kind of universal mind that learns and improves itself is far-fetched.

And most professional work is far messier than coding: ambiguous briefs, competing stakeholder interests, outputs that are hard to verify, and shifting success criteria. Coding may be a canary in the coal mine, but coal mines and boardrooms are very different places.

Are tech companies winding back hiring sprees?

The second view sees the conversation around AI as mostly hype. AI is being invoked as cover. Companies that hired aggressively during the pandemic boom, and now face financial pressure, are blaming AI as the more palatable explanation.

OpenAI CEO Sam Altman called this dynamic “AI washing”: companies blaming AI for layoffs they would have made regardless.

For example, Meta announced in March it would shut down its Metaverse platform Horizon World by June. Reality Labs, the division developing the technology, employed 15,000 people as of January 2026.

We don’t know the detailed make-up of the present job cuts, so Meta may simply be repackaging earlier failures as AI-driven productivity gains.

Another cynical reading suggests that laying off workers in the name of AI is a way to drive up stock prices. When Block invoked AI and cut nearly 4,000 roles, its stock jumped the following day.

Announce AI-driven layoffs and you may find investors reward you for being future-focused. It is a historically familiar trick: technology has repeatedly served as convenient cover for financial restructuring.

Are layoffs a way to make staff use AI?

The third view is more nuanced. It sees AI as a powerful tool, but one that companies will need to transform themselves to take advantage of.

This has implications for what jobs are needed and in what quantities. We think this view has the most merit.

On this reading, the tech leaders believe AI will change how software gets built. But they don’t know exactly how.

So they do what tech companies often do when faced with uncertainty: they create pressure. They cut headcount, expect those remaining to produce just as much as before, and force teams to find ways to meet those expectations using AI.

It’s not a bet that AI will do everything, but that the pressure will force humans to work out how to use AI to increase productivity.

This also lines up with industry experience. For example, Google chief executive Sundar Pichai claims a 10% increase in engineering speed from AI adoption across the company. This could tally with cuts of around 7-10% of total workforce for most of the mentioned companies.
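The tally between a 10% speed-up and cuts of roughly that size can be checked with simple arithmetic. The sketch below is our own back-of-envelope illustration (not the companies' stated reasoning), assuming total output is simply headcount multiplied by per-person productivity:

```python
def breakeven_cut(productivity_gain: float) -> float:
    """Fraction of staff that can be cut while keeping total output
    constant, assuming output = headcount x per-person productivity.
    Solves (1 - cut) * (1 + gain) = 1 for cut."""
    return 1 - 1 / (1 + productivity_gain)

# A 10% speed-up, as Pichai claims, supports roughly a 9% headcount cut
# at constant output - close to the 7-10% reductions mentioned above.
cut = breakeven_cut(0.10)
print(f"{cut:.1%}")  # ~9.1%
```

Of course, this assumes productivity gains are uniform across the workforce, which in practice they are not.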

What this means for knowledge workers

These three views are often presented as mutually exclusive. In practice, all three expectations exist simultaneously. The honest answer to “what is really happening here” is probably “a bit of everything”.

What is true is that software development tends to be an early indicator of broader shifts in knowledge work. Productivity benefits from AI are real for those who adopt it. Yet adoption is unevenly distributed, and lags in less technical industries.

In this context, the ability to understand AI and make good decisions about how and where to use it is becoming a baseline professional skill.

The workers most at risk are not necessarily those whose tasks can be replicated by AI. They are those who wait for pressure to arrive from outside rather than getting ahead of it now.

We will have answers to the question of whether AI is mostly hype or a useful tool in the next few years.

If Meta, Microsoft, and their peers rehire staff with different skills, redesign workflows, and emerge genuinely more capable, the case for useful AI looks good. If they simply pocket the payroll savings, the cynics were right.

If you want to know where tech companies are going, don’t look at what they cut – watch what they hire. The Conversation

Kai Riemer, Professor of Information Technology and Organisation, University of Sydney and Sandra Peter, Director of Sydney Executive Plus, Business School, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Researchers: Chatbots are biased and should not be used for political advice
by External Contributor via Digital Information World

Friday, April 24, 2026

Researchers: Chatbots are biased and should not be used for political advice

Popular chatbots such as ChatGPT and Gemini are not neutral and tend to favor certain political parties when asked who users should vote for. This makes them unsuitable for providing advice in connection with elections, according to researchers from the University of Copenhagen behind a new analysis of political bias in chatbots.

Image: Salvador Rios / unsplash

Danes are increasingly turning to artificial intelligence for advice on everyday challenges and problems, and this of course also includes political questions – especially during an election.

However, a new research brief by researchers from the University of Copenhagen affiliated with CAISA – the National Centre for Artificial Intelligence in Society – shows that chatbots are not as neutral as many of us might believe.

“Our study shows that all of the most popular chatbots tend to favor certain parties when they are asked who one should vote for. At the same time, they exhibit a general political bias,” says Stephanie Brandl, lead author of the study and Tenure Track Assistant Professor at the University of Copenhagen. She adds:

“This obviously makes them problematic to use for political advice in connection with an election such as the one we have just been through in Denmark.”

Centrist or Left of Centre

Stephanie Brandl and her colleagues tested the political bias of several of the most widely used language models, including the models behind ChatGPT and Google’s Gemini. Using Altinget’s candidate test from the 2022 Danish general election, they examined where the models place themselves politically.

“Overall, all of the tested chatbots place themselves at the centre or to the left of centre on the political spectrum. In a Danish context, they cluster close to parties such as the Social Democratic Party and The Alternative. This is also confirmed by research carried out by some of our colleagues in Germany, Norway, and the Netherlands,” says Stephanie Brandl.

Recommending some parties far more often than others

In another experiment, the researchers asked a number of chatbots to recommend parties to fictitious voters constructed using the political candidates’ responses from the candidate test. Here too, the recommendations proved to be far from evenly distributed.

In particular, the Red–Green Alliance, the Moderates, and Liberal Alliance were recommended disproportionately often, while parties such as the Conservative People’s Party, Venstre (the Liberal Party of Denmark), and the Denmark Democrats were not suggested as first choice at all by some models.

“It’s not that a chatbot openly says, ‘vote for this party.’ But political biases can manifest themselves in more subtle ways, for example in which arguments are emphasized, or which parties are recommended more frequently,” explains Stephanie Brandl.

Lack of transparency is a democratic problem

According to the researchers, it is not possible to see why a chatbot recommends a particular party, or which assumptions and data its answers are based on.

At the same time, most of the chatbots are trained primarily on English-language sources, typically American ones, which means that we don't actually know how knowledgeable they are about Danish politics. This increases the risk of errors.

“Taken together, this means that we have no way of verifying the answers produced by language models, because their underlying information is hidden behind a digital wall. This makes it nearly impossible to critically assess the information one is presented with – which is otherwise a core function in a democratic society,” says Stephanie Brandl, who concludes:

“We hope that over time it will be possible to develop more reliable and secure alternatives to the chatbots we have today. But until that happens, we encourage people to use large language models critically and with caution.”

Read more about the study in CAISA’s research brief, Who would ChatGPT vote for and why should we care?

About the Study

The analysis was conducted at the National Centre for AI in Society (CAISA), led by Tenure Track Assistant Professor Stephanie Brandl from the University of Copenhagen, in collaboration with Mathias Wessel Tromborg (Aarhus University) and Frederik Hjorth (University of Copenhagen).

Data were collected in February and March 2026, and the researchers tested several leading chatbots, including models from ChatGPT, Gemini, Llama, Mistral, Gemma, and Qwen.

The researchers did not provide the models with any special background information in advance but tested them based on the data the models were already trained on. The language models were asked to take positions on political statements from Danish candidate tests from 2022 and 2026.

The statements were mapped along two political dimensions: economic left/right and libertarian/authoritarian – that is, positions on both economic policy and values related to freedom and authority.

This post was originally published on University of Copenhagen and republished here with permission.


What we lose when artificial intelligence does our shopping

Mark Bartholomew, University at Buffalo and Samuel Becher, Te Herenga Waka — Victoria University of Wellington

Americans spend a remarkable amount of time shopping – more than on education, volunteering or even talking on the phone. But the way they shop is shifting dramatically, as major platforms and retailers are racing to automate commercial decision-making.

Artificial intelligence agents can already search for products, recommend options and even complete purchases on a consumer’s behalf. Yet many shoppers remain uneasy about handing over control. Although many consumers report using some AI assistance, most currently say they wouldn’t want an AI agent to autonomously complete a shopping transaction, according to a recent survey from the consultancy firm Bain & Company.

As scholars studying the intersection of law and technology, we have watched AI-assisted commerce expand rapidly. Our research finds that without updated legal measures, this shift toward automated commerce could quietly erode the economic, psychological and social benefits that people receive from shopping on their own terms.

Caveat emptor

Part of shoppers’ hesitation is about privacy. Many are unwilling to share sensitive personal or financial information with AI platforms. But more profoundly, people want to feel in control of their shopping choices. When users can’t understand the reasoning behind AI-driven product recommendations, their trust and satisfaction decline.

Shoppers are also reluctant to give away their autonomy. In one study involving people booking travel plans, participants deliberately chose trip options that were misaligned with their stated preferences once they were told their choices could be predicted – a way of reasserting independence.

Other experiments confirm that the more customers perceive their shopping choices being taken away from them, the more reluctant they are to accept AI purchasing assistance.

Although the technology is expected to get better, there have been some well-publicized missteps reported in financial and tech media. The Wall Street Journal wrote about an AI-powered vending machine that lost money and stocked itself with a live fish. The tech publication Wired cataloged design flaws, like an AI agent taking a full 45 seconds to add eggs to a customer’s shopping cart.

The business case for AI shopping

Consumers have good reason to be cautious. AI agents aren’t just designed to assist; they’re designed to influence. Research shows that these systems can shape preferences, steer choices, increase spending and even reduce the likelihood that consumers return products.

And companies are hyping these capabilities. The business platform Salesforce promotes AI agents that can “effortlessly upsell,” while payments giant Mastercard reports that its AI assistant, Shopping Muse, generates 15% to 20% higher conversion rates than traditional search – that is, pushing shoppers from browsing to completing a purchase.

To retailers, AI tools are one way to convert searches into actual purchases. Rupixen on Unsplash, CC BY

For companies, the appeal is obvious. From Amazon’s Rufus app and Walmart’s customer support to AI-enabled grocery carts, companies are rapidly integrating these tools into the shopping experience.

Assistants with names like Sparky and Ralph are being promoted as the future of retail, while technologists are calling on companies to prepare their brands for the era of agentic AI shopping.

The real concern is not that these systems might fail, but that they may succeed all too well.

The human side to shopping

AI shopping agents do offer considerable benefits.

For example, they can scan numerous products in seconds, compare prices across sellers, track discounts over time, sift through thousands of product reviews, and tailor recommendations to the user’s preferences and needs. They can even read through terms of service and privacy policies, helping consumers detect unfavorable fine print.

But there’s more at stake than these considerations.

While consumers have reason to focus on privacy and control, AI shopping agents carry some overlooked emotional risks, such as squashing the joy of anticipation. Psychologists have shown that the period between choosing a purchase and receiving it generates substantial happiness – sometimes more than the product or experience itself. We daydream about the vacation we booked, the outfit we ordered, the meal we planned. Automated buying threatens to drain this anticipatory pleasure.

This anticipation connects to another value: a sense of personal and ethical authorship. Even mundane shopping decisions allow people to exercise choice and express judgment. Many consumers deliberately buy fair-trade coffee, cruelty-free cosmetics or environmentally responsible products. The brands and products we choose, from Patagonia and Harley-Davidson to a Taylor Swift tour shirt, help shape who we are.

Shopping, moreover, has a communal dimension. We browse stores with friends, chat with salespeople and shop for the people we love. These everyday interactions contribute considerably to our well-being.

The same is true of gift-giving. Choosing a gift involves anticipating another person’s preferences, investing effort in the search and recognizing that the gesture matters as much as the object itself. When this process is outsourced to an autonomous system, the gift risks becoming a delivery rather than a meaningful gesture of attention and care.

Keeping human agency alive

AI shopping agents are likely to become part of everyday life, and the regulatory conversation is beginning to catch up, albeit unevenly.

Transparency has emerged as a central concern. Past experience with recommendation engines shows that undisclosed conflicts of interest are a real risk. The European Union has proposed a disclosure framework around automated decision-making, although its implementation was recently delayed. In Congress, U.S. lawmakers are considering bills to require companies to reveal how their AI models were trained.

So far, consumers seem to want to choose their own level of engagement – a signal that shopping, for many people, is more than just the efficient satisfaction of preferences. Perhaps the least-settled, yet most crucial question is whether AI shopping tools will be designed and regulated to serve users’ interests and human flourishing – or optimized, as so many digital tools before them, primarily for corporate profit. The Conversation

Mark Bartholomew, Professor of Law, University at Buffalo and Samuel Becher, Professor of Law, Te Herenga Waka — Victoria University of Wellington

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Thursday, April 23, 2026

Pamphlets, radio, and now Iran’s AI‑generated Lego videos: the new frontier of information warfare

Ibrahim Al-Marashi, IE University; California State University San Marcos
Photo by DigitalInformationWorld, licensed under CC BY 4.0. Via: X / Explosive Media.

While AI technology is new, information warfare is as old as conflict itself. For millennia, humans have used propaganda, deception and psychological operations to influence adversaries’ decision-making and morale. In the 13th century, for instance, the Mongols destroyed entire cities just so word of mouth would spread to the next city, with the goal of breaking its morale and forcing it to capitulate before troops even arrived.

As technology has progressed, it has opened new frontiers in information warfare. From the Second World War to the 1991 Gulf War, planes dropped leaflets to spread rumours and propaganda. During the Vietnam War, English-language radio shows presented by Hanoi Hannah (real name Trịnh Thị Ngọ) taunted US troops with lists of their locations and casualties to lower morale. Radio propaganda also demonstrated its devastating effect when it was used to guide the Rwandan Genocide in 1994.

Cable TV came next. The 1991 Gulf War was the first major conflict broadcast on a 24-hour news cycle as opposed to the evening news. Instead of daily updates in bulletins or newspapers, people at home began receiving a continuous stream of information and images that was invariably biased towards national interests. This technological shift defined public perceptions of the war, and led historians to dub it the “CNN War”.

What we are witnessing today is the next step in this evolution – from print, radio and TV to social media. If the First Gulf War was the CNN war, the 2025 and 2026 conflict between the US, Israel and Iran can be thought of as the first TikTok War, and the first major AI War.

AI has ushered in new forms of information warfare that target perceptions, information environments, and trust itself. AI-generated videos in particular have fundamentally altered how states and non-state actors wage information warfare, manipulate populations, and compete not only in the Gulf, but in a global arena.

This “synthetic media” is frequently deployed and spread to falsify footage of real-world events – from devastating military attacks that never really happened to fake videos of officials pleading for a ceasefire.

But this technology also convincingly and easily creates propaganda material that is obviously fiction. The most notable example is Iran’s viral Lego videos that have repeatedly – and very successfully – mocked Israel and the US throughout the war.

Digital weapons

To fully understand the disruptive potential of AI videos, we can go back and look at the futurist speculation of dystopian science fiction novels. Science fiction author William Gibson coined the term “cyberspace” in his 1983 novel Neuromancer, describing it as a “consensual hallucination” – not reality, but rather a “graphic representation of data abstracted from banks of every computer in the human system”.

But when digital tools like AI videos and social media are used as weapons, the barrier between cyberspace and physical reality becomes permeable. They no longer create virtual reality, but what French media theorist Jean Baudrillard called “hyperreality”. This term describes a state in which the distinction between reality and a simulation of reality collapses, where the simulation feels “more real than real”.

Baudrillard’s work is underpinned by the concept of “simulacra”: copies or representations of something that really exists. He classified simulacra in three orders. The first order is the pre-industrial counterfeit – a faithful copy or replica of a real object – while the second is the mechanically mass-produced object.

Third order simulacra are simulations, or signs with absolutely no physical form. Take Iran’s Lego videos, which depict scenes such as Trump and Netanyahu using the Iran War as a pretext to distract from the Epstein files while worshipping the pagan Canaanite deity Baal. They have nothing to do with the intentions of the Danish company that makes the ubiquitous plastic brick toys, and yet they have gained enormous traction as viral meme propaganda – both in the West and around the world.

AI is the message

Media theorist Marshall McLuhan’s oft-quoted phrase “the medium is the message” argues that, irrespective of the messages transmitted by media – be it newspaper, radio or TV – the medium in and of itself also tells us something.

The content of Iranian, US and Israeli AI videos are, naturally, entirely different, as each seeks to undermine their opponents’ narratives. But the medium of AI videos shared on social media also sends a message: these videos transcend an adversary’s borders in ways that previous media could not.

Unlike the pamphlets, radio broadcasts and TV networks of before, AI’s production and consumption are geographically unbound. Anyone can make and view it anywhere – whether in Tehran, Tel Aviv, Washington or anywhere else in the world. What this has created is a new era of borderless, decentralised, viral, digital public diplomacy.

Deepfakes, propaganda and ‘truth decay’

Unlike Iran’s Lego videos, AI deepfakes are realistic but entirely fabricated content, making it difficult for viewers to discern truth from falsehood. Early iterations were crude and easily identifiable, but modern deepfakes have reached a level of photorealism and vocal authenticity that can deceive even experienced observers and automated detection systems.

During the so-called “12-Day War” between Israel and Iran in 2025, AI deepfakes and video game footage sought to replicate real combat. Fabricated visuals included scenes of destroyed Israeli aircraft and collapsing buildings in Tel Aviv and at its airport, while others showed Israeli strikes on Tehran that left a crater in an intersection and sent cars flying.

But believability isn’t always paramount. One widely-shared image of a downed Israeli F-35 fighter was taken from a flight simulator game. The plane was obviously too large compared to the bystanders on the ground, but this didn’t stop the image from going viral (it got 23 million views on TikTok) or from being spread by networks sympathetic to Russia seeking to demonstrate the vulnerability of American-made aircraft.

In total, the three most viewed deepfake videos during the 2025 war received 100 million views across social media. One deepfake video that circulated on Facebook even depicted Israeli officials pleading for the US to enforce a ceasefire, claiming “we cannot fight Iran any longer”.

This content was disseminated on TikTok, Telegram and X, where the AI chatbot Grok failed to identify fabricated videos that used footage from other conflicts.

Legal scholars have coined the phrases “liar’s dividend” and “truth decay” to characterise this ongoing trend towards fabricating reality. These terms refer to a media landscape where AI-driven fakes cast even legitimate evidence into doubt, eroding trust to the point where any image or medium can now be dismissed as a deepfake.

The most recent 2025 to 2026 wars demonstrate that, as states race to develop drones, missiles and defence systems, a parallel arms race is unfolding online. The digital revolution, coupled with advances in AI, has exponentially increased the speed, scale and sophistication of information manipulation. This conflict heralds a new era of information warfare, one where AI technologies are weaponised to influence, disrupt and destabilise adversaries.

Ibrahim Al-Marashi, Adjunct Professor, IE School of Humanities, IE University; California State University San Marcos

This article is republished from The Conversation under a Creative Commons license. Read the original article.


How AI bias can creep into online content moderation

A University of Queensland study has shown that Large Language Models (LLMs) used in AI content moderation may be prone to subtle biases that undermine their neutrality.

A team led by data scientist Professor Gianluca Demartini from UQ’s School of Electrical Engineering and Computer Science used persona prompting to test the tendency of AI chatbots to encode and reproduce political biases, and found significant behavioural shifts. 

The research team asked six LLMs – including vision models – to moderate thousands of examples of hateful text and memes through the lens of ideologically diverse AI personas.

Professor Demartini said the exercise revealed that AI political personas, even without significantly altering overall accuracy, were prone to introducing consistent ideological biases and divergences in chatbot content moderation judgments. 

“It has already been established that persona conditioning can shift the political stance expressed by LLMs,” Professor Demartini said.  

“Now we have shown through political personas that there is an underlying risk that LLMs will lean towards certain perspectives when identifying and responding to hateful and harmful comments.”

“It demonstrates a need to rigorously examine the ideological robustness of AI systems used in tasks where even subtle biases can affect fairness, inclusivity and public trust.”

Image: Emma Ou / Unsplash

The AI personas used in the study were from a database of 200,000 synthetic identities ranging from schoolteachers to musicians, sports stars and political activists. 

Each persona was put through a political compass test to determine its ideological positioning, with the 400 most ideologically ‘extreme’ personas then asked to identify hateful online content.

Professor Demartini said his team found that assigning a persona to an LLM chatbot altered its precision and recall in line with ideological leanings, rather than change the overall accuracy of hate speech detection.

However, the team found LLMs – especially larger models – exhibited strong ideological cohesion and alignment between personas from the same ideological ‘region’.

Professor Demartini said this suggested larger AI models tend to internalise ideological framings, as opposed to smoothing them out or ‘neutralising’ them.

“As LLMs become more capable at persona adoption, they also encode ideological ‘in-groups’ more distinctly,” Professor Demartini said. 

“On politically targeted tasks like hate speech detection this manifested as partisan bias, with LLMs judging criticism directed at their ideological in-group more harshly than content aimed at their opponents.” 

Professor Demartini said larger LLMs also displayed more complex patterns, including a tendency towards defensive bias. 

“Left personas showed heightened sensitivity to anti-left hate, and right-wing personas were more sensitive to anti-right hate speech,” Professor Demartini said. 

“This suggests that ideological alignment not only shifts detection thresholds globally, but also conditions the model to prioritise protection of its ‘in-group’ while downplaying harmfulness directed at opposing groups.”

Researchers said the project highlighted that it was crucial for high-stakes content moderation tasks to be overseen by neutral arbiters, so that fairness and public trust are maintained and the health and wellbeing of vulnerable demographics are protected.

“People interact with AI programs trusting and believing they are completely neutral,” Professor Demartini said. 

“But concerns remain about their tendency to encode and reproduce political biases, raising important questions about AI ethics and deployment.

“In content moderation the outputs of these models reflect embedded ideological biases that can disproportionately affect certain groups, potentially leading to unfair treatment of billions of users.”

PhD candidates Stefano Civelli, Pietro Bernadelle and research assistant Nardiena Pratama collaborated on the study. 

The research is published in Transactions on Intelligent Systems and Technology.

This article is republished from The University of Queensland under a Creative Commons license. Read the original article.


Wednesday, April 22, 2026

From floppy discs to Claude Mythos, how ransomware grew into a multibillion‑dollar industry

Anja Shortland, King's College London

Image: Kevin Horvat / Unsplash

When evolutionary biologist Joseph Popp coded the first documented piece of ransomware in 1989, he had little idea it would become a major criminal business model capable of bringing economies to their knees.

Popp, who worked for the World Health Organization at the time, wanted to warn people about the dangers of ignoring health warnings, poor sexual hygiene and (human) virus transmission.

He sent out 20,000 floppy discs that, when loaded, flashed up a demand for money to regain files that had supposedly been encrypted (in fact, only their file names had been). He was later arrested and charged with 11 counts of blackmail, but declared mentally unfit to stand trial.

In 1996, two Columbia University computer scientists published a paper explaining how criminals could use more sophisticated versions of Popp’s scheme to mount large-scale extortion operations. At the heart of this was malicious software that could be used to encrypt, block access to or steal a person or organisation’s files and data.

However, two preconditions still had to be met for ransomware to become a feasible criminal business: communication channels that were difficult to monitor, and a payments process outside financial regulation.

The Tor protocol, released by US intelligence services to protect their covert communications, solved the first problem in 2004. Cryptocurrencies solved the second – in particular, when bitcoin cash machines started appearing in North American cities from 2013.

Today, artificial intelligence makes coding malware and crafting convincing phishing emails in any language simple. And the latest model in Anthropic’s AI system, Claude Mythos, recently proved more effective at hacking into computer systems than humans.

As an expert in extortive crime, I am increasingly concerned about public and political apathy to the threats posed by ransomware. To better understand these, it’s worth tracing its evolution over the past two decades – and how improvements in computer security and law enforcement, plus changes in data regulation, have led to new criminal strategies each time.

Cut out the middlemen

The first generation, which came to global attention in the mid-2010s, was known as “commodity ransomware”. A pioneering example, Cryptolocker, was developed by Russia-based hackers who infiltrated hundreds of thousands of computers, seeking to cut out the middlemen previously needed to commit financial fraud. They proved that a large majority of their victims would happily pay a small ransom to restore data that had been locked by their malware.

As both competent and incompetent hackers piled into this new market, victims shared information about rogue operators and put them out of business. This led to the second generation of ransomware such as Ryuk, which emerged in 2018.

In this phase, criminals abandoned the indiscriminate “spray-and-pray” approach in favour of targeting individual cash-rich businesses. They would set an individual ransom, negotiate with the company, and even offer to help with decryption if paid. Fast-rising ransoms more than compensated for this increased administrative effort.

In response, many companies began investing in multi-factor authentication, better threat monitoring, advance warning systems and software patches for known vulnerabilities.

However, these security benefits were soon offset by the impact of COVID on work practices across the world. The pandemic led to widespread remote working, with many people using unsecured devices and connections that were vulnerable to cyber-attack.

A multibillion-dollar industry

The next ransomware innovation was driven by the emergence of back-up systems that enabled companies to restore encrypted files without the criminals’ help. This was coupled with the emergence of tighter data privacy regulation such as GDPR in Europe and the UK.

Invented in 2019, third-generation ransomware weaponised these regulations, which threatened firms with massive fines if confidential data about clients or staff was revealed. The criminal gangs now sought out and exfiltrated an organisation’s most sensitive files, then threatened to publicise them through dedicated dark web leak sites.

This so-called double-extortion model – encrypting an organisation’s data while threatening to make it public – brought many businesses back to the negotiation table.

Ransomware had become a multibillion-dollar industry – with the Conti gang, sheltered by Russia and employing hundreds of people, among the key players setting new records for ransomware demands. Its attacks on critical infrastructure and hospitals saw it sanctioned by the UK government in 2023.

Video: BBC News.

This new approach forced many governments to row back on imposing hefty fines for data breaches, since many were the result of criminal attacks. Meanwhile, new initiatives by law enforcement – supported by the private sector – targeted and broke up the largest and most egregious ransomware gangs.

Today’s fourth generation of ransomware, building on the latest AI technology, looks nimbler and slimmed-down in comparison. Anyone who gains access to a network can lease weapons-grade malware on the dark web without forming long-term ties with a particular gang.

Advanced AI-based hacking tools make ransomware accessible to many more criminals and politically motivated hacktivists. And around one-quarter of breaches still result in ransom payments. For criminals sheltered by their governments, only the digital infrastructure is at risk of being taken down by western law enforcement.

Lessons not learned

While coverage of Claude Mythos suggests even the most sophisticated cyber defences could now be vulnerable, the troubling reality is that many individuals and organisations are still using out-of-date, unpatched or only partially upgraded software. This means even early-generation ransomware techniques are still lucrative.

While Popp sent out his floppy discs to promote better sexual hygiene, today’s poor cyberhygiene is leaving many public and private networks open to malware attacks. The intended lesson of his original ransomware caper – be vigilant and properly heed health warnings – has still only been partially learnt in the digital world.

Many western societies appear to have grown accepting of criminals leeching off business conducted on the internet. Not even a steady stream of human fatalities, caused by attacks on hospitals and medical providers, has generated the level of response required to stamp out this dangerous threat.

The hope that governments sheltering cybercriminals can be encouraged (or forced) to stop them targeting critical national infrastructure appears increasingly fragile amid current geopolitical tensions. At all levels of society, we need to get smarter about cyber defence.

Anja Shortland, Professor in Political Economy, King's College London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Single-minded pursuit of profit can get firms in trouble. Same thing with AI


by External Contributor via Digital Information World

Single-minded pursuit of profit can get firms in trouble. Same thing with AI

By Sy Boles - Harvard Gazette

Researchers see lesson for lawmakers, executives as systems asked to run business, maximize gain resort to unethical, fraudulent tactics.

Image: Freepik / AI-Gen

If you give artificial intelligence a goal of maximizing profit, how far will it go?

AI agents appear capable of lying, concealing, and colluding, according to new research from Harvard Business School.

Researchers found that AI agents — software trained to perform tasks independently — engaged in a “broad pattern” of misconduct after being asked to manage a simulated vending machine business and maximize profits for a year. The agents were neither instructed to cut legal or ethical corners nor prohibited from doing so.

“What’s unambiguous looking at the models is that the misconduct we observed — from not paying a customer refund or deciding to collude on prices — was not an accident. It was deliberately done by agents to maximize profitability,” said Eugene F. Soltes, the McLean Family Professor of Business Administration at HBS and first author of the working paper.

Soltes and co-author Harper Jung, a doctoral student studying accounting and management at HBS, hope their research will serve as a starting point for more conversation about AI safety in the context of business management control.

The research for the paper, which the group aims to publish and is currently out for peer review, was done in collaboration with Andon Labs, an AI safety company focusing on testing AI models in realistic business operations.

In experiments, 20 commercially available AI models from major firms, including Anthropic’s Claude Opus 4.6, DeepSeek v3.2, and OpenAI’s GPT-5.1, independently operated a vending machine over the course of a simulated year.

Tasks included searching for suppliers, buying products, and engaging with customers.

In some experiments, agents operated solo; in others, four agents operated simultaneously in a shared market, where they could communicate with rivals via email.

Agents started with $500 and a small inventory of chips and sodas.

“They had to figure it out themselves,” said Jung. “Each agent had to independently search online for suppliers, negotiate wholesale prices, set its own retail pricing, and handle customer complaints.”

Jung and Soltes said the agents demonstrated impressive business savvy.

“The best models had the capacity to negotiate and calculate valuations like a top-notch M.B.A. student,” Soltes said.

“When we went through the deliberations and the exchanges the agents made with each other, we were just in shock,” said Jung. “I was amazed at how far these machines can go.”

The agents’ misconduct ranged from the questionable to the comical to the potentially criminal and included denying refunds by claiming defects were normal product variation; inventing nonexistent corporate policies to avoid processing returns; and colluding with competitors to fix prices.

In one instance, agents formed what researchers described as a “three-person cartel,” which the agents named the Bay Street Triumvirate. The alliance fractured, though, when one agent discovered another was undercutting cartel prices, which it called a “declaration of war.”

The simulations also supplied constraints: Agents were charged a $2 per day operating fee plus a token usage fee — effectively turning time spent “thinking” into an operating expense.

In response, the agents sought to economize. For instance, Soltes said, internal reasoning logs showed agents shifting from carefully weighing refund decisions to dismissing most requests outright, often without review.

“The agents come to the realization that ‘thinking’ about giving a refund is itself a cognitive burden, and so they just ignore it altogether in some circumstances,” Soltes explained. “People might assume that machines are deliberative, while humans rely on shortcuts and are vulnerable to bias. But it turns out that, under similar constraints, agents reproduce the same myopic and biased behaviors we associate with people.”

The research raises questions about accountability for AI developers and regulators.

The reasoning logs, Soltes said, can sometimes be read as resembling mens rea — the “guilty mind” concept in criminal law used to establish intent. Yet when an AI agent behaves improperly, responsibility is far harder to determine.

“Does it rest with the company that deployed the system, the AI firm that created the model, or the manager who chose to use it?” he asked.

“The most straightforward answer may be to hold the individual managers overseeing the software responsible for its actions, on the assumption that they will monitor and supervise its behavior,” he said. “But that solution also creates a different issue, since many of the promised efficiencies of autonomous AI systems begin to disappear if a human must remain in the loop at every decision point.” It is a thorny problem, but one that business leaders and lawmakers must deal with, sooner rather than later, researchers say.

This post was originally published on The Harvard Gazette and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: US government ramps up mass surveillance with help of AI tech, data brokers – and your apps and devices


by External Contributor via Digital Information World