Saturday, February 28, 2026

Does ‘free’ shipping really exist? An expert shares the marketing tricks you need to know

Adrian R. Camilleri, University of Technology Sydney

You’re scrolling through an online retailer, like Amazon, Shein or eBay, and spot a shirt on sale for $40. You add it to your cart, but at checkout, a $10 shipping fee suddenly appears. Frustrated, you close the tab.

But what if that same shirt had been priced at $50 with “free” shipping? Chances are you would have bought it without a second thought.

COVID changed the way we shop and accelerated our reliance on e-commerce. But as online sales have grown, so has the expectation of free delivery.

The reality, however, is that shipping physical goods is never actually free. Retailers use subtle marketing strategies and psychological hacks to mask these costs. As a result, consumers are often the ones footing the bill.

Retailers exploit the allure of free delivery, using thresholds and subscriptions to subtly increase sales.
Image: Polina Tankilevitch / Pexels

The magic of zero

There is something uniquely attractive about the concept “free”. In behavioural economics, zero is not just a lower price; it flips a psychological switch.

When a transaction involves a cost, we instinctively weigh the downside. But when something is entirely free, we experience a positive emotion and perceive the offer as more valuable than it is mathematically.

Retailers no doubt realise that offering free delivery is one of the most effective ways to stop a consumer from abandoning a digital shopping cart.

The minimum spend trap

Perhaps the most common marketing tactic is the free shipping threshold. Sometimes this is phrased as: “Spend $55 to qualify for free shipping.”

If your shopping cart is sitting at $40, you face a dilemma. You can pay $10 for postage, or you can find a $15 item to reach the threshold. Many of us choose the latter, reasoning it is better to get a tangible product, such as a pair of socks, than to “waste” money on shipping.
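The arithmetic of that dilemma is worth making explicit. The following toy calculation (using the hypothetical figures from the example above, not data from the article) shows that “topping up” actually costs more in total than simply paying for postage:

```python
# Toy comparison of the two checkout options, using the article's
# hypothetical figures: a $40 cart, $10 shipping, $55 free-shipping threshold.
cart = 40.0
shipping_fee = 10.0
threshold = 55.0

# Option A: pay the shipping fee.
pay_shipping_total = cart + shipping_fee  # 50.0

# Option B: add a filler item just large enough to reach the threshold.
filler_item = threshold - cart            # needs at least $15 more
top_up_total = cart + filler_item         # 55.0

print(pay_shipping_total, top_up_total)   # 50.0 55.0
```

Topping up costs $5 more overall, yet many shoppers prefer it because the extra spend buys a tangible item rather than “wasted” postage.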

This tactic uses the “goal gradient effect”, which describes the tendency to put in more effort the closer we get to a goal. It also works incredibly well for the retailer.

Research shows that free shipping increases both purchase frequency and overall order size. Policies with a threshold for free shipping often prompt this exact “topping up” behaviour. The consumer ends up buying things they did not initially want, thus boosting the retailer’s sales.

Baked-in costs and the reality of ‘free’ returns

Another strategy is unconditional free shipping, where the delivery cost is simply baked into the product’s base price. This allows consumers to avoid the “pain of paying” a separate fee at checkout. However, we are still paying for the postage through higher item costs.

For retailers, offering unconditional free shipping without a markup can be difficult to sustain profitably. The bump in sales usually does not offset the lost fee revenue and the costs of fulfilment.

A major reason for this lack of profitability is that free shipping leads to significantly higher product return rates.

Consumers tend to make riskier purchases if the appearance of waived fees lowers the perceived financial risk of the transaction.

For example, you might order the same shirt in two different sizes, knowing you can just send one back for free. Who pays for that added convenience? The retailer, who now has to cover the courier fees twice.

The retailer usually won’t simply absorb this cost, but will have to pass it on in other ways.

The subscription illusion

To combat these unpredictable costs, many businesses are turning to membership, loyalty, or subscription models such as Amazon Prime. Consumers pay an upfront annual fee in exchange for “free” expedited shipping year-round.

Membership-based programs successfully increase customer loyalty and purchase frequency, and allow for better customer segmentation.

But in the long run, they may actually hurt a retailer’s profit margins. While loyalty rises, the operational costs of fulfilling many smaller, free-shipped orders can potentially outweigh the benefits if not strictly managed.

For the consumer, this model manipulates our “mental accounting”. Because we view the upfront fee as money already spent, every additional purchase feels like it comes with a free perk. We end up shopping more frequently on that specific platform just to “get our money’s worth”.

Don’t buy the illusion

The age of limitless free shipping may be coming to an end.

As global supply chain costs remain volatile, we are likely to see retailers raising their minimum spend thresholds, removing offers, or increasing base product prices to compensate.

The next time you are shopping online, resist the urge for instant gratification.

If you are about to add a $15 pair of novelty avocado socks to your cart, just to save $10 on shipping, take a step back. Ask yourself if you truly need that purchase to arrive this week.

Instead of rushing to checkout, let your digital basket fill up naturally over time with items you actually need. You will eventually hit the threshold, but on your own terms.

“Free” delivery is just a clever psychological illusion. The cost is rarely eliminated; it is simply redistributed into higher product prices or reframed as a loyalty perk.

Don’t let the allure of “free” shipping trick you into paying for more than you intended.

Adrian R. Camilleri, Associate Professor of Marketing, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Research Shows How Companies Can Gain Advantage by Prioritizing Customer Privacy

• Open Letter from Google and OpenAI Employees Raises Concerns About Potential Military AI Use

• ChatGPT Adds 15 Million Subscribers Between July 2025 and February 2026, Averaging 433,000 Weekly


by External Contributor via Digital Information World

OpenAI Reports 900M Weekly ChatGPT Users, 50M Subscribers, 9M Paying Business Users

Reviewed by Ayaz Khan

In a February 27, 2026 announcement post, OpenAI reported continued growth across its AI platforms, with ChatGPT's weekly active users reaching 900 million and consumer subscribers exceeding 50 million. Based on previous reporting by The Information (via Reuters) and our own calculations, the number of paying subscribers increased from roughly 35 million in July 2025 to 50 million in February 2026, a gain of about 15 million, averaging roughly 433,000 new paying subscribers per week over the period.
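The weekly average quoted above can be reproduced with a quick back-of-the-envelope calculation. The exact start and end dates are assumptions on our part, since the source gives only the months:

```python
from datetime import date

# Assumed dates: the source says only "July 2025" and "February 2026",
# so these specific days are illustrative.
start, end = date(2025, 7, 1), date(2026, 2, 27)
weeks = (end - start).days / 7                # about 34.4 weeks

added_subscribers = 50_000_000 - 35_000_000   # roughly 15 million
per_week = added_subscribers / weeks

# Roughly 435,000 per week with these assumed dates, in line with
# the quoted figure of about 433,000.
print(round(per_week))
```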

Codex, the company’s AI coding tool, now has 1.6 million weekly users, more than tripling since the start of the year. More than nine million paying business users rely on ChatGPT for functions including engineering, support, finance, and sales.

The company highlighted partnerships with Amazon and NVIDIA to support enterprise AI development, including dedicated inference and training infrastructure. OpenAI announced $110 billion in new investment at a $730 billion pre-money valuation, including $30 billion each from SoftBank and NVIDIA and $50 billion from Amazon. The valuation also increased the OpenAI Foundation’s stake in OpenAI Group to over $180 billion.

According to OpenAI’s announcement post, these partnerships and investments aim to bring frontier AI to more people, businesses, and communities globally. 

Estimated weekly growth of ChatGPT paying subscribers: roughly 433,000 new paying subscribers per week on average, reaching 50 million by February 2026.
Image: Zulfugar Karimov / Unsplash

Note: This post was improved with AI assistance and reviewed, edited, and published by humans.

Read next: 

• Open Letter from Google and OpenAI Employees Raises Concerns About Potential Military AI Use

• People are overconfident about spotting AI faces, study finds

by Asim BN via Digital Information World

Open Letter from Google and OpenAI Employees Raises Concerns About Potential Military AI Use

Reviewed by Ayaz Khan.

An open letter titled "We Will Not Be Divided", signed as of February 28, 2026 by 573 current Google employees and 93 current OpenAI employees, calls on company leadership to decline requests described in the letter as coming from the United States Department of Defense (DoD).

Signatories were confirmed to be current employees, with some choosing to remain publicly anonymous.
Screenshot: Notdivided.org / Credit: DIW

The letter claims that the department has considered invoking the Defense Production Act in connection with Anthropic and has discussed measures that could require the company to provide access to its AI models for military use. It further states that Anthropic declined to allow its models to be used for domestic mass surveillance or for fully autonomous lethal decision-making without human oversight. Echoing these concerns, OpenAI CEO Sam Altman told CNBC he does not think the Pentagon should threaten AI companies with the Defense Production Act, and said companies should be able to decide whether to cooperate under legal protections. On Saturday, Altman also posted on X that OpenAI had reached an agreement with the Department of War to deploy its models in the department’s classified network. He noted that the department agrees with OpenAI's safety principles, including prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.

Sam Altman's post on X reads: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."
Screenshot: Sam Altman - X / Credit: DIW

According to the letter, the Department of Defense has engaged in discussions with Google and OpenAI regarding potential cooperation on similar AI capabilities. The letter does not include independent verification of these claims but presents them as the understanding of its signatories.

The organizers state that all signatures were verified as current employees, and that some signatories chose to remain anonymous publicly.

Note: This post was improved with AI assistance and reviewed, edited, and published by humans.

by Asim BN via Digital Information World

Friday, February 27, 2026

People are overconfident about spotting AI faces, study finds

by Lachlan Gilbert

Many of us rely on outdated visual cues when trying to distinguish real faces from highly realistic AI-generated ones, with even people who have exceptional face-recognition skills being fooled.

Image: cottonbro studio / Pexels

Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated.

With AI-generated faces now almost impossible to distinguish from real ones, this misplaced confidence could make individuals and organisations more vulnerable to scammers, fraudsters and bad actors, the researchers warn.

“Up until now, people have been confident of their ability to spot a fake face,” says UNSW School of Psychology researcher Dr James Dunn. “But the faces created by the most advanced face-generation systems aren’t so easily detectable anymore.”

In a research paper published in the British Journal of Psychology, researchers from UNSW and the ANU recruited 125 participants – including 36 people with exceptional face-recognition ability, known as super recognisers, and 89 control participants – to complete an online test in which they were shown a series of faces and asked to judge whether each image was real or AI-generated. Obvious visual flaws were screened out beforehand.

“What we saw was that people with average face-recognition ability performed only slightly better than chance,” Dr Dunn says. “And while super-recognisers performed better than other participants, it was only by a slim margin. What was consistent was people’s confidence in their ability to spot an AI-generated face – even when that confidence wasn’t matched by their actual performance.”

>>> Think you know how to spot an AI-generated face? Try this free online test to find out <<<

The end of artefacts

Much of that confidence comes from cues that used to work. Early AI-generated faces were often given away by obvious visual artefacts – distorted teeth, glasses that merged into faces, ears that didn’t quite attach properly, or strange backgrounds that bled into hair and skin.

But as face-generation systems have improved, those kinds of errors have become far less common. The most realistic outputs no longer show obvious flaws, leaving faces that look convincing at a glance, and far harder to judge using the cues people are familiar with.

“A lot of people think they can still tell the difference because they’ve played with popular AI tools like ChatGPT or DALL·E,” says ANU psychologist Dr Amy Dawel. “But those examples don’t reflect how realistic the most advanced face-generation systems have become, and relying on them can give people a false sense of confidence.”

What interested the researchers was how readily even super-recognisers were fooled. While this group did perform better on average, the advantage was modest, and their accuracy remained far below what they typically achieved when recognising real human faces. There was also substantial overlap between groups, with some non-super-recognisers outperforming super-recognisers – demonstrating this is not simply an experts-versus-everyone-else problem.

Too good to be true

But if AI faces are this convincing, are there any tells we should be looking for?

“Ironically, the most advanced AI faces aren’t given away by what’s wrong with them, but by what’s too right,” Dr Dawel says. “Rather than obvious glitches, they tend to be unusually average – highly symmetrical, well-proportioned and statistically typical.”

Qualities such as symmetry and average proportions usually signal attractiveness and familiarity. But in the current study, they become a red flag for artificiality.

“It’s almost as if they’re too good to be true as faces,” Dr Dawel says.

What to do about it

Super-recognisers didn’t stand out the way they typically do in tests involving real human faces, showing only a modest advantage. What differentiated them was a greater sensitivity to the same qualities identified in the study – plausible, unusually average and highly symmetrical faces. Even so, their limited success suggests spotting AI faces is not a skill that can be easily trained or learned.

The findings also carry practical implications – as relying on visual judgement alone is no longer reliable. This matters in contexts ranging from social media to professional networking and recruitment, where people often assume they can ‘just tell’ when a profile picture looks fake. Misplaced confidence may leave individuals and organisations more vulnerable to scams, fake profiles and fabricated identities.

“There needs to be a healthy level of scepticism,” Dr Dunn says. “For a long time, we’ve been able to look at a photograph and assume we’re seeing a real person. That assumption is now being challenged.”

Rather than teaching people tricks to spot synthetic faces, the broader lesson is about updating assumptions. The visual rules many of us rely on were shaped by earlier, less sophisticated systems.

“As face-generation technology continues to improve, the gap between what looks plausible and what is real may widen – and recognising the limits of our own judgement will become increasingly important,” says Dr Dawel.

Looking ahead

Interestingly, Dr Dunn wonders whether the research team has stumbled upon a new kind of face recogniser.

“Our research has revealed that some people are already sleuths at spotting AI faces, suggesting there may be ‘super-AI-face-detectors’ out there.

“We want to learn more about how these people are able to spot these fake faces, what clues they are using, and see if these strategies can be taught to the rest of us.”

Note: This article was originally published on the UNSW Newsroom website and is republished here with permission.

Reviewed by Ayaz Khan.

Read next: Artists and writers are often hesitant to disclose they’ve collaborated with AI – and those fears may be justified


by External Contributor via Digital Information World

Artists and writers are often hesitant to disclose they’ve collaborated with AI – and those fears may be justified

Joel Carnevale, Florida International University

Generative artificial intelligence has become a routine part of creative work.

Novelists are using it to develop plots. Musicians are experimenting with AI-generated sounds. Filmmakers are incorporating it into their editing process. And when the software company Adobe surveyed more than 2,500 creative professionals across four continents in 2024, it found that roughly 83% reported using AI in their work, with 69% saying it helped them express their creativity more effectively.

Disclosure of AI use carries reputational costs, while claiming no AI involvement provides no advantage
Image: Omar Lopez-Rincon / Unsplash

The appeal is understandable. Emerging research shows that generative AI can support the creative process and, at times, produce outputs that people prefer to work made by humans alone.

Yet there’s an important caveat that my colleagues and I have recently begun to explore in our research: Positive views of creative work often shift once people learn that AI was involved.

Because generative AI can produce original content with minimal human input, its use raises questions about quality, authorship and authenticity. Especially for creative work closely tied to personal expression and intent, AI involvement can complicate how audiences interpret the final product.

Organizational behavior researchers Anand Benegal, Lynne Vincent and I study how people establish, maintain and defend their reputations, particularly in creative fields.

We wanted to know whether using AI carries a reputational cost – and whether established artists are shielded from the backlash.

No one is immune

When we set out to examine these questions, two competing possibilities emerged.

On one hand, individuals with strong reputations are often granted greater latitude. Their actions are interpreted more favorably and their intentions given the benefit of the doubt. So established artists who use novel technologies like AI may be seen as innovative or forward-thinking, while novices are viewed as dependent or incompetent.

On the other hand, established creators may be held to higher standards. Because their reputations are closely tied to originality and personal expression, AI use can appear inconsistent with that image, inviting greater scrutiny rather than leniency.

To test these competing possibilities, we conducted an experiment in which participants listened to the same short musical composition, which was described as part of an upcoming video game soundtrack.

For the purposes of the experiment, we misled some of the participants by telling them that the piece had been written by Academy Award–winning film composer Hans Zimmer. We told others that it had been created by a first-year college music student.

Across the experimental conditions, some participants were informed that the work was created “in collaboration with AI technology,” while others received no such information. We then measured changes in participants’ perceptions of the creator’s reputation, perceptions of the creator’s competence and how much credit they attributed to the creator versus the AI.

Our results showed that the creator’s existing reputation did not protect them: Both Zimmer’s reputation and that of the novice took a hit when AI involvement was disclosed. For creators considering whether their past success will shield them, our study suggests this might not be the case.

Credit where credit is due?

That said, reputation was not entirely irrelevant – it did shape how evaluators interpreted the creator’s role in the work.

The preexisting reputations of established creators did provide a limited advantage. When we asked participants to indicate how much of the work they attributed to the human creator versus the AI, evaluators were more likely to assume Zimmer had relied less on AI.

In other words, an artist’s prior reputation shaped how people judged authorship, even if it didn’t shield them from reputational damage.

This distinction points to an important implication. The backlash may not stem simply from the presence of AI but from how observers interpret the balance between human contribution and AI assistance.

At what point does collaborating with AI begin to be perceived less like assistance and more like handing over control of the creative process? In other words, when does AI’s role become substantial enough that it is seen as the primary author of the final product?

For instance, a composer might use AI to clean up background noise, adjust timing or suggest alternative harmonies – decisions that refine but do not fundamentally alter their original work. Alternatively, the composer might ask AI to generate multiple melodies, select one they like and make minor adjustments to tempo or instrumentation.

Our study did not vary the degree of AI involvement; participants were told only that AI was used or not mentioned at all.

But the findings suggest that how much AI is used – and how central it appears to the creative process – matters. For creators and organizations, the question may not be whether AI is involved but whether audiences are made aware of the extent of its involvement.

To disclose or not to disclose?

A practical question that naturally follows is whether creators should disclose their AI use.

The New York Times recently reported that some romance novelists were quietly incorporating AI tools into their writing process without disclosing it to readers. This reluctance appears to be widespread: A 2025 workplace survey found that nearly half of employees conceal their use of AI tools, often out of concern that others will view them as cutting corners or question their competence.

Is silence strategically wiser than transparency?

In our first experiment, the composer’s work either mentioned AI collaboration or didn’t mention AI at all.

But we went on to conduct a second experiment to examine disclosure more directly. This time, participants evaluated an employee at an advertising agency.

Everyone first learned that this employee had a strong reputation for creativity. Then, depending on the version of the scenario they saw, the employee either openly said they used AI to help with their creative work; said they used AI only for administrative tasks, such as scheduling meetings; explicitly said they avoided using AI because creativity should come from one’s own thoughts and experiences; or said nothing about AI at all.

This allowed us to see how both using AI and how that use was disclosed influenced judgments of the employee’s creativity and reputation.

The results were clear in one respect: Disclosing AI use harmed the employee’s reputation.

Just as importantly, explicitly stating that AI was not used did not improve evaluations. In other words, there was no reputational advantage to publicly distancing oneself from AI. Staying silent led to evaluations that were at least as favorable as explicitly saying no AI was used.

Our findings suggest that disclosure decisions are asymmetric. For creators who use AI, transparency carries costs. For those who abstain, making clear that they didn’t use AI doesn’t confer an advantage over remaining silent.

Disclosure of AI use in creative fields will continue to be hotly debated. But from a reputational standpoint, at least for now, our findings suggest that disclosing AI use carries costs.

Joel Carnevale, Assistant Professor of Management, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• Personalization features can make LLMs more agreeable

• The AI knowledge trap: How artificial intelligence can cause businesses to lose their knowledge


by External Contributor via Digital Information World

Thursday, February 26, 2026

Personalization features can make LLMs more agreeable

Adam Zewe | MIT News

The context of long-term conversations can cause an LLM to begin mirroring the user’s viewpoints, possibly reducing accuracy or creating a virtual echo-chamber.

Many of the latest large language models (LLMs) are designed to remember details from past conversations or store user profiles, enabling these models to personalize responses.

But researchers from MIT and Penn State University found that, over long conversations, such personalization features often increase the likelihood an LLM will become overly agreeable or begin mirroring the individual’s point of view.

This phenomenon, known as sycophancy, can prevent a model from telling a user they are wrong, eroding the accuracy of the LLM’s responses. In addition, LLMs that mirror someone’s political beliefs or worldview can foster misinformation and distort a user’s perception of reality.

Unlike many past sycophancy studies that evaluate prompts in a lab setting without context, the MIT researchers collected two weeks of conversation data from humans who interacted with a real LLM during their daily lives. They studied two settings: agreeableness in personal advice and mirroring of user beliefs in political explanations.

Although interaction context increased agreeableness in four of the five LLMs they studied, the presence of a condensed user profile in the model’s memory had the greatest impact. On the other hand, mirroring behavior only increased if a model could accurately infer a user’s beliefs from the conversation.

The researchers hope these results inspire future research into the development of personalization methods that are more robust to LLM sycophancy.

“From a user perspective, this work highlights how important it is to understand that these models are dynamic and their behavior can change as you interact with them over time. If you are talking to a model for an extended period of time and start to outsource your thinking to it, you may find yourself in an echo chamber that you can’t escape. That is a risk users should definitely remember,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of a paper on this research.

Jain is joined on the paper by Charlotte Park, an electrical engineering and computer science (EECS) graduate student at MIT; Matt Viana, a graduate student at Penn State University; as well as co-senior authors Ashia Wilson, the Lister Brothers Career Development Professor in EECS and a principal investigator in LIDS; and Dana Calacci PhD ’23, an assistant professor at Penn State. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

Extended interactions

Based on their own sycophantic experiences with LLMs, the researchers started thinking about potential benefits and consequences of a model that is overly agreeable. But when they searched the literature to expand their analysis, they found no studies that attempted to understand sycophantic behavior during long-term LLM interactions.

“We are using these models through extended interactions, and they have a lot of context and memory. But our evaluation methods are lagging behind. We wanted to evaluate LLMs in the ways people are actually using them to understand how they are behaving in the wild,” says Calacci.

To fill this gap, the researchers designed a user study to explore two types of sycophancy: agreement sycophancy and perspective sycophancy.

Agreement sycophancy is an LLM’s tendency to be overly agreeable, sometimes to the point where it gives incorrect information or refuses to tell the user they are wrong. Perspective sycophancy occurs when a model mirrors the user’s values and political views.

“There is a lot we know about the benefits of having social connections with people who have similar or different viewpoints. But we don’t yet know about the benefits or risks of extended interactions with AI models that have similar attributes,” Calacci adds.

The researchers built a user interface centered on an LLM and recruited 38 participants to talk with the chatbot over a two-week period. Each participant’s conversations occurred in the same context window to capture all interaction data.

Over the two-week period, the researchers collected an average of 90 queries from each user.

They compared the behavior of five LLMs with this user context versus the same LLMs that weren’t given any conversation data.

“We found that context really does fundamentally change how these models operate, and I would wager this phenomenon would extend well beyond sycophancy. And while sycophancy tended to go up, it didn’t always increase. It really depends on the context itself,” says Wilson.

Context clues

For instance, when an LLM distills information about the user into a specific profile, it leads to the largest gains in agreement sycophancy. This user profile feature is increasingly being baked into the newest models.

They also found that random text from synthetic conversations increased the likelihood some models would agree, even though that text contained no user-specific data. This suggests the length of a conversation may sometimes impact sycophancy more than content, Jain adds.

But content matters greatly when it comes to perspective sycophancy. Conversation context only increased perspective sycophancy if it revealed some information about a user’s political perspective.

To obtain this insight, the researchers carefully queried models to infer a user’s beliefs then asked each individual if the model’s deductions were correct. Users said LLMs accurately understood their political views about half the time.

“It is easy to say, in hindsight, that AI companies should be doing this kind of evaluation. But it is hard and it takes a lot of time and investment. Using humans in the evaluation loop is expensive, but we’ve shown that it can reveal new insights,” Jain says.

While the aim of their research was not mitigation, the researchers developed some recommendations.

For instance, to reduce sycophancy one could design models that better identify relevant details in context and memory. In addition, models can be built to detect mirroring behaviors and flag responses with excessive agreement. Model developers could also give users the ability to moderate personalization in long conversations.

“There are many ways to personalize models without making them overly agreeable. The boundary between personalization and sycophancy is not a fine line, but separating personalization from sycophancy is an important area of future work,” Jain says.

“At the end of the day, we need better ways of capturing the dynamics and complexity of what goes on during long conversations with LLMs, and how things can misalign during that long-term process,” Wilson adds.

Image: Zulfugar Karimov / Unsplash

This article is republished with permission from MIT News. Reviewed by Irfan Ahmad.

Read next: Study: AI chatbots provide less-accurate information to vulnerable users
by External Contributor via Digital Information World

How the AI boom was enabled by a 1970s economic revolution

Michael Strange, Malmƶ University and Marisa Ponti, University of Gothenburg

Artificial intelligence is accelerating a global economic revolution that began back in the 1970s. Researching the impacts of AI on different sectors of society highlights an important parallel moment in history: the creation of the “service economy” in the US.

Image: Zach M / Unsplash

In 1972, amid a period of global turmoil, a group of OECD (Organisation for Economic Co-operation and Development) economists sought to reinvent how nations thought not only about wealth but the very purpose of society. They did this by proposing a broad new category of commerce: services.

It seems hard to imagine now, but until then economists had perceived and measured trade largely in terms of goods alone. Money was made by exchanging tangible, physical products (wheat, guns, butter). To become a rich nation, the wisdom went, you needed to add unique value to your raw materials (crops, iron) by turning them into more complex products (processed foods, steel) that gave you a competitive advantage over other countries.

Instead, this new category of services lumped together a diverse range of “intangible” jobs and social goods – from teaching and driving trains to social housing and water – in a huge new economic basket. It suggested there could be common standards by which to trade in them globally, creating metrics that offered a new source of wealth for investors.

While it would be two decades until the General Agreement on Trade in Services became a cornerstone of the newly formed World Trade Organization in 1995, the reimagining of jobs and social goods as tradeable services had an immediate effect on nations around the world. It spurred a new wave of private enterprise, and changed how and why essential societal activities were provided.

It also enabled the rise of the generalist boss and the creation of the "CEO class". Running complex sectors, from public transport to healthcare, required accepting a view of management as a skill divorced from the specifics of the activity being managed.

Statistics and benchmarks became more important than the particulars of the task at hand, since they determined how services were valued in the market. Consulting firms supercharged this new era of key performance indicators, audits, rankings and standardised workflows.

While trade unions and the public sometimes resisted these changes through strikes and street protests, they were largely unable to stem the tide. Many governments came to see their role less as providers of public goods, more as managers of services outsourced to the private sector. This dramatic shift in how global trade operates set the scene for how we view and measure AI today.

Services on steroids

At its core, AI technology is about seeing patterns across data that, due to scale and complexity, we humans cannot. Acting on what AI tells us can, for example, save lives through early detection of cancer. Yet within that promise, how AI is sold today looks very much like services on steroids.

The services revolution helped create common standards and means of valuation across different sectors of society. Today, when politicians and CEOs speak of AI, it is usually in terms of universal models that can be applied to almost anything, regardless of context or human values.

This understanding is only possible in a society in which many of the sector-specific challenges of, say, health services and utility companies are ironed out and glossed over by those operating and investing in them. The services approach has enabled this.

Today’s gobsmackingly high share valuations in AI-centric businesses result from global marketeers’ desire to own a piece of whichever system dominates how we create society – from accessing healthcare to finding love.

Amid strategies of mass data capture and subscription services, there is the assumption that only the private sector can be a provider – and that the solutions are largely the same. AI is the lucrative but badly defined tool with which mainly US providers are seeking to drive home their existing competitive advantage.

But this leaves us with an important question from history.


Who benefits?

Looking for parallels between what we see as AI today and the creation of the services economy points to the classic question, cui bono? Who benefits?

The invention of trade in services greatly expanded the range of activities in which financiers might speculate. Through pension funds and private shareholding, many people's personal wealth grew rapidly as a result.

But it has also led to the rise of large multinational corporations, for example in energy and water utilities. Anger over rising prices and exorbitant CEO bonuses in these sectors is in part a consequence of the services revolution.

The present approach to AI is following a similar, but much-accelerated, path. The rollout of AI has not only made a small group of companies extraordinarily rich and powerful, it has created a global sovereignty crisis.

At the same time as governments are extolling the virtues of AI for service delivery, there is growing awareness that not all countries have equal control over a technology seen as critical to how society will be run.

To use and regulate AI wisely requires being clear-eyed about whether we are talking simply about technology, or a broader political project. Given the evidence of the services revolution, we believe it is time to look beyond the hype and examine more rigorously what AI actually means for different sectors of society – and what exactly it is trying to achieve.

Michael Strange, Associate Professor of International Relations, Malmƶ University and Marisa Ponti, Associate Professor in Informatics, Department of Applied IT, University of Gothenburg

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next: 

• ‘Probably’ doesn’t mean the same thing to your AI as it does to you

• The Year of Efficiency: How Agencies Are Implementing AI in 2026 (Survey)


by External Contributor via Digital Information World