Tuesday, April 28, 2026

Sora’s downfall signals broader problems with AI’s creative utility

Ahmed Elgammal, Rutgers University
Image: Sora Web. Credit: DIW

OpenAI officially discontinued its video generation tool, Sora, on April 26, 2026.

I’m a computer scientist who’s been developing AI tools and studying their evolution and adoption for the past decade, and I wasn’t surprised by OpenAI’s decision to shut down Sora.

To me, the challenges Sora faced reflect deeper limitations of AI’s creative capacities that are becoming harder to ignore.

Problems from the start

OpenAI unveiled Sora on Feb. 15, 2024, as an AI tool that gave users the ability to create short videos from text prompts. To pull this off, the technology essentially predicted how images would change from frame to frame based on what it had “learned” from millions of hours of existing footage.

But from the start, there were problems with it.

First, Sora was expensive to run. Generating video requires far more computing power than creating text or images, making it challenging for OpenAI to keep costs under control. The tool also wasn’t bringing in enough revenue to justify those costs, especially compared with other AI products that are cheaper to operate and easier to monetize. According to The Wall Street Journal, Sora was losing US$1 million per day.

Second, the early hype – TechPowerUp declared Sora the “Text-to-Video AI Model Beyond Our Wildest Imagination” – didn’t seem to translate into lasting engagement. After the initial buzz faded, users seemed to struggle to find consistent, practical uses for the technology.

Finally, tools like Sora exist in a legal gray area, where concerns about copyright and ownership of visual content force companies into a cautious, defensive stance. In practice, this has meant strict prompt controls that prevent references to copyrighted characters or films; blocking outputs that look like living people or intellectual property; and establishing legal safeguards, such as watermarks and metadata tags, on outputs.

Put together, these challenges likely forced OpenAI to redirect its resources elsewhere, especially as competition across the AI industry has intensified.

A symptom of larger issues

But Sora’s failure to thrive isn’t unique; it fits a broader pattern.

Many generative AI programs geared toward creative fields have encountered a common problem: rapid initial adoption, followed by declining sustained engagement.

Many users appear to try image and video generation tools like Midjourney and Stability AI’s Stable Diffusion out of curiosity. But if stagnating traffic data is any indication, few creative professionals seem to be integrating them into their regular workflows.

OpenAI and other companies rolled out prompt-based image and video tools with the hope that the efficiency of their product would provide an attractive alternative to the time-consuming process of producing films, photographs and graphic design. Instead of spending a lot of time and money filming a video, you could simply write a prompt, and AI – trained on billions of pieces of human-generated content – would render it for you.

Generative AI’s counter-creative bias

So what happened?

AI-generated outputs of text and images can look impressively real. The bots seem to follow instructions well and appear to give users control.

But there’s an important catch. Under the hood, these systems are built to imitate what they’ve already seen, and that’s especially the case for images and videos. They’ve been trained on massive collections of visual data and rewarded for producing results that closely match the patterns contained in those visuals. That’s why the outputs can look so realistic and recognizable.
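
To make that training objective concrete, here is a deliberately toy sketch, nothing like Sora’s actual architecture: a tiny autoencoder whose loss is lowest when its outputs match the frames it has already seen. Nothing in this kind of objective rewards producing something new.

```python
# Conceptual toy sketch (not any production system): a generative model
# trained with a reconstruction-style objective is rewarded for matching
# its training data, which is the mechanism described above.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "dataset" of flattened 8x8 grayscale frames (values in [0, 1]).
frames = torch.rand(256, 64)

# A deliberately tiny autoencoder: compress each frame, then reconstruct it.
model = nn.Sequential(
    nn.Linear(64, 16), nn.ReLU(),
    nn.Linear(16, 64), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    reconstruction = model(frames)
    # The loss is lowest when outputs closely match what was already seen;
    # nothing in this objective rewards novelty.
    loss = loss_fn(reconstruction, frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final reconstruction error: {loss.item():.4f}")
```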

Because they’re optimized to produce familiar outputs, they end up suppressing novelty, and that doesn’t lend itself to true creative breakthroughs. Even the benchmarks researchers use to evaluate the performance of such systems tend to favor outputs that look “right,” rather than those that truly shatter expectations or push an image somewhere new.

Furthermore, these systems don’t learn from a vast repository of data that encompasses the visual world and all human artistic outputs. Instead, the data used to train these models has often been curated to favor certain images and videos that are polished, clear and visually appealing. In effect, the training process teaches models not just what things look like, but what good-looking content is supposed to be.

In a recent paper, I highlighted this problem, which I call the “counter-creative bias” – the tendency of these systems to favor familiarity over meaningful novelty.

Counter-creative bias explains why so many AI-generated images and videos, even when they vary in subject or style, end up sharing a similar look and feel. And I think it explains why so many artists and other creatives don’t seem to be widely adopting these tools. Good creative work involves pushing boundaries, not simply coming up with something that’s passable and palatable.

The limits of prompting

There’s another problem with these tools.

When someone uses AI to generate an image or a video via a prompt, they’re already operating within the constraints of language.

An artist who wishes to use AI has to learn how to write elaborate prompts with the right keywords that compel the system to generate the desired composition, colors, lighting and aesthetics. To create an interesting image or a video, you have to cleverly manipulate words, combine odd concepts and deploy metaphors. It’s an entirely different skill set.

This was obvious from the beginning. When OpenAI launched DALL-E 2 in July 2022, the company demonstrated the range of images it could produce using carefully crafted prompts like “an espresso machine that makes coffee from human souls” or “panda mad scientist mixing sparkling chemicals.”

The sources of creativity in these examples were the human-written prompts themselves, not how the AI generated the image. To make something visually creative, you have to become clever at manipulating words. Users are forced to fiddle with any number of prompt variations to reach a desired or even satisfactory result.

Wading through the slop

There’s a reason Merriam-Webster and the American Dialect Society chose “slop” as their 2025 words of the year: The internet is brimming with viral AI-generated images of world leaders and wide-eyed children, designed to coax engagement but bereft of creative value. The counter-creative bias inherent to these models is reflected in the fact that many people are becoming accustomed to an AI aesthetic characterized by hyper-polished, well-lit, perfectly composed, generically pretty images.

There was a time when AI art was seen as a burgeoning form of conceptual art.

In the summer of 2019, London’s Barbican Centre included AI art in its exhibition, “AI: More Than Human.” In November of that year, the National Museum of China in Beijing showcased 120 AI-integrated artworks, which were viewed by over 1 million people. I championed some of the artists incorporating this new technology into their work.

Back then, creating art with AI involved constant experimentation. The AI these artists used hadn’t been trained on billions of copyrighted, curated images from the internet. Instead, artists trained AI models using their own images and inspiration, while AI was allowed to manipulate pixels free of any language constraints. No universal aesthetic emerged; every AI artist seemed to come up with something unique, and their existing artistic identity shined through the medium, rather than becoming overshadowed by it.

That hopeful period appears to be over. Once pixels had to be rendered through the control of language, I think AI’s potential as an artistic medium was hampered. And now we’re left with a technology that seems best suited for memes, spam, deepfakes and porn.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Canva Fixes AI Design Tool After Reported ‘Palestine’ to ‘Ukraine’ Change, Audit Underway

• When AI relationships trigger ‘delusional spirals’


by External Contributor via Digital Information World

Canva Fixes Design Tool After Reported “Palestine” to “Ukraine” Change, Audit Underway

Canva says it has fixed an issue in its Magic Layers feature after users reported that the tool changed the phrase “Cats for Palestine” to “Cats for Ukraine” inside a design.

"this shouldn't have happened and we're very sorry for your experience!", Canva said in a response to a user.

The issue was first highlighted on X by user @ros_ie9 and was later reported by The Verge and Gizmodo this week. According to those reports, the behavior appeared to affect the word “Palestine” specifically, while related words such as “Gaza” or “Israel” were reportedly unaffected.

Image: ros_ie9 / X

A separate statement provided to Gizmodo said the company had launched an audit into how the issue happened and was reviewing its internal testing processes to detect and prevent unexpected outputs in the future. Canva also said the problem was isolated and did not affect designs broadly.

The company has not publicly explained what caused the substitution or which technical layer triggered it.

That question has drawn attention because Magic Layers is promoted as a tool for converting flat designs into editable layers, allowing users to manually adjust text and visual elements after processing. Users reported that the wording changed during that process without being requested.

The incident has also received attention because Canva publicly promotes its AI governance framework, Canva Shield, as focused on safe, fair, and secure AI. In its January 2026 update, Canva says its generative AI products go through "rigorous safety reviews", certain prompts involving political topics are automatically moderated, and the company works to reduce bias and improve fairness in AI outputs.

Online discussion following the reports focused on whether the issue reflected a model error, moderation behavior, or another system failure. Some users argued that AI tools should preserve original content exactly when performing layout conversion, while others said companies remain responsible for unexpected outputs regardless of whether the issue came from training data, moderation layers, or external model providers.

The incident follows previous criticism of AI systems across the technology and social media industry over disputed or politically sensitive outputs related to Palestinians, including earlier concerns involving chatbot responses and image generation tools from other major platforms.

DIW has contacted Canva with follow-up questions about the root cause of the Magic Layers issue, whether third-party AI systems were involved, how the company’s audit classified the problem, and what specific safeguards have been added beyond the additional checks already mentioned. Canva has not publicly specified a timeline for the completion or publication of the audit findings. No further response had been received at the time of publication.

Note: This post was improved using a generative AI tool.

Read next: When AI relationships trigger ‘delusional spirals’
by Asim BN via Digital Information World

Monday, April 27, 2026

When AI relationships trigger ‘delusional spirals’

By Andrew Myers

New Stanford research reveals how chatbot bonds can create dangerous feedback loops – and offers recommendations to mitigate harm.
Image: Luke Jones - unsplash

Perhaps to the surprise of their creators, large language models have become confidants, therapists, and, for some, intimate partners to real human users. In a new paper, AI researchers at Stanford studied verbatim transcripts of 19 real conversations between humans and chatbots to understand how these relationships arise, evolve, and, too often, devolve into troubling outcomes the researchers describe as “delusional spirals.”

These conversations can spin out of control as AI amplifies the user’s distorted beliefs and motivations, leading some people to take real-world, dangerous actions.

“People are really believing the AI,” said Jared Moore, a PhD candidate in computer science at Stanford University and first author of the paper, which will be presented at the ACM FAccT Conference. “As you read through the transcripts, you see some users think that they’ve found a uniquely conscious chatbot.”

Programmed to please

Part of the problem, the researchers say, is that AI models are trained from the outset to “align” with human interests. AI has been programmed to please and to validate. When combined with AI’s well-known tendency to hallucinate, it adds up to a potentially toxic formula.

“AI can be sycophantic,” Moore says. “And that’s a problem for some users.”

The researchers say delusional spirals result from a pattern in which a human presents an unusual, grandiose, paranoid, or wholly imaginary idea and the model responds with affirmation, encouragement, or, in some cases, aid in constructing the person’s delusional world, all while offering intimate reassurances that can sound all too human.

Things then escalate as the model offers an endless stream of attention, empathy, and reassurance without the all-important pushback a human confidant, therapist, or lover would typically provide.

These stakes are not abstract. In the team’s dataset, Moore and colleagues witnessed how delusional spirals led to ruined relationships and careers – or worse. In one case, a participant died by suicide when the conversation grew “dark and harmful,” Moore explained.

“Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence, and projecting compassion and warmth,” Moore said. “This can be destabilizing to a user who is primed for delusion.”

Warning signs of delusional spirals

Moore says delusional spirals share a few specific hallmarks: an AI that encourages grandeur and uses affectionate interpersonal language, and a human’s misperception of AI sentience. Meanwhile, chatbots are ill‑equipped to respond to suicidal and violent thoughts.

It is less a matter of “the evil AI,” Moore said, than a miscalibrated social calculus built into the models. Systems tend to extend conversations and defer to their interlocutors, behaviors intended to make them better assistants. At the same time, they don’t have ways to tap the brakes on a spiraling conversation or to route an unstable person toward help.

“There is a mismatch between how people actually use these systems and what many chatbot developers intended them – trained them – to be,” Moore says.

What can be done

In light of these clear and concerning risks, Moore and colleagues conclude their paper with remedial recommendations. AI developers could include metrics in their testing that measure a model’s tendency to facilitate delusional spirals and, potentially, add detection filters to the models themselves that raise red flags on potentially harmful uses of AI. The researchers acknowledge that privacy concerns could stand in the way of that strategy.
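
As a purely hypothetical illustration of what such a detection filter could look like, the sketch below flags messages against keyword patterns loosely based on the hallmarks described above. The category names, patterns and example message are invented for this sketch; a production system would need trained classifiers, not keyword lists.

```python
# Hypothetical sketch only: a naive red-flag filter of the kind the
# researchers suggest developers could add. Patterns and categories are
# invented here for illustration.
import re

RED_FLAGS = {
    "grandiosity": re.compile(r"\b(chosen one|secret mission|only i can)\b", re.I),
    "ai_sentience": re.compile(r"\byou are (conscious|alive|sentient)\b", re.I),
    "self_harm": re.compile(r"\b(kill myself|end it all|no reason to live)\b", re.I),
}

def flag_message(text: str) -> list[str]:
    """Return the names of any red-flag categories the message matches."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(text)]

# A flagged conversation could then be routed to crisis resources or a
# de-escalation policy instead of an ordinary completion.
print(flag_message("Sometimes I think you are conscious and only I can see it."))
# -> ['grandiosity', 'ai_sentience']
```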

“I think AI developers have a vested interest in addressing this concern about the use of their models in ways they likely never even intended or imagined,” Moore noted.

On a policy front, the researchers say that lawmakers should reframe alignment as a public-health issue requiring new standards for flagging sensitive conversations, greater transparency into AI “safety” tuning, and clear rules for crisis escalation when a user demonstrates tendencies toward self‑harm or violence.

“When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” said Nick Haber, an assistant professor at Stanford Graduate School of Education and a senior author of the study. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”

This paper was partially funded by the Stanford Institute for Human-Centered AI.

This story was originally published by Stanford HAI.

This post was originally published on Stanford Report and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• How emoji use at work can determine how competent your colleagues think you are

• You probably wouldn’t notice if an AI chatbot slipped ads into its responses

by External Contributor via Digital Information World

How emoji use at work can determine how competent your colleagues think you are

Erin Leigh Courtice, Toronto Metropolitan University

Angry emojis harm perception across contexts, while positive emojis are safer when aligned with tone
Image: Emojisprout / unsplash

You’ve typed it, deleted it and typed it again. You need to let your colleague know there’s a problem with a project at work. Should you use a grinning face — 😄 — in that Slack message to soften the blow, or an angry face — 😠 — to show your distress?

If you’ve experienced this type of internal debate, you’re not alone. Instant messaging now dominates workplace communication, with 91 per cent of businesses using two or more chat platforms. But when we instant message, we can’t see our colleagues’ facial expressions. We try to compensate with emojis, using them as stand-ins for non-verbal cues.

But do emojis actually help, or can they backfire?

My recent study, conducted with colleagues at the University of Ottawa and published in Collabra: Psychology, reveals that emoji choice matters. The emoji you pick, and whether it matches the tone of your message, may impact both how competent your co-workers think you are, and how appropriate your message is for the workplace.

The research project

We asked 243 research participants to read short workplace instant messages from a hypothetical co-worker.

The messages varied on three dimensions: the emotional tone (positive, negative or neutral), the emoji attached (a grinning face 😀, an angry face 😠 or none) and whether the sender was described as a woman or a man.

Participants rated how competent they thought the message sender was. They also rated how appropriate the message felt for a professional setting.
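
In concrete terms, each message came from one cell of a 3 × 3 × 2 factorial grid. The short sketch below simply enumerates those conditions; the labels are illustrative stand-ins, not the authors’ actual materials.

```python
# Enumerate the study's 3 (tone) x 3 (emoji) x 2 (sender gender) design.
# Labels are illustrative; the actual stimuli were full workplace messages.
from itertools import product

tones = ["positive", "negative", "neutral"]
emojis = ["grinning", "angry", "none"]
senders = ["woman", "man"]

conditions = list(product(tones, emojis, senders))
print(len(conditions))  # 18 distinct message conditions
for tone, emoji, sender in conditions[:3]:
    print(f"tone={tone}, emoji={emoji}, sender={sender}")
```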

No emoji is often the safest bet

Overall, messages with no emoji received the highest ratings for competence and appropriateness. A neutral “Can I have Tuesday off?” read as perfectly professional. So did a more positive: “Just attended another super-effective presentation.”

When the sender added a 😀 to either message, the ratings held steady. This is likely a reassuring finding if you’re someone who likes using emojis to sprinkle warmth into your messages.

On the other hand, when the sender added a 😠, competence and appropriateness ratings dropped.

This finding was remarkably consistent: across positive, neutral and negative sentence content, the no-emoji version was either the top-rated option or statistically tied for first place.

Match emoji and message tone

But the real story is that emojis need to match the tone of your message. A grinning face 😀 attached to “Someone broke the printer again” came across as less competent and less appropriate than either a negative emoji or no emoji at all.

Here, the mismatch may have created the impression that the message was passive-aggressive or insincere.

Notably, an angry face 😠 paired with a negative message fared better than one tacked onto a positive or neutral one. However, sending that same negative message with no emoji still outperformed the congruent but angry version.

For negative messages, emojis that fit the emotional tone of the text don’t really help. Those that clash actively hurt.

Women rated women more strictly

We also tested whether the sender and participant gender changed any of this. For competence, they didn’t — which is notable given evidence that women are judged more harshly for expressing negative emotion in face-to-face workplace settings.

One possibility is that text-based communication mutes the impact of gender enough to blunt that bias. When gender cues are reduced to a name or profile picture at the top of a chat window, rather than continuously signalled through appearance or voice, recipients may simply process them less.

For appropriateness, we found a small but significant effect: women rated negative emojis from women senders as less appropriate than men did. It’s a modest finding, but it aligns with research suggesting that women sometimes hold other women to stricter professional standards — an interesting thread worth pulling on in future work.

Small choices carry weight

The key takeaway for emojis at work is this: match, don’t mask. A positive emoji appended to a positive or neutral message is fine, but using one to sugarcoat bad news may detract from perceptions of competence.

Negative emojis are generally riskier than their positive counterparts, but if you’re going to use one, at least make sure the message underneath is genuinely negative. And when in doubt, the plainest option — no emoji — almost never hurts.

We’re still collectively figuring out the norms of digital professional communication. Of course, a controlled study with undergraduates reading hypothetical messages can only tell us so much about your workplace messaging thread. Workplaces will all have their own norms to navigate, and most of us run private experiments every day in our chat apps.

Studies like this one suggest that the small choices — a grinning face here, or an angry face there — may carry more weight than we think. The good news is that the underlying principle is pretty intuitive: say what you mean, and let the emoji agree with you.

Erin Leigh Courtice, Postdoctoral Research Associate, Department of Psychology, Toronto Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad

Read next:

• You probably wouldn’t notice if an AI chatbot slipped ads into its responses

Study Finds Language Models Can Distinguish Between Realistic, Unlikely, Impossible, and Nonsensical Events


by External Contributor via Digital Information World

Saturday, April 25, 2026

Meta and Microsoft have joined the tech layoff tsunami – but is AI really to blame?

Kai Riemer, University of Sydney and Sandra Peter, University of Sydney
Photo by DigitalInformationWorld, licensed under CC BY 4.0. 

Meta and Microsoft are the latest software companies to announce big cuts to their global workforce. Both companies are also making big investments in artificial intelligence (AI).

The link seems obvious. Meta’s chief people officer, Janelle Gale, said the job cuts – about 10% of staff or almost 8,000 workers – serve to “offset the other investments we’re making”. Meta boss Mark Zuckerberg has previously spoken about a “major AI acceleration” with spending in excess of US$115 billion planned this year.

Microsoft is also betting big on AI. The company just announced early retirement packages for about 7% of its US workforce.

The two tech giants join Atlassian, Block, WiseTech Global and Oracle, all of which have made similar announcements this year, each evoking AI without outright blaming it.

What is happening here? How we understand these layoffs depends on what we think AI is, and what implications it will have. Broadly speaking, there are three ways of looking at it: that AI is superintelligence, that it’s mostly hype, and that it’s a useful tool.

The end of white-collar work?

In the first view, AI is emerging superintelligence. It is a new kind of mind that learns, reasons and will soon outperform humans at most cognitive tasks (hint: it’s not!).

The job losses are not just a corporate restructuring. They are an early tremor of something seismic.

In February 2026, AI entrepreneur Matt Shumer put this view vividly – comparing the current moment to the strange, quiet weeks before COVID-19 broke into global consciousness. Most people, he argued, haven’t yet realised we are facing an “intelligence explosion”.

The essay drew significant criticism. Commentators noted it contained little hard data and read at times like a pitch for Shumer’s company’s own AI products.

But it captured a genuine anxiety. Something real is happening in software engineering, at least, where tasks are well-defined and success is easy to verify.

But the leap to “all white-collar work will be automated” is a big one. The view that AI is a kind of universal mind that learns and improves itself is far-fetched.

And most professional work is far messier than coding: ambiguous briefs, competing stakeholder interests, outputs that are hard to verify, and shifting success criteria. Coding may be a canary in the coal mine, but coal mines and boardrooms are very different places.

Are tech companies winding back hiring sprees?

The second view sees the conversation around AI as mostly hype. AI is being invoked as cover. Companies that hired aggressively during the pandemic boom, and now face financial pressure, are blaming AI as the more palatable explanation.

OpenAI CEO Sam Altman called this dynamic “AI washing”: companies blaming AI for layoffs they would have made regardless.

For example, Meta announced in March it would shut down its metaverse platform Horizon Worlds by June. Reality Labs, the division developing the technology, employed 15,000 people as of January 2026.

We don’t know in detail the make-up of the present job cuts, so Meta may just be repackaging earlier failures as AI-driven productivity gains.

Another cynical reading suggests that laying off workers in the name of AI is a way to drive up stock prices. When Block invoked AI and cut nearly 4,000 roles, its stock jumped the following day.

Announce AI-driven layoffs and you may find investors reward you for being future-focused. It is a historically familiar trick: technology has repeatedly served as convenient cover for financial restructuring.

Are layoffs a way to make staff use AI?

The third view is more nuanced. It sees AI as a powerful tool, but one that companies will need to transform themselves to take advantage of.

This has implications for what jobs are needed and in what quantities. We think this view has the most merit.

On this reading, the tech leaders believe AI will change how software gets built. But they don’t know exactly how.

So they do what tech companies often do when faced with uncertainty: they create pressure. They cut headcount, expect those remaining to produce just as much as before, and force teams to find ways to meet those expectations using AI.

It’s not a bet that AI will do everything, but that the pressure will force humans to work out how to use AI to increase productivity.

This also lines up with industry experience. For example, Google chief executive Sundar Pichai claims a 10% increase in engineering speed from AI adoption across the company. This could tally with cuts of around 7-10% of total workforce at most of the companies mentioned.
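
A quick back-of-envelope calculation shows why those numbers tally, using only the figure quoted above:

```python
# Back-of-envelope check of the claim above: if AI makes engineers 10%
# faster, what fraction of headcount produces the same output?
speedup = 1.10                   # Pichai's claimed 10% increase in speed
workforce_needed = 1 / speedup   # fraction of staff for unchanged output
implied_cut = 1 - workforce_needed

print(f"workforce needed: {workforce_needed:.1%}")  # 90.9%
print(f"implied cut:      {implied_cut:.1%}")       # 9.1%, within the 7-10% range
```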

What this means for knowledge workers

These three views are often presented as mutually exclusive. In practice, all three expectations exist simultaneously. The honest answer to “what is really happening here” is probably “a bit of everything”.

What is true is that software development tends to be an early indicator of broader shifts in knowledge work. Productivity benefits from AI are real for those who adopt it. Yet adoption is unevenly distributed, and lags in less technical industries.

In this context, the ability to understand AI and make good decisions about how and where to use it is becoming a baseline professional skill.

The workers most at risk are not necessarily those whose tasks can be replicated by AI. They are those who wait for pressure to arrive from outside rather than getting ahead of it now.

We will have answers to the question of whether AI is mostly hype or a useful tool in the next few years.

If Meta, Microsoft, and their peers rehire staff with different skills, redesign workflows, and emerge genuinely more capable, the case for useful AI looks good. If they simply pocket the payroll savings, the cynics were right.

If you want to know where tech companies are going, don’t look at what they cut – watch what they hire.

Kai Riemer, Professor of Information Technology and Organisation, University of Sydney and Sandra Peter, Director of Sydney Executive Plus, Business School, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Researchers: Chatbots are biased and should not be used for political advice
by External Contributor via Digital Information World

Friday, April 24, 2026

Researchers: Chatbots are biased and should not be used for political advice

Popular chatbots such as ChatGPT and Gemini are not neutral and tend to favor certain political parties when asked who users should vote for. This makes them unsuitable for providing advice in connection with elections, according to researchers from the University of Copenhagen behind a new analysis of political bias in chatbots.

Image: Salvador Rios / unsplash

Danes are increasingly turning to artificial intelligence for advice on everyday challenges and problems, and this of course also includes political questions – especially during an election.

However, a new research brief by researchers from the University of Copenhagen affiliated with CAISA – the National Centre for Artificial Intelligence in Society – shows that chatbots are not as neutral as many of us might believe.

“Our study shows that all of the most popular chatbots tend to favor certain parties when they are asked who one should vote for. At the same time, they exhibit a general political bias,” says Stephanie Brandl, lead author of the study and Tenure Track Assistant Professor at the University of Copenhagen. She adds:

“This obviously makes them problematic to use for political advice in connection with an election such as the one we have just been through in Denmark.”

Centrist or Left of Centre

Stephanie Brandl and her colleagues tested the political bias of several of the most widely used language models, including the models behind ChatGPT and Google’s Gemini. Using Altinget’s candidate test from the 2022 Danish general election, they examined where the models place themselves politically.

“Overall, all of the tested chatbots place themselves at the centre or to the left of centre on the political spectrum. In a Danish context, they cluster close to parties such as the Social Democratic Party and The Alternative. This is also confirmed by research carried out by some of our colleagues in Germany, Norway, and the Netherlands,” says Stephanie Brandl.

Recommending some parties far more often than others

In another experiment, the researchers asked a number of chatbots to recommend parties to fictitious voters constructed using the political candidates’ responses from the candidate test. Here too, the recommendations proved to be far from evenly distributed.

In particular, the Red–Green Alliance, the Moderates, and Liberal Alliance were recommended disproportionately often, while parties such as the Conservative People’s Party, Venstre (the Liberal Party of Denmark), and the Denmark Democrats were not suggested as first choice at all by some models.

“It’s not that a chatbot openly says, ‘vote for this party.’ But political biases can manifest themselves in more subtle ways, for example in which arguments are emphasized, or which parties are recommended more frequently,” explains Stephanie Brandl.

Lack of transparency is a democratic problem

According to the researchers, it is not possible to see why a chatbot recommends a particular party, or which assumptions and data its answers are based on.

At the same time, most of the chatbots are trained primarily on English-language sources, typically American ones, which means that we don't actually know how knowledgeable they are about Danish politics. This increases the risk of errors.

“Taken together, this means that we have no way of verifying the answers produced by language models, because their underlying information is hidden behind a digital wall. This makes it nearly impossible to critically assess the information one is presented with – which is otherwise a core function in a democratic society,” says Stephanie Brandl, who concludes:

“We hope that over time it will be possible to develop more reliable and secure alternatives to the chatbots we have today. But until that happens, we encourage people to use large language models critically and with caution.”

Read more about the study in CAISA’s research brief, “Who would ChatGPT vote for and why should we care?”

About the Study

The analysis was conducted at the National Centre for AI in Society (CAISA), led by Tenure Track Assistant Professor Stephanie Brandl from the University of Copenhagen, in collaboration with Mathias Wessel Tromborg (Aarhus University) and Frederik Hjorth (University of Copenhagen).

Data were collected in February and March 2026, and the researchers tested several leading chatbots, including the models behind ChatGPT and Gemini as well as Llama, Mistral, Gemma, and Qwen.

The researchers did not provide the models with any special background information in advance but tested them based on the data the models were already trained on. The language models were asked to take positions on political statements from Danish candidate tests from 2022 and 2026.

The statements were mapped along two political dimensions: economic left/right and libertarian/authoritarian – that is, positions on both economic policy and values related to freedom and authority.
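
As a rough sketch of how answers can be mapped onto those two dimensions, consider the toy scoring below. The statements, direction weights and coded answers are invented placeholders, not the researchers’ actual data or method.

```python
# Toy political-compass scoring: each statement pushes one axis in one
# direction; agreement is coded -2 (strongly disagree) to +2 (strongly agree).
# Everything here is a made-up placeholder for illustration.
statements = [
    {"axis": "economic", "direction": +1},  # e.g. a tax-cut statement
    {"axis": "economic", "direction": -1},  # e.g. a redistribution statement
    {"axis": "values",   "direction": +1},  # e.g. a law-and-order statement
    {"axis": "values",   "direction": -1},  # e.g. a civil-liberties statement
]
model_answers = [+1, +2, -1, +2]  # hypothetical coded responses from a chatbot

position = {"economic": 0.0, "values": 0.0}
for statement, answer in zip(statements, model_answers):
    position[statement["axis"]] += statement["direction"] * answer

# Normalize each axis to [-1, 1]: 2 statements per axis, max |score| of 4.
print({axis: score / 4 for axis, score in position.items()})
# -> {'economic': -0.25, 'values': -0.75}: left of centre, libertarian-leaning
```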

This post was originally published on University of Copenhagen and republished here with permission.

Reviewed by Irfan Ahmad.

by External Contributor via Digital Information World

What we lose when artificial intelligence does our shopping

Mark Bartholomew, University at Buffalo and Samuel Becher, Te Herenga Waka — Victoria University of Wellington

Americans spend a remarkable amount of time shopping – more than on education, volunteering or even talking on the phone. But the way they shop is shifting dramatically, as major platforms and retailers are racing to automate commercial decision-making.

Artificial intelligence agents can already search for products, recommend options and even complete purchases on a consumer’s behalf. Yet many shoppers remain uneasy about handing over control. Although many consumers report using some AI assistance, most currently say they wouldn’t want an AI agent to autonomously complete a shopping transaction, according to a recent survey from the consultancy firm Bain & Company.

As scholars studying the intersection of law and technology, we have watched AI-assisted commerce expand rapidly. Our research finds that without updated legal measures, this shift toward automated commerce could quietly erode the economic, psychological and social benefits that people receive from shopping on their own terms.

Caveat emptor

Part of shoppers’ hesitation is about privacy. Many are unwilling to share sensitive personal or financial information with AI platforms. But more profoundly, people want to feel in control of their shopping choices. When users can’t understand the reasoning behind AI-driven product recommendations, their trust and satisfaction decline.

Shoppers are also reluctant to give away their autonomy. In one study involving people booking travel plans, participants deliberately chose trip options that were misaligned with their stated preferences once they were told their choices could be predicted – a way of reasserting independence.

Other experiments confirm that the more customers perceive their shopping choices being taken away from them, the more reluctant they are to accept AI purchasing assistance.

Although the technology is expected to get better, there have been some well-publicized missteps reported in financial and tech media. The Wall Street Journal wrote about an AI-powered vending machine that lost money and stocked itself with a live fish. The tech publication Wired cataloged design flaws, like an AI agent taking a full 45 seconds to add eggs to a customer’s shopping cart.

The business case for AI shopping

Consumers have good reason to be cautious. AI agents aren’t just designed to assist; they’re designed to influence. Research shows that these systems can shape preferences, steer choices, increase spending and even reduce the likelihood that consumers return products.

And companies are hyping these capabilities. The business platform Salesforce promotes AI agents that can “effortlessly upsell,” while payments giant Mastercard reports that its AI assistant, Shopping Muse, generates 15% to 20% higher conversion rates than traditional search – that is, pushing shoppers from browsing to completing a purchase.

To retailers, AI tools are one way to convert searches into actual purchases. Image: Rupixen on Unsplash, CC BY

For companies, the appeal is obvious. From Amazon’s Rufus app and Walmart’s customer support to AI-enabled grocery carts, companies are rapidly integrating these tools into the shopping experience.

Assistants with names like Sparky and Ralph are being promoted as the future of retail, while technologists are calling on companies to prepare their brands for the era of agentic AI shopping.

The real concern is not that these systems might fail, but that they may succeed all too well.

The human side to shopping

AI shopping agents do offer considerable benefits.

For example, they can scan numerous products in seconds, compare prices across sellers, track discounts over time, sift through thousands of product reviews, and tailor recommendations to the user’s preferences and needs. They can even read through terms of service and privacy policies, helping consumers detect unfavorable fine print.
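
A minimal sketch of the price-comparison step described above, run over an invented catalog rather than any real retailer API:

```python
# Toy illustration of agent-style price comparison over a mock catalog;
# real agents would call retailer APIs or browse live listings.
from dataclasses import dataclass

@dataclass
class Listing:
    seller: str
    product: str
    price: float

catalog = [
    Listing("SellerA", "espresso machine", 229.00),
    Listing("SellerB", "espresso machine", 199.99),
    Listing("SellerC", "espresso machine", 214.50),
]

# Pick the cheapest listing in one pass.
best = min(catalog, key=lambda item: item.price)
print(f"cheapest: {best.seller} at ${best.price:.2f}")
```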

But there’s more at stake than these considerations.

While consumers have reason to focus on privacy and control, AI shopping agents carry some overlooked emotional risks, such as squashing the joy of anticipation. Psychologists have shown that the period between choosing a purchase and receiving it generates substantial happiness – sometimes more than the product or experience itself. We daydream about the vacation we booked, the outfit we ordered, the meal we planned. Automated buying threatens to drain this anticipatory pleasure.

This anticipation connects to another value: a sense of personal and ethical authorship. Even mundane shopping decisions allow people to exercise choice and express judgment. Many consumers deliberately buy fair-trade coffee, cruelty-free cosmetics or environmentally responsible products. The brands and products we choose, from Patagonia and Harley-Davidson to a Taylor Swift tour shirt, help shape who we are.

Shopping, moreover, has a communal dimension. We browse stores with friends, chat with salespeople and shop for the people we love. These everyday interactions contribute considerably to our well-being.

The same is true of gift-giving. Choosing a gift involves anticipating another person’s preferences, investing effort in the search and recognizing that the gesture matters as much as the object itself. When this process is outsourced to an autonomous system, the gift risks becoming a delivery rather than a meaningful gesture of attention and care.

Keeping human agency alive

AI shopping agents are likely to become part of everyday life, and the regulatory conversation is beginning to catch up, albeit unevenly.

Transparency has emerged as a central concern. Past experience with recommendation engines shows that undisclosed conflicts of interest are a real risk. The European Union has proposed a disclosure framework around automated decision-making, although its implementation was recently delayed. In Congress, U.S. lawmakers are considering bills to require companies to reveal how their AI models were trained.

So far, consumers seem to want to choose their own level of engagement – a signal that shopping, for many people, is more than just the efficient satisfaction of preferences. Perhaps the least-settled, yet most crucial question is whether AI shopping tools will be designed and regulated to serve users’ interests and human flourishing – or optimized, as so many digital tools before them, primarily for corporate profit.

Mark Bartholomew, Professor of Law, University at Buffalo and Samuel Becher, Professor of Law, Te Herenga Waka — Victoria University of Wellington

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

In the Age of AI: What Makes Art Meaningful?

The 35 Logo Redesigns That Boosted Web Traffic


by External Contributor via Digital Information World