Wednesday, April 22, 2026

Single-minded pursuit of profit can get firms in trouble. Same thing with AI

By Sy Boles - Harvard Gazette

Researchers see lesson for lawmakers, executives as systems asked to run business, maximize gain resort to unethical, fraudulent tactics.

Image: Freepik / AI-Gen

If you give artificial intelligence a goal of maximizing profit, how far will it go?

AI agents appear capable of lying, concealing, and colluding, according to new research from Harvard Business School.

Researchers found that AI agents — software trained to perform tasks independently — engaged in a “broad pattern” of misconduct after being asked to manage a simulated vending machine business and maximize profits for a year. The agents were neither instructed to cut legal or ethical corners nor prohibited from doing so.

“What’s unambiguous looking at the models is that the misconduct we observed — from not paying a customer refund to deciding to collude on prices — was not an accident. It was deliberately done by agents to maximize profitability,” said Eugene F. Soltes, the McLean Family Professor of Business Administration at HBS and first author of the working paper.

Soltes and co-author Harper Jung, a doctoral student studying accounting and management at HBS, hope their research will serve as a starting point for more conversation about AI safety in the context of business management control.

The research for the paper, which is currently out for peer review ahead of publication, was done in collaboration with Andon Labs, an AI safety company focused on testing AI models in realistic business operations.

In experiments, 20 commercially available AI models from major firms, including Anthropic’s Claude Opus 4.6, DeepSeek v3.2, and OpenAI’s GPT-5.1, independently operated a vending machine over the course of a simulated year.

Tasks included searching for suppliers, buying products, and engaging with customers.

In some experiments, agents operated solo; in others, four agents operated simultaneously in a shared market, where they could communicate with rivals via email.

Agents started with $500 and a small inventory of chips and sodas.

“They had to figure it out themselves,” said Jung. “Each agent had to independently search online for suppliers, negotiate wholesale prices, set its own retail pricing, and handle customer complaints.”

Jung and Soltes said the agents demonstrated impressive business savvy.

“The best models had the capacity to negotiate and calculate valuations like a top-notch M.B.A. student,” Soltes said.

“When we went through the deliberations and the exchanges the agents made with each other, we were just in shock,” said Jung. “I was amazed at how far these machines can go.”

The agents’ misconduct ranged from the questionable to the comical to the potentially criminal and included denying refunds by claiming defects were normal product variation; inventing nonexistent corporate policies to avoid processing returns; and colluding with competitors to fix prices.

In one instance, agents formed what researchers described as a “three-person cartel,” which the agents named the Bay Street Triumvirate. The alliance fractured, though, when one agent discovered another was undercutting cartel prices, which it called a “declaration of war.”

The simulations also imposed constraints: Agents were charged a $2-per-day operating fee plus a token usage fee — effectively turning time spent “thinking” into an operating expense.
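To see why deliberation itself became a cost center, here is a minimal sketch of that fee structure; the $500 starting balance and $2 daily fee are from the study as described, while the per-token price and token counts are illustrative assumptions.

```python
# Toy model of the simulation's incentives: a fixed daily fee plus a
# per-token charge that prices the agent's "thinking".
DAILY_FEE = 2.00        # operating fee per simulated day (from the study)
TOKEN_PRICE = 0.00001   # assumed price per reasoning token (illustrative)

def daily_operating_cost(tokens_used: int) -> float:
    """Total cost of one simulated day for a given amount of reasoning."""
    return DAILY_FEE + tokens_used * TOKEN_PRICE

starting_cash = 500.00  # each agent's initial balance

# An agent that deliberates at length (say 200,000 tokens a day) pays
# nearly twice what one that dismisses requests outright (20,000) does.
print(daily_operating_cost(200_000))  # 4.0
print(daily_operating_cost(20_000))   # 2.2
```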

In response, the agents sought to economize. For instance, Soltes said, internal reasoning logs showed agents shifting from carefully weighing refund decisions to dismissing most requests outright, often without review.

“The agents come to the realization that ‘thinking’ about giving a refund is itself a cognitive burden, and so they just ignore it altogether in some circumstances,” Soltes explained. “People might assume that machines are deliberative, while humans rely on shortcuts and are vulnerable to bias. But it turns out that, under similar constraints, agents reproduce the same myopic and biased behaviors we associate with people.”

The research raises questions about accountability for AI developers and regulators.

The reasoning logs, Soltes said, can sometimes be read as resembling mens rea — the “guilty mind” concept in criminal law used to establish intent. Yet when an AI agent behaves improperly, responsibility is far harder to determine.

“Does it rest with the company that deployed the system, the AI firm that created the model, or the manager who chose to use it?” he asked.

“The most straightforward answer may be to hold the individual managers overseeing the software responsible for its actions, on the assumption that they will monitor and supervise its behavior,” he said. “But that solution also creates a different issue, since many of the promised efficiencies of autonomous AI systems begin to disappear if a human must remain in the loop at every decision point.” A thorny problem, but one that business leaders and lawmakers must confront, ideally sooner rather than later, researchers say.

This post was originally published on The Harvard Gazette and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: US government ramps up mass surveillance with help of AI tech, data brokers – and your apps and devices


by External Contributor via Digital Information World

OpenAI gets set to go public: Can we entrust the financial markets with ChatGPT and AI?

Frédéric Fréry, ESCP Business School

OpenAI, the creator of ChatGPT, is gearing up to launch its initial public offering (IPO) this year. This financial manoeuvre would mark a pivotal shift, steering a project originally designed for the “common good” towards a market-driven logic. Established in 2015, OpenAI started out amidst growing anxiety regarding artificial intelligence (AI). Founded by Sam Altman and Elon Musk, the tech company adopted a non-profit structure and made no secret of its goal to develop AI that is “beneficial to humanity” and prevent it from remaining in the hands of a few dominant players.

This ambition distinguished it from tech giants like Google, Microsoft, Meta, and Amazon, which were built on proprietary models and rent-seeking effects.

In contrast, OpenAI intended to champion general public interest by emphasising open research and sharing knowledge. However, this orientation – symbolised by its name – quickly collided with a structural constraint: the astronomical cost of generative AI.

Image: Dima Solomin - Unsplash

Massive costs

Unlike traditional software, where marginal costs tend towards zero (for example, the millionth copy of Windows costs Microsoft nothing), generative AI requires massive infrastructure.

Every interaction mobilises computing resources, energy, and specialised equipment. A standard ChatGPT query, consisting of one question and one answer, costs between $0.01 and $0.10. Similarly, generating a high-definition image can cost between $0.10 and $0.20. While these amounts seem negligible in isolation, they become staggering when scaled to the billions of daily queries seen in 2026.
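A rough scaling shows why; the per-query range is the one cited above, and the daily volume is an assumed round figure standing in for “billions of daily queries”.

```python
# Back-of-the-envelope scaling of per-query inference costs.
cost_low, cost_high = 0.01, 0.10   # dollars per text query (cited range)
queries_per_day = 2_000_000_000    # assumed daily volume

daily_low = cost_low * queries_per_day    # $20 million per day
daily_high = cost_high * queries_per_day  # $200 million per day
print(f"${daily_low / 1e6:.0f}M to ${daily_high / 1e6:.0f}M per day, "
      f"${daily_low * 365 / 1e9:.0f}B to ${daily_high * 365 / 1e9:.0f}B per year")
```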

This is explained by the underlying infrastructure, particularly the Graphics Processing Units (GPUs) supplied by players like Nvidia. These chips can cost tens of thousands of dollars to purchase and several dollars per hour via cloud access.

OpenAI, like its competitors, depends on tens of thousands of these GPUs running continuously in massive data centers. According to some estimates, the necessary investments will reach hundreds of billions of dollars by the end of this decade.

As early as the late 2010s, it became clear that a purely non-profit model could not meet such capital intensity. This is why OpenAI adopted a hybrid status in 2019, allowing it to raise funds while maintaining control through a foundation. It was a first foray into the market economy, albeit one tempered by the ambition to resist investor demands.

A sudden acceleration with ChatGPT

However, at the end of 2022, the chatbot ChatGPT radically changed the game, attracting 100 million users in just two months, before surpassing 900 million weekly users by early 2026.

OpenAI’s revenue surged from approximately $200 million (€173.15 million) in 2022 to over $10 billion (€8.65 billion) in 2025 – more than a fiftyfold increase in three years.

This exponential growth was accompanied by the implementation of a business model with multiple revenue streams. For individuals, OpenAI offers paid subscriptions (ranging from $20 to $200 per month). However, the bulk of the revenue comes from enterprises, via subscriptions priced between $25 and $60 per user per month. A company with 10,000 employees thus represents several million dollars in annual revenue.
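The arithmetic behind that claim is straightforward; the seat prices are the article's quoted range and the headcount is its own example.

```python
# Annual revenue from a single enterprise customer at the quoted prices.
seats = 10_000
for monthly_price in (25, 60):         # dollars per user per month
    print(seats * monthly_price * 12)  # 3,000,000 and 7,200,000 per year
```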

Corporate money

OpenAI additionally bills for the use of its models by companies that integrate them directly into their own solutions. Every use is metered, often on a massive scale. An application processing a million queries a day can generate tens of thousands of dollars in monthly billing.

Finally, a growing portion of revenue comes from strategic agreements, notably with Microsoft, which integrates OpenAI technologies into its products under the Copilot brand.

It is the sum of these flows – subscriptions, licences, third-party usage, and partnerships – that allowed OpenAI to reach approximately $1 billion in monthly revenue in 2025. Yet, this commercial rise masks an intrinsic economic fragility.

A gigantic cash-burning machine

Despite sharply rising revenues, OpenAI remains structurally loss-making. In the first half of 2025, the company reportedly generated approximately $4.3 billion in revenue while recording losses between $7 billion and $13 billion – more than $2 billion in losses every month. In total, cumulative losses could exceed $140 billion (€121.19 billion) between 2024 and 2029.

This drift stems from the very nature of OpenAI’s business model, where every interaction incurs a cost, on top of gargantuan investment requirements. Beyond infrastructure, Research and Development (R&D) is a major expense. To stay in the technological race against an increasingly competitive environment, OpenAI reportedly invested nearly $16 billion in R&D in 2025 alone.

To this is added the cost of human resources, which is sometimes extraordinary. While base salaries for the most in-demand AI experts range from $250,000 to $700,000 per year, their total compensation – including stock and bonuses – frequently exceeds $1 million. In some cases, annual compensation even exceeds $10 million. Here again, bidding wars from competitors like Meta force OpenAI to match these offers for fear of seeing its key talent vanish.

Nearing bankruptcy?

In short, OpenAI’s business is not enough to cover its costs, to the point that some analysts suggest that at this rate, it could be forced to file for bankruptcy as early as 2027. Recourse to external financing is therefore indispensable to cover these losses.

To sustain its growth, OpenAI has already raised approximately $58 billion since its inception, including more than $13 billion from Microsoft. In 2025, an exceptional funding round reportedly raised up to $40 billion more, pushing its valuation to several hundred billion dollars.

At the end of March 2026, a new $122 billion funding round – notably involving Amazon ($50 billion), Nvidia, and SoftBank ($30 billion each) – brought the valuation to $852 billion (€737.6 billion). Yet, these amounts remain insufficient given the requirements.

Industrial dependency

Dependency on industrial partners appears particularly problematic. Microsoft provides OpenAI with its cloud infrastructure via Azure, while Nvidia plays a key role upstream by providing GPUs. Much like the Gold Rush era, when shovel sellers grew rich at the expense of prospectors, it is the infrastructure providers in the AI sector making a fortune, not the model designers.

In practice, every AI query generates revenue for infrastructure providers, amounting to a form of “invisible tax” captured upstream.

In 2025, Nvidia generated nearly $73 billion in net profit on approximately $130 billion in revenue, and its stock market valuation is 1.5 times that of the entire Paris stock exchange!

Governance missteps

OpenAI’s economic tensions have spilled over into its corporate governance. The hybridisation of a public interest mission with private financing mechanisms resulted in a complex structure. A non-profit foundation controls a for-profit “public benefit corporation”, which is funded by investors and tasked with raising capital and developing activities – all while theoretically remaining subordinate to the foundation’s public interest mission. This construction, designed to avoid purely financial logic, quickly fuelled tensions between different stakeholders.

Elon Musk’s departure in 2018 was the first signal of a strategic disagreement. In 2020, several researchers left OpenAI to found Anthropic, citing differences over safety and governance. However, it was primarily the crisis of November 2023 that fully revealed the system’s fragilities, when the board of directors suddenly announced the firing of Sam Altman, citing a lack of transparency in his communications.

Within hours, the situation spiralled into an open crisis. Nearly all employees threatened to leave the company if Altman was not reinstated. Microsoft, the main partner and investor, publicly supported Altman and even discussed the possibility of hiring him and his teams. Faced with this pressure, the board was forced to reverse its decision within days. Sam Altman was reinstated, and the board’s composition was profoundly overhauled. This episode highlighted internal tensions, specifically the difficulty of making divergent logics coexist within the same company: ethical posturing, industrial imperatives, and investor demands.

Intensifying competition

In addition to these internal constraints, competitive intensity is particularly fierce.

Google, the inventor of generative AI, is making rapid progress with Gemini. Anthropic, with Claude, has established itself in certain segments, particularly programming, while emphasising safety.

China’s DeepSeek has claimed to use less expensive processors. France’s Mistral AI advocates for a frugal approach and European digital sovereignty. In a sign of this shifting landscape, Apple – which initially partnered with OpenAI to include ChatGPT for certain Siri features – has chosen to replace it with Gemini.

In this context of ecosystem reorganisation, OpenAI’s position, while still central, is being challenged. Intensifying competition reinforces the need for ever-greater financial resources.

The stock market: lifeline or mirage?

OpenAI’s Initial Public Offering (IPO) is presented as a response to these constraints: a way to fund massive investments and consolidate a weakened competitive position. An IPO could raise between $50 billion and $100 billion by selling 10% to 20% of the capital. Such an operation would constitute one of the largest in the history of financial markets.

However, this transformation involves delicate trade-offs. A listed company is subject to profitability and transparency requirements that may clash with the experimental nature of artificial intelligence. Added to this is the persistent dependence on Microsoft and Nvidia, which limits the company’s strategic autonomy.

Most importantly, there is no indication that an IPO would suffice to resolve OpenAI’s structural problems. At best, without a significant shift in the business model, it would only delay its bankruptcy by a few years. The economic model of generative AI remains fundamentally unstable today.

A question beyond OpenAI

Beyond the case of OpenAI, one can legitimately question the current functioning of an economy dominated by tech giants.

Artificial intelligence is establishing itself as an essential infrastructure whose effects far exceed the economic sphere. For some analysts, control over AI now carries the same geostrategic importance as the possession of nuclear weapons.

Consequently, a civilisational question arises: can we entrust the development and direction of such a technology solely to financial markets? Can we imagine Elon Musk or Mark Zuckerberg personally owning the equivalent of one or more atomic bombs? OpenAI’s IPO will not provide the answer alone. However, it will constitute one of the first large-scale tests.

Frédéric Fréry, Professor of Strategy, CentraleSupélec, ESCP Business School

This article is republished from The Conversation under a Creative Commons license. Read the original article. This article was originally published in French.

Edited by Asim BN.

Read next:

• Slanguage: Why AI’s stylistic negation — ‘it’s not X, it’s Y’ — is both annoying and doesn’t work

• Duck, monkey, strudel: What the @ sign is called around the world (25 examples)


by External Contributor via Digital Information World

Tuesday, April 21, 2026

AI propaganda memes are the unexpected frontline of Trump's war with Iran

By University of Melbourne, via Pursuit

While Donald Trump continues to wage war abroad, a new front has opened up. One fought not with missiles, but with AI-generated images deliberately deployed as weapons of propaganda.

We’ve seen this in the aftermath of the devastating bombing of an Iranian school by US forces. Following the attack, the Iranian Embassy in South Africa tweeted out a dramatic AI-generated video depicting the children and the pilots involved in the attack.

It’s a dramatic change in how public diplomacy works in the Trump era.

Image: Javad Esmaeili / Unsplash

Manufacturing emotion

Historically, President Trump has not been subtle about his willingness to deploy AI for emotional effect.

He started experimenting with AI-generated images in the 2024 campaign, using them to dramatize his racist claims about Haitian migrants. He trafficked in posts showing him lifting children out of floodwaters in Florida.

Perhaps the most famous moment came at the peak of the devastation in Gaza, when Trump’s White House released a video showing a reimagined territory – now a Trump-branded resort.

It came with the obligatory gold lettering of his name, and a gold statue for good measure, as if the Midas reference could not get more on the nose.

For the first year or so, AI was a tool that had no rival.

Liberals were too scared to use it because the political ecosystem in Silicon Valley had begun to feel so antithetical to the modern project on the left. Even conservatives and populists outside of the United States were not yet confident in how best to use it.

The concern for anyone looking to deploy such images is that they will be accused of trying to manipulate reality.

Deepfakes are effective, but once the gimmick has been discovered and people connect the dots back to whoever published the video, credibility is gone.

But that’s not how Donald Trump uses these AI-generated videos.

Constructing reality

When Trump posts, his followers are not expecting to see literal reality. The effect is a bit more impressionistic.

Trump is posting these to generate emotion.

His followers are not seeing actual truth, but a version of reality that they want to believe is true.

The illusion is powerful. Most people are very willing to dismiss what they see in front of their eyes, but convincing them that what they want to be true is actually a lie is nigh on impossible.

Trump’s audience is predisposed to believe in the reality of a Gaza remade as a Trump resort, in which the United States can be the saviour, the creator of long-needed peace in the region.

On the world stage, no propaganda apparatus could come close to the emotional power that these posts generate.

Russian propaganda focuses too much on destabilisation rather than landing any one point of view. The Chinese are over-invested in TikTok algorithms, driving a Sino-futurism that erases the state’s authoritarian grime.

Then came Trump’s attacks on Iran.

The post-truth war

The Iranian propaganda machine is not particularly subtle, but it understands the native language of the internet.

It became well practiced over the years of the Israeli war in Gaza, fanning Western protest movements related to that conflict.

The goal wasn’t total revolution, but to nudge these young protestors toward the shared aim of reducing Western ties to Israel.

When it came time for Iran to begin their online battle with Trump, they were prepared.

AI-generated videos began popping up, mostly from accounts run by Iranian embassies in developing countries, which quickly found their way into the centre of global discourse.

One post on 15 April by the Iranian Embassy in Tajikistan is a remix of Trump’s now-famous AI-generated image of himself as Jesus.

This new Iranian version shows a biblical Jesus punching Trump to the fiery pits of hell for his blasphemy. Within 24 hours, it had amassed more than 17 million views.

Then there is Explosive Media, an account that reimagines Trump and his inner circle as Lego figurines committing a myriad of war crimes, often set to scathing but catchy rap tracks.

It’s being called ‘slopaganda’ – AI-generated slop weaponised for political ends.

Some feature a blocky, orange-faced Trump cast as ageing and isolated, his MAGA base squabbling around him. They are absurd, darkly funny and engineered to travel.

It is perhaps the most powerful form of propaganda.

It does not seek to convince anyone of something they do not already believe, but it gives them a new ally in their fight.

The truth was never the point

Western liberals typically have no common cause with the Islamic Republic, but they now find themselves as strange bedfellows against a common enemy.

We now live in a world in which the most powerful political communication operates entirely outside the question of truth.

The concern that many of us had at the advent of AI was that deepfakes would be so quickly deployed that they would render us unable to tell the difference between fiction and reality.

Instead, this war is giving rise to something far more important. It turns out that we never cared about the truth to begin with.

Nobody watching or creating any of these AI memes cares whether it’s real, only that it affirms how they already feel about a conflict that is costing more lives by the moment.

We spent years worrying about whether AI could fool us. It turns out the harder question is whether we ever wanted to be told the truth at all.

Note: This article was first published 20 April 2026 on Pursuit. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• What the @ Sign Is Called Around the World: 25 Examples

• New Research Finds Workers Are Leveraging AI for Career Mobility as Employers Struggle to Keep Pace


by External Contributor via Digital Information World

What the @ Sign Is Called Around the World: 25 Examples

You probably type the @ sign (AKA the “at symbol”) every day without thinking too much about it. It’s part of every email address and you see it constantly on social media, where it’s used to mention or tag users. So recognizable is its curled form that it has even inspired modern platform logos, such as those used by Threads, though these uses are purely stylistic.


Depending on where you are, the @ symbol isn’t just a technical character. It’s become a part of culture and has earned a variety of endearing names around the world.

In many languages, the names are visual. The curled shape of the symbol invites comparison: animals, food, or familiar objects. In Italy, it’s called “chiocciola,” which means “snail.” In Finnish, it’s seen as “kissanhäntä,” meaning “cat’s tail.” The Russians envision it as a “little dog,” while the Czechs call it “pickled herring.”

In other cases, the name stays closer to its function. Some languages – like Hindi and Arabic – use a direct version of “at,” either translated or adapted phonetically.

In South Asian English usage, for example in India and Pakistan, it is also read as ‘at the rate of,’ particularly among millennials like me and older generations, reflecting earlier commercial usage, though ‘at’ remains the standard modern reading.

There’s no single pattern, and that’s part of what makes it interesting. The symbol itself is fixed, but the meaning people attach to it is flexible.

The 25 names for the @ sign around the world

Regardless of language and culture, the @ sign is one of the most recognizable symbols around the world. Here’s how different people interpret it:

  1. English – “at”
  2. Spanish – “arroba”
  3. Portuguese – “arroba”
  4. French – “arobase”
  5. German – “Klammeraffe” (“spider monkey”)
  6. Italian – “chiocciola” (“snail”)
  7. Chinese – “小老鼠” (“little mouse”)
  8. Russian – “sobachka” (“little dog”)
  9. Polish – “małpa” (“monkey”)
  10. Swedish – “snabel-a” (“elephant trunk A”)
  11. Vietnamese – “bent A” / “hooked A”
  12. Romanian – “arond” or “coadă de maimuță” (“monkey tail”)
  13. Japanese – “アットマーク” (“at mark”)
  14. Korean – “골뱅이” (“sea snail”)
  15. Turkish – “et işareti” (“at sign”)
  16. Greek – “παπάκι” (“little duck”)
  17. Dutch – “apenstaartje” (“little monkey tail”)
  18. Hebrew – “שטרודל” (“strudel”)
  19. Hindi – “एट” (“at”)
  20. Arabic – “آتْ” (“at”)
  21. Finnish – “kissanhäntä” (“cat’s tail”)
  22. Hungarian – “kukac” (“worm”)
  23. Welsh – “malwoden” (“snail”)
  24. Czech – “zavináč” or “rollmop” (“pickled herring”)
  25. Estonian – “ät” or “kringel” (“pretzel”)

    Where does the @ sign come from?

    The @ sign now seems inseparable from email, but its story started long before inboxes – or the internet itself – existed. Historians have traced its origins back centuries, when merchants used it as a shorthand in trade. In Spain and Portugal, the word “arroba” referred to a unit of weight, and that meaning still lingers today.

    It wasn’t until 1971 that the @ sign took on its modern role. That year, engineer Ray Tomlinson sent the first email between two computers connected to ARPANET, the early version of the internet. To separate the user name from the host, he chose the @ symbol, which was rarely used at the time.

    Tomlinson later described it as a practical choice because the character indicated that a user was located at a specific host computer. That choice became one of the most widely used conventions in modern communication.
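    That separation is still how addresses are parsed today; a two-line illustration (the address itself is a made-up example):

```python
# Everything before the final "@" names the user; everything after names
# the host – exactly the role Tomlinson gave the symbol in 1971.
user, _, host = "grace@example.com".rpartition("@")
print(user, host)  # grace example.com
```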

    The invention of modern email? Simply “a neat idea”

    Years later, talking about the invention of email, Ray Tomlinson described it simply as “a neat idea.” It would have been uncharacteristic for the engineer to emphasize just how much his idea changed the world. His daughter Suzanne described him as a humble person, despite his achievements: “He had a unique sense of humor and incredible intellect. Although he received an enormous amount of recognition for the creation of email, he always remained very modest.”

    Email turns 55

    This year marks 55 years since Ray Tomlinson sent that first email. He said the content was "entirely forgettable," and couldn't remember the exact date he sent that email. However, there's one date we can be certain of: Tomlinson's birthday, April 23.

    April 23 is now known as Email Day, a holiday initiated by email deliverability company ZeroBounce. It honors Ray Tomlinson and the lasting power of his invention.

    Note: Some names and translations of the “@” symbol are informal, culturally descriptive, or visual nicknames rather than standardized linguistic definitions. This article is based on material provided by ZeroBounce, with additional editorial review.

    Read next: 59% of U.S. Adults Use AI Before Doctor Visits, 14 Million Skip Care, Trust in Accuracy Remains Mixed


    by Irfan Ahmad via Digital Information World

    Saturday, April 18, 2026

    59% of U.S. Adults Use AI Before Doctor Visits, 14 Million Skip Care, Trust in Accuracy Remains Mixed

    By Stephen Raynes and Ellyn Maese | Gallup

    As artificial intelligence becomes increasingly embedded in daily life, the West Health-Gallup Center on Healthcare in America reports that 25% of Americans have used an AI tool or chatbot for health information or advice, mainly as a supplemental tool for their care. Over half of recent users say they have used AI because they prefer to research on their own before or after seeing a doctor.

    These findings are from a nationally representative survey of more than 5,500 U.S. adults conducted Oct. 27-Dec. 22, 2025, using the Gallup Panel.

    More Americans Use AI to Supplement Healthcare Visits Than to Replace Them

    About 70% of U.S. adults say they have used an AI tool or chatbot for any purpose, while one in four (25%) say they have used it to gather healthcare information or advice. This aligns with what other studies have found about AI use for health-related purposes.

    Those who report using AI for health information or advice in the past 30 days often use it to supplement traditional healthcare experiences, with 59% saying they use AI tools to research on their own before visiting a doctor and 56% using AI to research after visiting a doctor.

    A smaller but meaningful share of Americans use AI when faced with cost, access or quality barriers. For example, 14% of those who have recently used AI-generated health information say they used it because they were unable to pay for a doctor visit, 16% because they could not access a provider, and 21% because they felt dismissed or ignored by a provider in the past.


    Regardless of the reason, almost half of Americans who have used AI for healthcare information (46%) say the AI tool or chatbot made them feel more confident when talking with or asking questions of a provider. Others claim that it helped them identify issues earlier (22%) or avoid unnecessary medical tests or procedures (19%).

    The most frequently reported AI tool used for these purposes is general conversational AI systems such as ChatGPT or Copilot (61%), followed by AI tools embedded within web searches, such as Google AI summaries (55%).

    Self-Directed Research Drives AI Use for Health Info, but Motivations Vary

    While speed and information seeking are the dominant reasons recent users of AI-generated health information report turning to AI as part of their healthcare journey, reasons for AI use vary by age and income.

    Younger adults are more likely than older adults to report using AI for self-directed research. For example, 69% of recent users aged 18 to 29 say they use AI to research on their own before seeing a doctor, compared with 43% of those aged 65 and older. Although more common among younger adults, self-directed research is also prevalent among older adults, with more than four in 10 aged 65 and older using AI for this purpose.


    Income is most strongly linked to AI use when cost, access and quality barriers are involved. For example, among adults in households earning less than $24,000 annually, 32% say they have used AI because they could not pay for a doctor’s visit, compared with 2% among those earning $180,000 or more.

    Top Types of Health Information Americans Ask AI About

    When asked about the specific types of health information or advice they have asked AI for, Americans most often report using AI to answer everyday health questions. Among those who report having used AI for health information or advice in the past 30 days, over half (59%) say they have used an AI tool or chatbot for nutrition or exercise questions, and a similar share (58%) say they have used it for physical symptoms.

    Beyond gathering information on nutrition and health symptoms, AI has helped users make sense of clinical information and prepare for appointments with healthcare providers. For instance, 46% have used AI to understand medication side effects, 44% to interpret medical information, and 38% to research a diagnosis or medical condition.

    Some Americans Use AI Instead of Seeing a Healthcare Provider

    Although most Americans who report using AI-generated health information or advice say they use AI to gather information that supplements traditional care, some report forgoing healthcare visits because of AI-generated advice.

    Fourteen percent of recent users say the AI information or advice they received led them to skip a provider visit in the past 30 days. When projected to the entire adult population, this represents an estimated 14 million U.S. adults who did not see a provider because of the AI-generated health information or advice they received.

    Even as some Americans report not seeing a provider after receiving AI-generated health information, trust in that information remains mixed. Among those who report having used AI for health information or advice in the past 30 days, roughly one-third say they trust it (33%), one-third neither trust nor distrust it (33%), and one-third distrust it (34%). However, only 4% say they strongly trust the accuracy of AI-generated health information, suggesting that many Americans are making healthcare decisions based on it without full confidence in its accuracy.

    Concerns about safety also emerge among some users. About one in 10 who report using AI for health information or advice in the past 30 days (11%) say AI recommended healthcare information or advice that they believed was unsafe.

    Implications

    AI is part of how some patients navigate their healthcare experiences, serving as a routine step before or after an interaction with a provider. As more Americans use AI to research symptoms, diagnoses and medications in advance, healthcare visits may become more focused and informed, potentially improving care experiences. Using AI after healthcare visits to better understand treatment plans, risks and when to follow up with a provider may also shape how patients manage their care. In a system facing time constraints and workforce pressures, AI tools that help patients clarify questions and review medical information may play a productive role in shaping the care experience. For some Americans, AI is already serving that function.

    However, a small but notable share of Americans say they did not see a provider they otherwise would have seen after receiving AI-generated health information or advice. Whether AI tools can appropriately substitute for certain healthcare interactions, and under what circumstances, remains an important question as use of these tools continues to grow.

    As AI becomes more integrated into how patients seek and use health information, understanding when it may complement care and when it may serve as a substitute will require continued attention.

    The broader picture is one of a healthcare landscape in transition, with AI shaping how many Americans prepare for, engage with and reflect on their healthcare experiences. As Americans utilize AI-generated health information or advice, including in contexts where questions about accuracy and appropriate use may arise, healthcare systems will need to adapt to how these tools are being incorporated into the healthcare journey.

    Note: This research was conducted in partnership with West Health through the West Health-Gallup Center on Healthcare in America, a joint initiative to report the voices and experiences of Americans within the healthcare system. Explore more of the data and insights at westhealth.gallup.com.

    Survey Methods

    Results are based on a Gallup Panel™ study completed by 5,660 U.S. adults aged 18 and older who are members of the Gallup Panel, conducted Oct. 27-Dec. 22, 2025. Gallup uses probability-based, random sampling methods to recruit its Panel members.

    For results based on the sample of U.S. adults, the margin of sampling error is ±2.1 percentage points at the 95% confidence level.
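    For intuition, that figure can be reproduced from the standard margin-of-error formula for a proportion, inflated by a design effect to account for weighting; the design-effect value below (about 2.6) is an assumption chosen to match the reported margin, not a parameter Gallup publishes.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
    # Half-width of a confidence interval for a proportion, in
    # percentage points; z = 1.96 corresponds to the 95% level.
    return 100 * z * math.sqrt(design_effect * p * (1 - p) / n)

print(round(margin_of_error(5660), 1))                     # 1.3 (simple random sample)
print(round(margin_of_error(5660, design_effect=2.6), 1))  # 2.1 (assumed weighting loss)
```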

    Gallup weighted the obtained sample to make it representative of the U.S. adult population on gender, age, race, Hispanic ethnicity, education, political party affiliation and region. Demographic weighting targets were based on the most recent Current Population Survey figures for the aged 18 and older U.S. population. Party affiliation weighting targets are based on an average of the three most recent Gallup telephone polls.

    In addition to sampling error, question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of public opinion polls.

    Originally published by Gallup and republished with permission.

    Reviewed by Irfan Ahmad.

    Read next: New Research Finds Workers Are Leveraging AI for Career Mobility as Employers Struggle to Keep Pace
    by External Contributor via Digital Information World

    Friday, April 17, 2026

    New Research Finds Workers Are Leveraging AI for Career Mobility as Employers Struggle to Keep Pace

    By Sharla Hooper

    University of Phoenix Career Institute® today released its sixth annual Career Optimism Index®, a recurring national workforce research study of 5,000 U.S. working adults and 1,000 employers fielded January 21–February 6, 2026. The study found that while workers appear to be “job hugging” in a stabilizing labor market where mobility remains limited, many are quietly using AI to build their skills, boost confidence, and position themselves for greater career mobility – potentially preparing for their next move, which could be away from their current employer.

    On the surface, the landscape favors employers: companies are deploying AI to increase productivity, reshape teams, and find efficiencies, according to the World Economic Forum’s latest AI at Work report. But the 2026 Index points to a new dynamic underway: half of workers (50%) say AI makes them more confident about pivoting to a new role – signaling an impending shift from “job hugging” to “job hopping” that puts power back in workers’ hands. The last time workplace power was firmly in employees’ hands was in 2022, when employers saw a mass exodus of talent seeking greater mobility and opportunity, as highlighted in the 2022 Career Optimism Index® study.

    This year’s Index shows workers are increasingly turning to AI independently to strengthen their readiness in a business environment characterized by historically low turnover rates, as illustrated in the U.S. Bureau of Labor Statistics’ January JOLTS report. More than half of workers (53%) say AI advancements boost confidence in building their skills, while 75% say AI increases their confidence at work, and 81% say it helps them identify new ways to apply their skills for future growth.

    This AI-driven confidence is translating into optimism: 63% of workers say they feel positive about job opportunities available to them, rising to 75% among workers who have become comfortable and knowledgeable about AI. As job growth shows signs of strengthening, according to the U.S. Bureau of Labor Statistics’ March Employment Situation report, this may mark the moment many workers have been quietly preparing for – when rising confidence and AI-driven skill building begin to translate into increased career movement. At the same time, nearly half of employers (48%) worry they cannot retain AI-fluent talent, highlighting AI capability as both a competitive advantage and a looming retention risk.

    Key Findings

    • AI is increasing workers’ confidence in career mobility: 50% of workers say AI makes them more confident about pivoting into a new role, and workers who are knowledgeable about AI report even greater optimism about available job opportunities than workers overall (75% vs. 63%).
    • Workers are learning AI independently: Half of workers (50%) say they are learning to use AI independently, pointing to strong employee demand for AI skill-building even without formal employer support.
    • Employees are looking for more AI guidance: Many workers say employer support has not kept pace with their needs, with 47% saying their employer should be doing more to incorporate AI into their work and 60% wanting more guidance in learning AI tools.
    • Retention concerns are rising: Nearly half of employers (48%) worry they may be unable to retain AI-fluent talent as demand for those skills continues to grow, and 62% say employees are developing AI skills faster than the organization can adapt.
    • Clear AI strategy improves job satisfaction: Workers whose employer has a clear plan for AI-enabled growth are significantly more likely to be satisfied in their current job than those whose employer does not (87% vs. 72%).

    Why This Matters Now

    As organizations accelerate AI adoption, the 2026 Index identifies that workforce implications extend beyond productivity and efficiency. For workers, AI is becoming a tool for career growth, confidence, and mobility. For employers, that creates a new challenge: the same capabilities that help employees become more effective in their current roles may also make them feel more prepared to plan their exit.

    “AI is changing the workforce conversation in real time,” said John Woods, Provost and Chief Academic Officer at University of Phoenix. “While many organizations are focused on how AI can improve efficiency, our 2026 Career Optimism Index® study shows workers are focused on how to use AI to help them grow and advance their careers. For employers, this is an important moment to lead with AI clarity, because organizations that make AI part of a broader growth strategy for their people may be better positioned to support engagement, satisfaction, and retention – particularly as hiring shows signs of strengthening and workers gain more confidence to explore new opportunities.”

    The findings suggest employers have an opportunity to move from AI experimentation to workforce strategy by defining clear AI career pathways and standards, establishing skills assessment systems that support talent management and internal mobility, expanding workforce training and structured enablement, and building AI capability among managers to foster a stronger culture of AI support.

    View and download the complete study at https://www.phoenix.edu/career-institute.html.

    Originally published by University of Phoenix. Republished here with permission.

    Reviewed by Irfan Ahmad.

    Read next: What Skills Do Humans Need to Become Robot Proof in the Age of AI?


    by External Contributor via Digital Information World

    Stanford AI Index 2026 Report Details Advances, Risks, and Global Shifts in AI

    By Shana Lynch

    This year's AI Index report reveals AI's capabilities are advancing quickly; less so, our ability to measure and manage them.

    Led by a steering committee of academic and industry experts and produced by the Stanford Institute for Human-Centered AI, the Artificial Intelligence Index has tracked the field's evolution since 2017, measuring everything from technical capabilities and research output to societal impact and public perception. What began as an effort to bring rigor and transparency to AI's rapid development has become the field's most comprehensive annual snapshot—a data-driven portrait of where artificial intelligence stands, where it's headed, and what it means for society.

    The new report shows that AI models are achieving breakthrough results in science and complex reasoning, but at a concerning environmental toll. America is outspending any other country on AI, but is finding it harder to attract top talent. Meanwhile, AI’s workforce disruption has moved from prediction to reality, hitting young workers first.

    What follows are the year’s most significant developments in AI; you can also read the full report.

    Power-Hungry Models


    As AI's capabilities improve, its environmental impact increases. Grok 4's estimated training emissions reached 72,816 tons of CO2 equivalent – roughly the greenhouse gas emissions from driving 17,000 cars for one year. AI data center power capacity rose to 29.6 GW, about what it takes to power the entire state of New York at peak demand, and annual GPT-4o inference water use (the water used to cool data servers or to generate the hydroelectricity that runs them) alone may exceed the drinking water needs of 12 million people.

    For perspective, the cumulative power demand of all AI systems is comparable to the national electricity consumption of Switzerland or Austria.

    China/US: The Lead Evaporates


    For years, the U.S. outpaced all other global regions on AI - in model size, performance, artificial intelligence research, citations, and more. But China emerged as an AI counterweight to the U.S., gradually gaining ground, and this year it appears to have nearly erased any U.S. lead. U.S. and Chinese models have traded places at the top of the performance rankings multiple times since early 2025. In February 2025, DeepSeek-R1 briefly matched the top U.S. model, and as of March 2026 Anthropic's top model leads by just 2.7%. The U.S. still produces more top-tier AI models and higher-impact patents, while China leads in publication volume, citations, patent output, and industrial robot installations.

    America’s Draw Fades



    The U.S. is home to the most AI researchers and developers of any country by far, but the flow of these experts into the country is dramatically slowing. The number of AI scholars moving to the United States has dropped 89% since 2017. That decline is accelerating, down 80% in the last year alone.

    AI Can Win a Mathematical Olympiad But Can’t Tell Time

    AI continues to expand its capabilities, hitting higher scores across benchmark types. But not all capabilities are evenly distributed. Frontier models now meet or exceed human capabilities on items like PhD-level science questions, multimodal reasoning, and competition mathematics. Other areas that had been performing poorly saw huge growth. For example, the success rate of agents handling real-world tasks improved from 20% in 2025 to 77.3% today, according to Terminal-Bench, while AI agents handling cybersecurity issues solved problems 93% of the time compared to 15% in 2024.

    At other tasks, AI lags behind, including learning from video, generating video that is coherent and realistic, telling time, managing multiple-step planning, conducting financial analysis, and answering certain expert-level academic exams. Robots still have far to go on managing household chores—they succeed in only 12% of real household tasks like folding clothing or washing dishes.

    The AI Investment Surge

    More and more money is flowing into AI; global corporate AI investments hit $581.7 billion in 2025, up 130% from the prior year. Meanwhile, private investments reached $344.7 billion, an increase of 127.5% from 2024. The United States leads all other countries in doling out AI dollars: Its investments ($285.9 billion) were 23.1 times greater than those of the next-highest country, China ($12.4 billion). However, comparisons based solely on private investment likely understate the amount of capital China is directing toward AI. The Chinese government channels resources through government guidance funds, state-initiated investment funds that produce financial returns and further the government’s strategic priorities. Between 2000 and 2023, it was estimated that $912 billion of these funds were deployed across industries, including AI.

    An Entry-Level Squeeze

    Productivity gains from AI are appearing in many of the same fields where entry-level employment is starting to decline. Employment among software developers aged 22–25 has plummeted nearly 20% since 2024, even as their older colleagues' headcount grows. The pattern repeats in other jobs with higher levels of AI exposure, like customer service. Meanwhile, firm surveys indicate executives expect this trend to accelerate, with planned headcount reductions outpacing recent cuts. Translation: The disruption is targeted and just beginning.

    AI as Scientist and Lab Assistant

    AI is driving more scientific research, moving beyond a research tool that helps write papers or check numbers and toward actual discovery in science. AI-related publications in the natural, physical, and life sciences all increased 26% to 28% year over year. Some exciting projects for the year: For the first time, AI ran a full weather forecasting pipeline end-to-end—it took raw, real-time meteorological observations and directly output final weather predictions like temperature, wind, and humidity. Astronomy also built its first foundation model, automating astronomical observations across 10 telescopes.

    Power and Opacity


    Today’s most capable modern models are now among the least transparent. Giant, powerful models are concentrated within the largest AI companies, which are increasingly keeping training code, dataset sizes, and parameter counts to themselves. The Foundation Model Transparency Index, which measures how openly major AI companies disclose details about their models' training data, compute, capabilities, risks, and usage policies, saw average scores drop to 40 points from last year’s 58. The index noted that the most capable models often disclose the least amount of information.

    Feelings on AI: Frenemies?


    Public sentiment toward AI is growing more complex. In a global survey of public attitudes and perceptions on AI, 59% of people reported feeling optimistic about the benefits, up from 52%. The survey also noted a small uptick in nervousness around the technology – a two-percentage-point increase, to 52%. The U.S. is more wary than other countries. Only 33% of Americans expect AI to make their jobs better, compared to a global average of 40%, and people in the U.S. are among the highest in expecting AI to eliminate jobs rather than create new ones. The U.S. public also reported the lowest trust in its government to regulate AI among the countries surveyed, at 31%.

    Generative AI: More Popular Than the Internet?


    AI adoption is spreading at historic speed, and consumers are deriving substantial value from tools they often access for free. Generative AI reached 53% population adoption within three years, faster than the personal computer or the internet, though the pace varies by country and correlates strongly with GDP per capita. Some countries show higher-than-expected adoption, such as Singapore (61%) and the United Arab Emirates (54%), while the U.S. ranks 24th at 28.3%. The estimated value of generative AI tools to U.S. consumers reached $172 billion annually by early 2026, with the median value per user tripling between 2025 and 2026.

    The Self-Education Wave

    Formal education is lagging behind AI use, but people are learning it at every stage of life. Four out of five U.S. high school and college students now use AI for school-related tasks, but only half of middle and high schools have AI policies and just 6% of teachers say those policies are clear. Outside the classroom, professionals are picking up both soft AI skills (like prompts) as well as more technical skills; the United Arab Emirates, Chile, and South Africa are learning AI engineering skills fastest.

    AI Is Your Doctor’s Assistant

    AI has entered the clinic. Tools that automatically generate clinical notes from patient visits saw widespread adoption in 2025. Across multiple hospital systems, physicians reported up to 83% less time spent writing notes and significant reductions in burnout. But beyond certain tools, the value of clinical AI remains speculative. A review of more than 500 clinical AI studies found that nearly half relied on exam-style questions rather than real patient data, with only 5% using real clinical data.

    Another area of growth in medical AI is digital twins: dynamic, data-linked computational representations of individual patients that update over time and support forecasting, simulation, and treatment optimization. Publication counts rose from near zero in 2015 to 372 in 2025, and where rigorous trials exist, early results are promising.

    Originally published on the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI) and republished here on Digital Information World with permission.

    Reviewed by Ayaz Khan.

    Read next: 

    • Industries most exposed to AI are not only seeing productivity gains but jobs and wage growth too

    • Online Viewers Prefer Livestreams to Recordings


    by External Contributor via Digital Information World

    Thursday, April 16, 2026

    Online Viewers Prefer Livestreams to Recordings

    By Sally Parker

    Image: Justin Min / Unsplash

    In an era when most TikTok videos are prerecorded, can a band with a new single create a tighter bond with fans by debuting via livestream instead? Can a business do the same when promoting a new product?

    New research from the McCombs School of Business at The University of Texas at Austin suggests they could.

    Since the pandemic, the livestreaming industry has been booming. The global market is expected to reach $345 billion by 2030, up from $100 billion in 2024. Nearly 30% of internet users watch livestreams at least once a week on social media.

    Adrian Ward, associate professor of marketing, is one of them. A few years ago, he was viewing a livestream of a town hall meeting and found himself gripped by a speaker’s comments, feeling as if he were actually in the room. On reflection, he suspected it was the liveness of the event, as much as the speaker, that kept him glued to the screen.

    “As we spend more of our time online and on social media, it’s worth asking how we can feel as complete and connected as possible in these spaces,” Ward says.

    Live and Let Stream

    With Alixandra Barasch of the University of Colorado Boulder and Nofar Duani of the University of Southern California, Ward began to investigate what he calls the “mere liveness effect”: the idea that simply knowing an event is streaming in real time makes a viewer feel more connected to the performer.

    The researchers ran five experiments with 3,500 total participants. By manipulating various factors, they compared how, when, and why viewers reacted to watching livestreams versus prerecorded videos online.

    In one experiment, participants watched live or recorded videos of their choosing on the platform Twitch. In another, they viewed a performance by the R&B cover band Sunny and the Black Pack, either live on YouTube Live or its recording the next day on YouTube.

    In a third, the researchers created their own streaming platform to show participants identical videos, manipulating whether the content appeared to be live or prerecorded.

    The experiments provide evidence that watching an online performance in real time boosts several aspects of the viewing experience:

    • Connection. Viewers in one experiment felt 7 percentage points more connected to the performers in the live video. Another experiment showed the effect was even stronger when viewers believed no one else was watching.
    • Enjoyment. In another experiment, viewers enjoyed the live video 5 percentage points more than the prerecorded one.
    • Engagement. Real-time streams carried a “liveness lift.” Viewers chose to continue watching longer, and they were more willing to follow and subscribe to the live streamer’s channels.

    A common factor underlying those effects was a heightened sense of presence, Ward says. “When we watch something live, we are psychologically transported there.

    “It’s not that there’s actually something different about the video itself. It’s that we know that it’s live right now, and that breaks down barriers between our world and the world on the other side of the screen.”

    Lessons for Liveness

    One quality weakened the liveness effect: not being able to see a performer’s face. When viewers saw only a musician’s hands, they felt less connected, even though they were watching the same performance.

    The findings have implications for marketers, platform developers, and content creators, Ward says. In an age when people increasingly meet their social needs online, going live can benefit streamers by motivating audience engagement.

    As a follow-up, he’s working with a graduate student to study whether the liveness effect translates into greater brand trust or sales.

    “From influencers to businesses, it’s about the experience of real people seeing other real people live and in the moment,” Ward says. “It makes you feel like you’re sharing something.”

    “The Liveness Lift: Viewing Live Streams Creates Connection and Enhances Engagement in Amateur Music Performances” is published in the Journal of Marketing.

    Originally published by the McCombs School of Business, The University of Texas at Austin. Republished here with permission.

    Reviewed by Irfan Ahmad.

    Read next:

    • Industries most exposed to AI are not only seeing productivity gains but jobs and wage growth too

    • Global deepfake fraud reaches $2.19B — US leads in losses


    by External Contributor via Digital Information World

    The End of the Honour System: Rethinking Age Verification Without Sacrificing Privacy

    By Alex Laurie, GTM CTO, Ping Identity

    The internet has long operated on an honour system when it comes to verifying age: click a box, enter a birthdate, and move on. That model is now collapsing under the weight of today’s digital reality. Across the globe, the pressure to implement more effective age verification measures has reached a tipping point. Regulators are advancing legislation, platforms are rolling out stricter policies, and parents are demanding stronger protections against harmful content.

    Discord’s recent move to a global “teen-by-default” experience is a clear sign that the industry is shifting away from optional safeguards toward enforced accountability. As a parent of a son finding his feet online, I welcome that shift. Assuming users are minors until proven otherwise introduces necessary friction in an environment where explicit content, exploitation, and even AI-generated deepfake abuse are just one click away.

    However, the intent of these policies is only half the battle; the technology behind these systems matters just as much.

    The Age Verification Privacy Dilemma

    Right now, many age verification approaches rely on invasive methods like facial analysis or the upload of government-issued IDs. While some platforms attempt to process data locally, there is often a fallback to centralized identity checks. And that’s where the risk compounds.

    Every time a user uploads a passport or driver’s licence to verify their age, they are contributing to a growing pool of highly sensitive personal data. These ‘honeypots’ are prime targets for malicious actors. Scaling this model doesn’t just increase risk; it ignores a fundamental crisis of trust. In fact, 75% of consumers are more worried about personal data security than five years ago, and only 17% fully trust the organizations managing their identity data.

    This is the core tension: How do we protect minors online without creating a surveillance infrastructure for everyone else?

    Image: Tima Miroshnichenko - Pexels

    A New Architecture for Digital Identity

    The answer is not more data collection; it’s a better identity architecture built on decentralized identity. In the context of age verification, we must move away from “show me your ID” to “prove you meet the requirement”.

    Technically, this is achieved through verifiable credentials stored in a secure digital wallet. Using zero-knowledge proofs, a user can prove they are over 18 through a simple cryptographic ‘Yes/No’ signal, without revealing their birthdate or the underlying document.

    This approach fundamentally changes the privacy equation. Instead of creating troves of sensitive data in one central location, we distribute trust to the edge and place control back in the hands of the user while still meeting regulatory and platform requirements. Unlike a physical ID, digital credentials can also be immediately revoked and reissued if a device is compromised.
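    To make the “prove, don’t show” pattern concrete, here is a minimal sketch in Python. It is not a real zero-knowledge proof; a production system would use a standard such as W3C Verifiable Credentials together with an actual proof scheme. Every name in it (the issuer key, issue_credential, prove_over_18) is hypothetical, chosen only to illustrate the data flow: the issuer attests to the birthdate once, and the platform receives only a yes/no answer.

        # Toy sketch of "prove you meet the requirement" instead of
        # "show me your ID". NOT a real zero-knowledge proof: a production
        # system would use W3C Verifiable Credentials and a proper proof
        # scheme. All names here are hypothetical.
        import hmac
        import hashlib
        from datetime import date

        ISSUER_KEY = b"demo-issuer-secret"  # held by a trusted issuer, e.g. a licensing authority

        def issue_credential(birthdate: date) -> dict:
            """Issuer side: attest to the holder's birthdate once, at enrollment."""
            payload = birthdate.isoformat().encode()
            sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
            return {"birthdate": birthdate.isoformat(), "sig": sig}

        def prove_over_18(credential: dict, today: date) -> dict:
            """Wallet side: derive only the yes/no answer the platform needs."""
            bd = date.fromisoformat(credential["birthdate"])
            age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
            # In a real system this assertion would itself carry a cryptographic
            # proof the platform can check without ever learning the birthdate.
            return {"over_18": age >= 18}

        wallet_credential = issue_credential(date(2005, 3, 14))
        print(prove_over_18(wallet_credential, date(2026, 4, 22)))  # {'over_18': True}

    The point of the design is what the platform never sees: no name, no document image, no birthdate; only the single claim it is required to check.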

    Identity as a Continuous Signal

    This shift aligns with a broader evolution happening across digital identity. In enterprise environments too, identity is no longer a one-time checkpoint; it is becoming a continuous, contextual signal evaluated in real time based on risk, behavior, and intent. This is critical in the age of AI, where autonomous agents increasingly act on behalf of users, systems, and organizations.

    In these environments, identity must operate at runtime, continuously verifying not just who or what is requesting access, but whether that action is authorized, trustworthy, and aligned with expected behavior. Establishing identity as a dynamic control layer for both humans and AI is essential to ensuring trust, accountability, and security at scale.

    The same principle applies here. Age verification shouldn’t be a static upload that lives indefinitely on a server. It should be a dynamic assertion, validated when needed and discarded immediately after. Identity is the only remaining "off-switch" in a decentralized AI ecosystem, and it must operate at runtime to ensure trust and accountability.
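    As a rough sketch of that runtime model, again in Python and again with hypothetical names and parameters (the five-minute lifetime is an arbitrary choice for illustration): the assertion is signed with an expiry, checked at the moment of the request, and useless afterwards, so there is nothing worth storing.

        # Sketch of a short-lived age assertion validated at request time.
        # The token format and five-minute lifetime are illustrative only.
        import hmac
        import hashlib
        import time

        SHARED_KEY = b"demo-shared-key"  # hypothetical key trusted by the platform
        TTL_SECONDS = 300                # assertion expires after five minutes

        def mint_assertion(over_18: bool, now: float) -> str:
            """Identity-provider side: sign the claim plus an expiry, nothing else."""
            body = f"over_18={over_18};exp={int(now) + TTL_SECONDS}"
            sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
            return f"{body};sig={sig}"

        def check_assertion(token: str, now: float) -> bool:
            """Platform side: verify signature and freshness, then discard the token."""
            body, _, sig = token.rpartition(";sig=")
            expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                return False
            claims = dict(part.split("=") for part in body.split(";"))
            return claims["over_18"] == "True" and int(claims["exp"]) > now

        now = time.time()
        token = mint_assertion(True, now)
        print(check_assertion(token, now))                    # True: fresh and valid
        print(check_assertion(token, now + TTL_SECONDS + 1))  # False: expired, request again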

    The Future of Trust Online

    We are at an inflection point. The rise of deepfakes has effectively ended the age of visual trust online. In this context, doubling down on document-based verification feels like solving tomorrow’s problem with yesterday’s tools.

    The future of identity for humans and machines alike will be defined by minimization: share less, prove more. Protecting minors is non-negotiable, but we must not let children pay the price of our technical delay. By embracing privacy-preserving verification, we can build a next generation of digital trust based not on data collection, but on data protection.

    The honour system is over. What we build next will define the future of the internet.

    Edited by Asim BN.

    Read next: Google promotes ‘teacher approved’ apps for kids. Here’s what parents should know
    by Guest Contributor via Digital Information World

    Wednesday, April 15, 2026

    Google promotes ‘teacher approved’ apps for kids. Here’s what parents should know

    Chris Zomer, Deakin University and Niels Kerssens, Utrecht University

    Researchers urge parents to verify children’s apps independently amid concerns over the transparency of Google’s approval system.
    Image: Ron Lach - Pexels

    As school holidays continue around Australia, many parents are looking for educational ways to keep their children entertained.

    If you own an Android device and have young children, you may find yourself browsing Google Play for educational and age-appropriate apps. If you go to the children’s section, you will be led to a page with “Teacher Approved apps & games” featuring apps for children under 13 according to different age ranges and themes.

    Popular “Teacher Approved” apps such as learning app Lingokids and the game Bluey: Let’s Play have been downloaded more than 50 million times. YouTube Kids, another “Teacher Approved” app, has been downloaded more than 500 million times.

    Google says “teachers and specialists” rate the “Teacher Approved” apps. But in our research we argue it’s unclear exactly who those teachers and specialists are. The educational value of “Teacher Approved” apps can also be unclear.

    What is ‘Teacher Approved’?

    Google launched the “Teacher Approved” program in 2020 to set a quality standard for apps for children aged under 13.

    To be included in the “Teacher Approved” section, an app needs to adhere to Google’s family policies, which include having an easy-to-understand interface and content that is appropriate for children. Any ads, in-app purchases or cross-promotion “must be appropriate” too.

    Google has an online course for developers who want to be included in the Teacher Approved section. We took this course as part of our research.

    In the course, Google states “an app doesn’t have to be educational” as long as it is “enriching” and “support(s) a child’s healthy development”. At the same time, Google says teachers are assessing apps for “learning impact”. However, it is not clear how learning is assessed, especially for apps that are not educational.

    Our research

    In our study, we analysed how apps in the children’s section on Google Play were presented in ways that make them seem educational.

    We also interviewed five industry stakeholders (three founders/chief executives and two design specialists) from different companies developing apps for children.

    We chose to involve industry rather than parents, as anecdotal evidence suggests parents have little understanding of the “Teacher Approved” program.

    Confusing labels and categories

    We found “Teacher Approved” apps are often categorised with vague or interchangeable labels such as “enriching apps”, “enriching games” and “games for kids”. This can make it difficult to understand the purpose of the apps, or to know whether they are educational or not.

    We also found some apps with a “Teacher Approved” badge were labelled by the app developer as entertainment rather than “educational”. For example, Paw Patrol Rescue World was “Teacher Approved”, despite being labelled as “action-adventure” by the developer.

    With the “Teacher Approved” badge, Google creates the impression of educational value and trustworthiness for all sorts of apps. As one of the developers we interviewed explained:

    how many people would look at a little graphical badge and go ‘oh, I trust this now, because they’ve got this badge’.

    Who approves the apps?

    The “Teacher Approved” badge implies that teachers evaluate the apps that appear in the children’s section on Google Play.

    However, on the developer’s section of its website, Google notes it is not exclusively teachers who assess the apps. It says “teachers and children’s education and media specialists recommend high-quality [Teacher Approved] apps for kids on Google Play.”

    In 2020, Google shared the names of two experts who were “lead advisers” at the time – a developmental psychologist and an education and media expert. But it is not clear who the “teachers” and “specialists” who currently rate the apps are and how many of them are actually teachers.

    The Conversation asked Google where the teachers or specialists are located, whether they are paid, and what criteria non-teachers need to meet to be included in the program. The company did not respond before deadline.

    What can parents do?

    Our research suggests the current situation is confusing for parents. In the meantime, there are some things parents can do if they are not sure about apps their kids are using:

    • use independent sites such as Children and Media Australia that evaluate the educational content of apps

    • don’t rely on the content description on Google Play, but test the apps yourself

    • don’t use apps with advertising, as this will interrupt the learning experience.

    Chris Zomer, Research Fellow at the ARC Centre of Excellence for the Digital Child, Deakin University and Niels Kerssens, Assistant Professor in Digital Media and Society, Utrecht University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Reviewed by Irfan Ahmad.

    Read next: AI is changing more than your writing — it may be shaping your worldview


    by External Contributor via Digital Information World

    AI is changing more than your writing — it may be shaping your worldview

    By USC Dornsife News

    Image: Valentin Ivantsov - Pexels

    Use of ChatGPT, Claude and other large language models, or LLMs — what most people call “AI” — has surged since ChatGPT debuted publicly in 2022. Hundreds of millions of people now use these tools weekly, according to recent estimates.

    Users might assume these tools are just helping them organize their thoughts, but recent research suggests they may be doing something more subtle and more powerful — influencing how we all think, speak and even understand the world.

    In a recent opinion piece, researchers at the USC Dornsife College of Letters, Arts and Sciences investigated how artificial intelligence systems like ChatGPT could be nudging people toward similar ways of communicating and reasoning — a process researchers call “cultural homogenization.”

    “AI isn’t just reflecting culture anymore,” said lead author Yalda Daryani, a PhD student in social psychology at USC Dornsife. “It’s actively shaping it. It’s deciding what sounds polite, what sounds clear, even what counts as a good answer.”

    So the researchers set out to understand how large language models like ChatGPT, Anthropic’s Claude and Google’s Gemini might influence human culture on a global scale, and how policies could address the broader effects these LLMs might have.

    A pattern emerges with AI use

    The researchers — under the guidance of Morteza Dehghani, professor of psychology and computer science at USC Dornsife and head of the Morality and Language Lab — reviewed a wide range of recent studies across psychology, computer science and linguistics to understand how LLMs perform across different cultures and how people respond when using AI in real-world tasks such as writing or decision-making.

    They found a consistent pattern: AI systems tend to reflect and reinforce a narrow slice of human experience.

    A central finding of the research is that these systems often align with what the researchers describe as “WHELM” perspectives — Western, high-income, educated, liberal and male. In other words, they reflect the values and communication styles most common in English-language online data.

    “When you ask AI for advice, you’re not getting a neutral answer,” Daryani said. “You’re getting the perspective of a very specific group of people, even if it doesn’t say that explicitly.”

    This pattern appears in how AI handles moral questions. The research showed that AI systems tend to favor values such as individual freedom and fairness, while placing less emphasis on ideas like tradition, authority and community, which are more central in many non-Western cultures.

    AI’s impact extends to subtle social interactions

    The influence goes beyond values. It also affects how people communicate.

    “When millions of people use AI to draft messages, those differences start to disappear,” Daryani said. “Over time, we may all start sounding very alike.”

    Even when users ask questions in other languages, the models often return examples tied to American or European culture — such as U.S. holidays or English-language films — while offering less detailed or more stereotypical descriptions of non-Western traditions.

    Dehghani says this pattern creates a kind of feedback loop. “The more we rely on these systems, the more their outputs become part of our shared knowledge, and then that same material gets used to train the next generation of AI. So the cycle reinforces itself.”

    That loop, the researchers warn, could gradually narrow the range of ideas, traditions and communication styles that people are exposed to and pass on over time.

    Why does that matter? Because cultural diversity isn’t just about language or customs, the researchers say. It shapes how people think, solve problems and make decisions. A wide range of perspectives can lead to better solutions and more creative ideas. If that diversity shrinks, the researchers argue, society could lose important ways of understanding the world.

    How to build a better AI

    Of note, the team does not suggest that AI is inherently harmful. LLMs can make writing easier, improve access to information and help people communicate more clearly. The concern, the researchers say, is what happens when a small number of systems begin to influence billions of interactions every day.

    “Once the system is trained on a narrow set of data, it’s very hard to undo that,” Daryani said.

    To address the issue, the team outlines a three-part approach based on their study findings, beginning with the data used to train models. Most AI systems learn from English-language content drawn heavily from Western sources. The researchers say developers should include more material from different languages, regions and cultural traditions to capture cultural knowledge that might otherwise be systematically underrepresented.

    During later training stages aimed at refining and evaluating LLMs, the researchers suggest incorporating culturally diverse examples as well as consulting experts such as psychologists, anthropologists, linguists, and policymakers working in collaboration with diverse cultural communities to ensure responses reflect different social norms and values.

    They then recommend changing how the training results are judged. Tech companies do employ workers from a variety of countries during this step, but those workers are trained to apply standardized Western evaluation criteria. Instead, the researchers argue, reviewers should be able to judge answers against multiple cultural standards rather than a single Western benchmark.

    Taken together, these changes could help AI systems recognize that there is no one “correct” way to communicate or reason, preserving a broader range of human perspectives as the technology continues to evolve.

    For Daryani, the stakes are clear: “Languages, traditions, ways of thinking — once they disappear, we can’t get them back. The question isn’t whether this is difficult to fix. It’s whether we can afford not to.”

    About the study

    Zhivar Sourati, a PhD student at the USC Viterbi School of Engineering, was a co-author of the report, published in Policy Insights from the Behavioral and Brain Sciences.

    Originally published by USC Dornsife College of Letters, Arts and Sciences News. Republished here with permission.

    Reviewed by Irfan Ahmad.

    Read next: In the face of rampant AI, is ‘data poisoning’ a new form of civil disobedience?
    by External Contributor via Digital Information World