Friday, April 3, 2026

AI laws overlook environmental damage – here’s what needs to change

Louise Du Toit, University of Southampton
Image: Geoffrey Moffett - Unsplash. Caption: Data Centre in Coleraine

More than 200 laws have been developed to regulate AI in more than 100 countries. Many of them focus on issues such as privacy, bias, disinformation, security and cybersecurity rather than the environmental consequences of AI.

AI is an energy-intensive and thirsty industry. It leads to huge greenhouse gas emissions, pollution and loss of nature. These impacts arise partly from the manufacture and use of energy-, carbon- and water-intensive “complex computer chips”, called graphics processing units (GPUs), for the training of AI models as well as increasing e-waste.

My research into regulatory responses to AI in the EU and the UK highlights how laws often ignore the environmental implications of this big tech. The lack of stringent obligations in AI law and policy is concerning.

There are environmental consequences at all stages of the AI lifecycle, from the manufacture of AI hardware, through the training, deployment and use of AI models, to the disposal of AI hardware.

The manufacture of components relies on the extraction of rare earth elements. This can contaminate soil and water, pollute the air and lead to loss of nature and forest habitats. Training AI models is incredibly energy- and water-intensive. A team of researchers estimated in 2025 that training GPT-3 – a large language model released by OpenAI in 2020 – consumed around 700,000 litres of freshwater for electricity generation and cooling of data centres.

Even though AI models are becoming more energy efficient, as models become larger and AI proliferates, overall energy consumption and associated emissions are rising. And the energy consumed in the use of AI, including to generate text or images, vastly outweighs that used during training.

However, it’s difficult to accurately measure the environmental effects of AI, partly due to the lack of transparency of technology companies.

When the EU’s AI Act came into force on August 1 2024, it was the “world’s first comprehensive law” on AI. The AI Act acknowledges some of AI’s environmental consequences. It also requires that “AI systems are developed and used in a sustainable and environmentally friendly manner”.

It outlines that AI providers must disclose information on “known or estimated energy consumption data of the model”. But while promising, this information only needs to be provided when requested by the AI Office, which has been established within the European Commission.

Further measures include preparing codes of conduct to assess and minimise “the impact of AI systems on environmental sustainability”. But this is not compulsory. Overall, the AI Act is intentionally anthropocentric. It states that: “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human wellbeing.”

The UK has no AI-specific legislation. AI is currently only regulated by existing laws. The UK government’s 2023 white paper on AI regulation, which proposes a regulatory framework for AI, doesn’t prioritise sustainability at all. Although the white paper acknowledges that AI can contribute to technologies to respond to climate change, it does not specifically address any environmental risks:

The proposed regulatory framework does not seek to address all of the wider societal and global challenges that may relate to the development or use of AI. This includes issues relating to … sustainability. These are important issues to consider … but they are outside of the scope of our proposals for a new overarching framework for AI regulation.

A transparent future?

More transparency starts with AI developers having to disclose how much energy and water is consumed, how much carbon is emitted, which rare earth elements are extracted and how much plastic is used during the AI production process.

This data would provide a baseline from which appropriate targets and limits can be set for energy efficiency, carbon emissions and water use to improve the sustainability of AI.

Several proposals have been made for how reduced carbon emissions and water consumption could practically be achieved, such as training AI models on less carbon-intensive energy grids or in less water-intensive data centres.

Warnings about environmental effects could tell consumers how much carbon dioxide is emitted or water consumed for each query. In addition, an AI labelling system could mirror the EU’s existing energy efficiency labelling schemes, which clearly indicate the energy efficiency of appliances, ranking them from most energy-efficient (dark green) to least energy-efficient (red).

Proposals include an AI “energy star” rating system and a social and environmental certification system. This would help consumers to make informed choices about which AI systems to use or whether AI should be used at all. Tax incentives and funding incentives could also encourage tech firms to make more sustainable choices.

By integrating sustainability into AI laws through these types of measures, the planet can be at least partly safeguarded alongside AI's rapid expansion.

Louise Du Toit, Lecturer in Law, Southampton Law School, University of Southampton

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next:

• Meet the Five AI Productivity Personality Types Transforming Work and Creativity

• A million new SpaceX satellites will destroy the night sky — for everyone on Earth
by External Contributor via Digital Information World

Thursday, April 2, 2026

Meet the Five AI Productivity Personality Types Transforming Work and Creativity

By Bex Mills

Over 18 million adults in the UK have used generative AI, yet not everyone uses it the same way. As AI becomes more popular, recent research shows that how individuals use it is just as crucial as whether they use it at all.

Click Intelligence used data from the UK Government, the Reuters Institute, and Deloitte Digital Consumer Trends to show that people are starting to fit into distinct "productivity personality" types depending on how they use the technology. The groups differ in motivation, trust, and everyday use, and those differences could reshape the workplace.

Age Gap in AI Use

Before we get into the many personality types that have been found, the research shows that there is a definite age-related disparity in AI adoption:

  • 62% of people between the ages of 16 and 34 have used AI.
  • Only 14% of people between the ages of 55 and 74 have used AI.

This discrepancy shows that younger individuals are more willing to use AI in their daily lives, both at work and at home. In contrast, older people are more wary of technology and often doubt its accuracy and dependability.

The data also suggests that confidence grows as more individuals use AI tools.

James Owen, co-founder and head of Click Intelligence, remarked: "AI is no longer just a new thing. Younger workers are starting to see it as a natural way to boost productivity, but older generations are still wary. That gap will affect how companies teach, hire, and manage teams for the next five years."

Image: Yuriy Vertikov / Unsplash

The 5 Types of AI Users Who Are Productive

The data shows that as more people use AI, five distinct types of users are emerging. These personality types aren't fixed, and people can switch between them depending on the situation, though most favor one approach over the others.

1. The Trailblazer

People who are AI trailblazers are interested in and willing to use AI technology to make their lives better. They are usually between the ages of 16 and 34 and like to play around with generative AI tools, try out new prompts, and find new methods to use them.

Trailblazers see using AI as an experience, not merely a technique to get things done faster. They are more inclined to employ AI in many parts of their lives, like business and personal projects, and they really want to stay up to date with digital trends.

2. The Efficiency Maximizer

Maximizers employ AI to get more done in less time. People in this group don't really want to try out AI just for the sake of it.

People in this group mostly use AI at work, where they summarize information, automate boring chores, and draft emails or documents to save time. About 7 million people have used AI at work, and 74% of them say it helps them get more done and reach their goals. Changing workplace norms are driving this shift: 27% of respondents said their bosses favor the use of AI, which means it is becoming more common in the workplace.

3. The Information Optimizer

This group employs AI to make sense of and work with large amounts of complicated information in a timely manner. They don't use generative AI to create content.

AI is being used to help people figure out what's true and what's not online and to break complex topics down into simple pieces. This matters: 58% of people are worried about distinguishing true from false information online.

About 24% of individuals use AI once a week to do research or learn something new. For younger users, that often means asking AI chatbots to simplify news items and condense long articles into short, easy-to-read summaries. The research found that 15% of individuals under 25 use AI chatbots for this purpose.

4. The Creativity Kick Starter

People who wish to get over mental blockages and come up with ideas faster are in the Creativity Kick Starter personality group. They don't utilize AI to take the place of human creativity; instead, they use it as a starting point to come up with new ideas, improve on them, look at them from other viewpoints, and then build on them.

People in this group probably have jobs that demand them to come up with new ideas quickly. About 36% of frequent AI users say they trust AI-generated content, compared to 25% of those who are simply aware of generative AI. So, using AI technology in creative ways may help people trust it more.

5. The Cautious Skeptic

Cautious skeptics see the value of AI but don't fully trust it. They know it can make them more efficient, but they want to check the results themselves. They don't want bias, false information, or mistakes creeping into their work.

The data reflects this: 59% of people say they would be less likely to trust an email produced by AI, and 56% say they would stay away from AI-powered customer service products. People still don't want to rely on AI in situations where they need full accountability and reliability.

The Way You Think Matters

Findings indicate that individuals don't just want to try out AI as a new technology. People use it to different degrees depending on their needs, ambitions, and level of trust.

Some individuals see it as a way to get more done every day, while others see it as a helper that they need to keep an eye on. These diverse behaviors could shift or become more established as AI continues to improve.

Reviewed by Ayaz Khan.

Read next: 

• Nearly Half of Professionals Check Work Email on Vacation Out of Fear, Study Finds

• A million new SpaceX satellites will destroy the night sky — for everyone on Earth


by Guest Contributor via Digital Information World

A million new SpaceX satellites will destroy the night sky — for everyone on Earth

Samantha Lawler, University of Regina; Aaron Boley, University of British Columbia, and Hanno Rein, University of Toronto

Image: SpaceX / Unsplash

More than 10,000 Starlink satellites currently orbit the Earth. We see them crawling across dark skies, no matter how remote our location, and streaking through images from research telescopes.

SpaceX recently announced that it wants to launch one million more of these satellites as orbital data centres for AI computing power.

A few years ago, we wrote a paper predicting what the night sky would look like with 65,000 satellites from four planned megaconstellations: SpaceX’s Starlink, Amazon’s Kuiper (now Leo), the U.K.’s OneWeb and China’s Guowang. We calibrated our models to observations of real Starlink satellites and came up with a startling prediction: One in 15 visible points in the night sky would be a satellite, not a star.

A million satellites would be so much worse.

The human eye can see fewer than 4,500 stars in an unpolluted night sky. If we permit SpaceX to launch these satellites, we will see more satellites than stars — for large portions of the night and the year, throughout the world. This will severely damage the night sky for everyone on Earth.

SpaceX’s proposal also completely fails to account for atmospheric pollution, collision risk or how to develop the technology needed to disperse waste heat from orbital data centres.

Predicting the night sky

SpaceX has filed its million-satellite proposal to the United States Federal Communications Commission (FCC) and has only provided bare-bones information about these new satellites so far.

We do know that the proposed constellation will have satellites in much higher orbits, making them visible for longer periods of the night.

We decided to build an updated simulation using data from the website of astrophysicist Jonathan McDowell. It includes a set of orbits consistent with the limited information in SpaceX's filing.

We used the observed brightness of Starlink satellites as a reference, scaling the brightness model by considering size jumps between Starlink V1, V2 and predictions for V3, and assuming even higher complexity and power requirements.

There are many factors we don’t know anything about, so there is some uncertainty in the brightness we predict.

In the figure above, each grey circle shows a simulation of the full night sky, as seen from latitude 50 degrees north at midnight on the summer solstice.

The left circle shows the night sky with SpaceX’s orbital data centres (SXODC), and the right shows the night sky with 42,000 Starlink satellites for comparison.

The coloured points show the positions and brightness of satellites in the sky, with blue the faintest and yellow the brightest. Below each all-sky simulation we list the number of sunlit satellites in the sky (Ntot) and the number of naked-eye visible satellites (Nvis), with tens of thousands predicted for SXODC.

Each of our simulations shows there will be more visible satellites than stars for large portions of the night and the year.

It is hard to overstate this: Should a million new satellites be launched, in the orbits and with the sizes proposed, the stars we are able to see at night would be completely overwhelmed by artificial satellites — throughout the world.

This does not even account for additional large satellite system proposals filed to the International Telecommunication Union (ITU) in recent years by numerous national governments.

A satellite crematorium

SpaceX’s proposal is that these new satellites will operate as orbital data centres.

Data centres on the ground are drawing increasing criticism for the huge amounts of water and electricity they use. In an impressive feat of greenwashing, SpaceX suggests that launching data centres into orbit is better for the environment. This is only true if you ignore all the consequences of satellite launch, orbital operations and re-entry.

We can already measure atmospheric pollution from “re-entries,” when satellites fall back to Earth. We know that multiple satellites are falling every day and that if they do not fully burn up on re-entry, debris falls on the ground with risk for injury and death.

Increasing densities of satellites also drive up collision risks in orbit. And using the atmosphere as a satellite crematorium is changing the atmosphere in ways we don’t yet understand.

Practically, it is not at all clear whether the proposed orbital data centres are feasible any time soon. To operate data centres in orbit, they would need to disperse huge amounts of waste heat. Despite the greenwashing, this is very hard to do in space: the satellites would have to withstand intense radiation from the sun while shedding their own heat purely by radiative cooling.
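A rough sense of the scale involved comes from the Stefan-Boltzmann law. The sketch below is our own back-of-the-envelope illustration, not a figure from the article: it ignores solar heating, assumes a single-sided radiator at 300 K with emissivity 0.9, and uses a hypothetical 1 MW data centre.

```python
# Back-of-the-envelope estimate of the radiator area an orbital data
# centre would need to shed its waste heat by radiation alone.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(waste_heat_w, temp_k, emissivity=0.9):
    """Radiator area (m^2) needed to emit waste_heat_w at temp_k."""
    return waste_heat_w / (emissivity * SIGMA * temp_k**4)

# A modest 1 MW facility radiating at roughly room temperature (300 K)
area = radiator_area(1e6, 300)
print(f"{area:.0f} m^2")  # about 2,400 m^2 in this toy case
```

Even under these generous simplifications, a single megawatt calls for a radiator the size of several tennis courts, which hints at why waste heat is a serious obstacle for the proposal.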

SpaceX should know this well: one of the first brightness mitigations they tested for Starlink was “darksat,” a Starlink satellite they effectively just painted black. The satellite overheated and the electronics fried.

A slap in the face for astronomers

SpaceX has done a lot of engineering work to make its Starlink satellites fainter. They are still too bright for research astronomy, but thanks to new coatings, their brightness has not increased dramatically even as SpaceX has launched larger and larger satellites.

SpaceX’s proposal for one million AI data centre satellites with enormous power requirements does not include any discussion of the co-ordination agreement for dark and quiet skies required by the FCC.

It feels like a slap in the face after many astronomers have spent years working with SpaceX on ways to mitigate their Starlink megaconstellation and save the night sky.

Orbital space is a finite resource

The SpaceX filing does not include exact orbits, the size or shape of satellites or the casualty risk from de-orbiting (other than a vague promise that it won’t exceed 0.01 per cent per satellite). It doesn’t even include any information on how the company plans to develop the technology that does not currently exist but is needed to make this plan work.

Despite how shockingly little information SpaceX provided, the FCC accepted SpaceX’s filing and opened the comment period within four days. Astronomers and dark sky advocates worldwide scrambled to write and submit comments in the short four weeks that the comment period was open.

The scientific process is slow and careful and it often takes months or years to publish a peer-reviewed result. Companies like SpaceX have stated repeatedly that their method is to “move fast and break things.” They are now close to breaking the atmosphere, the night sky and anything on the ground or in space that their satellites and rockets fall on or crash into.

Earth’s orbital space is a finite resource. There is an evolving set of international guidelines for operating in outer space, grounded in a set of high-level international rules. Yet, those rules and guidelines are inadequate.

One corporation based in one country should not be allowed to ruin orbit, the night sky, and the atmosphere for everyone else in the world.

Samantha Lawler, Associate Professor, Astronomy, University of Regina; Aaron Boley, Associate Professor, Physics and Astronomy, University of British Columbia, and Hanno Rein, Associate Professor, Physical and Environmental Sciences, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Note: This article is authored by university researchers for a general audience. It draws on research and simulations but is not a peer-reviewed study. Some claims may be simplified for readability or to highlight potential impacts.

Disclosure statement: Samantha Lawler receives funding from the Natural Sciences and Engineering Research Council of Canada. She is a fellow of the Outer Space Institute. Aaron Boley receives funding from NSERC, the Canada Tri-agency, and the Department of National Defence. He co-directs the Outer Space Institute. Hanno Rein receives funding from NSERC.


Reviewed by Irfan Ahmad.

Read next:

• These Are the Best and Worst U.S. Metro Areas for Science, Technology, Engineering, and Mathematics Professionals in 2026

• Where People Are (Un)Happiest With Their Lives

• Nearly Half of Professionals Check Work Email on Vacation Out of Fear, Study Finds
by External Contributor via Digital Information World

Nearly Half of Professionals Check Work Email on Vacation Out of Fear, Study Finds

By Corina Leslie

A new study suggests fear, not urgency, is behind why many professionals check email outside work hours.

The study, based on a survey of 1,157 workers in the US and Europe, shows that the pressure to stay connected extends well beyond regular working hours. The primary reasons people can’t switch off revolve around fear and worry:

  • 48% of respondents say they’re afraid of missing something important
  • Another 33% worry they’ll fall behind on work
  • 20% are concerned about appearing unreliable to colleagues and peers

An additional 31% admit checking work email has become a reflex, while 36% say they’re too curious about new emails to stay away from their inboxes, even on vacation.


In fact, the study, conducted by email deliverability company ZeroBounce, revealed that only 29% of professionals fully disconnect on vacation. But when a subsequent question asked specifically about checking email during time off, an even smaller share (19%) said they don't check it. The difference shows a clear gap between what people say they do and their actual behavior.

Work email goes with us everywhere

Workers have a hard time disconnecting not just on vacation – email checking has become a constant habit. More than half of respondents refresh their work inboxes before and after work hours, and 37% peek at them on weekends.

On top of that, a majority (74%) feel pressure to respond to every message quickly. However, that urge doesn’t seem to be rooted in expectations from managers and peers, but rather in people’s own perception of status.

We can’t ignore work emails – even at funerals

Email is omnipresent, even in our most personal moments. One of the most staggering findings in the study? Eighteen percent of professionals have checked email at a funeral. That's not all; here's where else people admit to looking at their inboxes:

  • In bed, next to their partner (38%)
  • In the car, while driving (30%)
  • At a wedding (24%)

The data points to more than a desire to be responsive and productive: it reveals our inability to switch off, even in risky situations like being behind the wheel.

Making more than $200,000 a year? You’re less likely to unplug

Compulsive email checking is prevalent across all income levels, but work pressure tends to affect high earners more. Respondents making over $200,000 a year are more likely to check work email off the clock, with 50% saying they open their inboxes on the weekend.


Vacation doesn’t mean actual vacation, either: even if they don’t respond to every message, 39% of high earners monitor incoming emails. You may think this practice relieves them from the dread of a full inbox, but 70% feel overwhelmed when they return to work.

How to not let your work inbox take over your life

Despite the popularity of instant messaging apps, email is still the most commonly used channel for professional communication. There’s no shortage of new emails in our inboxes, and it’s easy to fall into the trap of constant connectivity. We’ve come to believe that every message needs our immediate attention, but is that really true?

If you find yourself checking work email during time off or personal moments, here are a few tips to disconnect and fully enjoy your time away from work.

  • If you have notifications on your phone or desktop, turn them off.
  • When you wake up, allow yourself 10 minutes without devices. If you can, go outside and enjoy the sun.
  • Check email one last time before shutting down your computer. Once your work day ends, stop checking email in the evening and at night.
  • Before going on vacation, set an auto-responder letting everyone know you won’t be checking your inbox.
  • If you have a high-pressure job and must check email on vacation, set a rule for it. For instance, check it every three days and do not respond unless it’s absolutely critical.

Email is quick, easy, personal and, remember, asynchronous. It’s in your power to control how you use it.

Reviewed by Irfan Ahmad.

Read next:

• Workplace collaboration: Employees reveal what they want leaders to change

• Which Liberty HealthShare Program Is Right for You? A Guide to All Its Options

• AI overly affirms users asking for personal advice


by Guest Contributor via Digital Information World

Wednesday, April 1, 2026

Which Liberty HealthShare Program Is Right for You? A Guide to All Its Options [Ad]

Not every household needs the same thing from a healthcare sharing ministry. Liberty HealthShare structures its programs to reflect that reality.

Healthcare decisions are personal, and so are the financial trade-offs that go with them. A healthy 28-year-old freelancer and a family of five with two kids in braces do not have the same priorities. Liberty HealthShare, the Canton, Ohio-based nonprofit healthcare sharing ministry, built its program lineup with that range of circumstances in mind.

"We've got a number of programs so that somebody can select whatever works best for their family," said Chief Executive Officer Dorsey Morrow. "With a healthcare sharing ministry and Liberty HealthShare in particular, you can join our membership, and if you determine it doesn't work for you, you're not locked into it."

Six medical cost-sharing programs, each structured around different monthly share amounts and Annual Unshared Amount (AUA) levels, give members the ability to match their contribution to their situation. Suggested monthly share amounts for individuals range from $87 to $369, with family programs beginning at $319 per month.

Before You Compare: Understanding the Basics

Two terms appear across every Liberty HealthShare program and are worth understanding before reviewing any specific option.


The Annual Unshared Amount (AUA) is the amount of an eligible need that does not qualify for sharing. A higher AUA generally corresponds to a lower suggested monthly share amount.

The Co-Share is the percentage of eligible medical bills a member with that program option contributes after the AUA has been met. Not every program carries a Co-Share. The breakdown, per Liberty HealthShare's program guidelines, is as follows:

  • Liberty Essential: 20% Co-Share after AUA is met
  • Liberty Connect: 10% Co-Share after AUA is met
  • Liberty Unite: No Co-Share
  • Liberty Assist: No Co-Share
  • Liberty Rise: No Co-Share
  • Liberty Freedom: No Co-Share

Members who prefer lower monthly share amounts may accept a sharing program with a Co-Share. Those who want the most predictable out-of-pocket exposure after the AUA has been met typically gravitate toward programs with no Co-Share.
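As a hypothetical illustration of how the AUA and Co-Share interact, the sketch below uses made-up figures and is not Liberty HealthShare's actual sharing calculation; consult the program guidelines for real eligibility rules.

```python
def member_responsibility(bill, aua_remaining, co_share_rate):
    """Estimate a member's share of an eligible medical bill.

    aua_remaining: portion of the Annual Unshared Amount not yet met.
    co_share_rate: e.g. 0.20 (Liberty Essential), 0.10 (Liberty
                   Connect), or 0.0 for the no-Co-Share programs.
    """
    # The member first covers the bill up to the remaining AUA.
    toward_aua = min(bill, aua_remaining)
    # Above the AUA, the member contributes the Co-Share percentage.
    above_aua = bill - toward_aua
    return toward_aua + above_aua * co_share_rate

# A $3,000 eligible bill with $1,000 of the AUA still unmet:
print(member_responsibility(3000, 1000, 0.20))  # Essential: 1400.0
print(member_responsibility(3000, 1000, 0.0))   # No Co-Share: 1000
```

The comparison makes the trade-off concrete: on the same bill, a 20% Co-Share program leaves the member with $400 more out of pocket than a no-Co-Share program, which is what members weigh against the lower monthly share amount.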

The Six Programs at a Glance

Liberty Essential

Liberty Essential sits at the entry point of the Liberty HealthShare program lineup, with the lowest suggested monthly share amounts available. Members have a 20% Co-Share on eligible expenses once their AUA is met. Telehealth access through DialCare Urgent Care is included, with up to five free visits per person on the membership eligible for sharing in full each year.

Liberty Connect

Liberty Connect reduces the Co-Share to 10% while stepping up the monthly share amount from Liberty Essential. Telehealth through DialCare is included on the same terms — five free visits per person per year. Members who want moderate monthly contributions, but less exposure to out-of-pocket responsibility at the time of a medical need often consider this tier.

Liberty Unite

Liberty Unite carries no Co-Share. Once a member meets the AUA, Liberty HealthShare facilitates sharing of eligible remaining expenses without the member contributing an additional percentage at the time of service. Telehealth remains included at five free visits per person annually.

Liberty Assist

Liberty HealthShare reduced the AUA for Liberty Assist by two-thirds earlier in 2025, bringing it to $500, which is a significant change in out-of-pocket exposure for members aged 65 and older who are enrolled in Medicare parts A and B. No Co-Share applies. Telehealth through DialCare is available, though Assist members pay a $55 per-visit fee directly to the provider rather than having visits shared in full.

Liberty Rise

Liberty Rise, designed for young people ages 18 to 29, carries no Co-Share and saw its suggested monthly share contribution reduced by 19% in May 2025, dropping to $99. That pricing puts Liberty Rise among the more accessible entry points in the Liberty HealthShare program portfolio should the applicant be in the age-range. Telehealth access is available at the same $55 per-visit fee structure as Liberty Assist.

Liberty Freedom

Liberty Freedom is for those under the age of 35 and carries no Co-Share. Telehealth through DialCare is not available to Liberty Freedom members, nor to members residing in Vermont. For members who infrequently use telehealth services and want sharing in the event of an eligible catastrophic medical event, Liberty Freedom provides a no-Co-Share option at the lower end of the contribution range, at $89 a month for an individual.

What All Programs Share

Across all six programs, Liberty HealthShare members retain the freedom to choose any healthcare provider. The ministry encourages the use of providers who participate in the PHCS network, one of the largest in the country, to help manage medical expenses, but no program restricts members to a defined network.

Annual preventive wellness visits and related lab work for which there are no medical symptoms or diagnoses in advance are eligible for sharing up to $500 after the first two months of membership, and are not subject to the AUA. Preventive screenings including pap smears, PSA tests, Cologuard, and screening mammograms for women 40 and older are eligible for sharing under specific frequency guidelines, also without application to the AUA.

Enrollment in Liberty HealthShare is open year-round. There are no special qualifying events required, and members are not locked into annual commitments. A member who joins Liberty Rise today and later determines Liberty Unite better fits their needs can switch accordingly on their annual renewal date.

Supplemental Options Worth Noting

Members across all six programs can add Liberty Dental, the ministry's supplemental dental sharing program, with suggested monthly share amounts beginning at $35. Members can use any licensed dentist — no network restrictions apply. Liberty Vision is also available as a supplemental add-on for individuals, couples, and families, starting at $7 per month for individuals.

How to Choose

"There's no one-size-fits-all when it comes to healthcare," Morrow noted. "We understand that."

Members who expect frequent medical needs and want minimal financial exposure at the point of service may prefer programs with lower AUAs and no Co-Share, even if those come with higher monthly share amounts. Members who are generally healthy and primarily want a community-supported option for larger or unexpected eligible medical expenses may find that a higher AUA with a lower monthly share amount suits their circumstances.

For a full side-by-side program comparison based on age and family size, Liberty HealthShare's website walks through each option in detail. Members and prospective members can also reach the ministry directly at 855-585-4237.


by Sponsored Content via Digital Information World

AI overly affirms users asking for personal advice

By Ula Chrobak, Stanford University School of Engineering

When it comes to personal matters, AI systems might tell you what you want to hear, but perhaps not what you need to hear.

In a new study published in Science, Stanford computer scientists showed that artificial intelligence large language models are overly agreeable, or sycophantic, when users solicit advice on interpersonal dilemmas. Even when users described harmful or illegal behavior, the models often affirmed their choices. “By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” said Myra Cheng, the study’s lead author and a computer science PhD candidate. “I worry that people will lose the skills to deal with difficult social situations.”

The findings raise concerns for the millions of people discussing their personal conflicts with AI. Almost a third of U.S. teens report using AI for “serious conversations” instead of reaching out to other people.

Agreeable AIs

After learning that undergraduates were using AI to draft breakup texts and resolve other relationship issues, Cheng decided to investigate. Previous research had found AI can be excessively agreeable when presented with fact-based questions, but little was known about how large language models judge social dilemmas.

Cheng and her team started by measuring how pervasive sycophancy was among AIs. They evaluated 11 large language models, including ChatGPT, Claude, Gemini, and DeepSeek. The researchers queried the models with established datasets of interpersonal advice. They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong. A third set of statements presented to the models included thousands of harmful actions, including deceitful and illegal conduct.

Compared to human responses, all of the AIs affirmed the user’s position more frequently. In the general advice and Reddit-based prompts, the models on average endorsed the user 49% more often than humans. Even when responding to the harmful prompts, the models endorsed the problematic behavior 47% of the time.

In the next stage of the study, the researchers probed how people respond to sycophantic AI. They recruited more than 2,400 participants to chat with both sycophantic and non-sycophantic AIs. Some of the participants conversed with the models about pre-written personal dilemmas based on the Reddit community posts where the crowd universally deemed the user to be in the wrong, while other participants recalled their own interpersonal conflicts. Afterward, they answered questions about how the conversation went and how it affected their perception of the interpersonal problem.

Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found. When discussing their conflicts with the sycophant, they also grew more convinced they were in the right and reported they were less likely to apologize or make amends with the other party in the scenario.

“Users are aware that models behave in sycophantic and flattering ways,” said Dan Jurafsky, the study’s senior author and a professor of linguistics in the School of Humanities and Sciences and of computer science in the School of Engineering. “But what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”

Also concerning: participants rated both types of AI, sycophantic and non-sycophantic, as equally objective. That suggests users could not tell when an AI was being overly agreeable.

One reason users may not notice sycophancy is that the AIs rarely wrote that the user was “right” but tended to couch their response in seemingly neutral and academic language. In one scenario presented to the AIs, for example, the user asked if they were in the wrong for pretending to their girlfriend that they were unemployed for two years. The model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”

Sycophancy safety risks

Cheng worries that sycophantic advice will erode people’s social skills and their ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people,” she said. But, she added, this friction can be productive for healthy relationships.

“Sycophancy is a safety issue, and like other safety issues, it needs regulation and oversight,” added Jurafsky, who is also the Jackson Eli Reynolds Professor of Humanities. “We need stricter standards to avoid morally unsafe models from proliferating.”

The team is now exploring ways to tone down this tendency. They have found that they can modify models to decrease sycophancy. Surprisingly, even telling a model to start its output with the words “wait a minute” primes it to be more critical.
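The “wait a minute” finding is a form of output priming: forcing the model’s reply to begin with a skeptical phrase nudges the rest of the response toward criticism. As an illustration only (this function and payload shape are hypothetical, not from the study, though some chat APIs do accept a trailing partial assistant turn as a prefill), such a request could be sketched as:

```python
# Hypothetical sketch: prime a chat model toward a more critical response
# by prefilling the start of its reply. The model would continue generating
# from the prefilled text instead of starting fresh.

def build_primed_request(user_dilemma: str, prefix: str = "Wait a minute"):
    """Return a chat payload whose final assistant turn is prefilled with `prefix`."""
    return {
        "messages": [
            {"role": "user", "content": user_dilemma},
            # The model continues from this prefix rather than opening with praise.
            {"role": "assistant", "content": prefix},
        ]
    }

payload = build_primed_request("Was I wrong to hide my job loss from my partner?")
print(payload["messages"][-1]["content"])  # prints: Wait a minute
```

Whether a given provider honors such a prefill, and how strongly it shifts the tone, would need to be checked against that provider’s documentation.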

For the time being, Cheng advises caution to people seeking advice from AI. “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”


For more information

Other Stanford co-authors included postdoctoral scholar Cinoo Lee and undergraduates Sunny Yu and Dyllan Han. Pranav Khadpe of Carnegie Mellon University is also a co-author.

The research was funded by the National Science Foundation.

Note: This post was originally published on Stanford Report and republished on Digital Information World with permission.

Reviewed by Irfan Ahmad.

Image: Saradasish Pradhan - Unsplash

Read next: 

• Personalization features can make LLMs more agreeable

• Most Parents Keep Track of Their Children’s Online Browsing

• Workplace collaboration: Employees reveal what they want leaders to change
by External Contributor via Digital Information World

Tuesday, March 31, 2026

Workplace collaboration: Employees reveal what they want leaders to change

By Ellie Stewart

Building a collaborative culture is the ultimate business goal, but it can be a slog in practice. It doesn't take much—just one broken link in the chain—to throw a whole project off the rails.

To see how teams are collaborating and staying productive right now, Adobe for Business surveyed over 1,000 full-time US workers. They wanted to see which tools and processes are actually helping and which are just adding noise.

The cost of collaboration barriers

Some collaboration struggles are resolved in minutes; others take days or even weeks to untangle before a shared understanding is reached. To quantify the cost of these breakdowns in lost time, the Adobe for Business study found that workers lose an average of 97 hours a year to communication struggles and 81 hours a year to unproductive meetings.

The 97 hours a year lost to communication breakdowns equates to nearly two hours a week, so what can businesses do to avoid these breakdowns and help employees reclaim valuable time?

The workers surveyed estimated that if ineffective collaboration processes were removed, they could reclaim 178 hours a year, nearly three and a half hours a week, to put toward strategic, high-impact work. For anyone in a leadership role, clearing out these hurdles isn't just about efficiency; it's about survival. In fact, 90% of those surveyed believe that with the blockers out of the way, they could wrap up a 40-hour week in four days. That's a massive chunk of time currently being thrown away.

The study also considered the time workers in different industries think they could save, finding that employees in the finance industry are particularly in support of this workweek change. Nearly all (94%) finance employees surveyed reported that they could switch to a four-day workweek if collaboration were improved.

Inefficiency causes across roles, industries and locations

The why behind collaboration inefficiencies varies by job role and industry, providing valuable insights for business leaders on potential changes to implement to best suit their teams. The data shows that "death by meeting" hits the C-suite the hardest. Senior staff are losing roughly 91 hours a year to meetings that don't go anywhere—that’s two hours gone every single week. It’s better for entry-level staff, but not by much; they're still losing 65 hours. The size of business matters here, too: big enterprise teams are wasting 69% more time than people at smaller shops.

States losing the most time to unproductive meetings:

  • New York - 90 hours lost a year
  • New Jersey - 81 hours lost a year
  • California - 79 hours lost a year
  • Florida - 76 hours lost a year

The potential benefits of addressing collaboration challenges are even greater in certain industries, where a significant amount of valuable time is being drained. Workers in the manufacturing industry reported the most time lost to collaboration blockers, at up to 214 hours a year, or over four hours a week.


Industries losing the most time to collaboration friction:
  • Manufacturing - 214 hours a year
  • Sales - 208 hours a year
  • Finance - 200 hours a year
  • Marketing - 186 hours a year
  • Tech - 179 hours a year

These teams stand to reclaim well above the national average of 178 hours a year if effective methods of collaboration are put in place to increase productivity.

Here's why projects fail and goals are misaligned

It’s not uncommon for some projects to veer off course, but it’s important for teams to examine why this happens in order to reclaim time lost to inefficient collaboration. The employee survey from Adobe for Business indicates that communication breakdowns are the key contributor to blocking effective collaboration, causing nearly half (46%) of all project delays.

It’s no surprise people are exhausted when more than a third (36%) of their projects start without any real consensus from stakeholders. Projects like these tend to get stuck before they even get a chance to start, leaving the rest of the team scrambling to clean up a mess they didn't make in the first place.

Without team alignment from the outset, the consequences for projects are felt immediately. Here are five key ‘costs’ of disconnected teams, according to the employees surveyed:

  • Wasted time and effort - 76%
  • Missed deadlines - 58%
  • Decreased work quality - 57%
  • Difficulty tracking progress - 47%
  • Budget overruns - 23%

One of the most substantial ways team misalignment on project goals affects employees is by forcing significant rework. Roughly a third (33%) of those surveyed said they have had to rework projects due to misalignment.

Employees also noted the key reasons they feel projects are thrown off course:

  • Unclear leadership directives - 40%
  • Lack of standardized processes across teams - 34%
  • Frequent changes in project priorities - 34%
  • Insufficient visibility into other teams’ progress - 28%
  • Too many disconnected tools - 28%

In addition to the impacts above, employees also cite a lack of regular cross-functional check-ins (27%), the absence of a single source of truth for project information (23%), and a lack of training on processes (17%) as blockers to projects staying on course.

The psychological toll of collaboration blockers

Aside from its impact on the project at hand, ineffective collaboration takes a significant psychological toll on the workforce. More than half (56%) of US employees surveyed said navigating collaboration hurdles caused mental fatigue.

The mental toll varies with the work environment. Over half (55%) of both remote and on-site workers cited poor collaboration as a cause of stress. Without supportive workflows in place, this stress has repercussions for retention: on-site employees are 47% more likely to seek new job opportunities due to a lack of effective workflow management and team collaboration.

What employees want to dismantle ineffective collaboration

Rather than adding more tech solutions to patch collaboration inefficiencies, strategic intervention is needed. Employees in the Adobe for Business study point to the enablers they see as most valuable in unlocking smoother ways of working with their teams.

Clear and consistent communication channels (42%) were the most requested improvement, followed by explicitly defined roles and responsibilities (38%) within the team to ensure everyone is aligned on expectations.

Demand is also high for a platform that acts as a ‘single source of truth’ for a project; over a fifth of all employees deemed it essential. That demand is greater among remote workers, who are 28% more likely than on-site workers to request a ‘single source of truth’ as a solution for collaboration breakdowns. Employees want this unified approach to avoid a siloed team structure, which over one in five identified as a major barrier to collaboration.

Understanding collaboration enablers also means considering the varying support required by different demographics within a team. Timely decision making and clear next steps (41%) are most valued by Baby Boomers, whereas Gen X and Millennials prioritize clear communication channels (42%) to collaborate effectively. Gen Z say a shared understanding of project goals (40%) would be most valuable to them.

To close the collaboration gap, teams want workflow management that centralizes project insights into a ‘single source of truth’, automates low-impact admin tasks, and formalizes processes to provide the necessary structure and real-time visibility into performance.

Companies can’t afford to sit back and hope their teams figure out how to work together. Leaders have to be proactive about fixing these gaps, not just for the sake of the bottom line, but to keep high performers from leaving. Once everyone is on the same page, the busywork falls away and the real work finally starts.

Reviewed by Irfan Ahmad.

Read next:

• Fragmented phone use — not total screen time — is the main driver of information overload, study finds

• Most Parents Keep Track of Their Children’s Online Browsing

by Guest Contributor via Digital Information World