Tuesday, February 3, 2026

PFAS are turning up in the Great Lakes, putting fish and water supplies at risk – here’s how they get there

Christy Remucal, University of Wisconsin-Madison
Image Credit: Sharon Fjeldstrom via publicdomainpictures. Caption: Welland Canal, Ontario. License: CC0 Public Domain.

No matter where you live in the United States, you have likely seen headlines about PFAS being detected in everything from drinking water to fish to milk to human bodies.

PFAS, or per- and polyfluoroalkyl substances, are a group of over 10,000 synthetic chemicals. They have been used for decades to make products waterproof and stain- and heat-resistant – picture food wrappers, stain-resistant carpet, rain jackets and firefighting foam.

These chemicals are a growing concern because some PFAS are toxic even at very low levels and associated with health risks like thyroid issues and cancer. And some of the most common PFAS don’t naturally break down, which is why they are often referred to as “forever chemicals.”

Now, PFAS are posing a threat to the Great Lakes, one of America’s most vital water resources.

The five Great Lakes are massive, with over 10,000 miles (16,000 kilometers) of coastline across two countries, and they contain 21% of the world’s fresh surface water. They provide drinking water to over 30 million people and are home to a robust commercial and recreational fishing industry.

My colleagues at the University of Wisconsin-Madison and I study how chemicals like PFAS are affecting water systems. Here’s what we’re learning about how PFAS are getting into the Great Lakes, the risks they’re posing and how to reduce those risks in the future.

PFAS’ many pathways into the Great Lakes

Hundreds of rivers flow into the lakes, and each can be contaminated with PFAS from sources such as industrial sites, military operations and wastewater treatment plants in their watersheds. Some pesticides also contain PFAS, which can wash off farm fields and into creeks, rivers and lakes.

The concentration of PFAS in rivers can vary widely depending on these upstream impacts. For example, we found concentrations of over 1,700 parts-per-trillion in Great Lakes tributaries in Wisconsin near where firefighting foam has regularly been used. That’s more than 400 times higher than federal drinking water regulations for PFOS and PFOA, both 4 parts-per-trillion.

However, concentration alone does not tell the whole story. We also found that large rivers with relatively low amounts of PFAS can put more of these chemicals into the lakes each day compared with smaller rivers with high amounts of PFAS. This means that any effort to limit the amount of PFAS in the Great Lakes should consider both high-concentration hot spots and large rivers.
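To see why a big, dilute river can matter more than a small, heavily contaminated one, it helps to compare daily mass loads (concentration multiplied by flow) rather than concentrations alone. The sketch below is a minimal illustration with made-up flows and concentrations, not measurements from the study.

```python
# Minimal illustration: PFAS load = concentration x flow.
# The two rivers and their numbers are hypothetical, chosen only to show
# how a large, dilute river can deliver more PFAS per day than a small,
# heavily contaminated one.

NG_PER_LITER_PER_PPT = 1.0      # 1 part per trillion of PFAS ~ 1 ng per liter of water
LITERS_PER_CUBIC_METER = 1000
SECONDS_PER_DAY = 86_400

def daily_load_grams(concentration_ppt: float, flow_m3_per_s: float) -> float:
    """Return the PFAS mass (grams) a river delivers per day."""
    ng_per_liter = concentration_ppt * NG_PER_LITER_PER_PPT
    liters_per_day = flow_m3_per_s * LITERS_PER_CUBIC_METER * SECONDS_PER_DAY
    return ng_per_liter * liters_per_day / 1e9   # nanograms -> grams

rivers = {
    "small creek near a foam site (hypothetical)": {"ppt": 1_700, "flow_m3_s": 0.5},
    "large tributary with low levels (hypothetical)": {"ppt": 10, "flow_m3_s": 300},
}

for name, r in rivers.items():
    print(f"{name}: {daily_load_grams(r['ppt'], r['flow_m3_s']):.0f} g/day")
# The large, dilute river carries far more total PFAS per day, which is why
# monitoring has to weigh both hot spots and big rivers.
```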

Groundwater is another key route carrying PFAS into the Great Lakes. Groundwater is a drinking water source for more than one-third of people in the U.S., and it can become contaminated when PFAS from firefighting foam and other sources seep through the soil and into the groundwater below.

When plumes of this contaminated groundwater reach the Great Lakes, they carry PFAS with them. We detected PFAS concentrations of over 260 parts-per-trillion in the bay of Green Bay in Lake Michigan. The chemicals we found were associated with firefighting foam, and we were able to trace them back to a contaminated groundwater plume.

PFAS can also enter the Great Lakes in unexpected ways, such as in rain and snowfall. PFAS can get into the atmosphere from industrial processes and waste incineration. The chemicals have been detected in rain across the world, including in states surrounding the Great Lakes.

Although PFAS concentrations in precipitation are typically lower than in rivers or groundwater, this is still an important contamination source. Scientists estimate that precipitation is a major source of PFAS to Lake Superior, which receives about half of its water through precipitation.

Where PFAS end up determines the risk

Much of the PFAS that enter Lake Superior will eventually make their way to the downstream lakes of Michigan, Huron, Erie and Ontario.

These chemicals’ ability to travel with water is one reason why PFAS are such a concern for drinking water systems. Many communities get their drinking water from the Great Lakes.


PFAS can also contaminate other parts of the environment.

The chemicals have been detected in sediments at the bottom of all the Great Lakes. Contaminated sediment can release PFAS back into the overlying water, where fish and aquatic birds can ingest it. So, future remediation efforts to remove PFAS from the lakes are about more than just the water – they involve the sediment as well.

PFAS can also accumulate in foams that form on lake shorelines during turbulent conditions. Concentrations of PFAS can be up to 7,000 times higher in natural foams compared with the water because PFAS are surfactants and build up where air and water meet, like bubbles in foam. As a result, state agencies recommend washing skin that comes in contact with foam and preventing pets from playing in foam.

Some PFAS bioaccumulate, or build up, within fish and wildlife. Elevated levels of PFAS have been detected in Great Lakes fish, raising concerns for fisheries.

High PFAS concentrations in fish in coastal areas and inland waters have led to advisories recommending that people limit how much fish they eat.

Looking ahead

Water cycles through the Great Lakes, but the process can take many years, from 2.6 years in Lake Erie to nearly 200 years in Lake Superior.

This means that PFAS that enter the lakes will be there for a very long time.
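Those retention times come from a simple ratio: a lake’s volume divided by how quickly water flows out of it. The quick check below uses approximate reference volumes and outflows supplied here for illustration; they are assumptions of mine, not figures from the article.

```python
# Rough residence-time estimate: volume / outflow.
# The volumes and outflows below are approximate reference values used
# only for illustration; they are not taken from the article.

SECONDS_PER_YEAR = 3.15e7

lakes = {
    "Lake Erie":     {"volume_km3": 480,    "outflow_m3_s": 5_800},
    "Lake Superior": {"volume_km3": 12_100, "outflow_m3_s": 2_100},
}

for name, lake in lakes.items():
    volume_m3 = lake["volume_km3"] * 1e9          # 1 km^3 = 1e9 m^3
    years = volume_m3 / (lake["outflow_m3_s"] * SECONDS_PER_YEAR)
    print(f"{name}: roughly {years:.1f} years for water to cycle through")
# Lake Erie works out to a few years and Lake Superior to well over a century,
# consistent with the 2.6-year and near-200-year figures cited above.
```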


Since it is not possible to clean up the over 6 quadrillion gallons of water in the Great Lakes after they have been contaminated, preventing further contamination is key to protecting the lakes for the future.

That starts with identifying contaminated groundwater and rivers that are adding PFAS to the lakes. The Sea Grant College Program and the National Institutes for Water Resources, including the Wisconsin programs that I direct, have been supporting research to map these sources, as well as helping translate that knowledge into actions that policymakers and resource managers can take.

PFAS contamination is an issue beyond the Great Lakes and is something everyone can work to address.

  • Drinking water. If you are one of the millions of people who drink water from the Great Lakes, find out the PFAS concentrations in your drinking water. This data is increasingly available from local drinking water utilities.
  • Fish. Eating fish can provide great health benefits, but be aware of health advisories about fish caught in the Great Lakes and in inland waters so you can balance the risks. Other chemicals, such as mercury and PCBs, can also lead to fish advisories.
  • Personal choice. Scientists have proposed that PFAS only be used when they have vital functions and there are no alternatives. Consumer demand for PFAS-free products is helping reduce PFAS use in some products. Several states have also introduced legislation to ban PFAS use in some applications.

Decreasing use of PFAS will ultimately prevent downstream contamination in the Great Lakes and around the U.S.

Christy Remucal, Professor of Civil and Environmental Engineering, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Say what’s on your mind, and AI can tell what kind of person you are


by External Contributor via Digital Information World

Say what’s on your mind, and AI can tell what kind of person you are

If you say a few words, generative AI will understand who you are—maybe even better than your close family and friends.

Image: 愚木混株 Yumu / Unsplash

A new University of Michigan study* found that widely available generative AI models (e.g., ChatGPT, Claude, LLaMa) can predict personality, key behaviors and daily emotions as or even more accurately than those closest to you.

“What this study shows is AI can also help us understand ourselves better, providing insights into what makes us most human, our personalities,” said the study’s first author Aidan Wright, U-M professor of psychology and psychiatry. “Lots of people may find this of interest and useful. People have long been interested in understanding themselves better. Online personality questionnaires, some valid and many of dubious quality, are enormously popular.”

Researchers looked into whether AI programs like ChatGPT and Claude can act like general “judges” of personality. To test this, they had the AI read people’s own words—either short daily video diaries or longer recordings of what happened to be on their mind—and asked it to answer personality questions the way each person would. The study included stories and thoughts from more than 160 people collected in real-life and lab settings.
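The paper’s pipeline is not reproduced here, but conceptually the setup can be sketched as: feed a model a person’s own open-ended narrative, ask it to fill in a standard personality questionnaire as that person would, and compare its answers with the person’s self-report. Below is a minimal, hypothetical sketch of that idea using the OpenAI Python client; the prompt, the items, the model name and the parsing are illustrative assumptions, not the study’s actual materials.

```python
# Hypothetical sketch: have an LLM answer personality items "as the speaker",
# then compare those answers with the speaker's own self-ratings.
# This is NOT the study's code; prompt, items and model are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ITEMS = [  # illustrative Big Five-style items, rated 1 (disagree) to 5 (agree)
    "I see myself as someone who is outgoing, sociable.",
    "I see myself as someone who tends to be disorganized.",
    "I see myself as someone who worries a lot.",
]

def rate_items_from_narrative(narrative: str) -> list[int]:
    """Ask the model to rate each item the way the narrator would."""
    prompt = (
        "Here is something a person said about their day:\n\n"
        f"{narrative}\n\n"
        "For each statement below, answer with a single number from 1 to 5, "
        "as you believe this person would rate themselves. "
        "Return only the numbers, separated by commas.\n"
        + "\n".join(f"- {item}" for item in ITEMS)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    # Naive parsing for the sketch; real code would validate the output.
    return [int(x.strip()) for x in text.split(",")[: len(ITEMS)]]

# Usage: compare model-inferred ratings with the person's own answers.
narrative = "Honestly, today was a lot. I kept double-checking my slides..."
model_ratings = rate_items_from_narrative(narrative)
self_ratings = [2, 3, 5]  # the person's own (hypothetical) answers
print(model_ratings, self_ratings)
```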

The results showed that the AI’s personality scores were very similar to how people rated themselves, often matching them as closely as, or better than, ratings from friends or family typically do. Older text-analysis methods did not perform nearly as well as these newer AI systems.

“We were taken aback by just how strong these associations were, given how different these two data sources are,” Wright said.

AI’s personality ratings could also predict real parts of people’s lives, like their emotions, stress levels, social behavior and even whether they had been diagnosed with mental health conditions or sought treatment, according to the findings.

This research indicates that personality naturally shows up in our everyday thoughts, words and stories—even when we’re not trying to describe ourselves.

Chandra Sripada, U-M professor of philosophy and psychiatry, says the findings support the long-held idea that language carries deep clues about how people differ in psychological traits such as personality and mood. He adds that open-ended writing and speech can be a powerful tool for understanding personality. Thanks to generative AI, researchers can now analyze this kind of data quickly and accurately in ways that weren’t possible before.

At the same time, important questions remain. The study relied on people rating their own personalities and did not test how well AI compares with judgments from friends or family, or how results might differ across age, gender or race.

Researchers also don’t yet know whether AI and humans rely on the same signals—or whether AI could one day outperform self-reports when predicting major life outcomes like relationships, education, health, or career success.

“The study shows that AI can reliably uncover personality traits from everyday language, pointing to a new frontier in understanding human psychology,” said Colin Vize, assistant professor of psychology at the University of Pittsburgh.

Whitney Ringwald, assistant professor of psychology at the University of Minnesota, says the results “really highlight how our personality is infused in everything we do, even down to our mundane, everyday experiences and passing thoughts.”

The study’s other authors were Johannes Eichstaedt of Stanford University and Mike Angstadt and Aman Taxali, both from U-M. The findings appear in the journal Nature Human Behaviour.

Contact: Jared Wadley.

*Study: Generative AI predicts personality traits based on open-ended narratives (DOI: 10.1038/s41562-025-02389-x)

Editor’s Notes: This article was originally published on Michigan News, and republished here with permission.

Read next:

The Dangers of Not Teaching Students How to Use AI Responsibly

Lit bots beware: Readers less favorable toward AI-generated creative writing, U-M research finds

by External Contributor via Digital Information World

Saturday, January 31, 2026

Lit bots beware: Readers less favorable toward AI-generated creative writing, U-M research finds

When it comes to creative writing, score one for the humans over the machines. For now, anyway.

Image: Andrea Piacquadio / Pexels

New research finds that people evaluate creative writing less favorably when they learn it was generated in whole or part by artificial intelligence. And the anti-AI bias is persistent and difficult to reduce, even when steps were taken to lessen the aversion within the experiments.

The strength and consistency of the negative attitude toward AI-generated or AI-assisted writing jumped out at the researchers, who say it has implications for integrating AI in creative fields. As it stands, the study finds people tend to view the creative works of machines as “relatively inauthentic and therefore less worthy of their appreciation.”

The researchers say previous research has offered preliminary evidence that AI disclosure can have negative effects on how people evaluate creative content, but their study builds on it by revealing a “surprising level of robustness” across 16 experiments involving 27,000 participants conducted between March 2023 and June 2024.

“What surprised us most was how incredibly ‘sticky’ this penalty is,” said Justin Berg, the study’s co-author and an associate professor of management and organizations at the University of Michigan’s Ross School of Business.

“We threw everything at it, from changing the story’s perspective to humanizing the AI or framing it as a collaboration, and nothing reliably reduced the bias. Across all the experiments, the pattern was clear: If readers believe AI is involved, they view the work as less authentic and enjoy it less, even when the content is identical.”

Throughout the study, the researchers asked participants to read and evaluate AI-generated writing samples created using ChatGPT—chosen because it was the most well-known large language model at the time of the initial study. Across all the experiments, AI disclosure decreased evaluations by an average of 6.2%.

Berg and his colleagues, Manav Raj of the University of Pennsylvania’s Wharton School of Business and Rob Seamans of New York University’s Stern School of Business, note the results reflect attitudes during a period of rapid advancements in AI capabilities and shifting perceptions of its role in creative work. It’s an open question—and fertile ground for further study—whether the AI disclosure penalty will persist, diminish or reverse as such content becomes more pervasive.

What does appear clear—at least for now—is the use of AI in creative writing triggers different psychological responses than when the technology is employed in other domains. Understanding that bias is crucial for helping navigate the challenges for those working toward fuller, broader human-AI collaboration.

The findings, published in the Journal of Experimental Psychology, also pose practical implications for creative producers using AI, as the U.S. Congress considers AI disclosure legislation. Mandated disclosure of AI involvement in creative work could usher in negative biases toward such content and potentially affect its reception.

Contact: Jeff Karoub.

Editor’s Notes:
1. This article was originally published on Michigan News, and republished here with permission. A representative of the University of Michigan news team confirmed that AI tools were not used in its production. 
2. The study notes: "We have studied the effect of AI disclosure on evaluations in one specific domain (creative writing)" … "We are also careful to note that our study does not address whether and in what circumstances output created by an AI tool may be more or less creative than output created by a human" … and "it is important to note that the AI disclosure effects we document may evolve over time".

Read next: The Dangers of Not Teaching Students How to Use AI Responsibly

by External Contributor via Digital Information World

Friday, January 30, 2026

AI is failing ‘Humanity’s Last Exam’. So what does that mean for machine intelligence?

Image: Egor Komarov/Unsplash

Kai Riemer, University of Sydney and Sandra Peter, University of Sydney

How do you translate ancient Palmyrene script from a Roman tombstone? How many paired tendons are supported by a specific sesamoid bone in a hummingbird? Can you identify closed syllables in Biblical Hebrew based on the latest scholarship on Tiberian pronunciation traditions?

These are some of the questions in “Humanity’s Last Exam”, a new benchmark introduced in a study published this week in Nature. The collection of 2,500 questions is specifically designed to probe the outer limits of what today’s artificial intelligence (AI) systems cannot do.

The benchmark represents a global collaboration of nearly 1,000 international experts across a range of academic fields. These academics and researchers contributed questions at the frontier of human knowledge. The problems required graduate-level expertise in mathematics, physics, chemistry, biology, computer science and the humanities. Importantly, every question was tested against leading AI models before inclusion. If an AI could not answer it correctly at the time the test was designed, the question was rejected.

This process explains why the initial results looked so different from other benchmarks. While AI chatbots score above 90% on popular tests, when Humanity’s Last Exam was first released in early 2025, leading models struggled badly. GPT-4o managed just 2.7% accuracy. Claude 3.5 Sonnet scored 4.1%. Even OpenAI’s most powerful model, o1, achieved only 8%.

The low scores were the point. The benchmark was constructed to measure what remained beyond AI’s grasp. And while some commentators have suggested that benchmarks like Humanity’s Last Exam chart a path toward artificial general intelligence, or even superintelligence – that is, AI systems capable of performing any task at human or superhuman levels – we believe this is wrong for three reasons.

Benchmarks measure task performance, not intelligence

When a student scores well on the bar exam, we can reasonably predict they’ll make a competent lawyer. That’s because the test was designed to assess whether humans have acquired the knowledge and reasoning skills needed for legal practice – and for humans, that works. The understanding required to pass genuinely transfers to the job.

But AI systems are not humans preparing for careers.

When a large language model scores well on the bar exam, it tells us the model can produce correct-looking answers to legal questions. It doesn’t tell us the model understands law, can counsel a nervous client, or exercise professional judgment in ambiguous situations.

The test measures something real for humans; for AI it measures only performance on the test itself.

Using human ability tests to benchmark AI is common practice, but it’s fundamentally misleading. Assuming a high test score means the machine has become more human-like is a category error, much like concluding that a calculator “understands” mathematics because it can solve equations faster than any person.

Human and machine intelligence are fundamentally different

Humans learn continuously from experience. We have intentions, needs and goals. We live lives, inhabit bodies and experience the world directly. Our intelligence evolved to serve our survival as organisms and our success as social creatures.

But AI systems are very different.

Large language models derive their capabilities from patterns in text during training. But they don’t really learn.

For humans, intelligence comes first and language serves as a tool for communication – intelligence is prelinguistic. But for large language models, language is the intelligence – there’s nothing underneath.

Even the creators of Humanity’s Last Exam acknowledge this limitation:

High accuracy on [Humanity’s Last Exam] would demonstrate expert-level performance on closed-ended, verifiable questions and cutting-edge scientific knowledge, but it would not alone suggest autonomous research capabilities or artificial general intelligence.

Subbarao Kambhampati, professor at Arizona State University and former president of the Association for the Advancement of Artificial Intelligence, puts it more clearly:

Humanity’s essence isn’t captured by a static test but rather by our ability to evolve and tackle previously unimaginable questions.

Developers like leaderboards

There’s another problem. AI developers use benchmarks to optimise their models for leaderboard performance. They’re essentially cramming for the exam. And unlike humans, for whom studying for a test builds understanding, AI optimisation just means getting better at the specific test.

But it’s working.

Since Humanity’s Last Exam was published online in early 2025, scores have climbed dramatically. Gemini 3 Pro Preview now tops the leaderboard at 38.3% accuracy, followed by GPT-5 at 25.3% and Grok 4 at 24.5%.

Does this improvement mean these models are approaching human intelligence? No. It means they’ve gotten better at the kinds of questions the exam contains. The benchmark has become a target to optimise against.

The industry is recognising this problem.

OpenAI recently introduced a measure called GDPval specifically designed to assess real-world usefulness.

Unlike academic-style benchmarks, GDPval focuses on tasks based on actual work products such as project documents, data analyses and deliverables that exist in professional settings.

What this means for you

If you’re using AI tools in your work or considering adopting them, don’t be swayed by benchmark scores. A model that aces Humanity’s Last Exam might still struggle with the specific tasks you need done.

It’s also worth noting the exam’s questions are heavily skewed toward certain domains. Mathematics alone accounts for 41% of the benchmark, with physics, biology and computer science making up much of the rest. If your work involves writing, communication, project management or customer service, the exam tells you almost nothing about which model might serve you best.

A practical approach is to devise your own tests based on what you actually need AI to do, then evaluate newer models against criteria that matter to you. AI systems are genuinely useful – but any discussion about superintelligence remains science fiction and a distraction from the real work of making these tools relevant to people’s lives.
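In practice, "devise your own tests" can be as simple as a small script: a handful of prompts drawn from your real work, a pass/fail check for each, and a score per model. The sketch below is a generic, hypothetical harness; the tasks, the checks and the ask_model placeholder are assumptions you would replace with your own prompts and whichever model API you actually use.

```python
# Hypothetical mini-benchmark: your own tasks, your own pass/fail checks.
# `ask_model` is a placeholder; wire it to whatever model provider you use.

def ask_model(model_name: str, prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your model provider.")

# Each task is a prompt from your real work plus a simple check on the answer.
TASKS = [
    {
        "prompt": "Summarize this customer email in two sentences: ...",
        "passes": lambda answer: answer.count(".") <= 3 and len(answer) < 500,
    },
    {
        "prompt": "Extract the invoice number from: 'Ref INV-0042, due March 3'.",
        "passes": lambda answer: "INV-0042" in answer,
    },
]

def score(model_name: str) -> float:
    """Fraction of your own tasks the model gets right."""
    results = []
    for task in TASKS:
        try:
            answer = ask_model(model_name, task["prompt"])
            results.append(bool(task["passes"](answer)))
        except Exception:
            results.append(False)
    return sum(results) / len(TASKS)

# Usage: compare candidate models on the work you actually need done,
# rather than on a leaderboard built from someone else's questions.
# for model in ["model-a", "model-b"]:
#     print(model, score(model))
```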

Kai Riemer, Professor of Information Technology and Organisation, University of Sydney and Sandra Peter, Director of Sydney Executive Plus, Business School, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Should companies replace human workers with robots? New study takes a closer look


by External Contributor via Digital Information World

Thursday, January 29, 2026

Should companies replace human workers with robots? New study takes a closer look

Written By Anthony Borrelli. Edited by Ayaz Khan.

Binghamton University School of Management researchers show how companies create more value through human-robot collaboration.

Image: Simon Kadula / Unsplash

Last year, when The New York Times reported that Amazon’s robotics team’s ultimate goal was to automate 75% of the company’s operations, replacing more than half a million human jobs in an attempt to pass cost savings onto customers, it was a stark reminder of robots’ ever-expanding role in reshaping the American workplace.

Meanwhile, at Hyundai’s auto plant in Georgia, more than 1,000 robots work alongside almost 1,500 human employees.

But as new research involving the Binghamton University School of Management (SOM) found, companies could risk losing their competitive edge by leaning too heavily on replacing human workers with robots, since competitors could easily follow suit. Instead, researchers determined businesses could generate more value by focusing on human-robot collaboration, amplifying their existing human capital into hard-to-imitate resources.

“Simply put, deploying robots in a collaborative manner with humans can alter social dynamics in ways that encourage unit members to feel, act and think together,” the study, published in the Journal of Organizational Behavior, stated. “By leveraging these resources through the deployment of robots in collaborative settings, organizations can not only generate additional economic value from their human capital but also improve their ability to capture a greater share of that value in the competitive market.”

Chou-Yu (Joey) Tsai, SOM Osterhout associate professor of entrepreneurship and the study’s co-author, said researchers initially wanted to explore how an organization’s human-robot interface could affect leadership, but then realized it could be more beneficial to focus on its impact on the organization as a whole.

The study examined the issue from two viewpoints: a substitute view and a complementary view. Both can enhance an organization’s desired outcomes in efficiency and productivity, researchers determined, but organizations that adopt a complementary view of human-robot collaboration are more likely to foster a greater and more positive sense of commitment among human employees.

“The most successful organizations will find a way to extract the best value from these technologies to achieve their unique goals,” Tsai said. “If you’re focused on going up against other companies by introducing robots to replace some key roles traditionally carried out by human employees, that’s not always the best strategic thinking because your competitors could easily do the same thing.”

Additionally, the study noted that on-the-job learning also remains fundamental for understanding the best ways to implement such changes.

Handing over to robots the tasks that offer employees meaning, autonomy or opportunities for mastery could undermine not only employees’ mental health, researchers said, but also the very efficiency gains employers are striving for.

“Discussion of AI and robots often centers on adoption speed, workplace disruption and job displacement,” said SOM Associate Dean for Faculty Research Rory Eckardt, another co-author on the study. “Our paper shifts attention to complementary integration by considering when these technologies strengthen teamwork and coordination, improve the work environment, and support value creation and competitive advantage.”

One effective example the researchers cited involved members of a company’s research and development team working with robot systems to better analyze complex datasets. Doing so amplifies the team’s effectiveness in achieving results and helps them work together more efficiently, according to the study.

Another example involved hospital staff using surgical robots to achieve higher-definition 3-D visualization, surpassing the limitations of the human hand to perform increasingly delicate medical procedures.

Using this collaborative approach can increase employee loyalty, according to the study, because it shows the company is providing additional support for the work being done.

“When I began my research career in leadership and organization science, I could have never predicted that technology would advance to the point where we’re researching the impact of robots on leadership development and organization effectiveness,” said SOM Dean Shelley Dionne, who co-authored the study. “But now it informs how we think about the future of workforce development and employee performance, no matter what type of organization we consider.”

The study, “Human Capital Robotic Integration and Value Creation for Organizations,” was also co-authored by Jason Marshall from Creighton University in Nebraska, Malte Jung from Cornell University, YoYo Tsung-Yu Hou from National Chengchi University in Taiwan and Biying Yang from South Dakota State University.

Originally published by Binghamton University / BingUNews (State University of New York) and republished on DIW with permission.

Read next: Which Roles Use AI More Frequently in U.S. Workplaces? Leaders Report Higher Frequency, Gallup Survey Shows
by External Contributor via Digital Information World

Wednesday, January 28, 2026

Which Roles Use AI More Frequently in U.S. Workplaces? Leaders Report Higher Frequency, Gallup Survey Shows

by Andy Kemp. Edited by Asim BN.

U.S. employees already using artificial intelligence (AI) in the workplace used it slightly more often in the fourth quarter of 2025 than in the prior quarter, continuing a gradual increase since 2023. The proportion of employees using AI daily has risen from 10% to 12%. Frequent use, defined as using AI at work at least a few times a week, has also inched up three percentage points to 26%.

These increases are on par with the expansion of frequent workplace AI use reported throughout 2025. Meanwhile, the percentage of total users, those who use AI at work at least a few times a year, was flat in Q4 after sharp increases earlier in the trend. Nearly half of U.S. workers (49%) report that they “never” use AI in their role.


Organizational AI adoption has not changed meaningfully from the previous quarter. In Q4, 38% of employees said their organization has integrated AI technology to improve productivity, efficiency and quality. Forty-one percent said their organization has not implemented AI tools, and 21% said they don’t know. These results closely mirror Q3 figures.

AI Use Varies by Industry and Role Type

AI use in the workplace is most prevalent in knowledge-based industries and least common in production and service-based sectors. Employees in technology, finance and higher education report the highest levels of AI use, especially compared with U.S. employees in retail, manufacturing and healthcare.


Chart: Workplace AI use by industry among U.S. employees, 2023 to 2025. Total AI use rises over time in every industry, with wide variation in adoption levels. Technology shows the highest use, at 77% total, including 57% frequent and 31% daily users. Finance (64%) and colleges or universities (63%) also report high adoption. Professional services reaches 62% total, including 36% frequent and 16% daily users, and K-12 education rises to 56%. Community or social services, government or public policy, healthcare and manufacturing show more moderate adoption, with total AI use ranging from about 41% to 43%. Retail reports the lowest adoption, at 33% total, including 19% frequent and 10% daily users.

Gains in AI use were uneven across industries in Q4. The total AI user base increased most in finance and professional services, moving up six and five points, respectively. These increases widened existing gaps between higher-growth industries and those with lower AI use. In retail, total AI use did not increase in Q4 from Q3, while manufacturing saw a three-point increase.

In industries such as technology where AI use has been most prevalent, growth in total users shows signs of leveling, with gains found primarily among those already using AI. Total AI use in technology increased by just one percentage point in Q4, from 76% to 77%, while frequent use rose from 50% to 57%.

Across industries, AI use is concentrated in roles that employees describe as remote-capable, meaning the job could reasonably be completed remotely regardless of where the employee actually works. These roles are typically desk- and office-based positions.

Since Q2 2023, total AI use among employees in remote-capable roles has increased from 28% to 66%, while frequent use has risen from 13% to 40%. Growth has been slower in roles that are not remote-capable: AI use in these positions has increased from 15% to 32%, with frequent use rising from 8% to 17%.


Chart: AI use among U.S. employees in remote-capable versus non-remote-capable roles, 2023 to 2025. Employees in remote-capable roles show substantially higher adoption throughout the period: by 2025, total AI use among them reached 66%, including 40% who use AI frequently and 19% who use it daily. Among employees in non-remote-capable roles, total AI use in the most recent data is 32%, with 17% using AI frequently and 7% using it daily.

Leaders Continue to Use AI More Than Other Employees

Employees in leadership positions are more likely than managers and individual contributors to use AI at work. In Q4, 69% of leaders said they use AI at least a few times a year, compared with 55% of managers and 40% of individual contributors. Part of this difference likely reflects role type, as leaders are more likely to hold office-based and remote-capable roles where AI tools are more easily applied.



Leaders also report more frequent AI use than other employees, a gap that has widened over time. Since Q2 2023, frequent AI use among leaders has risen from 17% to 44%. Over the same period, frequent use among managers has doubled from 15% to 30%, while frequent use among individual contributors has increased from 9% to 23%. Frequent use has risen among all three types of workers since Q3, contributing to the overall climb in Q4.

Implications

Modest gains in frequent AI use were seen in Q4 2025, on par with the growth seen in Q3, but the percentage of employees who say they use AI overall remained flat. Use remains most prevalent in knowledge-based industries and remote-capable roles. These differences in AI adoption may help to explain why overall adoption appears to be slowing, even as AI use continues to deepen within certain segments of the workforce.

Leaders, in particular, report substantially higher and more frequent AI use than other employees, and that separation has grown over time. Gallup research shows that lack of utility is the most common barrier to individual AI use, suggesting that clear AI use cases may be more apparent for leaders than employees in other roles. For organizations integrating AI technology, this underscores the importance of grounding decisions about AI adoption in a clear understanding of how AI may be applied to different roles and functions, not just among those closest to decision-making.

Gallup’s newest indicator tracks workplace AI adoption over time, including usage frequency, employee comfort, manager support, organizational integration and strategic communication. Explore all of their indicators for global data on what matters most in the workplace and to societies at large.

Survey Methods

These results for the quarterly Gallup workforce studies are based on self-administered web surveys conducted with a random sample of adults working full time and part time for organizations in the United States, aged 18 and older, who are members of the Gallup Panel™. Gallup uses probability-based, random sampling methods to recruit its Panel members. Gallup weighted the obtained samples to correct for nonresponse. Nonresponse adjustments were made by adjusting the sample to match the national demographics of gender, age, race, Hispanic ethnicity, education and region. Demographic weighting targets were based on the most recent Current Population Survey figures for the aged 18 and older U.S. population.

For results based on the sample of employed U.S. adults, the margin of sampling error at the 95% confidence level varies for different topics and time frames. Details for the recent quarterly surveys are noted below. In addition to sampling error, question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of public opinion polls.

Survey Method Details

Survey dates, sample size (among employed U.S. adults), margin of error and design effect by quarter for each study.

Quarter | Survey Dates | Sample Size | Margin of Error (95% confidence level) | Design Effect
Q4 2025 | Oct. 30-Nov. 13, 2025 | 22,368 | ±1.0 percentage points | 2.26
Q3 2025 | Aug. 5-19, 2025 | 23,068 | ±1.0 percentage points | 2.46
Q2 2025 | May 7-16, 2025 | 19,043 | ±1.1 percentage points | 2.29
Q2 2024 | May 11-25, 2024 | 21,543 | ±1.0 percentage points | 2.25
Q2 2023 | May 11-25, 2023 | 18,871 | ±1.1 percentage points | 2.25
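The margins of error in the table are consistent with the standard design-effect-adjusted formula, MoE = z * sqrt(deff * p(1-p) / n), with p = 0.5 and z = 1.96 for 95% confidence. Gallup's exact computation may differ slightly, but a quick consistency check reproduces the table's figures:

```python
# Reproduce the table's margins of error with the standard
# design-effect-adjusted formula (p = 0.5 worst case, 95% confidence).
# This is a consistency check, not Gallup's own code.
import math

Z_95 = 1.96
P = 0.5

def margin_of_error(n: int, design_effect: float) -> float:
    """Margin of error in percentage points."""
    return 100 * Z_95 * math.sqrt(design_effect * P * (1 - P) / n)

quarters = {
    "Q4 2025": (22_368, 2.26),
    "Q3 2025": (23_068, 2.46),
    "Q2 2025": (19_043, 2.29),
    "Q2 2024": (21_543, 2.25),
    "Q2 2023": (18_871, 2.25),
}

for quarter, (n, deff) in quarters.items():
    print(f"{quarter}: +/- {margin_of_error(n, deff):.1f} percentage points")
# Rounds to +/- 1.0 or 1.1 percentage points per quarter, matching the table.
```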

This post was originally published on Gallup and is republished here with permission.

Read next: 

  • These Are the Best and Worst U.S. Metro Areas for Science, Technology, Engineering, and Mathematics Professionals in 2026

• Twelve Countries Say No to Banning Autonomous Weapons

by External Contributor via Digital Information World

Twelve Countries Say No to Banning Autonomous Weapons

by Anna Fleck, Data Journalist - Edited by Asim BN.

The United States and United Kingdom are among 12 countries opposing a global ban on autonomous weapons, joined by Australia, Belarus, Estonia, India, Israel, Japan, North Korea, Poland, Russia and South Korea. Data compiled by Automated Decision Research, the monitoring and research team of Stop Killer Robots, finds that another 53 nations have yet to take a clear stance, while 127 countries, including most of Africa and Latin America, support the ban. These positions were recorded following discussions at UN General Assembly and Convention on Certain Conventional Weapons meetings.

At present, there is no single law or legally-binding treaty that bans the use of lethal autonomous weapons (LAWS), which have been used in conflict zones like Ukraine and Libya. The International Committee of the Red Cross (ICRC) is calling for new international rules, citing humanitarian, legal and ethical concerns over the loss of human control in warfare. LAWS pose risks to both civilians and combatants and could escalate conflicts.

Though LAWS may use AI, it is not a requirement. However, the broader debate around military AI also remains unsettled. Over the past few years, several initiatives have emerged to address military AI, but none are yet legally binding. In 2024, UN General Assembly resolution A/79/408 saw 166 countries supporting restrictions on LAWS, while Belarus, North Korea and Russia opposed, and 15 countries, including Ukraine, abstained. Meanwhile, two landmark intergovernmental frameworks worth mentioning are the Political Declaration on Responsible Military Use of AI and Autonomy, an initiative launched by the U.S. and supported by over 60 nations, and the Responsible AI in the Military Domain (REAIM) Call for Action, endorsed by more than 50 countries. Both focus on ethical guidelines but are non-binding.

The UN Office for Disarmament Affairs has condemned LAWS as "politically unacceptable and morally repugnant," and UN Secretary-General António Guterres has called for their prohibition under international law.

Chart: Twelve Countries Say No to Banning Autonomous Weapons (Statista Chart of the Day)

This article was originally published on Statista ‘Chart of the Day’ and is made available under the Creative Commons License CC BY‑ND 3.0.

Read next: Foreign Accents Receive Higher Hypothetical Investment in Business Experiment Only With Strong Reputation, URI Study Finds
by External Contributor via Digital Information World