Thursday, February 5, 2026

Nursing professionals call for clearer AI policies as AI use in their clinics increases

By Destinie Wallis. Edited by Asim BN.

Artificial Intelligence (AI) is changing healthcare, and recent research from Arkansas State University shows that most nurses welcome the change. The same survey, however, shows that many nurses fear there are no adequate protections for them or their patients, so enthusiasm for AI is tempered by concern. The findings suggest that healthcare organizations must move quickly to update the policies and education programs that support the front lines of patient care.

Arkansas State University completed a study titled "Nurses and the AI Policy Gap: How Education Can Bridge Safety and Innovation," a survey of 135 registered nurses on how artificial intelligence is changing their everyday work. The results show a clinical environment that is evolving through increasing use of technology and, at the same time, an equally large gap in guidelines, trust, and accountability.

AI is being adopted rapidly in nursing without the correct support in place

Most of the nurses who participated in the study (80%) said they use AI tools in some aspect of patient care, and more than 25% of those nurses use them daily. The clinical applications being supported by AI tools include:

  • Charting: 61% stated they used AI-assisted charting, which is currently the largest application of AI in nursing.
  • Predictive alerts: 38% stated they used predictive models to alert them to possible patient deterioration before it occurs.
  • Diagnosis: 36% stated they use AI to aid in the diagnosis of patient conditions.
  • Monitoring: 30% stated they use AI to monitor patients remotely.
  • Bots for triage and intake: 26% stated they used AI-based bots for intake and triage.

Only 50% of the respondents said they believed their employer had clearly defined policies regarding the use of AI, and just over 60% believed they would have legal protection if an AI system contributed to patient harm. For the rest, the absence of clear policies and legal protection represents a serious risk to patient safety and a potential legal liability.



Legal and ethical uncertainties are limiting AI use

The rapid adoption of AI in healthcare has moved much faster than the development of regulatory frameworks to support its safe and effective use. Although many nurses believe AI can improve efficiency and decision making in patient care, they also have major concerns about its ethical and legal implications.

Major AI concerns include:

  • Patient harm: 63%
  • Data breach: 51%
  • Legal protection for nurses if an AI system contributes to patient harm: 49%
  • Dependence on automation: 48%

Over one-third of the respondents stated that they have avoided using certain features of AI systems because of concerns related to the law or patient safety.

The lack of regulatory frameworks for AI is further reflected in the finding that fewer than 30% of respondents thought current law adequately protects patients from AI-related risk, while 45% disagreed that it does. These concerns are not theoretical: algorithmic bias, data security breaches, and accountability questions remain unresolved, yet nurses are increasingly being asked to use systems that may pose unforeseen risks to themselves or their patients.

Education must fill the gap for responsible AI use

Educational resources for teaching nurses about AI systems are inadequate. Only about half of the responding nurses reported receiving formal training from their employer on the use of AI. The rest said they learned through hands-on experience (approximately 20%), from peers (approximately 20%), or through vendor training (approximately 6%), and about 3% reported no training at all.

These uneven levels of training fuel anxiety about AI systems: only about 31% of the nurses felt "very comfortable" with them, while the remainder said they are still adjusting. The result is a workforce that is aware of the technology but lacks the knowledge needed to use it safely and effectively in high-risk situations.

Educating nurses for responsible AI use

The survey respondents strongly advocated a range of measures to support their use of AI in patient care, including clearly defined policies, laws, and regulations. This is a clear call for change, and it is also an opportunity for nurse educators to take the lead in providing the necessary educational content.

To use AI appropriately in the delivery of patient care, nurses will need education in digital literacy (so they can evaluate information effectively) and in ethics (so they understand the implications of using AI systems). Nurse educators can also show how AI can assist and supplement a nurse’s decision-making in practice rather than replace it.

Some key strategies for nurse educators include:

  • Understanding AI systems: Students need to learn how to evaluate an AI system’s accuracy, its consistency, and the likelihood of it failing, and to be aware that an AI system can carry bias.
  • Ethics of AI: Students need to learn about the ethics of AI (such as privacy, consent, and transparency of AI algorithms).
  • Responsible use of AI: Faculty can provide students with examples of how to responsibly use AI systems. Faculty can share with students their own experiences with both success and failure in implementing AI into clinical practice.

Interdisciplinary collaboration among health care organizations and academic institutions

Health care organizations and academic institutions must jointly establish guidelines and standards for this kind of collaboration and education. These standards should be developed through consultation and should emphasize transparency, accountability, and equity, because the algorithms behind AI reasoning can carry embedded social and racial biases.

    Warning to health care organizations and policy makers

Health care organizations are clearly interested in advancing innovation in their field, but this study shows they have not yet provided the support needed to take that next step. The nursing profession is open to AI, but nurses need a framework of understanding, legal guidelines, and limits before incorporating it into practice.

Failing to address these concerns has far-reaching consequences for patient safety and for the trust of clinicians who are asked to fold AI into their workload without adequate support. Advances in technology must be matched by advances in education and policy. Nurses are ready to use AI, but they cannot use it safely without support.

About the author: Destinie Wallis has been working in the tech space for nearly ten years and focuses her attention on how new technologies like AI are transforming industries, workflows, and everyday decision-making.

    Read next:

    • PFAS are turning up in the Great Lakes, putting fish and water supplies at risk – here’s how they get there

    • News sites are locking out the Internet Archive to stop AI crawling. Is the ‘open web’ closing?

    • How Do Algorithms Work? Experts at Status Labs Weigh In


    by External Contributor via Digital Information World

    Wednesday, February 4, 2026

    News sites are locking out the Internet Archive to stop AI crawling. Is the ‘open web’ closing?

    Tai Neilson, Macquarie University
    Screenshot: DIW

    When the World Wide Web went live in the early 1990s, its founders hoped it would be a space for anyone to share information and collaborate. But today, the free and open web is shrinking.

    The Internet Archive has been recording the history of the internet and making it available to the public through its Wayback Machine since 1996. Now, some of the world’s biggest news outlets are blocking the archive’s access to their pages.

    Major publishers – including The Guardian, The New York Times, the Financial Times, and USA Today – have confirmed they’re ending the Internet Archive’s access to their content.

    While publishers say they support the archive’s preservation mission, they argue unrestricted access creates unintended consequences, exposing journalism to AI crawlers and members of the public trying to skirt their paywalls.

    Yet, publishers don’t simply want to lock out AI crawlers. Rather, they want to sell their content to data-hungry tech companies. Their back catalogues of news, books and other media have become a hot commodity as data to train AI systems.

    Robot readers

    Generative AI systems such as ChatGPT, Copilot and Gemini require access to large archives of content (such as media content, books, art and academic research) for training and to answer user prompts.

    Publishers claim technology companies have accessed a lot of this content for free and without the consent of copyright owners. Some began taking tech companies to court, claiming they had stolen their intellectual property. High-profile examples include The New York Times’ case against ChatGPT’s parent company OpenAI and News Corp’s lawsuit against Perplexity AI.

    Old news, new money

In response, some tech companies have struck deals to pay for access to publishers’ content. News Corp’s contract with OpenAI is reportedly worth more than US$250 million over five years.

    Similar deals have been struck between academic publishers and tech companies. Publishing houses such as Taylor & Francis and Elsevier have come under scrutiny in the past for locking publicly funded research behind commercial paywalls.

    Now, Taylor & Francis has signed a US$10 million nonexclusive deal with Microsoft granting the company access to over 3,000 journals.

    Publishers are also using technology to stop unwanted AI bots accessing their content, including the crawlers used by the Internet Archive to record internet history. News publishers have referred to the Internet Archive as a “back door” to their catalogues, allowing unscrupulous tech companies to continue scraping their content.
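As a rough illustration of how such blocking typically works (not a description of any specific publisher's setup), sites declare which crawlers may fetch their pages in a robots.txt file. The sketch below, using Python's standard urllib.robotparser, shows how a hypothetical robots.txt that disallows archive and AI crawlers would be interpreted; the user-agent names are illustrative assumptions.

```python
# Minimal sketch: how a hypothetical robots.txt that blocks archive/AI crawlers
# would be interpreted. User-agent names and the example URL are assumptions.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: ia_archiver
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("ia_archiver", "GPTBot", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example.com/news/story.html")
    print(f"{agent:12s} allowed: {allowed}")
# Expected: the archive and AI crawlers are disallowed, ordinary crawlers allowed.
```

Compliance with robots.txt is voluntary, which is one reason publishers are also turning to paywalls and technical blocks rather than relying on the file alone.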

    The cost of making news free

    The Wayback Machine has also been used by members of the public to avoid newspaper paywalls. Understandably, media outlets want readers to pay for news.

    News is a business, and its advertising revenue model has come under increasing pressure from the same tech companies using news content for AI training and retrieval. But this comes at the expense of public access to credible information.

    When newspapers first started moving their content online and making it free to the public in the late 1990s, they contributed to the ethos of sharing and collaboration on the early web.

    In hindsight, however, one commentator called free access the “original sin” of online news. The public became accustomed to getting their digital editions for free, and as online business models shifted, many mid- and small-sized news companies struggled to fund their operations.

    The opposite approach – placing all commercial news behind paywalls – has its own problems. As news publishers move to subscription-only models, people have to juggle multiple expensive subscriptions or limit their news appetite. Otherwise, they’re left with whatever news remains online for free or is served up by social media algorithms. The result is a more closed, commercial internet.

    This isn’t the first time that the Internet Archive has been in the crosshairs of publishers, as the organisation was previously sued and found to be in breach of copyright through its Open Library project.

    The past and future of the internet

The Wayback Machine has served as a public record of the web for three decades, used by researchers, educators, journalists and amateur internet historians.

    Blocking its access to international newspapers of note will leave significant holes in the public record of the internet.

    Today, you can use the Wayback Machine to see The New York Times’ front page from June 1997: the first time the Internet Archive crawled the newspaper’s website. In another 30 years, internet researchers and curious members of the public won’t have access to today’s front page, even if the Internet Archive is still around.
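For readers who want to try this themselves, the Internet Archive exposes a public "availability" endpoint for the Wayback Machine. The snippet below is a minimal sketch of querying it for the snapshot closest to a given date; the exact response fields should be checked against the Archive's current documentation.

```python
# Minimal sketch: ask the Wayback Machine for the snapshot of a page nearest a date.
# Response shape based on the Internet Archive's availability API; verify details
# against current documentation before relying on them.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(url: str, timestamp: str) -> dict:
    query = urlencode({"url": url, "timestamp": timestamp})
    with urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    # The nearest capture is typically nested under archived_snapshots -> closest.
    return data.get("archived_snapshots", {}).get("closest", {})

if __name__ == "__main__":
    snap = closest_snapshot("nytimes.com", "19970601")
    print(snap.get("timestamp"), snap.get("url"))
```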

    Today’s websites become tomorrow’s historical records. Without the preservation efforts of not-for-profit organisations like The Internet Archive, we risk losing vital records.

Despite the actions of commercial publishers and emerging challenges of AI, not-for-profit organisations such as the Internet Archive and Wikipedia aim to keep the dream of an open, collaborative and transparent internet alive.

    Tai Neilson, Senior Lecturer in Media, Macquarie University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Read next:

    • How to View Any Website’s Past Versions Using the Wayback Machine

    • How Do Algorithms Work? Experts at Status Labs Weigh In


    by External Contributor via Digital Information World

    How Do Algorithms Work? Experts at Status Labs Weigh In

    Written by Status Labs. Edited by Asim BN.

    Algorithms shape nearly every aspect of our digital lives. From the content that appears in your social media feeds to the search results you see on Google, these invisible systems are constantly working behind the scenes to curate, organize, and personalize your online experience. But what exactly is an algorithm, and how does it determine what you see online?

    To understand the mechanics behind these powerful digital tools, the reputation management experts at Status Labs, a leading digital reputation management firm with offices across Austin, New York, Los Angeles, Miami, London, and Hamburg, explain how these systems work. Their team has spent years helping Fortune 500 companies and high-profile executives navigate the complexities of search engine algorithms and social media platforms.

    What Is an Algorithm? Breaking Down the Basics

    At its core, an algorithm is a set of instructions designed to perform a specific task or solve a particular problem. When algorithms are discussed in the digital context, they’re generally coded formulas within software that, when triggered, prompt technology to take relevant action.

    The concept itself is straightforward: define what you need a computer to do, specify what information it needs to consider, establish the goal, and then let the system process the data according to those instructions. The computer takes in relevant information and follows the specified steps to complete the task.
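To make that concrete, here is a deliberately simple, hypothetical example (not one drawn from Status Labs or any real platform): a fixed set of instructions with a defined input and goal.

```python
# A toy algorithm in the classic sense: fixed steps, defined input, defined goal.
# Goal: given a list of articles, return the titles the reader hasn't seen yet,
# newest first. Data and field names are purely illustrative.
def unread_titles(articles, seen_ids):
    unread = [a for a in articles if a["id"] not in seen_ids]   # step 1: filter
    unread.sort(key=lambda a: a["published"], reverse=True)     # step 2: order
    return [a["title"] for a in unread]                         # step 3: report

articles = [
    {"id": 1, "title": "AI in nursing", "published": "2026-02-05"},
    {"id": 2, "title": "PFAS in the Great Lakes", "published": "2026-02-03"},
    {"id": 3, "title": "How algorithms work", "published": "2026-02-04"},
]
print(unread_titles(articles, seen_ids={1}))
# ['How algorithms work', 'PFAS in the Great Lakes']
```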

    What makes modern algorithms particularly sophisticated is their ability to learn. Algorithms today don't always need to lay out step-by-step plans. Instead, they can be designed to allow computers to learn over time through pattern recognition or through the integration of AI learning and reasoning into the process.

    Where You Encounter Algorithms Daily

    If you spend any time online, you're constantly interacting with algorithms. The digital strategy specialists at Status Labs point out that algorithms are used for organization, calculation, data processing, and automated reasoning across virtually every platform you use.

    Consider these common examples:

    • The movies Netflix recommends based on your viewing history
    • The videos TikTok suggests in your feed
    • The advertisements displayed across various apps and websites
    • The search results Google serves when you type in a query

    These algorithms analyze your data to find patterns in what you click on and engage with. If you tend to click on certain types of content, algorithms learn to show you more of the same. Every piece of information you provide helps these systems determine what to display to keep you engaged.
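As a rough, hypothetical sketch of that feedback loop (not a description of any real platform's code), a recommender can simply boost the kinds of content a user has engaged with before:

```python
# Toy engagement-based ranking: content similar to what a user clicked before
# gets a higher score. Categories, weights, and items are invented for illustration.
from collections import Counter

def rank_feed(candidates, click_history):
    interest = Counter(item["category"] for item in click_history)
    def score(item):
        # base popularity plus a bonus for categories the user engages with
        return item["popularity"] + 2.0 * interest[item["category"]]
    return sorted(candidates, key=score, reverse=True)

history = [{"category": "cooking"}, {"category": "cooking"}, {"category": "travel"}]
feed = [
    {"title": "Knife skills 101", "category": "cooking", "popularity": 3.0},
    {"title": "Celebrity gossip",  "category": "gossip",  "popularity": 6.0},
    {"title": "Hidden beaches",    "category": "travel",  "popularity": 4.0},
]
for item in rank_feed(feed, history):
    print(item["title"])
# The cooking post (3 + 4 = 7) edges out the more popular gossip post (6)
# because of the user's past engagement.
```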

    How Google's Algorithm Determines Search Rankings

    Google's search algorithm represents one of the most complex and consequential algorithmic systems in existence. The search giant processes hundreds of billions of web pages to deliver what it determines are the most relevant results for any given query.

    According to Google's own documentation, the company's ranking systems look at many factors and signals, including the words of your query, relevance and usability of pages, expertise of sources, and your location and settings. The goal is to present the most useful information in a fraction of a second.

    Here are some key factors that Google considers when ranking web pages:

    Intent and Relevance: The algorithm assesses whether your page content matches what users are actually searching for. This goes beyond simple keyword matching. The context and tone of your content can determine whether your website appears for a particular query.

    Quality Content: Google prioritizes content that is unique, informative, and provides genuine value to users. Fluffy, repetitive, or spammy content will not rank well. Experienced marketers emphasize that quality content is becoming increasingly important in how search engines evaluate web pages.

    User Experience: This encompasses the technical aspects of your website, including page speed, mobile-friendliness, layout accessibility, and overall site structure. Google wants users to have positive experiences when they click on search results.

    Expertise and Trust: Google's systems aim to surface content that demonstrates expertise, authoritativeness, and trustworthiness. One way the algorithm assesses this is by examining whether other prominent websites link to or reference the content.
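Google does not publish its formula, but the general idea of combining multiple signals into a single ranking can be illustrated with a toy weighted score. Everything below, signal names, weights, and pages alike, is invented for illustration and is not Google's algorithm.

```python
# Toy illustration of combining ranking signals into one score.
# Signal names and weights are invented; Google's real systems are far more
# complex and are not public.
WEIGHTS = {"relevance": 0.4, "quality": 0.3, "user_experience": 0.2, "trust": 0.1}

def page_score(signals: dict) -> float:
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

pages = {
    "in-depth guide":    {"relevance": 0.9, "quality": 0.9, "user_experience": 0.7, "trust": 0.8},
    "thin keyword page": {"relevance": 0.8, "quality": 0.2, "user_experience": 0.5, "trust": 0.1},
}
for name, signals in sorted(pages.items(), key=lambda kv: page_score(kv[1]), reverse=True):
    print(f"{name}: {page_score(signals):.2f}")
# The well-rounded page (0.85) outranks the keyword-stuffed one (0.49)
# despite similar relevance.
```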

    Why Google Keeps Its Algorithm Secret

    Google updates its algorithm multiple times each year, and the company maintains significant secrecy around the specifics of these changes. There are three primary reasons for this approach.

    First, transparency would compromise Google's competitive advantage. With over 90% of the search engine market share, revealing the exact mechanics of its algorithm would make it easier for competitors to replicate key features.

    Second, algorithms require constant refinement to become more efficient and sophisticated. The central goal of Google search is delivering valuable, relevant results, which requires ongoing adjustments based on user behavior, technological trends, and evolving search patterns.

    Third, too much transparency would invite manipulation. While search engine optimization is expected and encouraged, excessive knowledge about algorithmic specifics could lead people to artificially manipulate rankings, undermining Google's mission to provide users with the best possible information.

    Social Media Algorithms: Engagement as Currency

    Social media platforms employ their own algorithmic systems, though these operate somewhat differently from search engines. Social media algorithms focus primarily on showing users content in an organized and customized way to maximize time spent on the platform.

    Each platform has its own approach. According to industry research, Facebook uses a four-step process that considers inventory (content from friends and pages you follow), signals (who posted, when it was posted, your internet speed), predictions (likelihood of engagement), and relevance scoring.
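That four-stage pipeline can be sketched in simple Python. The stages below follow the inventory/signals/predictions/relevance description above; every function body is an invented placeholder, not Facebook's actual implementation.

```python
# Hedged sketch of the four-stage feed-ranking pipeline described above.
# Each stage is a stand-in; the real systems are proprietary.
def build_feed(user, all_posts):
    inventory = [p for p in all_posts if p["author"] in user["follows"]]   # 1. inventory

    def signals(post):                                                     # 2. signals
        return {"recency": post["age_hours"],
                "author_affinity": user["affinity"][post["author"]]}

    def predict_engagement(sig):                                           # 3. predictions
        # toy model: fresher posts from closer connections score higher
        return sig["author_affinity"] / (1.0 + sig["recency"])

    scored = [(predict_engagement(signals(p)), p) for p in inventory]      # 4. relevance score
    return [p for _, p in sorted(scored, key=lambda t: t[0], reverse=True)]

user = {"follows": {"alice", "bob"}, "affinity": {"alice": 0.9, "bob": 0.4}}
posts = [
    {"author": "alice", "age_hours": 6, "text": "Weekend hike"},
    {"author": "bob",   "age_hours": 1, "text": "New job!"},
    {"author": "carol", "age_hours": 2, "text": "Not followed"},
]
for p in build_feed(user, posts):
    print(p["author"], "-", p["text"])
```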

    TikTok's algorithm has become particularly notable for its ability to surface content users didn't know they wanted. The platform analyzes what videos you've watched, what you've liked, video popularity, matching tags, and contextual factors like location and language preferences.

    LinkedIn takes a different approach, emphasizing content that delivers professional insights. The platform rewards posts offering ideas, insights, and inspiration while favoring authentic, substantive conversations over quick-hit content.

    The Role of Machine Learning in Modern Algorithms

    Today's algorithms increasingly incorporate machine learning capabilities, allowing them to improve automatically through experience. Pattern recognition algorithms can now identify regularities in data and use those patterns to make predictions or classifications.

    This technology powers everything from spam filters that learn to identify unwanted emails to recommendation systems that suggest products based on your browsing history. The algorithm examines data, identifies relevant features, extracts insights, and then implements those learnings in practice.
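A minimal sketch of that examine-learn-apply loop is a small text classifier: instead of hand-written rules, the model infers patterns from labelled examples. The training data below is invented, and the example assumes scikit-learn is installed; real spam filters are trained on vastly larger datasets.

```python
# Minimal sketch of a learned spam filter: the model infers word patterns from
# labelled examples rather than following hand-written rules.
# Assumes scikit-learn is installed; data is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win a free prize now", "claim your free gift card",   # spam
    "meeting moved to 3pm", "lunch tomorrow?",              # not spam
]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["free prize waiting for you", "can we move the meeting"]))
# Expected on this toy data: ['spam', 'ham']
```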

    Machine learning has made algorithms significantly more sophisticated. Rather than following rigid rules, these systems can adapt to new patterns and improve their accuracy over time. This is why platforms like TikTok can seem almost prescient in understanding user preferences, sometimes surfacing content that users didn't even realize they'd enjoy.

    Why Understanding Algorithms Matters

    For business owners, marketers, and anyone looking to establish a strong online presence, understanding how algorithms work is essential. Learning what search engines and social media platforms prioritize can help identify areas for improvement in your digital strategy.

    For individual users, algorithm literacy enables more empowered online participation. As more of our lives move online, understanding why certain content appears in your feed helps you evaluate the context and value of what you're seeing. This awareness makes you a more educated consumer of digital information.

    In summary: the best way to operate in a system is to understand how that system works. Algorithms will only become more sophisticated over time, making foundational knowledge increasingly valuable for anyone navigating the digital landscape.

    Practical Implications for Your Online Reputation

    Understanding algorithms has direct implications for managing your online reputation. Status Labs, which has helped clients across more than 40 countries with their digital presence, notes that algorithmic knowledge allows businesses and individuals to:

    • Create content more likely to rank well in search results
    • Understand why certain information appears prominently when someone searches for you or your company
    • Develop strategies for improving what shows up in search results
    • Make informed decisions about social media engagement and content creation

    The connection between Status Labs' expertise in reputation management and algorithmic understanding is direct. Controlling your online narrative requires knowing how platforms decide what to display, when, and to whom.

    The Future of Algorithmic Systems

    Algorithms are becoming more complex as artificial intelligence capabilities expand. The integration of large language models and advanced machine learning means that future algorithms will likely be even better at understanding context, intent, and user preferences.

    This evolution presents both opportunities and challenges. More sophisticated algorithms can deliver more relevant, personalized experiences. However, they also raise important questions about transparency, fairness, and the concentration of power in the hands of platforms that control these systems.

    For businesses and individuals alike, staying informed about algorithmic developments remains crucial. The digital landscape continues to evolve, and those who understand the underlying systems will be better positioned to navigate it successfully.

    Whether you're trying to improve your company's search visibility, understand why certain content appears in your feeds, or simply become a more informed digital citizen, algorithmic literacy provides a valuable perspective on the invisible systems shaping your online experience every day.


    Image: Customer experience creative collage / freepik

    Read next: Say what’s on your mind, and AI can tell what kind of person you are
    by External Contributor via Digital Information World

    Facial recognition technology used by police is now very accurate – but public understanding lags behind

    Kay Ritchie, University of Lincoln and Katie Gray, University of Reading
    Image: Alex Borland/ Publicdomainpictures. License: CC0 Public Domain 

    The UK government’s proposed reforms to policing in England and Wales signal an increase in the use of facial recognition technology. The number of live facial recognition vans is set to rise from ten to 50, making them available to every police force in both countries.

The plan pledges £26 million for a national facial recognition system and £11.6 million for live facial recognition technology. The announcement has come before the end of the government’s 12-week public consultation on police use of such technology.

    The home secretary, Shabana Mahmood, claims facial recognition technology has “already led to 1,700 arrests in the Met [police force] alone – I think it’s got huge potential.”

    We have been researching public attitudes to the use of this technology around the world since 2020. While accuracy levels are constantly evolving, we have found people’s awareness of this is not always up to date.

    In the UK, the technology has so far been used by police in three main ways. All UK forces have the capability to use “retrospective” facial recognition for analysis of images captured from CCTV – for example, to identify suspects. Thirteen of the 43 forces also use live facial recognition in public spaces to locate wanted or missing individuals.

    In addition, two forces (South Wales and Gwent) use “operator-initiated facial recognition” through a mobile app, enabling officers to take a photo when they stop someone and then compare their identity against a watchlist containing information about people of interest – either because they have committed a crime or are missing.

    In countries such as China, facial recognition technology has been used more widely by the police – for example, by integrating it into realtime mass surveillance systems. In the UK, some private companies including high-street shops use facial recognition technology to identify repeat shoplifters, for example.

    Despite this widespread use of the technology, our latest survey of public attitudes in England and Wales (yet to be peer reviewed) finds that only around 10% of people feel confident that they know a lot about how and when this technology is used. This is still a jump from our 2020 study, though, when many of our UK focus group participants said they thought the technology was just sci-fi – “something that only exists in the movies”.

    A longstanding concern has been the issue of facial recognition being less accurate when used to identify non-white faces. However, our research and other tests suggest this is not the case with the systems now being used in the UK, US and some other countries.

    How accurate is today’s technology?

    It’s a common misconception that facial recognition technology captures and stores an image of your face. In fact, it creates a digital representation of the face in numbers. This representation is then compared with digital representations of known faces to determine the degree of similarity between them.

    In recent years, we have seen a rapid improvement in the performance of facial recognition algorithms through the use of “deep convolutional neural networks” – artificial networks consisting of multiple layers, designed to mimic a human brain.
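The "digital representation in numbers" is commonly an embedding vector, and matching comes down to measuring how close two vectors are. The snippet below shows only that comparison step, with tiny made-up vectors; producing real embeddings requires a trained face-recognition network.

```python
# Sketch of the matching step: faces are compared as numeric vectors, not stored
# images. Vectors here are tiny and invented; real systems use embeddings with
# hundreds of dimensions produced by a trained network.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

probe     = np.array([0.12, 0.85, 0.40, 0.31])   # face captured by the camera
watchlist = {
    "person_A": np.array([0.10, 0.80, 0.45, 0.30]),
    "person_B": np.array([0.90, 0.10, 0.05, 0.70]),
}

THRESHOLD = 0.9  # tuned to trade off false positives against false negatives
for name, ref in watchlist.items():
    sim = cosine_similarity(probe, ref)
    print(f"{name}: similarity {sim:.3f} -> {'match' if sim >= THRESHOLD else 'no match'}")
```

Raising or lowering the threshold is exactly the trade-off discussed next: a stricter threshold produces fewer false positives but more false negatives, and vice versa.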

    Surrey and Sussex police forces unveil new live facial recognition vans, November 2025. Video: Sussex Police.

    There are two types of mistake a facial recognition algorithm can make: “false negatives”, where it doesn’t recognise a wanted person, and “false positives” where it incorrectly identifies the wrong person.
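Both error types are simple proportions, illustrated below with invented counts rather than the Nist or National Physical Laboratory figures cited in this article.

```python
# False negative rate = missed wanted people / all wanted people seen.
# False positive rate = wrong alerts / all people not on the watchlist.
# Counts are invented purely to show the arithmetic.
wanted_seen, wanted_missed = 200, 1          # wanted people passing the camera
others_seen, wrong_alerts  = 100_000, 300    # everyone else passing the camera

false_negative_rate = wanted_missed / wanted_seen
false_positive_rate = wrong_alerts / others_seen

print(f"False negative rate: {false_negative_rate:.2%}")   # 0.50%
print(f"False positive rate: {false_positive_rate:.2%}")   # 0.30%
```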

    The US National Institute of Standards and Technology (Nist) runs the world’s gold standard evaluation of facial recognition algorithms. The 16 algorithms currently topping its leaderboard all show overall false negative rates of less than 1%, while false positives are held at 0.3%.

Data from the UK’s National Physical Laboratory shows the system being tested and used by UK police to search their databases returns the correct identity in 99% of cases. This accuracy level is achieved by balancing high true identification rates with low false positive rates.

    While some people are uncomfortable with even small error rates, human observers have been found to make far more mistakes when doing the same kinds of tasks. Two of the standard tests of face matching ask people to compare two images side-by-side and decide whether they show the same person. One test recorded an error rate of up to 32.5%, and the other an error rate of 34%.

    Historically, when testing the accuracy of facial recognition technology, bigger error rates have been found with non-white faces. In a 2018 study, for example, error rates for darker-skinned women were 40 times higher than for white men.

    These earlier systems were trained on small numbers of images, mostly white male faces. Recent systems have been trained on much larger, deliberately balanced image sets. They are actively tested for demographic biases and are tuned to minimise errors.

    Nist has published tests showing that although the leading algorithms still have slightly higher false positive rates for non-white faces compared with white faces, these error rates are below 0.5%.

    How the public feel about this technology

    According to our January 2026 survey of 1,001 people across England and Wales, almost 80% of people now feel “comfortable” with police using facial recognition technology to search for people on police watchlists.

    However, only around 55% said they trust the police to use facial recognition responsibly. This compares with 79% and 63% when we asked the same questions to 1,107 people throughout the UK in 2020.

    Both times, we asked to what extent people agree with police using facial recognition technology for different uses. Our results show the public remains particularly supportive of police use of facial recognition in criminal investigations (90% in 2020 and 89% in 2026), to search for missing persons (86% up to 89%), and for people who have committed a crime (90% down slightly to 89%).

    There are lots of examples of facial recognition’s role in helping police to locate wanted and vulnerable people. But as facial recognition technology is more widely adopted, our research suggests the police and Home Office need to do more to make sure the public are informed about how it is – and isn’t – being used.

    We also suggest the proposed new legal framework should apply to all users of facial recognition, not just the police. If not, public trust in the police’s use of this technology could be undermined by other users’ less responsible actions.

It is critical that the police are using up-to-date systems to guard against demographic biases. A more streamlined national police service, as laid out in the government’s latest white paper, could help ensure the same systems are being used everywhere – and that officers are being trained consistently in how to use these systems correctly and fairly.

    Kay Ritchie, Associate Professor in Cognitive Psychology, University of Lincoln and Katie Gray, Associate Professor, School of Psychology and Clinical Language Sciences, University of Reading

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Read next: PFAS are turning up in the Great Lakes, putting fish and water supplies at risk – here’s how they get there


    by External Contributor via Digital Information World

    Tuesday, February 3, 2026

    PFAS are turning up in the Great Lakes, putting fish and water supplies at risk – here’s how they get there

    Christy Remucal, University of Wisconsin-Madison
    Image Credit: Sharon Fjeldstrom via publicdomainpictures. Caption: Welland Canal, Ontario. License: CC0 Public Domain.

    No matter where you live in the United States, you have likely seen headlines about PFAS being detected in everything from drinking water to fish to milk to human bodies.

    PFAS, or per- and polyfluoroalkyl substances, are a group of over 10,000 synthetic chemicals. They have been used for decades to make products waterproof and stain- and heat-resistant – picture food wrappers, stain-resistant carpet, rain jackets and firefighting foam.

    These chemicals are a growing concern because some PFAS are toxic even at very low levels and associated with health risks like thyroid issues and cancer. And some of the most common PFAS don’t naturally break down, which is why they are often referred to as “forever chemicals.”

    Now, PFAS are posing a threat to the Great Lakes, one of America’s most vital water resources.

    The five Great Lakes are massive, with over 10,000 miles of coastline (16,000 kilometers) across two countries and containing 21% of the world’s fresh surface water. They provide drinking water to over 30 million people and are home to a robust commercial and recreational fishing industry.

    My colleagues at the University of Wisconsin-Madison and I study how chemicals like PFAS are affecting water systems. Here’s what we’re learning about how PFAS are getting into the Great Lakes, the risks they’re posing and how to reduce those risks in the future.

    PFAS’ many pathways into the Great Lakes

    Hundreds of rivers flow into the lakes, and each can be contaminated with PFAS from sources such as industrial sites, military operations and wastewater treatment plants in their watersheds. Some pesticides also contain PFAS, which can wash off farm fields and into creeks, rivers and lakes.

    The concentration of PFAS in rivers can vary widely depending on these upstream impacts. For example, we found concentrations of over 1,700 parts-per-trillion in Great Lakes tributaries in Wisconsin near where firefighting foam has regularly been used. That’s more than 400 times higher than federal drinking water regulations for PFOS and PFOA, both 4 parts-per-trillion.

    However, concentration alone does not tell the whole story. We also found that large rivers with relatively low amounts of PFAS can put more of these chemicals into the lakes each day compared with smaller rivers with high amounts of PFAS. This means that any effort to limit the amount of PFAS in the Great Lakes should consider both high-concentration hot spots and large rivers.
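The reason a big, mildly contaminated river can matter more than a small hot spot is that the daily load depends on concentration multiplied by flow. A worked toy example follows; the flows and concentrations are invented for illustration, not measurements from the study.

```python
# Daily PFAS load = concentration x volume of water delivered per day.
# Flows and concentrations below are invented to illustrate the point.
NG_PER_PPT_PER_LITER = 1.0  # 1 part-per-trillion is roughly 1 nanogram per liter of water

rivers = {
    # name: (concentration in parts-per-trillion, flow in liters per day)
    "small creek near foam site": (1_700, 5e8),    # hot spot, little water
    "large tributary":            (20,    2e11),   # low concentration, lots of water
}

for name, (ppt, liters_per_day) in rivers.items():
    grams_per_day = ppt * NG_PER_PPT_PER_LITER * liters_per_day / 1e9
    print(f"{name}: ~{grams_per_day:.0f} g of PFAS per day")
# The large river delivers several times more PFAS per day despite the far lower concentration.
```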

    Groundwater is another key route carrying PFAS into the Great Lakes. Groundwater is a drinking water source for more than one-third of people in the U.S., and it can become contaminated when PFAS in firefighting foam and other PFAS sources seep into soil.

    When these contaminated plumes enter the Great Lakes, they carry PFAS with them. We detected PFAS concentrations of over 260 parts-per-trillion in the bay of Green Bay in Lake Michigan. The chemicals we found were associated with firefighting foam, and we were able to trace them back to a contaminated groundwater plume.

    PFAS can also enter the Great Lakes in unexpected ways, such as in rain and snowfall. PFAS can get into the atmosphere from industrial processes and waste incineration. The chemicals have been detected in rain across the world, including in states surrounding the Great Lakes.

    Although PFAS concentrations in precipitation are typically lower than in rivers or groundwater, this is still an important contamination source. Scientists estimate that precipitation is a major source of PFAS to Lake Superior, which receives about half of its water through precipitation.

    Where PFAS end up determines the risk

    Much of the PFAS that enter Lake Superior will eventually make their way to the downstream lakes of Michigan, Huron, Erie and Ontario.

    These chemicals’ ability to travel with water is one reason why PFAS are such a concern for drinking water systems. Many communities get their drinking water from the Great Lakes.


    PFAS can also contaminate other parts of the environment.

    The chemicals have been detected in sediments at the bottom of all the Great Lakes. Contaminated sediment can release PFAS back into the overlying water, where fish and aquatic birds can ingest it. So, future remediation efforts to remove PFAS from the lakes are about more than just the water – they involve the sediment as well.

    PFAS can also accumulate in foams that form on lake shorelines during turbulent conditions. Concentrations of PFAS can be up to 7,000 times higher in natural foams compared with the water because PFAS are surfactants and build up where air and water meet, like bubbles in foam. As a result, state agencies recommend washing skin that comes in contact with foam and preventing pets from playing in foam.

    Some PFAS bioaccumulate, or build up, within fish and wildlife. Elevated levels of PFAS have been detected in Great Lakes fish, raising concerns for fisheries.

High PFAS concentrations in fish in coastal areas and inland waters have led to advisories recommending people limit how much fish they eat.

    Looking ahead

    Water cycles through the Great Lakes, but the process can take many years, from 2.6 years in Lake Erie to nearly 200 years in Lake Superior.

    This means that PFAS that enter the lakes will be there for a very long time.


    Since it is not possible to clean up the over 6 quadrillion gallons of water in the Great Lakes after they have been contaminated, preventing further contamination is key to protecting the lakes for the future.

    That starts with identifying contaminated groundwater and rivers that are adding PFAS to the lakes. The Sea Grant College Program and the National Institutes of Water Resources, including the Wisconsin programs that I direct, have been supporting research to map these sources, as well as helping translate that knowledge into actions that policymakers and resource managers can take.

    PFAS contamination is an issue beyond the Great Lakes and is something everyone can work to address.

    • Drinking water. If you are one of the millions of people who drink water from the Great Lakes, find out the PFAS concentrations in your drinking water. This data is increasingly available from local drinking water utilities.
    • Fish. Eating fish can provide great health benefits, but be aware of health advisories about fish caught in the Great Lakes and in inland waters so you can balance the risks. Other chemicals, such as mercury and PCBs, can also lead to fish advisories.
    • Personal choice. Scientists have proposed that PFAS only be used when they have vital functions and there are no alternatives. Consumer demand for PFAS-free products is helping reduce PFAS use in some products. Several states have also introduced legislation to ban PFAS use in some applications.

Decreasing use of PFAS will ultimately prevent downstream contamination in the Great Lakes and around the U.S.

    Christy Remucal, Professor of Civil and Environmental Engineering, University of Wisconsin-Madison

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Read next: Say what’s on your mind, and AI can tell what kind of person you are


    by External Contributor via Digital Information World

    Say what’s on your mind, and AI can tell what kind of person you are

    If you say a few words, generative AI will understand who you are—maybe even better than your close family and friends.

Image: 愚木混株 Yumu / unsplash

A new University of Michigan study* found that widely available generative AI models (e.g., ChatGPT, Claude, LLaMa) can predict personality, key behaviors and daily emotions as accurately as, or even more accurately than, those closest to you.

    “What this study shows is AI can also help us understand ourselves better, providing insights into what makes us most human, our personalities,” said the study’s first author Aidan Wright, U-M professor of psychology and psychiatry. “Lots of people may find this of interest and useful. People have long been interested in understanding themselves better. Online personality questionnaires, some valid and many of dubious quality, are enormously popular.”

    Researchers looked into whether AI programs like ChatGPT and Claude can act like general “judges” of personality. To test this, they had the AI read people’s own words—either short daily video diaries or longer recordings of what happened to be on their mind—and asked it to answer personality questions the way each person would. The study included stories and thoughts from more than 160 people collected in real-life and lab settings.
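As described above, the approach amounts to prompting a model with a person's own words and asking it to answer personality items on their behalf. Below is a rough sketch of what such a prompt could look like; the model name, scale wording, and client usage are assumptions for illustration, not the researchers' actual protocol.

```python
# Hedged sketch: ask a chat model to answer a personality item as the speaker of a
# transcript would. Model name and item wording are illustrative only; assumes the
# openai package is installed and an API key is configured in the environment.
from openai import OpenAI

client = OpenAI()

transcript = (
    "Honestly, today was a lot. I kept double-checking the schedule because "
    "I hate letting people down, but I still made time to call my sister."
)
item = "I see myself as someone who is dependable and self-disciplined."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer as the speaker of the transcript would, "
                                      "on a 1 (disagree strongly) to 5 (agree strongly) scale. "
                                      "Reply with a single number."},
        {"role": "user", "content": f"Transcript:\n{transcript}\n\nItem: {item}"},
    ],
)
print(response.choices[0].message.content)
```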

The results showed that the AI’s personality scores were very similar to how people rated themselves, with agreement often stronger than what is typically reported for ratings from friends or family. Older text-analysis methods did not perform nearly as well as these newer AI systems.

    “We were taken aback by just how strong these associations were, given how different these two data sources are,” Wright said.

    AI’s personality ratings could also predict real parts of people’s lives, like their emotions, stress levels, social behavior and even whether they had been diagnosed with mental health conditions or sought treatment, according to the findings.

    This research indicates that personality naturally shows up in our everyday thoughts, words and stories—even when we’re not trying to describe ourselves.

    Chandra Sripada, U-M professor of philosophy and psychiatry, says the findings support the long-held idea that language carries deep clues about how people differ in psychological traits such as personality and mood. He adds that open-ended writing and speech can be a powerful tool for understanding personality. Thanks to generative AI, researchers can now analyze this kind of data quickly and accurately in ways that weren’t possible before.

    At the same time, important questions remain. The study relied on people rating their own personalities and did not test how well AI compares with judgments from friends or family, or how results might differ across age, gender or race.

    Researchers also don’t yet know whether AI and humans rely on the same signals—or whether AI could one day outperform self-reports when predicting major life outcomes like relationships, education, health, or career success.

    “The study shows that AI can reliably uncover personality traits from everyday language, pointing to a new frontier in understanding human psychology,” said Colin Vize, assistant professor of psychology at the University of Pittsburgh.

    Whitney Ringwald, assistant professor of psychology at the University of Minnesota, says the results “really highlight how our personality is infused in everything we do, even down to our mundane, everyday experiences and passing thoughts.”

The study’s other authors were Johannes Eichstaedt of Stanford University and Mike Angstadt and Aman Taxali, both from U-M. The findings appear in the journal Nature Human Behaviour.

    Contact: Jared Wadley.

    *Study: Generative AI predicts personality traits based on open-ended narratives (DOI: 10.1038/s41562-025-02389-x)

    Editor’s Notes: This article was originally published on Michigan News, and republished here with permission.

    Read next:

    The Dangers of Not Teaching Students How to Use AI Responsibly

    Lit bots beware: Readers less favorable toward AI-generated creative writing, U-M research finds

    by External Contributor via Digital Information World

    Saturday, January 31, 2026

    Lit bots beware: Readers less favorable toward AI-generated creative writing, U-M research finds

    When it comes to creative writing, score one for the humans over the machines. For now, anyway.

    Image: Andrea Piacquadio / Pexels

New research finds that people evaluate creative writing less favorably when they learn it was generated in whole or in part by artificial intelligence. The anti-AI bias is persistent and difficult to reduce, even when the experiments included steps designed to lessen the aversion.

The strength and consistency of the negative attitude toward AI-generated or AI-assisted writing jumped out at the researchers, who say it has implications for integrating AI into creative fields. As it stands, the study finds people tend to view the creative works of machines as “relatively inauthentic and therefore less worthy of their appreciation.”

    The researchers say previous research has offered preliminary evidence that AI disclosure can have negative effects on how people evaluate creative content, but their study builds on it by revealing a “surprising level of robustness” across 16 experiments involving 27,000 participants conducted between March 2023 and June 2024.

“What surprised us most was how incredibly ‘sticky’ this penalty is,” said Justin Berg, the study’s co-author and an associate professor of management and organizations at the University of Michigan’s Ross School of Business.

    “We threw everything at it, from changing the story’s perspective to humanizing the AI or framing it as a collaboration, and nothing reliably reduced the bias. Across all the experiments, the pattern was clear: If readers believe AI is involved, they view the work as less authentic and enjoy it less, even when the content is identical.”

    Throughout the study, the researchers asked participants to read and evaluate AI-generated writing samples created using ChatGPT—chosen because it was the most well-known large language model at the time of the initial study. Across all the experiments, AI disclosure decreased evaluations by an average of 6.2%.

    Berg and his colleagues, Manav Raj of the University of Pennsylvania’s Wharton School of Business and Rob Seamans of New York University’s Stern School of Business, note the results reflect attitudes during a period of rapid advancements in AI capabilities and shifting perceptions of its role in creative work. It’s an open question—and fertile ground for further study—whether the AI disclosure penalty will persist, diminish or reverse as such content becomes more pervasive.

    What does appear clear—at least for now—is the use of AI in creative writing triggers different psychological responses than when the technology is employed in other domains. Understanding that bias is crucial for helping navigate the challenges for those working toward fuller, broader human-AI collaboration.

    The findings, published in the Journal of Experimental Psychology, also pose practical implications for creative producers using AI, as the U.S. Congress considers AI disclosure legislation. Mandated disclosure of AI involvement in creative work could usher in negative biases toward such content and potentially affect its reception.

    Contact: Jeff Karoub.

    Editor’s Notes:
    1. This article was originally published on Michigan News, and republished here with permission. A representative of the University of Michigan news team confirmed that AI tools were not used in its production. 
2. The study notes: "We have studied the effect of AI disclosure on evaluations in one specific domain (creative writing)" … "We are also careful to note that our study does not address whether and in what circumstances output created by an AI tool may be more or less creative than output created by a human" … and "it is important to note that the AI disclosure effects we document may evolve over time."

    Read next: The Dangers of Not Teaching Students How to Use AI Responsibly

    by External Contributor via Digital Information World