Friday, May 1, 2026

Can we stop ChatGPT from spreading bias?

By the University of Amsterdam

Image: Merrilee Schultz / Unsplash

Language models like ChatGPT are not neutral. Without our realising it, they can absorb all kinds of bias – for example around gender and ethnicity – which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he defended his thesis at the University of Amsterdam.

Language models are often seen as neutral tools, but in practice they can both reflect and amplify bias.

‘Users often don’t realise that a model makes certain assumptions, for example by introducing subtle differences in how men and women are described,’ says Van der Wal. Precisely because bias is so hidden, it can spread unnoticed and colour the way we see the world.

Bias is hard to measure

An important problem is that bias is difficult to measure. ‘Many existing measurement methods are fairly abstract and don’t take practice into account. They might look for overt stereotypes in what the model says, such as “The Dutch are stingy.” But in practice, bias isn’t something that’s directly visible. It depends on the context in which you use the model.’

Van der Wal cites the use of AI in healthcare as an example. ‘AI learns from existing data. If those data contain outdated or incorrect assumptions – for instance, the contested idea that certain diseases are linked to the outdated concept of “race” – the model may keep reproducing them. In healthcare, that can lead to incorrect diagnoses or treatments.’

Another example is when medical data largely derives from research involving men. ‘AI may then interpret women’s symptoms differently or less seriously, or make different risk assessments.’

Realistic scenarios

To discover whether realistic scenarios reveal different errors than simple tests, Van der Wal presented language models with a range of medical cases and asked them to provide diagnoses, risk assessments or advice. ‘We repeatedly changed the patient’s ethnicity. That way we could identify whether and how the model responded differently.’

Subtle but consistent differences appeared in the outcomes, differences that remained invisible in standard tests. ‘Precisely because our scenarios were close to practice, it became clear how bias can influence medical decision-making.’
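In code, this kind of counterfactual probing can be sketched roughly as follows. The vignette, the group labels, and the `query_model` function are illustrative placeholders, not the study's actual materials:

```python
# Counterfactual bias probe (illustrative sketch): present the same clinical
# vignette with only the patient's demographic descriptor changed, so any
# difference in the model's answers can be attributed to that descriptor.

TEMPLATE = (
    "A 45-year-old {group} patient presents with chest pain and shortness "
    "of breath. What is the most likely diagnosis and what follow-up "
    "would you advise?"
)

GROUPS = ["Dutch", "Surinamese", "Moroccan", "Turkish"]


def make_counterfactuals(template: str, groups: list[str]) -> dict[str, str]:
    """Return one prompt per group; prompts differ only in the descriptor."""
    return {g: template.format(group=g) for g in groups}


def probe(query_model, template: str = TEMPLATE, groups: list[str] = GROUPS):
    """Collect the model's answer for each counterfactual variant.

    `query_model` is a stand-in for whatever chat/completion API is used.
    """
    prompts = make_counterfactuals(template, groups)
    return {g: query_model(p) for g, p in prompts.items()}
```

Comparing the answers across groups, manually or with an automated diff, then reveals whether the model treats otherwise identical patients differently.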

Model reinforces patterns in the data

Van der Wal also investigated what happens inside a language model during training. He followed, step by step, how the model learns to store information. ‘During training, the model learns which words and ideas frequently occur together. If “doctor” often appears together with “he” and “nurse” with “she” in the training data, the model will pick up on those associations.’

Over time, the model appeared to store this information in increasingly specific places, thereby reinforcing gender bias. ‘Bias doesn’t arise only from the data that AI is trained on, but also from the way the model structures that information.’

There are solutions

Unfortunately, you can’t fix bias in language models with a single trick. But, according to Van der Wal, targeted interventions can help. ‘If you know where in the model the bias is located, you can address those areas. This already seems to work in specific cases, but more research is needed to extend the approach to more complex forms of bias.’

Van der Wal tested this targeted approach by comparing a model before and after an adjustment in which it was trained not to adopt identified gender-related biases. He wanted to see whether the model’s responses to men and women differed less after the change, and how well it still performed ordinary tasks, such as generating text.

The bias decreased, while the quality of the model largely remained intact.

Careful and deliberate

The impact of AI is not restricted to the technical realm but now has broader societal relevance. ‘We are becoming increasingly dependent on systems that can influence how we think,’ says Van der Wal. ‘That’s precisely why it’s important to develop AI carefully. Responsible AI development requires interventions at multiple levels at once: in the data, during training, targeted within the model itself, and also in its deployment and use.’

How can you, as a user, use AI carefully?

  • Be critical of answers: Don’t automatically assume an AI answer is correct or complete. Ask yourself: what am I not seeing? And where does the answer come from? ‘A model can come across as very confident, making its answers seem more reliable than they are,’ warns Van der Wal. ‘It’s also tempting to trust a chatbot that always agrees with you and is very complimentary. But that’s precisely when it’s even more important to stay critical.’
  • Be aware of hidden risks: Bias and other effects (such as influencing your thinking) are often not immediately visible. That’s why it’s important to stay alert.
  • Avoid becoming dependent: Use AI as a tool, but keep thinking and deciding for yourself. Over-reliance can make you less confident in your own knowledge and judgement.

This post was originally published on the University of Amsterdam news section and republished here with permission.

Reviewed by Irfan Ahmad.

by External Contributor via Digital Information World

‘Just looping you in’: Why letting AI write our emails might actually create more work

Daniel Angus, Queensland University of Technology

I hope this article finds you well.

Did that make you cringe, ever so slightly? In the decades since the very first email was sent in 1971, the technology has become the quiet infrastructure of white-collar work.

Email came with the promise of efficiency, clarity and less friction in organisational communication. Instead, for many, it has morphed into something else: always there, near impossible to escape and sometimes simply overwhelming.

Right now, something is shifting again. The rise of generative artificial intelligence (AI) technologies, such as ChatGPT and Microsoft Copilot, is increasingly allowing people to offload the repetitive routines of tending one’s inbox – drafting, summarising and replying.

My colleagues in the ARC Centre of Excellence for Automated Decision Making & Society found that 45.6% of Australians have recently used a generative AI tool, and 82.6% of those used it for text generation. A healthy chunk of that use likely includes email.

So, what happens if we end up fully automating one of the staples of the white-collar daily grind? Will AI technologies reduce some of the friction, or generate new forms of it? Dare I ask – are we actually about to get more email?

Email has long been about more than just communicating information. Vitaly Gariev/Unsplash

Why the printer isn’t dead yet

Soon after the advent of email, some voices in the business world heralded the coming end of paper use in the office. That didn’t happen. If you work in an office today, there’s a good chance you still have a printer.

In their 2001 book, The Myth of the Paperless Office, Abigail Sellen and Richard Harper show how digital tools rarely eliminate older forms of work. Instead, they reshape them.

Sellen and Harper show how paper use didn’t disappear with the rise of email and other digital communication tools; in many cases, it intensified. The takeaway isn’t that offices failed to modernise, but rather that work reorganised around what these new tools could do.



In this case, paper persisted not only out of habit, but because of what it affords: it is easy to annotate, spread out, carry and view at a glance. This was all too clunky (or impossible) to perform via the digital alternatives.

At the same time, email and digitisation dramatically lowered the cost of producing and distributing communication. It was far easier to send more messages, to more people, more often.

Circling back to today

Will AI be different? If early signs are anything to go by, the answer is: not in the way we might hope.

Like earlier waves of workplace technology, AI is less likely to replace existing communication practices than to intensify them – but at least it might come with better grammar and a suspiciously upbeat tone.

Some new AI tools offer to manage your inbox entirely, feeding into broader privacy concerns about the technology.

At this moment, what a lot of these products seem to offer is not an escape from email, but a smoothing of its rough edges. Workers are using AI to soften otherwise blunt requests, modify their tone or expand what might otherwise be considered too brief a response.

Rather than removing the need to communicate, these tools offer pathways to make a delicate performance easier.

What email is actually for

Email, like many forms of communication, is as much about maintaining everyday relationships as it is about the transfer of information.

At work, it’s often about signalling competence, responsiveness, collegiality and authority. “Just looping someone in” or “circling back” are all part of our absurd office vocabulary, a shared dialect that helps us navigate hierarchy, soften demands and keep things moving – all without saying what we really think.

If AI lowers the effort required to produce these signals, it won’t necessarily reduce their importance, but it could unsettle things in rather odd ways.

If more people use AI to draft emails they don’t particularly want to write, we end up with a game of bureaucratic “mime”: everyone performing sincerity and quietly outsourcing it, and no one entirely sure how much of their inbox was actually written by a human.

The labour of email was never just about crafting sentences. It’s always been the scanning, the sorting and the deciding. AI doesn’t remove this burden. If anything, it amplifies it.

When everything arrives polished, everything looks important. That points to a deeper question for the future of work: if AI can perform responsiveness, why are we generating so many situations that still require it?

Looking forward

What would a workplace look like if email wasn’t the default solution to every coordination problem? Perhaps fewer performative check-ins, “just touching base”, “looping you in” or “following up on the below”. Clearer expectations about what actually requires a response, and what doesn’t.

Email, like paper, is likely to persist for good reasons. It is simple, flexible and universal. It allows things to be deferred, revisited, forwarded and quietly ignored.

But if AI is going to change any of this, my hope is that it makes visible how much of this is ritual, how much is habit, and how much has long been unnecessary.

And if the machines are happy to keep saying “hope this finds you well” to each other, we might finally have permission to stop.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.


by External Contributor via Digital Information World

AI analysis of police body-camera footage raises Constitutional concerns, racial disparities

An analysis of thousands of officer-worn camera recordings found evidence of underreported police stops, troubling racial disparities in officer interactions, and widespread use of unclear language during consent searches, a new study shows.

Image: Raphael Lopes / Unsplash

Researchers at the University of Michigan, University of California-Davis and Stanford University say their findings raise constitutional concerns under both the Fourth and Fourteenth Amendments, involving protection from unreasonable searches/seizures and prohibiting discriminatory practices based on race and ethnicity, respectively.

The report highlights how artificial intelligence could transform police oversight by helping reviewers identify potentially problematic encounters hidden within millions of hours of body-camera footage. The research demonstrates the growing potential for AI-powered analysis to help courts, police departments and municipal governments better evaluate compliance while building greater public trust in law enforcement.

Using machine learning and natural language processing, researchers examined New York Police Department (NYPD) encounters captured on body-worn cameras, looking closely at whether officers followed legal standards governing stops, detentions and consent searches.

Among the study’s most significant findings:

  • Body-camera recordings could be classified as stops with over 80% accuracy, and underdocumented stops with over 70% accuracy based on language alone.
  • Using language models, reviewers could uncover over 50% of undocumented stops identified in manual audits by viewing a fraction (25%) of the footage they normally would.
  • Officers frequently relied on indirect or confusing phrases such as “Do you mind if I check?” rather than clearly asking for consent to search.
  • The word “consent” appeared in less than 13% of consent-search interactions reviewed.
  • Commands and indirect requests appeared more frequently in encounters involving Black and Hispanic civilians.
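The second bullet describes a triage pattern: score every recording with a classifier, then send only the highest-scoring fraction to human reviewers. A minimal sketch, in which `score_fn` stands in for the study's language-model classifier:

```python
# Score-based triage (illustrative): rank recordings by a model's estimated
# probability of containing an undocumented stop, then review only the top
# fraction instead of watching everything.

def triage(recordings: list, score_fn, review_fraction: float = 0.25) -> list:
    """Return the top `review_fraction` of recordings by descending score."""
    ranked = sorted(recordings, key=score_fn, reverse=True)
    k = max(1, int(len(ranked) * review_fraction))  # review at least one
    return ranked[:k]
```

With a reasonably accurate classifier, reviewing the top quarter of recordings can surface a disproportionate share of the problematic ones, which is the shape of the 50%-from-25% result the study reports.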

Nicholas Camp, U-M assistant professor of organizational studies, said these patterns raise questions about whether some civilians clearly understood they could refuse searches and whether certain encounters were documented accurately.

The study stems from reforms ordered after the landmark 2013 federal court ruling in Floyd v. City of New York, in which the U.S. District Court for the Southern District of New York found that the NYPD’s stop-and-frisk practices violated constitutional protections against unreasonable searches and racial discrimination.

Following the ruling, the court appointed an independent monitor to oversee reforms involving NYPD training, supervision and investigative encounters. As part of those reforms, NYPD officers began using body-worn cameras, which captured numerous police-community interactions.

“These recordings provide a far clearer picture of officer behavior than written police reports alone,” Camp said.

The study, approved by the court in 2021, analyzed more than 1,700 encounters connected to an earlier City University of New York Institute for State and Local Governance review, more than 1,100 additional encounters reviewed by the Monitor team, and nearly 1,800 consent-search encounters from 2023.

AI models developed during the study successfully distinguished lower-level encounters from Level 3 stops—which legally require reasonable suspicion—with accuracy rates ranging from approximately 72% to 91%. Researchers say those tools could help oversight teams identify constitutional concerns faster and more consistently by prioritizing footage most likely to contain problematic interactions.

Researchers emphasized that artificial intelligence is not intended to replace human oversight, but instead serves as a tool to strengthen accountability, improve auditing and support ongoing police reform efforts.

“Our analyses identify troubling patterns in NYPD encounters, but also show a path forward: Body camera footage can be used as data to inform and measure changes in law enforcement,” Camp said.

The study’s authors also include Rob Voigt, assistant professor of linguistics, UC-Davis; Dan Sutton, director of Justice and Safety, Stanford (Law School) Center for Racial Justice, and Jennifer Eberhardt, professor of organizational behavior and psychology, Stanford University.

Note: At the time of publication, we have reached out to the NYPD for comment regarding the study’s findings on body-camera analysis and will update this article if a response is received.

This post was originally published on the University of Michigan News and republished here with permission.

Reviewed by Irfan Ahmad.


by External Contributor via Digital Information World

Thursday, April 30, 2026

Standardised testing and scripted lessons are failing teachers and students alike, education expert warns

By Taylor & Francis

Geoff Masters challenges a system which teaches the same curriculum to children with very different comprehension levels.

Image: Rewired Digital / Unsplash

Is it time to ditch scripted lessons and heavily packed curricula to focus on individual student growth?

This is the question posed by education expert Geoff Masters, who argues that age-based expectations are not serving all children well, while scripted lessons are failing teachers and students alike.

Masters, the former head of the Australian Council for Educational Research, asks how well children are served by a system in which two pupils in the same class can differ by six or more years of learning but are taught the same material.

He argues this system fails children at either end of the scale – those who are struggling and those who are unchallenged. What if, he asks, instead of holding all pupils of the same age to the same learning expectations, we based those expectations on where individuals are in their comprehension and growth?

“Too many students in our schools are being poorly served and left behind by machineries of schooling not fit for purpose,” Masters warns.

The problem with standardisation

Masters argues there is a fundamental flaw in the current system: the assumption that all students in the same grade are equally ready to learn the same material.

Research shows that children in the same classroom can have up to a seven-year difference in their reading and mathematics comprehension. This vast variation, Masters argues, is ignored by a system that prioritises standardisation over individual needs.

“By the middle years of school, many students have not learnt what the curriculum expected them to learn much earlier in their schooling,” Masters explains. He cites data showing that, across 38 developed countries, almost a third of 15-year-olds have difficulty demonstrating mastery of 5th and 6th grade mathematics content.

The picture in Australia

Masters’ arguments are presented against a backdrop of Australia’s declining performance in international assessments like PISA. Between 2012 and 2022, there was no significant improvement in Australian students’ performances in reading, mathematics or science. In fact, long-term declines have been recorded across all three areas.

“Despite decades of reforms, the machinery of schooling has not delivered the improvements we need,” Masters says. “It’s time to question whether prescribing what every student must learn in each grade of school and testing to see whether they have learnt it is the best way to optimise learning and improve performance.”

Masters also explains how those who start the year behind are likely to stay behind. He explains: “When the curriculum expects all students in a grade to be taught the same content at the same time, those who begin well below grade level are disadvantaged. This disadvantage is compounded when students are required to move from one grade curriculum to the next based on elapsed time rather than mastery. Students who lack essential prerequisites often fall further behind as each grade’s curriculum becomes increasingly beyond their reach.”

The future of learning

Masters instead argues for a system that meets students where they are in their learning, rather than where their age or grade dictates they should be. He proposes replacing age-based expectations with personalised learning plans that track individual growth.

“Improved performance depends on meeting each student where they are with personally meaningful, well-targeted learning opportunities that build on what they already know,” Masters explains. “This approach includes all students, including neurodiverse children and others with special needs.”

This approach would not only benefit students, he suggests, but also empower teachers to use their professional expertise to design tailored learning experiences.

One of the most concerning trends in education, in Masters’ view, is the rise of scripted lessons.

“Scripted lessons turn teaching into the delivery of ready-made solutions created outside the classroom,” Masters says. “They undervalue teachers’ expertise in what is arguably the essence of effective teaching: establishing where individuals are in their learning and designing opportunities to promote further growth.”

Masters calls for a return to professional autonomy, where teachers are trusted to make decisions in the best interests of their students.

Masters envisions a future where education systems embrace diversity and difference.

“Rather than expecting students to fit the expectations of schooling, the challenge is to redesign school structures and processes to better meet the needs of individual learners,” Masters concludes.

Further information: The Children We Leave Behind: How School Could Be Done Differently, by Geoff Masters (Routledge, 2026). ISBN: 9781041279655 (paperback) | 9781041279662 (hardback) | 9781003757122 (eBook). DOI: https://doi.org/10.4324/9781003757122

This post was originally published on Taylor & Francis Newsroom and republished on DIW with permission.

Reviewed by Irfan Ahmad.

by External Contributor via Digital Information World

Some Chrome Extensions With Large User Bases Disclose Data Sale or Sharing Practices in Their Privacy Policies

By Dar Kahllon and Guy Erez - LayerX

Executive Summary:

New research by LayerX Security uncovers multiple networks of browser extensions that collect user data and resell it for profit – and it’s all completely legal. Unlike malicious extensions, which disguise themselves as legitimate tools and operate in the dark, these extensions explicitly tell users that they’re going to collect and sell their data. It’s right there in the privacy policy – except that nobody reads it.

LayerX analyzed the privacy policies of thousands of extensions and uncovered over 80 different extensions that collect and sell customer data. Some of these extensions include:

  • A network of 24 media extensions, installed by 800,000 users, that collects viewing data and demographic information on major streaming platforms such as Netflix, Hulu, Disney+, Amazon Prime Video, HBO, Apple TV, and others
  • 12 separate ad blockers with a combined install base of over 5.5 million users that openly sell user data
  • Nearly 50 other extensions, with over 100,000 users in aggregate, that collect and resell users’ browsing data

While browser extensions may seem innocent, these findings highlight the privacy exposure that can arise from unregulated usage of extensions.

The Fine Print That Makes Everything Legal

Privacy policies. Reading them is like watching paint dry. For most users, it’s worse than reading the fine print in their mortgage agreements; and that’s saying something.

Except we did.

LayerX Security researchers Dar Kahllon and Guy Erez analyzed the privacy policies of thousands of browser extensions available in official stores. We were looking for one thing: whether the publisher explicitly reserved the right to sell user data.

And we found them. Our analysis turned up at least 80 such extensions, some of them operating in concert as networks built by a single developer. They range from ad blockers and streaming tools to job application helpers, new-tab extensions, and B2B sales intelligence platforms.

Most of these policies don’t say “we sell your data.” They say “we may sell.” It’s a legal hedge – but it means your data can be sold at any time, and you already agreed to it. Here’s what that looks like in practice:

“We may sell or share your personal information with third parties.”

“This information may be sold to or shared with business partners.”

What? Browser Extensions Have Privacy Policies?!

Well, to be fair, most don’t.

This isn’t a story about malware. Nobody hacked you. Nobody stole anything. The extensions you’re running right now may be selling your browsing data — and they told you they would. It’s right there in the privacy policy. Page 4. Paragraph 7. The one nobody reads.

Figure 1. Privacy Policy Transparency

According to LayerX’s Enterprise Browser Extension Security Report 2026, 71% of all extensions in the Chrome Web Store don’t even publish a privacy policy.

As a result, more than 73% of users have at least one extension installed without a privacy policy, with no transparency into how their data is handled. This means our analysis could only rely on the 29% that do have a privacy policy.

And if we assume that some of those extensions with no privacy policy at all will also resell your data – and there’s no reason to assume they’re better – the real number of extensions that may sell your data across the Chrome Web Store is in the tens of thousands.

How We Analyzed The Data

We built a pipeline to analyze privacy policies associated with browser extensions in official stores, combining automated classification with manual verification.

Starting from roughly 9,000 extensions with privacy policy URLs in our database, we successfully fetched and parsed 6,666 policies.

The pipeline ran in three stages:

  1. AI classification flagged policies disclosing the selling, licensing, or commercial transfer of user data. High-confidence matches were marked for review, and every flagged policy was then verified manually.
  2. A manual review removed false positives, including: (A) enterprise security tools (e.g., Fortinet, CrowdStrike) that route browsing data to their own servers as part of expected web filtering behavior; (B) standard CCPA ad-retargeting disclosures (e.g., HubSpot, Calendly), where sharing cookies with platforms like Google Ads may technically count as a “sale” under broad definitions; and (C) consensual data monetization platforms (e.g., Swash), where users explicitly opt in and are compensated. The final dataset includes only extensions whose privacy policies indicate genuine commercial sale of user data to third parties.
  3. In the final count, we found 82 unique extensions across 94 store listings. 75 are currently live in the Chrome Web Store. The remaining 7 have been removed – but “removed” doesn’t mean “uninstalled.” Extensions pulled from the store can stay active in browsers that already have them.

While these figures may seem low, bear in mind that they cover only extensions that have privacy policies to begin with (less than one-third of all extensions), and only those that actually tell you what they’re doing with your data. The true number is almost certainly higher.
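As a rough illustration of the first stage, a keyword pass over policy text might look like the sketch below. The real pipeline used an AI classifier plus manual verification; simple patterns like these also over-flag (e.g., “we never sell your data”), which is exactly why a manual review stage is needed:

```python
import re

# First-pass filter (illustrative): flag policy text containing common
# data-sale disclosure phrasing. High recall, low precision by design --
# flagged policies still need human review.

SALE_PATTERNS = [
    r"\bmay\s+(?:sell|share)\b",
    r"\bsell(?:ing)?\b[^.]{0,40}\b(?:data|information)\b",
    r"\bsold\s+(?:to|or\s+shared)\b",
]

def flags_sale(policy_text: str) -> bool:
    """Return True if any sale-disclosure pattern matches the policy text."""
    text = policy_text.lower()
    return any(re.search(p, text) for p in SALE_PATTERNS)
```

Against the two disclosure sentences quoted earlier, a filter like this fires on both; it also fires on denials such as “we never sell your data”, so it can only narrow the pile, not replace the reviewer.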

Here are a few of our key findings:

The QVI Empire: One Anonymous Publisher, 24 Extensions, 800,000 Users

While reviewing confirmed sellers, a pattern kept surfacing. Different extensions, different streaming platforms, but the same three-letter prefix: QVI – short for “Quality Viewership Initiative.”

What looked like unrelated tools turned out to be a single operation: 24 browser extensions – 21 currently live, 3 removed – covering nearly every major streaming service.

  • Netflix
  • Hulu
  • Disney+
  • Amazon Prime Video
  • HBO Max
  • Peacock
  • Paramount+
  • Tubi
  • Apple TV+
  • Crunchyroll

All published by HideApp LLC, registered at 1021 East Lincolnway, Cheyenne, Wyoming – an address shared by hundreds of other LLCs through a registered agent service – and operating under the brand “dogooodapp.”

The largest extensions in the network:

  • Custom Profile Picture for Netflix (200K users)
  • Hulu Ad Skipper (100K)
  • Netflix Picture in Picture (100K)
  • Ad Skipper for Prime Video (60K)
  • Netflix Extended (60K)

Across all 21 live extensions, the network reaches nearly 800,000 users.

Figure 2. Extension Page in Chrome Store for the “Custom profile picture for Netflix [QVI]” extension

But their privacy policy says something the store listings don’t. These extensions collect extensive information, including:

  • Viewing history
  • Content preferences
  • Platform subscriptions
  • Downloaded content
  • Streaming behavior

They also collect age and gender – and if you don’t provide demographics, they match your email against third-party demographic databases to fill in the gaps.

Figure 3. Data declared as collected by the privacy policy of the “Custom profile picture for Netflix [QVI]” extension

The policy describes selling reports to content creators and studios, streaming platforms, media research firms, and marketing agencies – along with “organizations that purchase anonymized viewing data.”

Put it all together and you’re looking at a distributed audience-measurement system running inside users’ browsers. One anonymous publisher pulling viewing behavior across every major streaming platform, building intelligence about what nearly 800,000 people watch, when, and how they engage with content. None of those users signed up for that. Legally, they accepted the terms when they clicked “Add to Chrome.” Practically, nobody read them.

Ad Blockers That Block Some Ads, And Sell Your Data to Other Ads

We confirmed eight ad blockers that reserve the right to sell or share user information with third parties. Tools people install to stop tracking – selling tracking data instead. Combined, they reach over 5.5 million users.

  • Stands AdBlocker (3M users) sells browsing data to third parties for “market analytics purposes.”
  • Poper Blocker (2M users) discloses selling identifiers, browsing activity, behavioral profiles, and inferred sensitive data – including health conditions, religious beliefs, and sexual orientation, all inferred from the URLs you visit.
  • All Block, an ad blocker for YouTube (500K users), sells anonymized data “for analytical and commercial purposes.” Published by an entity called Curly Doggo Limited, based in London.
  • TwiBlocker (80K users) discloses transferring browsing data to third parties who “process or sell it for analytical purposes.”
  • Urban AdBlocker (10K users) routes browsing data and AI conversations through the BiScience data broker.

If your ad blocker has a privacy policy longer than two paragraphs, read it.

Figure 4. Featured Ad Blocker in Chrome Store

Independent Operators Can Also Sell Your Data

These aren’t the biggest extensions on the list, but they show how far the data-selling model reaches.

  • Career.io Job Auto Apply (10K users) states in its policy that it may use personal data collected from your resume to sell to third parties, including data brokers, for targeted advertising and profiling. A job application tool that sells your resume.
  • Dog Cuties (6K users) is a cute dog wallpaper new-tab extension. Confirmed data seller through the Apex Media network.
  • EmailOnDeck (10K users) is a temporary email service – a tool people use specifically when they don’t want to share their real information. Its policy states it may sell, rent, or share its mailing list.
  • Survey Junkie discloses selling URLs visited, clickstream data, and “modeled information” about consumer preferences to market research agencies, ad agencies, and data analytics providers.
  • Dashy New Tab (10K users) has its Chrome Web Store listing marked “does not sell your data.” Its actual privacy policy marks data as “Sold or Shared: Yes.” We believe this is CCPA compliance language for standard analytics, not commercial data sales – which is why we left it out. But the contradiction between the store listing and the privacy policy is real. If a publisher’s own policy says “Sold or Shared: Yes” and the store listing says the opposite, which one should users trust?

When Your Employees’ Extensions Are Selling Data

Of the 82 confirmed sellers, 29 are B2B sales intelligence tools. Their business is data, so the disclosure itself isn’t a surprise. We’re not counting them alongside the consumer-facing extensions.

But they belong in this conversation. These extensions sit on corporate machines. This means that employee browsing behavior, such as internal URLs, SaaS dashboards, and research activity, flows into commercial databases that your competitors can purchase. The risk isn’t about users being deceived. It’s about corporate data leaving through a channel nobody is watching.

What Security Teams Should Do About This

Most extension security evaluations focus on permissions or known malicious indicators – flagging extensions that request excessive access or match threat intelligence. That catches malware. It doesn’t catch an extension that openly reserves the right to sell your browsing data.

An extension with a data-selling disclosure isn’t a hypothetical risk. It’s a stated business practice, sitting in a document your employees accepted without reading.

Three questions worth asking:

  1. What extensions are installed across employee browsers?
  2. What data do those publishers claim the right to collect or sell?
  3. Could corporate browsing activity be flowing into commercial datasets?

Most browsers already support centralized extension management through enterprise policies – Chrome’s ExtensionSettings, Edge’s group policies, Firefox’s enterprise configurations. If you don’t have an extension governance policy, that’s the first step. If you do, add privacy policy review to the evaluation criteria. Permissions alone don’t tell you enough.
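As a sketch of what that governance can look like in practice, here is a minimal Chrome ExtensionSettings policy that blocks a single extension by ID while leaving everything else installable. The 32-character ID below is a placeholder, not a real extension:

```json
{
  "ExtensionSettings": {
    "*": {
      "installation_mode": "allowed"
    },
    "abcdefghijklmnopabcdefghijklmnop": {
      "installation_mode": "blocked",
      "blocked_install_message": "Blocked by policy: publisher reserves the right to sell browsing data."
    }
  }
}
```

Deployed through group policy or managed preferences, this prevents installation of the listed extension and shows the message to any user who tries, while the wildcard entry keeps all other extensions allowed until they are reviewed.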

To that end, LayerX added a new filter that can detect, flag, and (if so desired) block extensions that either don’t have a privacy policy at all or reserve the right to sell personal data.

Consider blocking extensions that either disclose selling user data or don’t publish a privacy policy at all.

Figure 5. LayerX Extension Data Privacy Filter

The Bottom Line

Browser extensions are among the web’s most powerful and least scrutinized tools. While much of the focus is on malicious extensions that actively steal user and corporate data, privacy violations may sound mundane, but they can be just as risky.

Reading the privacy policy of every extension that every user in your organization has installed could mean reviewing hundreds or thousands of individual documents; clearly, that’s not feasible.

Instead, organizations need to deploy automated tools that can restrict suspicious extensions and take publishers’ privacy practices into account.

Google was contacted multiple times over two days for comment on the report’s findings and Chrome Web Store policies but did not respond before publication. This article will be updated if a response is received.

This post was originally published on LayerX and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

• Research reveals lack of transparency in ad data of digital platforms
by External Contributor via Digital Information World

Wednesday, April 29, 2026

Asphalt is everywhere, but is it bad for our health?

By Joanna Allhands - Arizona State University

ASU researcher says pavement’s potential impact on our health deserves as much attention as its carbon or energy footprint.

Heat and sunlight worsen asphalt emissions, raising health risks for workers and nearby communities.
Image: Brian J. Tromp / unsplash

If you piled all of Phoenix’s pavement into one spot, it would be enough to cover San Francisco four times over.

Roads, parking lots and other paved surfaces blanket a lot of land — an estimated 40% of Arizona’s capital city.

Pavement absorbs heat during the day and releases it slowly at night, driving the urban heat island effect and increasing the amount of energy that cities consume.

But for Elham Fini, a senior scientist affiliated with the Julie Ann Wrigley Global Futures Laboratory at Arizona State University, pavement’s potential impact on our health deserves as much attention as its carbon or energy footprint.

“To make something truly sustainable,” she said, “you cannot ignore the human side of it.”

Asphalt fumes can be hard on health

Fini — a faculty member in ASU’s School of Sustainable Engineering and the Built Environment — spent years studying why asphalt breaks down so quickly.

That work pointed her toward the volatile organic compounds (VOCs) that escape from bitumen, the black, sticky petroleum byproduct that holds asphalt together.

Two studies in the Journal of Hazardous Materials and Science of the Total Environment shed light on how the compounds that give asphalt its trademark scent change after sunset and form ultrafine particles, which can worsen air quality.

These carbon-based vapors are continuously released but become more noticeable on hot, sunny days. They can cause dizziness and difficulty breathing in the short term.

Long-term exposure also can elevate the risk of lung cancer, a major concern for construction workers who regularly breathe these fumes without a respirator.

Aging pavement emits toxic vapors

And the impacts could get worse as pavement ages.

Research from Fini and others shows that asphalt begins releasing different, more toxic VOCs as bitumen breaks down in sunlight and heat.

These toxic, often odorless VOCs are small enough to work their way into arteries and organs.

Tests and a modeling analysis also suggest that they can cause significant neurological damage in humans, particularly among women and the elderly.

“Heat is worsening the situation,” Fini said. “It’s exacerbating the emissions from asphalt.”

More study is needed to understand what level of asphalt-emitted VOC exposure is unsafe.

But what we know so far should raise alarm bells for hot, car-centric cities such as Phoenix.

Goal: Safer asphalt, healthier workers

Fini is working with Dr. Bruce Johnson via a partnership with Mayo Clinic to better understand how asphalt emissions impact respiratory health.

She hopes that their studies will lead to stronger protections for construction workers and surrounding communities, as well as less toxic, lower-emitting asphalt formulations.

Fini has a head start on the latter.

She has teamed up with Peter Lammers, chief scientist at the Arizona Center for Algae Technology and Innovation, to begin growing a strain of algae that could reduce VOC emissions using wastewater from a Phoenix treatment plant.

“It’s a great setup,” said Lammers, a research professor in the School of Sustainable Engineering and the Built Environment, “because we use water that’s far too high in nitrogen and phosphorus to be released anywhere. And instead, we reuse it to grow more algae.”

Fini then bakes that algae at high temperatures without much oxygen into a binder that can be easily mixed into asphalt.

Algae can capture the worst VOCs

A study in the journal Clean Technologies and Environmental Policy found that while algae-infused asphalt doesn’t significantly reduce total VOC emissions, it can effectively keep the most toxic compounds from escaping.

In fact, tests showed that it reduced the toxicity of asphalt emissions roughly 100-fold.

Algae can slow how quickly pavement breaks down — which could lower construction and maintenance costs and make its inclusion in asphalt even more attractive for cities and paving companies.

Fini is exploring other binder options, including a product made from the leftover branches of forest-thinning projects, and working with Phoenix to pave a section of road with algae-infused asphalt.

Because VOCs from pavement are often left out of air quality assessments, these real-world tests are critical to evaluate pavement performance and its long-term environmental impact.

“We have 4 million miles of roads in America,” Fini said. “We should make those 4 million miles do more for us than just get from A to B.”

This research was done in collaboration with colleagues from the following institutions: Emory University; Dalian University of Technology, China; Mayo Clinic Arizona; Oregon State University; University of Chicago; University of Lille, France; University of Littoral Côte d′Opale, France; University of Miami; University of Missouri; University of Utah.

Reviewed by Irfan Ahmad.

This post was originally published on Arizona State University News and republished here with permission.

Read next:

• Half of AI health answers are wrong even though they sound convincing – new study

by External Contributor via Digital Information World

China surpasses US in research spending – the consequences extend far beyond scientific ranking and clout

Caroline Wagner, The Ohio State University
China’s research boom overtakes U.S. momentum while American federal science funding continues declining steadily.
Image: Unsplash - kaboompics.com

China’s rapid rise in science has hit a milestone. The country’s investment in research and development has reached parity with – and by purchasing power measures has surpassed – that of the United States, according to a March 2026 report from the Organisation for Economic Co-operation and Development. Both nations have crossed the US$1 trillion threshold in research spending.

For 80 years, the U.S. operated the most productive scientific and technological enterprise in human history. Breakthroughs and advances that came from American labs included the internet; the mRNA vaccine; the transistor and its children, semiconductors and microprocessors; the Global Positioning System; and many more.

U.S. scientific and technological leadership was nurtured by sustained public investment in research universities and federal laboratories, as well as a culture of open inquiry. These investments turned scientific discovery into economic strength – accounting for more than 20% of all U.S. productivity growth since World War II.

In contrast, China had previously spent little to nothing on research and development. Some estimates show that China was among the lowest research spenders worldwide in 1980.

As a policy analyst and public affairs researcher, I study international collaboration in science and technology and its implications for public and foreign policy. I have tracked China’s rise across every major database for more than a decade.

The most recent reports showing that China is now outspending the U.S. on scientific and technological research mark a turning point worth understanding clearly because, historically, global leadership in one sector – including technology and warfare – feeds into others. U.S. dominance is in question.

China’s systematic and unrelenting rise

China’s R&D spending milestone caps a series of achievements that have arrived in rapid succession.

In 2019, China surpassed the U.S. in its share of the top 1% most-highly cited papers – what some call the Nobel class of research. By 2022, it had taken first place globally in most-cited papers overall.

In 2024, China overtook the United States in total scientific publications – the first time any nation has displaced American dominance since the U.S. itself surpassed the United Kingdom in 1948. Researchers found that China overtook the United States in scientific output even earlier. That same year, China pulled ahead in the Nature Index, which tracks publications in the world’s most selective scientific journals, posting a 17% advantage over the U.S. in outlets long considered the gold standard of scientific excellence.

In 2024, Chinese entities also filed roughly 1.8 million patent applications, compared to the U.S.’s 603,191 applications.

Given these milestones, it’s possible to argue that China is quickly taking the lead in global science and technology. These are not isolated data points. They mark a structural shift in where the world’s scientific frontier is being built.

More science is good – the problem lies elsewhere

China’s ascent is, in one sense, good news. More knowledge, generated by more researchers across more institutions, expands the global pool of discovery from which everyone can draw. The world benefits when science thrives.

The problem is not that China is investing, but that the U.S. is not.

First, the U.S. is divesting from basic, open science. Federal R&D spending in the U.S. peaked in 2010 at roughly $160 billion and fell by more than 15% over the following five years. Federal investment in research and development has been in a long, slow slide – from a peak of 1.86% of gross domestic product in 1964 to about 0.66% in 2021.

The federal government is no longer the largest spender in R&D: It funded about 40% of basic research in 2022, while the business sector performed roughly 78% of U.S. R&D. While not a problem in itself, industry has simultaneously withdrawn from open scientific publication over the past four decades, shifting from research toward development. The result is a shrinking pool of openly shared scientific knowledge precisely as public investment in it also contracts.

Under the second Trump administration, U.S. government science agencies have been slow-walking proposals for new research. Current budget cuts from the White House threaten to reduce government spending on science even further.

Second, the U.S. is actively restricting scientific exchange: tightening access to U.S. institutions, scrutinizing international collaborations and raising barriers to foreign-born researchers. These policies, though intended as security measures, work against the openness that has historically made American science productive and attractive to global talent.

I describe this issue as an example of the stockyard paradox, in which securing research assets may weaken the very system these measures aim to protect.

Disinvestment cuts deeper than it appears

The deeper danger for the U.S. economy is that disinvestment and selective engagement in research erodes the capacity to use cutting-edge science regardless of where it is produced.

Absorbing and applying cutting-edge knowledge, whether developed in Boston or Beijing, requires maintaining research institutions and trained workforces, as well as active participation in global networks. This is not a passive process. You cannot free-ride on Chinese science if you have dismantled the institutional and human capital needed to evaluate, translate and apply it.

A nation that hollows out its research base not only falls behind but also progressively loses its ability to benefit from science, including in technologies it is already able to access.

Talent compounds the problem. The U.S. built its scientific dominance partly by being the destination of choice for the world’s most ambitious researchers. The U.S. leads the world in Nobel Prizes, but, notably, 40% of the Nobel Prizes in chemistry, medicine and physics that were awarded to Americans since 2000 were won by immigrants. The flow of foreign talent is not guaranteed. It follows opportunity, funding and openness.

Researchers who might once have come to American universities are finding welcoming alternatives in Europe, China and elsewhere.

Around 75% of U.S. researchers are considering leaving the country due to the Trump administration’s funding policies.

A decision point, not a trend line

China’s milestone in research funding arrives at a moment when the U.S. is deciding whether to maintain its scientific leadership.

Scientific infrastructure does not decline gradually and recover on demand. Doctoral scientists represent a decade or more of training; tacit laboratory knowledge lives in working research groups, not in documents. Once talented young researchers leave the pipeline – or international talent redirects to other countries – the capacity is very hard to rebuild. Early warning signs are already visible in the U.S. system: thousands of NIH grants terminated, a collapse in international applications and an exodus of early-career scientists.

What is at stake is not a ranking. It is whether the U.S. maintains the institutional capacity – the universities, the federal laboratories, the graduate pipelines, the culture of open inquiry – that made those returns on scientific investment possible in the first place.

China’s rise did not create this decision point, although it brings it into sharp relief. Does the U.S. still want to lead in science? The Information Technology and Innovation Foundation, a nonprofit think tank, estimates that a 20% cut in federal research and development starting in fiscal year 2026 would shrink the U.S. economy by nearly $1 trillion over 10 years and reduce tax revenue by around $250 billion. Others point out that the scientific enterprise has contributed at least half of U.S. economic growth.

That is a lot to lose.

Caroline Wagner, Professor of Public Affairs, The Ohio State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• Sora’s downfall signals broader problems with AI’s creative utility

• Who's Tuned In (And Out) of Science And Tech?


by External Contributor via Digital Information World