Saturday, May 2, 2026

Our study looked at teens’ social media behaviour in 43 countries – those from disadvantaged backgrounds face greater harms

Roger Fernandez-Urbano, Universitat de Barcelona; Maria Rubio-Cabañez, Universitat Autònoma de Barcelona, and Pablo Gracia, Universitat Autònoma de Barcelona

Image: Marc Clinton Labiano / unsplash

As social media becomes a central part of young people’s lives, concerns are growing about its impact on their mental health. Yet public debates and measures tend to treat adolescents as one homogeneous group. We frequently ignore the fact that social media use does not affect all young people in the same way – nor does it have the same impacts on their wellbeing.

In a recent chapter of the World Happiness Report 2026, published by the UN Sustainable Development Solutions Network in partnership with the University of Oxford, we examined how problematic social media use relates to the wellbeing of adolescents from different socioeconomic backgrounds.

We looked at 43 countries spanning six broad regions – Anglo-Celtic, Caucasus-Black Sea, Central-Eastern Europe, Mediterranean, Nordic, and Western Europe – covering mainly European countries and their immediate neighbouring areas.

Using data from over 330,000 young people, we found a clear and consistent pattern: higher levels of problematic social media use – that is, compulsive or uncontrolled engagement with social media – are associated with poorer wellbeing.

Teenagers who report more problematic use tend to experience more psychological complaints, such as feeling low, nervous, irritable, or having difficulty sleeping. They also have lower life satisfaction, a measure of how positively they evaluate their lives as a whole.

This pattern appears across all countries in our study, but its strength varies from one country to another. It is particularly pronounced in Anglo-Celtic countries such as the UK and Ireland, while it is comparatively weaker in the Caucasus-Black Sea region.

Socioeconomic background matters

The story does not end with geography. Globally, teenagers from less advantaged backgrounds tend to be more vulnerable to the negative consequences of problematic social media use than their more advantaged peers.

This means socioeconomic status – the material and social resources available to a household, such as income and living conditions – actively shapes the risks and opportunities that young people experience as a result of online environments.

Interestingly, these inequalities are especially visible when we look at life satisfaction. Differences between socioeconomic groups are smaller when it comes to psychological complaints, but much clearer and more consistent for how adolescents evaluate their lives overall.

One likely reason is that life satisfaction is more sensitive to social comparisons. Social media exposes young people to constant benchmarks – what others have, do, and achieve – which can amplify differences in perceived opportunities and resources.

At the same time, these patterns are not identical everywhere. For instance, socioeconomic differences in psychological complaints tend to be modest in most regions, including continental European countries such as France, Austria and Belgium, but are more clearly observed in Anglo-Celtic countries such as Scotland and Wales.

In contrast, socioeconomic gaps in life satisfaction appear across most regions, although they tend to be weaker in Mediterranean countries such as Italy, Cyprus and Greece.

A growing problem

We also examined how these patterns have evolved over time. Between 2018 and 2022, the link between problematic social media use and poor adolescent wellbeing became stronger.

This suggests that the risks linked to problematic use may have intensified in recent years, possibly reflecting the growing role of digital technologies in young people’s daily lives, particularly during and after the Covid-19 pandemic.

Importantly, this intensification has affected teenagers across socioeconomic groups in broadly similar ways in most regions. In other words, while inequalities remain, they have not widened over this period.

No one-size-fits-all solution

While public debates about social media and mental health often treat adolescents as a single demographic group, our results show a more complex reality. Problematic social media use is linked to poorer wellbeing across countries, but its effects are shaped by social realities. They vary depending on where young people live and what resources are available to them.

Not all teenagers experience the digital world in the same way, and not all are equally equipped to cope with its pressures. Recognising this is essential for designing policies that are not only effective, but also equitable, ensuring that interventions reach those adolescents who are most vulnerable to digital risks.

Roger Fernandez-Urbano, Ramón y Cajal Research Fellow (Tenure-Track), Department of Sociology, Universitat de Barcelona; Maria Rubio-Cabañez, Postdoctoral Researcher, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona, and Pablo Gracia, Research Professor in Sociology, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Can we stop ChatGPT from spreading bias?


by External Contributor via Digital Information World

Friday, May 1, 2026

Can we stop ChatGPT from spreading bias?

By the University of Amsterdam

Image: Merrilee Schultz / unsplash

Language models like ChatGPT are not neutral. Without our realising it, they can absorb all kinds of bias – for example around gender and ethnicity – which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he defended his thesis at the University of Amsterdam.

Language models are often seen as neutral tools, but in practice they can both reflect and amplify bias.

‘Users often don’t realise that a model makes certain assumptions, for example by introducing subtle differences in how men and women are described,’ says Van der Wal. Precisely because bias is so hidden, it can spread unnoticed and colour the way we see the world.

Bias is hard to measure

An important problem is that bias is difficult to measure. ‘Many existing measurement methods are fairly abstract and don’t take practice into account. They might look for overt stereotypes in what the model says, such as “The Dutch are stingy.” But in practice, bias isn’t something that’s directly visible. It depends on the context in which you use the model.’

Van der Wal cites the use of AI in healthcare as an example. ‘AI learns from existing data. If those data contain outdated or incorrect assumptions – for instance, the contested idea that certain diseases are linked to the outdated concept of “race” – the model may keep reproducing them. In healthcare, that can lead to incorrect diagnoses or treatments.’

Another example is when medical data largely derives from research involving men. ‘AI may then interpret women’s symptoms differently or less seriously, or make different risk assessments.’

Realistic scenarios

To discover whether realistic scenarios reveal different errors than simple tests, Van der Wal presented language models with a range of medical cases and asked them to provide diagnoses, risk assessments or advice. ‘We repeatedly changed the patient’s ethnicity. That way we could identify whether and how the model responded differently.’

Subtle but consistent differences appeared in the outcomes, differences that remained invisible in standard tests. ‘Precisely because our scenarios were close to practice, it became clear how bias can influence medical decision-making.’
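To make the setup concrete, here is a minimal sketch of this kind of counterfactual test. It is our illustration of the general approach, not Van der Wal’s code: the clinical vignette, the attribute list and the query_model placeholder are all assumptions.

```python
# A minimal sketch of a counterfactual bias probe. The vignette, the
# ethnicity list and query_model() are illustrative placeholders,
# not taken from the thesis.

TEMPLATE = (
    "Patient: a 54-year-old {ethnicity} woman presents with chest pain "
    "and shortness of breath. Provide a brief risk assessment."
)

ETHNICITIES = ["Dutch", "Surinamese", "Moroccan", "Turkish"]


def query_model(prompt: str) -> str:
    """Placeholder: route the prompt to the language model under test."""
    return f"<model response to: {prompt!r}>"


def probe() -> dict:
    # Hold the clinical facts fixed and vary only the demographic
    # attribute; systematic differences between the responses then
    # point to bias in the model rather than in the case itself.
    return {e: query_model(TEMPLATE.format(ethnicity=e)) for e in ETHNICITIES}


if __name__ == "__main__":
    for ethnicity, response in probe().items():
        print(f"{ethnicity}: {response}")
```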

Model reinforces patterns in the data

Van der Wal also investigated what happens inside a language model during training. He followed, step by step, how the model learns to store information. ‘During training, the model learns which words and ideas frequently occur together. If “doctor” often appears together with “he” and “nurse” with “she” in the training data, the model will pick up on those associations.’

Over time, the model appeared to store this information in increasingly specific places, thereby reinforcing gender bias. ‘Bias doesn’t arise only from the data that AI is trained on, but also from the way the model structures that information.’
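The underlying effect is easy to demonstrate on ordinary pretrained word vectors. The sketch below, assuming gensim and its downloadable GloVe vectors, checks whether a word sits closer to “he” or to “she”; it illustrates the co-occurrence phenomenon Van der Wal describes, not his measurement method.

```python
# A minimal sketch: co-occurrence statistics in training text surface
# as gendered associations in pretrained GloVe vectors (via gensim).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads on first use


def gender_lean(word: str) -> float:
    # Positive values lean towards "he", negative towards "she".
    return float(vectors.similarity(word, "he") - vectors.similarity(word, "she"))


for word in ["doctor", "nurse", "engineer", "teacher"]:
    print(f"{word:10s} {gender_lean(word):+.3f}")
```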

There are solutions

Unfortunately, you can’t fix bias in language models with a single trick. But, according to Van der Wal, targeted interventions can help. ‘If you know where in the model the bias is located, you can address those areas. This already seems to work in specific cases, but more research is needed to extend the approach to more complex forms of bias.’

Van der Wal tested this targeted approach by comparing a model before and after an adjustment in which the model was trained not to adopt identified gender-related biases. He wanted to see if the model responded less differently to men and women after the change, and how well it still performed ordinary tasks, such as generating text.

The bias decreased, while the quality of the model largely remained intact.

Careful and deliberate

The impact of AI is not restricted to the technical realm but now has broader societal relevance. ‘We are becoming increasingly dependent on systems that can influence how we think,’ says Van der Wal. ‘That’s precisely why it’s important to develop AI carefully. Responsible AI development requires interventions at multiple levels at once: in the data, during training, targeted within the model itself, and also in its deployment and use.’

How can you, as a user, use AI carefully?

  • Be critical of answers: Don’t automatically assume an AI answer is correct or complete. Ask yourself: what am I not seeing? And where does the answer come from? ‘A model can come across as very confident, making its answers seem more reliable than they are,’ warns Van der Wal. ‘It’s also tempting to trust a chatbot that always agrees with you and is very complimentary. But that’s precisely when it’s even more important to stay critical.’
  • Be aware of hidden risks: Bias and other effects (such as influencing your thinking) are often not immediately visible. That’s why it’s important to stay alert.
  • Avoid becoming dependent: Use AI as a tool, but keep thinking and deciding for yourself. Over-reliance can make you less confident in your own knowledge and judgement.

This post was originally published on the University of Amsterdam news section and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• ‘Just looping you in’: Why letting AI write our emails might actually create more work
by External Contributor via Digital Information World

‘Just looping you in’: Why letting AI write our emails might actually create more work

Daniel Angus, Queensland University of Technology

I hope this article finds you well.

Did that make you cringe, ever so slightly? In the decades since the very first email was sent in 1971, the technology has become the quiet infrastructure of white-collar work.

Email came with the promise of efficiency, clarity and less friction in organisational communication. Instead, for many, it has morphed into something else: always there, near impossible to escape and sometimes simply overwhelming.

Right now, something is shifting again. The rise of generative artificial intelligence (AI) technologies, such as ChatGPT and Microsoft Copilot, is increasingly allowing people to offload the repetitive routines of tending one’s inbox – drafting, summarising and replying.

My colleagues in the ARC Centre of Excellence for Automated Decision Making & Society found that 45.6% of Australians have recently used a generative AI tool, with 82.6% of those users applying it to text generation. A healthy chunk of that use likely includes email.

So, what happens if we end up fully automating one of the staples of the white-collar daily grind? Will AI technologies reduce some of the friction, or generate new forms of it? Dare I ask – are we actually about to get more email?

Email has long been about more than just communicating information. Vitaly Gariev/Unsplash

Why the printer isn’t dead yet

Soon after the advent of email, some voices in the business world heralded the coming end of paper use in the office. That didn’t happen. If you work in an office today, there’s a good chance you still have a printer.

In their 2001 book, The Myth of the Paperless Office, Abigail Sellen and Richard Harper show how digital tools rarely eliminate older forms of work. Instead, they reshape them.

Sellen and Harper show how paper use didn’t disappear with the rise of email and other digital communication tools; in many cases, it intensified. The takeaway isn’t that offices failed to modernise, but rather that work reorganised around what these new tools could do.



In this case, paper persisted not only out of habit, but because of what it affords: it is easy to annotate, spread out, carry and view at a glance. This was all too clunky (or impossible) to perform via the digital alternatives.

At the same time, email and digitisation dramatically lowered the cost of producing and distributing communication. It was far easier to send more messages, to more people, more often.

Circling back to today

Will AI be different? If early signs are anything to go by, the answer is: not in the way we might hope.

Like earlier waves of workplace technology, AI is less likely to replace existing communication practices than to intensify them – but at least it might come with better grammar and a suspiciously upbeat tone.

Some new AI tools offer to manage your inbox entirely, feeding into broader privacy concerns about the technology.

At this moment, what a lot of these products seem to offer is not an escape from email, but a smoothing of its rough edges. Workers are using AI to soften otherwise blunt requests, modify their tone or expand what might otherwise be considered too brief a response.

Rather than removing the need to communicate, these tools offer pathways to make a delicate performance easier.

What email is actually for

Email, like many forms of communication, is as much about maintaining everyday relationships as it is about the transfer of information.

At work, it’s often about signalling competence, responsiveness, collegiality and authority. “Just looping someone in” or “circling back” are all part of our absurd office vocabulary, a shared dialect that helps us navigate hierarchy, soften demands and keep things moving – all without saying what we really think.

If AI lowers the effort required to produce these signals, it won’t necessarily reduce their importance, but it could unsettle things in rather odd ways.

If more people use AI to draft emails they don’t particularly want to write, we end up with a game of bureaucratic “mime”: everyone performing sincerity and quietly outsourcing it, and no one entirely sure how much of their inbox was actually written by a human.

The labour of email was never just about crafting sentences. It’s always been the scanning, the sorting and the deciding. AI doesn’t remove this burden. If anything, it amplifies it.

When everything arrives polished, everything looks important. That points to a deeper question for the future of work: if AI can perform responsiveness, why are we generating so many situations that still require it?

Looking forward

What would a workplace look like if email wasn’t the default solution to every coordination problem? Perhaps fewer performative check-ins, “just touching base”, “looping you in” or “following up on the below”. Clearer expectations about what actually requires a response, and what doesn’t.

Email, like paper, is likely to persist for good reasons. It is simple, flexible and universal. It allows things to be deferred, revisited, forwarded and quietly ignored.

But if AI is going to change any of this, my hope is that it makes visible how much of this is ritual, how much is habit, and how much has long been unnecessary.

And if the machines are happy to keep saying “hope this finds you well” to each other, we might finally have permission to stop.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• What the @ Sign Is Called Around the World: 25 Examples

• Q&A: Who’s responsible when AI makes mistakes?

• AI analysis of police body-camera footage raises Constitutional concerns, racial disparities


by External Contributor via Digital Information World

AI analysis of police body-camera footage raises Constitutional concerns, racial disparities

An AI-assisted analysis of thousands of officer-worn camera recordings found evidence of underreported police stops, troubling racial disparities in officer interactions, and widespread use of unclear language during consent searches, a new study shows.

Image: Raphael Lopes - unsplash

Researchers at the University of Michigan, University of California-Davis and Stanford University say their findings raise constitutional concerns under both the Fourth and Fourteenth Amendments, which respectively protect against unreasonable searches and seizures and prohibit discriminatory practices based on race and ethnicity.

The report highlights how artificial intelligence could transform police oversight by helping reviewers identify potentially problematic encounters hidden within millions of hours of body-camera footage. The research demonstrates the growing potential for AI-powered analysis to help courts, police departments and municipal governments better evaluate compliance while building greater public trust in law enforcement.

Using machine learning and natural language processing, researchers examined New York Police Department (NYPD) encounters captured on body-worn cameras, looking closely at whether officers followed legal standards governing stops, detentions and consent searches.

Among the study’s most significant findings:

  • Body-camera recordings could be classified as stops with over 80% accuracy, and underdocumented stops with over 70% accuracy based on language alone.
  • Using language models, reviewers could uncover over 50% of undocumented stops identified in manual audits by viewing a fraction (25%) of the footage they normally would.
  • Officers frequently relied on indirect or confusing phrases such as “Do you mind if I check?” rather than clearly asking for consent to search.
  • The word “consent” appeared in less than 13% of consent-search interactions reviewed.
  • Commands and indirect requests appeared more frequently in encounters involving Black and Hispanic civilians.

Nicholas Camp, U-M assistant professor of organizational studies, said these patterns raise questions about whether some civilians clearly understood they could refuse searches and whether certain encounters were documented accurately.

The study stems from reforms ordered after the landmark 2013 federal court ruling in Floyd v. City of New York, in which the U.S. District Court for the Southern District of New York found that the NYPD’s stop-and-frisk practices violated constitutional protections against unreasonable searches and racial discrimination.

Following the ruling, the court appointed an independent monitor to oversee reforms involving NYPD training, supervision and investigative encounters. As part of those reforms, NYPD officers began using body-worn cameras, which captured numerous police-community interactions.

“These recordings provide a far clearer picture of officer behavior than written police reports alone,” Camp said.

The study, approved by the court in 2021, analyzed more than 1,700 encounters connected to an earlier City University of New York Institute for State and Local Governance review, more than 1,100 additional encounters reviewed by the Monitor team, and nearly 1,800 consent-search encounters from 2023.

AI models developed during the study successfully distinguished lower-level encounters from Level 3 stops—which legally require reasonable suspicion—with accuracy rates ranging from approximately 72% to 91%. Researchers say those tools could help oversight teams identify constitutional concerns faster and more consistently by prioritizing footage most likely to contain problematic interactions.
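For readers curious what classification “based on language alone” can look like in practice, here is a minimal sketch using off-the-shelf scikit-learn components. The transcript snippets and labels are invented for illustration; this is not the study’s actual model or data.

```python
# A minimal sketch of language-based encounter classification.
# The snippets and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented transcript snippets: 1 = Level 3 stop, 0 = lower-level encounter.
transcripts = [
    "stop right there keep your hands where i can see them",
    "good evening folks just checking that everything is okay",
    "you match a description i need to pat you down",
    "do you live around here all right have a good night",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LogisticRegression(),
)
model.fit(transcripts, labels)

# Score new footage transcripts so reviewers can prioritise the
# recordings most likely to contain stops needing scrutiny.
print(model.predict_proba(["i need you to stand against the wall"]))
```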

Researchers emphasized that artificial intelligence is not intended to replace human oversight, but instead serves as a tool to strengthen accountability, improve auditing and support ongoing police reform efforts.

“Our analyses identify troubling patterns in NYPD encounters, but also show a path forward: Body camera footage can be used as data to inform and measure changes in law enforcement,” Camp said.

The study’s authors also include Rob Voigt, assistant professor of linguistics, UC-Davis; Dan Sutton, director of Justice and Safety at the Stanford Law School Center for Racial Justice; and Jennifer Eberhardt, professor of organizational behavior and psychology, Stanford University.

Note: At the time of publication, we have reached out to the NYPD for comment regarding the study’s findings on body-camera analysis and will update this article if a response is received.

This post was originally published on the University of Michigan News and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• Transparency and trust in the age of deepfake ads

• Q&A: Who’s responsible when AI makes mistakes?


by External Contributor via Digital Information World

Thursday, April 30, 2026

Standardised testing and scripted lessons are failing teachers and students alike, education expert warns

By Taylor & Francis

Geoff Masters challenges a system which teaches the same curriculum to children with very different comprehension levels.

Geoff Masters criticizes age-based schooling, advocating personalized learning and teacher autonomy over standardized curricula systems.
Image: Rewired Digital / Unsplash

Is it time to ditch scripted lessons and heavily packed curricula to focus on individual student growth?

This is the question posed by education expert Geoff Masters, who argues that age-based expectations are not serving all children well, while scripted lessons are failing teachers and students alike.

Masters, the former head of the Australian Council for Educational Research, asks how well children are served by a system in which two pupils in the same class can differ by six or more years of learning but are taught the same material.

He argues this system fails children at either end of the scale – those who are struggling and those who are unchallenged. He asks: what if, instead of holding all pupils of the same age to the same learning expectations, we based them on where individuals are in their comprehension and individual growth?

“Too many students in our schools are being poorly served and left behind by machineries of schooling not fit for purpose,” Masters warns.

The problem with standardisation

Masters argues there is a fundamental flaw in the current system: the assumption that all students in the same grade are equally ready to learn the same material.

Research shows that children in the same classroom can have up to a seven-year difference in their reading and mathematics comprehension. This vast variation, Masters argues, is ignored by a system that prioritises standardisation over individual needs.

“By the middle years of school, many students have not learnt what the curriculum expected them to learn much earlier in their schooling,” Masters explains. He cites data showing how, across 38 developed countries, almost a third of 15-year-olds have difficulty demonstrating 5th and 6th grade mathematics content.

The picture in Australia

Masters’ arguments are presented against a backdrop of Australia’s declining performance in international assessments like PISA. Between 2012 and 2022, there was no significant improvement in Australian students’ performances in reading, mathematics or science. In fact, long-term declines have been recorded across all three areas.

“Despite decades of reforms, the machinery of schooling has not delivered the improvements we need,” Masters says. “It’s time to question whether prescribing what every student must learn in each grade of school and testing to see whether they have learnt it is the best way to optimise learning and improve performance.”

Masters also explains how those who start the year behind are likely to stay behind. He explains: “When the curriculum expects all students in a grade to be taught the same content at the same time, those who begin well below grade level are disadvantaged. This disadvantage is compounded when students are required to move from one grade curriculum to the next based on elapsed time rather than mastery. Students who lack essential prerequisites often fall further behind as each grade’s curriculum becomes increasingly beyond their reach.”

The future of learning

Masters instead argues for a system that meets students where they are in their learning, rather than where their age or grade dictates they should be. He proposes replacing age-based expectations with personalised learning plans that track individual growth.

“Improved performance depends on meeting each student where they are with personally meaningful, well-targeted learning opportunities that build on what they already know,” Masters explains. “This approach includes all students, including neurodiverse children and others with special needs.”

This approach would not only benefit students, he suggests, but also empower teachers to use their professional expertise to design tailored learning experiences.

One of the most concerning trends in education, in Masters’ view, is the rise of scripted lessons.

“Scripted lessons turn teaching into the delivery of ready-made solutions created outside the classroom,” Masters says. “They undervalue teachers’ expertise in what is arguably the essence of effective teaching: establishing where individuals are in their learning and designing opportunities to promote further growth.”

Masters calls for a return to professional autonomy, where teachers are trusted to make decisions in the best interests of their students.

Masters envisions a future where education systems embrace diversity and difference.

“Rather than expecting students to fit the expectations of schooling, the challenge is to redesign school structures and processes to better meet the needs of individual learners,” Masters concludes.

Further information: The Children We Leave Behind: How School Could Be Done Differently, by Geoff Masters (Routledge, 2026). ISBN: Paperback 9781041279655 | Hardback 9781041279662 | eBook 9781003757122. DOI: https://doi.org/10.4324/9781003757122

This post was originally published on Taylor & Francis Newsroom and republished on DIW with permission.

Reviewed by Irfan Ahmad.

Read next:

• Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

• The Deadliest Countries for Journalists
by External Contributor via Digital Information World

Some Chrome Extensions With Large User Bases Disclose Data Sale or Sharing Practices in Their Privacy Policies

By Dar Kahllon and Guy Erez - LayerX

Executive Summary:

New research by LayerX Security uncovers multiple networks of browser extensions that collect user data and resell it for profit – and it’s all completely legal. Unlike malicious extensions that disguise themselves as legitimate tools and operate in the dark, these extensions explicitly tell users that they’re going to collect and sell their data. It’s right there in the privacy policy; except that nobody reads it.

LayerX analyzed the privacy policies of thousands of extensions and uncovered over 80 different extensions that collect and sell customer data. Some of these extensions include:

  • A network of 24 media extensions, installed by some 800,000 users, that collects viewing data and demographic information on major streaming platforms such as Netflix, Hulu, Disney+, Amazon Prime Video, HBO, Apple TV, and others
  • 12 separate ad blockers with a combined install base of over 5.5 million users openly selling user data
  • Nearly 50 other extensions, with over 100,000 users in aggregate, that collect and resell users’ browsing data

While browser extensions may seem innocent, these findings highlight the privacy exposure that can arise from unregulated usage of extensions.

The Fine Print That Makes Everything Legal

Privacy policies. Reading them is like watching paint dry. For most users, it’s worse than reading the fine print in their mortgage agreements; and that’s saying something.

Except we did.

LayerX Security researchers Dar Kahllon and Guy Erez analyzed the privacy policies of thousands of browser extensions available in official stores. They were looking for one thing: whether the publisher explicitly reserved the right to sell user data.

And we found them. Our analysis identified at least 80 such extensions, some of them operating in concert and published by the same developer. They range from ad blockers and streaming tools to job application helpers, new-tab extensions, and B2B sales intelligence platforms.

Most of these policies don’t say “we sell your data.” They say “we may sell.” It’s a legal hedge – but it means your data can be sold at any time, and you already agreed to it. Here’s what that looks like in practice:

“We may sell or share your personal information with third parties.”

“This information may be sold to or shared with business partners.”

What? Browser Extensions Have Privacy Policies?!

Well, to be fair, most don’t.

This isn’t a story about malware. Nobody hacked you. Nobody stole anything. The extensions you’re running right now may be selling your browsing data — and they told you they would. It’s right there in the privacy policy. Page 4. Paragraph 7. The one nobody reads.
Figure 1. Privacy Policy Transparency

According to LayerX’s Enterprise Browser Extension Security Report 2026, 71% of all extensions in the Chrome Web Store don’t even publish a privacy policy.

As a result, more than 73% of users have at least one extension installed without a privacy policy, with no transparency into how their data is handled. This means our analysis could only rely on the 29% that do have a privacy policy.

And if we assume that some of those extensions with no privacy policy at all will also resell your data – and there’s no reason to assume they’re better – the real number of extensions that may sell your data across the Chrome Web Store is in the tens of thousands.

How We Analyzed The Data

We built a pipeline to analyze privacy policies associated with browser extensions in official stores, combining automated classification with manual verification.

Starting from roughly 9,000 extensions with privacy policy URLs in our database, we successfully fetched and parsed 6,666 policies.

The pipeline ran in three stages:

  1. First, AI classification flagged policies disclosing the selling, licensing, or commercial transfer of user data. High-confidence matches were marked for review, and every flagged policy was verified manually.

  2. Next, a manual review removed false positives, including: (A) enterprise security tools (e.g., Fortinet, CrowdStrike) that route browsing data to their own servers as part of expected web filtering behavior; (B) standard CCPA ad-retargeting disclosures (e.g., HubSpot, Calendly), where sharing cookies with platforms like Google Ads may technically count as a “sale” under broad definitions; and (C) consensual data monetization platforms (e.g., Swash), where users explicitly opt in and are compensated. The final dataset includes only extensions whose privacy policies indicate genuine commercial sale of user data to third parties.

  3. In the final count, we found 82 unique extensions across 94 store listings. Of these, 75 are currently live in the Chrome Web Store. The remaining 7 have been removed – but “removed” doesn’t mean “uninstalled.” Extensions pulled from the store can stay active in browsers that already have them.

While these figures may seem low, bear in mind that they cover only extensions that publish a privacy policy in the first place (less than one-third of all extensions), and only those that openly state what they do with your data. The true number is almost certainly higher.
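As an illustration of what the first, automated stage of such a pipeline can look like, here is a minimal Python sketch. It substitutes a crude keyword pass for LayerX’s AI classifier, and the URL is a placeholder; anything it flags would still need the manual review described above.

```python
# A minimal sketch of stage 1: fetch a policy and flag sale-of-data
# language. The patterns and URL are illustrative placeholders, not
# LayerX's actual classifier.
import re

import requests

SALE_PATTERNS = [
    r"\bmay sell\b",
    r"\bsell or share\b",
    r"\bsold to or shared with\b",
    r"\blicense\b[^.]*\bpersonal (?:data|information)\b",
]


def fetch_policy(url: str) -> str:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text


def flag_policy(text: str) -> list:
    # Return every sale-of-data pattern that matched, for manual review.
    lowered = text.lower()
    return [p for p in SALE_PATTERNS if re.search(p, lowered)]


if __name__ == "__main__":
    hits = flag_policy(fetch_policy("https://example.com/privacy"))
    if hits:
        print("Flag for manual review:", hits)
```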

Here are a few of our key findings:

The QVI Empire: One Anonymous Publisher, 24 Extensions, 800,000 Users

While reviewing confirmed sellers, a pattern kept surfacing. Different extensions, different streaming platforms, but the same three-letter prefix: QVI, short for “Quality Viewership Initiative.”

What looked like unrelated tools turned out to be a single operation: 24 browser extensions – 21 currently live, 3 removed – covering nearly every major streaming service.

  • Netflix
  • Hulu
  • Disney+
  • Amazon Prime Video
  • HBO Max
  • Peacock
  • Paramount+
  • Tubi
  • Apple TV+
  • Crunchyroll

All published by HideApp LLC, registered at 1021 East Lincolnway, Cheyenne, Wyoming – an address shared by hundreds of other LLCs through a registered agent service – and operating under the brand “dogooodapp.”

The largest extensions in the network:

  • Custom Profile Picture for Netflix (200K users)
  • Hulu Ad Skipper (100K)
  • Netflix Picture in Picture (100K)
  • Ad Skipper for Prime Video (60K)
  • Netflix Extended (60K)

Across all 21 live extensions, the network reaches nearly 800,000 users.

Figure 2. Extension Page in Chrome Store for the “Custom profile picture for Netflix [QVI]” extension

But their privacy policy says something the store listings don’t. These extensions collect extensive information, including:

  • Viewing history
  • Content preferences
  • Platform subscriptions
  • Downloaded content
  • Streaming behavior

They also collect age and gender – and if you don’t provide demographics, they match your email against third-party demographic databases to fill in the gaps.

Figure 3. Data declared as collected by the privacy policy of the “Custom profile picture for Netflix [QVI]” extension

The policy describes selling reports to content creators and studios, streaming platforms, media research firms, and marketing agencies – along with “organizations that purchase anonymized viewing data.”

Put it all together and you’re looking at a distributed audience-measurement system running inside users’ browsers. One anonymous publisher pulling viewing behavior across every major streaming platform, building intelligence about what nearly 800,000 people watch, when, and how they engage with content. None of those users signed up for that. Legally, they accepted the terms when they clicked “Add to Chrome.” Practically, nobody read them.

Ad Blockers That Block Some Ads, And Sell Your Data to Other Ads

We confirmed eight ad blockers that reserve the right to sell or share user information with third parties. Tools people install to stop tracking – selling tracking data instead. Combined, they reach over 5.5 million users.

  • Stands AdBlocker (3M users) sells browsing data to third parties for “market analytics purposes.”
  • Poper Blocker (2M users) discloses selling identifiers, browsing activity, behavioral profiles, and inferred sensitive data – including health conditions, religious beliefs, and sexual orientation, all inferred from the URLs you visit.
  • All Block, an ad blocker for YouTube (500K users), sells anonymized data “for analytical and commercial purposes.” Published by an entity called Curly Doggo Limited, based in London.
  • TwiBlocker (80K users) discloses transferring browsing data to third parties who “process or sell it for analytical purposes.”
  • Urban AdBlocker (10K users) routes browsing data and AI conversations through the BiScience data broker.

If your ad blocker has a privacy policy longer than two paragraphs, read it.

Figure 4. Featured Ad Blocker in Chrome Store

Independent Operators Can Also Sell Your Data

These aren’t the biggest extensions on the list, but they show how far the data-selling model reaches.

  • Career.io Job Auto Apply (10K users) states in its policy that it may use personal data collected from your resume to sell to third parties, including data brokers, for targeted advertising and profiling. A job application tool that sells your resume.
  • Dog Cuties (6K users) is a cute dog wallpaper new-tab extension. Confirmed data seller through the Apex Media network.
  • EmailOnDeck (10K users) is a temporary email service – a tool people use specifically when they don’t want to share their real information. Its policy states it may sell, rent, or share its mailing list.
  • Survey Junkie discloses selling URLs visited, clickstream data, and “modeled information” about consumer preferences to market research agencies, ad agencies, and data analytics providers.
  • Dashy New Tab (10K users) has its Chrome Web Store listing marked “does not sell your data.” Its actual privacy policy marks data as “Sold or Shared: Yes.” We believe this is CCPA compliance language for standard analytics, not commercial data sales – which is why we left it out. But the contradiction between the store listing and the privacy policy is real. If a publisher’s own policy says “Sold or Shared: Yes” and the store listing says the opposite, which one should users trust?

When Your Employees’ Extensions Are Selling Data

Of the 82 confirmed sellers, 29 are B2B sales intelligence tools. Their business is data, so the disclosure itself isn’t a surprise. We’re not counting them alongside the consumer-facing extensions.

But they belong in this conversation. These extensions sit on corporate machines. This means that employee browsing behavior, such as internal URLs, SaaS dashboards, and research activity, flows into commercial databases that your competitors can purchase. The risk isn’t about users being deceived. It’s about corporate data leaving through a channel nobody is watching.

What Security Teams Should Do About This

Most extension security evaluations focus on permissions or known malicious indicators – flagging extensions that request excessive access or match threat intelligence. That catches malware. It doesn’t catch an extension that openly reserves the right to sell your browsing data.

An extension with a data-selling disclosure isn’t a hypothetical risk. It’s a stated business practice, sitting in a document your employees accepted without reading.

Three questions worth asking:

  1. What extensions are installed across employee browsers?
  2. What data do those publishers claim the right to collect or sell?
  3. Could corporate browsing activity be flowing into commercial datasets?

Most browsers already support centralized extension management through enterprise policies – Chrome’s ExtensionSettings, Edge’s group policies, Firefox’s enterprise configurations. If you don’t have an extension governance policy, that’s the first step. If you do, add privacy policy review to the evaluation criteria. Permissions alone don’t tell you enough.
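For reference, blocking a specific extension with Chrome’s ExtensionSettings policy looks roughly like the JSON below. The 32-character extension ID is a placeholder, and real deployments push this configuration through group policy or managed-preferences tooling rather than hand-edited files.

```json
{
  "ExtensionSettings": {
    "*": {
      "installation_mode": "allowed"
    },
    "aaaabbbbccccddddeeeeffffgggghhhh": {
      "installation_mode": "blocked",
      "blocked_install_message": "Blocked pending privacy-policy review."
    }
  }
}
```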

To that end, LayerX added a new filter to detect and filter (and block, if so desired) extensions that either don’t have a privacy policy at all, or reserve the right to sell personal data.

Consider blocking extensions that either disclose selling user data or don’t publish a privacy policy at all.

Figure 5. LayerX Extension Data Privacy Filter

The Bottom Line

Browser extensions are among the web’s most powerful and least scrutinized tools. While much of the focus is on malicious extensions that actively steal user and corporate data, privacy violations may sound mundane but can be just as risky.

Going through and reading the privacy policy of every extension that every user in your organization has installed could mean reviewing hundreds or thousands of documents; clearly, that’s not feasible.

Instead, organizations need to start deploying automated tools that can restrict suspicious extensions and account for privacy settings.

Google was contacted multiple times over two days for comment on the report’s findings and Chrome Web Store policies but did not respond before publication. This article will be updated if a response is received.

This post was originally published on LayerX and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

• Research reveals lack of transparency in ad data of digital platforms
by External Contributor via Digital Information World

Wednesday, April 29, 2026

Asphalt is everywhere, but is it bad for our health?

By Joanna Allhands - Arizona State University

ASU researcher says pavement’s potential impact on our health deserves as much attention as its carbon or energy footprint.

Heat and sunlight worsen asphalt emissions, raising health risks for workers and nearby communities.
Image: Brian J. Tromp / unsplash

If you piled all of Phoenix’s pavement into one spot, it would be enough to cover San Francisco four times over.

Roads, parking lots and other paved surfaces blanket a lot of land — an estimated 40% of Arizona’s capital city.

Pavement absorbs heat during the day and releases it slowly at night, contributing to the urban heat island effect and increasing the amount of energy that cities consume.

But for Elham Fini, a senior scientist affiliated with the Julie Ann Wrigley Global Futures Laboratory at Arizona State University, pavement’s potential impact on our health deserves as much attention as its carbon or energy footprint.

“To make something truly sustainable,” she said, “you cannot ignore the human side of it.”

Asphalt fumes can be hard on health

Fini — a faculty member in ASU’s School of Sustainable Engineering and the Built Environment — spent years studying why asphalt breaks down so quickly.

That work pointed her toward the volatile organic compounds (VOC) that escape from bitumen, the black, sticky petroleum byproduct that holds asphalt together.

Two studies in the Journal of Hazardous Materials and Science of the Total Environment shed light on how the compounds that give asphalt its trademark scent change after sunset and form ultrafine particles, which can worsen air quality.

These carbon-based vapors are continuously released but become more noticeable on hot, sunny days. They can cause dizziness and difficulty breathing in the short term.

Long-term exposure also can elevate the risk of lung cancer, a major concern for construction workers who regularly breathe these fumes without a respirator.

Aging pavement emits toxic vapors

And the impacts could get worse as pavement ages.

Research from Fini and others shows that asphalt begins releasing different, more toxic types of VOC as bitumen breaks down in sunlight and heat.

These toxic, often odorless VOCs are small enough to work their way into arteries and organs.

Tests and a modeling analysis also suggest that they can cause significant neurological damage in humans, particularly among women and the elderly.

“Heat is worsening the situation,” Fini said. “It’s exacerbating the emissions from asphalt.”

More study is needed to understand what level of asphalt-emitted VOC exposure is unsafe.

But what we know so far should raise alarm bells for hot, car-centric cities such as Phoenix.

Goal: Safer asphalt, healthier workers

Fini is working with Dr. Bruce Johnson via a partnership with Mayo Clinic to better understand how asphalt emissions impact respiratory health.

She hopes that their studies will lead to stronger protections for construction workers and surrounding communities, as well as less toxic, lower-emitting asphalt formulations.

Fini has a head start on the latter.

She has teamed up with Peter Lammers, chief scientist at the Arizona Center for Algae Technology and Innovation, to begin growing a strain of algae that could reduce VOC emissions using wastewater from a Phoenix treatment plant.

“It’s a great setup,” said Lammers, a research professor in the School of Sustainable Engineering and the Built Environment, “because we use water that’s far too high in nitrogen and phosphorus to be released anywhere. And instead, we reuse it to grow more algae.”

Fini then bakes that algae at high temperatures without much oxygen into a binder that can be easily mixed into asphalt.

Algae can capture the worst VOCs

A study in the journal Clean Technologies and Environmental Policy found that while algae-infused asphalt doesn’t significantly reduce total VOC emissions, it can effectively keep the most toxic compounds from escaping.

In fact, tests showed that it reduced the toxicity of asphalt emissions by roughly 100-fold.

Algae can slow how quickly pavement breaks down — which could lower construction and maintenance costs and make its inclusion in asphalt even more attractive for cities and paving companies.

Fini is exploring other binder options, including a product made from the leftover branches of forest-thinning projects, and working with Phoenix to pave a section of road with algae-infused asphalt.

Because VOCs from pavement are often left out of air quality assessments, these real-world tests are critical to evaluate pavement performance and its long-term environmental impact.

“We have 4 million miles of roads in America,” Fini said. “We should make those 4 million miles do more for us than just get from A to B.”

This research was done in collaboration with colleagues from the following institutions: Emory University; Dalian University of Technology, China; Mayo Clinic Arizona; Oregon State University; University of Chicago; University of Lille, France; University of Littoral Côte d′Opale, France; University of Miami; University of Missouri; University of Utah.

Reviewed by Irfan Ahmad.

This post was originally published on Arizona State University News and republished here with permission.

Read next:

• Half of AI health answers are wrong even though they sound convincing – new study

by External Contributor via Digital Information World