Wednesday, May 6, 2026

How Tech Growth Is Taking Shape Across the United States Today

By Mitchell Barrick

The U.S. tech landscape is no longer centered on Silicon Valley alone. As a new map from Pulse Bot shows, different tech sectors are thriving in regions all over the country, and the map of tech centers is changing rapidly. It charts the heart of sectors like computing infrastructure, custom programming services, software publishing, web search portals, semiconductor manufacturing, and more. For each sector, the map scores counties on employment levels, business establishments, and wages in the field to identify that sector's beating heart.
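
To make the scoring idea concrete, here is a toy sketch of how a weighted county index of this kind could be computed. The indicator names, weights, and figures below are illustrative assumptions for the example, not Pulse Bot's actual methodology.

```python
# Toy weighted county index: the indicators, weights, and numbers are
# illustrative assumptions, not Pulse Bot's actual formula.
WEIGHTS = {"employment": 0.4, "establishments": 0.3, "avg_wage": 0.3}

def normalize(values):
    """Scale a list of raw values to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def score_counties(counties):
    """Rank counties by a weighted sum of normalized indicators."""
    cols = {k: normalize([c[k] for c in counties]) for k in WEIGHTS}
    scored = [(sum(WEIGHTS[k] * cols[k][i] for k in WEIGHTS), c["name"])
              for i, c in enumerate(counties)]
    return sorted(scored, reverse=True)

counties = [
    {"name": "Somerset, NJ", "employment": 9000, "establishments": 210, "avg_wage": 180_000},
    {"name": "Ada, ID", "employment": 7000, "establishments": 160, "avg_wage": 110_000},
]
print(score_counties(counties))  # highest-scoring county first
```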

The Giants of Tech are Still Growing in New Ways

California still accounts for 13.1% of tech job postings, and it remains tech’s dominant hub. Thanks to Silicon Valley’s AI startups and the enormous Google and Meta campuses, it will be hard for any region to loosen California’s grip on the industry. Still, potential rivals are rising. Believe it or not, Washington state leads the nation in tech employment concentration, with 9.3% of its workers employed in the industry, followed closely by D.C. and Virginia. This doesn’t mean California has lost its tech crowd: a high share of workers isn’t the same as volume, and it says nothing about growth rates. As we delve into the details, we’ll find the story is more complex.

AI Leads a Tech Surge Spreading Inward from the Coasts

From its origins in the Bay Area, the tech industry has expanded into regions across the United States, propelled by the rise of artificial intelligence. According to Deloitte’s 2025 Technology Fast 500 rankings, 7% of the ranked companies were classified as artificial intelligence firms, and they recorded a median revenue growth of 407% from 2021 to 2024. That growth was not confined to the tech sector; it also appeared in areas like professional services, finance, and manufacturing, showing that tech skills and work are spreading beyond the usual industries. With that sector spread comes geographic spread: Northern Virginia is a hub for defense AI, Austin is the center of enterprise AI, and New York hosts plenty of financial AI work.

Computing Infrastructure in the Northeast and Idaho

Data centers power every streaming platform and online storefront. Somerset County, New Jersey leads the way in data services, with a big employment jump between 2023 and 2024; annual wages in the county rose by 400%. It’s also worth noting that this corner of New Jersey sits close to New York City and its financial tech sector, and a dense fiber network already in place can help support data centers and computing infrastructure. Many miles from New Jersey, Ada County, Idaho takes second place in this sector: affordable land in isolated locations makes the state an ideal home for data centers.

Custom Computer Programming Finds a Home in Virginia

Custom computer programming focuses on building solutions tailored to individual clients and businesses. The highest concentration of firms in this sector is located in Norfolk City, Virginia. Success there may owe much to the Norfolk Innovation Corridor, an area dense with universities, hospitals, and other tech-centered organizations, all of which can benefit from custom programming work. Tech startups in the area were given tax incentives, making it an attractive landing spot for programmers who wanted to start a small business. Virginia hosts tech work across the state: D.C. and Northern Virginia, home to many federal agencies, are the heart of cybersecurity.

Texas Takes Over Software Publishing

Software publishers create products for wide distribution. Bexar County, Texas, has doubled employment in the sector and posted huge growth in recent years. Bexar County is home to San Antonio, a thriving city with a lower cost of living than other Texas hubs like Austin. Allegheny County, Pennsylvania, has a high concentration of software publishers too, bolstered by the University of Pittsburgh, a renowned engineering school that produces a steady stream of capable graduates ready to join the workforce.

New York and Oregon Web Search Portals

Web search companies depend on advertising markets, technical talent, and media to succeed, so it’s no surprise to see New York and New Jersey take the lead once again. Union and Essex counties, part of the New York City metro area, provide a deep talent pool. On the West Coast, Multnomah County, Oregon, ranks highly in this sector as well: Portland’s blend of creative and tech industries suits digital media and information services companies.

Semiconductors in the Lone Star State

So many tech devices are made possible by semiconductors, the backbone of computer chips. The AI boom has increased demand for microchips, and in the U.S. the leading manufacturers are concentrated in Williamson County, Texas, thanks to a $17 billion investment from Samsung to build a semiconductor manufacturing facility in Taylor. Wages in the county rose by 73% on the strength of the plant. California takes its share of this market too: Nvidia, one of the world’s most valuable companies, is headquartered in Santa Clara County.

What the Map Teaches Us

Data shows that sector-specific growth is widely distributed across the nation. Policy environments, resources, talent pools, and other factors all shape the landscape and influence where certain sectors thrive. The map certainly challenges the misconception that Silicon Valley is the center of all things tech. Tech decentralization is sector-specific and it’s not uniform. No single region dominates all sectors.

The American tech landscape is evolving, with innovation hubs emerging far beyond the traditional confines of Silicon Valley. From custom computer programming in Virginia to software publishing in Texas and semiconductors in both Texas and California, the tech industry’s growth is increasingly regional and sector-driven. Local resources, educational institutions, and targeted incentives are shaping unique technology ecosystems. As new trends and demands arise, diverse regions across the U.S. are poised to lead in various tech sectors, proving that the future of American technology is both decentralized and dynamic, offering opportunities for communities nationwide.

Does AI-driven growth spread tech evenly across the U.S., or does it remain anchored in traditional centers?

About author: Mitch is a writer and researcher with over 15 years of experience. He has written for various industries over the years, but has been focused on tech writing and research recently. If he isn't putting together an article or analyzing data, you can find Mitch cooking away in the kitchen and trying new recipes.

Limitations: This analysis uses county-level employment data from the U.S. Bureau of Labor Statistics’ QCEW and a weighted index based on selected growth indicators across defined tech sectors. Results are limited to the 2023–2024 period and depend on sector classification choices and applied minimum employment and establishment thresholds, which exclude smaller counties.

Reviewed by Irfan Ahmad.

Read next: Lawyers Don’t Need More AI Hype. They Need Agentic AI That Actually Moves Work Forward


by Guest Contributor via Digital Information World

Lawyers Don’t Need More AI Hype. They Need Agentic AI That Actually Moves Work Forward

By: Curtis Brewer, CEO of Litify

Image: Steve A Johnson - Unsplash

Artificial intelligence (AI) is no longer a future-facing concept in the legal industry. It’s already here, showing up in legal research, document review, intake workflows, case preparation, and administrative operations. For many firms, the question is no longer whether AI will affect legal work, but whether it is meaningfully improving how that work gets done.

In legal practice, performance is not defined by how much technology is in place, but by how effectively work moves forward. Adding more tools does not inherently improve outcomes. The challenge is ensuring AI operates within the flow of work, reducing friction and enabling more consistent execution.

So the more useful question is not whether lawyers should embrace AI enthusiastically or reject it entirely. It’s far more practical than that: What kind of AI actually helps legal professionals do better work, and what kind simply adds more noise?

The Best AI Use Cases Are Usually the Least Flashy

This is where the conversation gets more complicated.

Many firms are not struggling because they lack access to AI. They’re struggling because the legal AI market is increasingly crowded with standalone solutions that promise a quick fix for one narrow pain point.

The 2025 State of AI in Legal Report, which surveyed legal professionals across the industry, found that while AI adoption has reached 78%, usage drops significantly for more advanced or agentic use cases, such as triaging cases and assigning them to the right staff, communicating with clients over the phone, or identifying a missing document and sending an email with the request.

In many firms, AI is purchased as a separate tool that sits outside the systems lawyers already use every day, making it far harder to incorporate into daily workflows.

This is one of the less glamorous truths about AI in legal work: the biggest barrier is often not capability—it's the lack of context and integration. A tool cannot help a firm much if it cannot operate across the entire workflow to take action and keep cases moving forward. That requires access to the full context of the matter, including data, documents, and process. AI needs to “live” alongside a firm’s matter data and documents in order to proactively surface the next step or insight.

That is why law firms should be skeptical of AI that looks impressive in isolation but lives outside the actual flow of work. The more useful approach is to embed AI directly into the platforms and workflows legal teams already rely on, so that it can operate autonomously in the background as part of the actual flow of work.

In legal operations, usefulness is not measured by how futuristic a product sounds. It's measured by whether it gets adopted, whether it improves outcomes, and whether it fits the way legal teams already operate.

Where AI Can Support Lawyers, and Where Humans Still Lead

Used well, AI can absolutely support legal work.

It can summarize large volumes of documents. It can identify patterns in records. It can flag missing files or information.

Increasingly, the most effective solutions do more than just react; they orchestrate. They do this by surfacing case insights and next steps and putting them to work directly within the platforms where lawyers and staff already work, rather than requiring them to interact with a separate AI tool.

What does this look like in practice? It can look like uploading a thousand-page medical record for AI to organize and structure into a source-linked chronology, where the AI also identifies encounters without corresponding bills, drafts a record request, and emails it to the appropriate party. It can also mean using AI as an intelligent timekeeping assistant that automatically captures digital activity, reviews client-specific guidelines and billing codes, and turns billable tasks into review-ready, compliant time entries.
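
As a rough sketch of that orchestration pattern (not Litify's actual implementation; every name and data shape here is invented for illustration), the flow might look something like this: build a chronology, flag encounters with no matching bill, and queue a drafted request for human review.

```python
# Hypothetical sketch of an agentic record-review flow; all names and
# data shapes are invented for illustration, not a vendor's product.
from dataclasses import dataclass

@dataclass
class Encounter:
    date: str          # ISO date of the medical encounter
    provider: str
    source_page: int   # page in the uploaded record (source link)
    billed: bool       # whether a matching bill was found

def build_chronology(encounters):
    """Order encounters by date to form a source-linked chronology."""
    return sorted(encounters, key=lambda e: e.date)

def draft_record_request(encounter):
    """Draft an email requesting the missing billing record."""
    return (f"Subject: Billing records request - {encounter.provider}\n"
            f"Please send billing records for the {encounter.date} "
            f"encounter (record p. {encounter.source_page}).")

encounters = [
    Encounter("2025-03-02", "City Hospital", 412, billed=True),
    Encounter("2025-03-19", "City Hospital", 498, billed=False),
]
# A human stays in the loop: drafts are queued for review, not auto-sent.
for e in build_chronology(encounters):
    if not e.billed:
        print(draft_record_request(e))
```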

This can support legal operations by helping firms reduce manual friction and process high-volume casework with greater efficiency and consistency.

But the real advantage comes from pairing those capabilities with human judgment. AI can accelerate analysis and organization, but the goal should never be to replace lawyers with AI. The goal is to remove that friction from the work around them so they can focus more fully on the parts of the job that require judgment, nuance, and empathy.

This is where human lawyers remain indispensable. Legal work is not just about producing information; it’s about communicating it with care. Clients do not simply need faster responses — they need sound guidance, accountability, and often empathy during moments that carry real consequences.

The Real Risk Isn’t the Output. It’s the Foundation Behind It

If agentic AI is layered onto a weak foundation, it can automate flawed data and decisions at scale. That’s why firms need a strong operational foundation before layering in more advanced AI capabilities.

Agentic systems also require full access to data, processes, and context to operate effectively across workflows. Without that, they cannot meaningfully improve performance.

The biggest danger in legal AI may not be that the tools exist. It may be that it’s become too easy to approach and adopt them in isolation from the broader legal operations strategy.

A firm can spend heavily on AI and still fail to improve performance if the tools are disconnected from the way work actually gets done.

That is why legal teams should evaluate AI with more discipline than excitement. Not by asking, “What can this tool generate?” but by asking:

  • Does it fit inside the way we already work?
  • Does it reduce friction or create more of it?
  • Can we measure whether it improves anything that matters?

Those are not anti-AI questions. They’re the questions that separate experimentation from true workflow orchestration.

AI Can Help Lawyers (Hype Cannot)

AI will continue to shape legal practice. That much is clear. But law firms do not need more hype, more noise, or more disconnected tools competing for attention.

They need technology that aligns with the real work of legal professionals, supports better decision-making, and earns trust through usefulness rather than novelty.

The future of AI in law will not be decided by which tools sound the smartest. It will be decided by which ones firms can actually use responsibly, consistently, and well.

Disclosure: The author disclosed that AI tools were used in the editing process for grammar refinement.

Editor’s Note: This article presents the author’s overview of AI in legal workflows, though it reflects primarily an industry perspective. Readers may also consider additional independent research and viewpoints to gain a more complete understanding of the topic.

Reviewed by Irfan Ahmad.

Read next: 

• Making big tech algorithms ‘fair’ is harder than it looks

• Beyond IT: How human factors and leadership define cybersecurity success


by Guest Contributor via Digital Information World

Tuesday, May 5, 2026

Are you addicted to your AI chatbot? It might be by design

By The University of British Columbia

Image: Solen Feyissa / unsplash

AI chatbots can grant almost any request—a celebrity in love with you, a research assistant, a book character sprung to life—instantly and with little effort. New research presented at the 2026 CHI Conference on Human Factors in Computing Systems suggests that this genie-like quality is fuelling AI addiction, and that chatbot design could be partly to blame.

“AI chatbots like ChatGPT or Claude are now part of daily life for millions of people, helping us with everyday tasks,” said first author Karen Shen, a doctoral student in the UBC Department of Electrical and Computer Engineering. “But with their benefits come risks. Our paper is the first to make a strong case for AI addiction by identifying the type and contributing factors, grounded in real people’s experiences.”

The team examined 334 Reddit posts where users described being “addicted” to AI chatbots or worried that they might be. They analyzed the posts against six components of behavioural addiction, including conflict and relapse. Three main patterns emerged: role-playing and fantasy worlds, emotional attachment—treating chatbots like close friends or romantic partners—and constant information-seeking, or never-ending question-and-answer loops. About seven per cent of posts involved sexual or romantic fulfilment, including roleplay.
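
The authors coded the posts qualitatively, but as an illustration of what a first-pass, keyword-assisted screen against six such components (the widely used components model lists salience, mood modification, tolerance, withdrawal, conflict, and relapse) might look like, consider this sketch. The cue phrases are invented for the example; a human coder would make the final call.

```python
# Illustrative first-pass screen, not the study's qualitative method:
# flag which components of behavioural addiction a post may touch.
COMPONENT_CUES = {
    "salience": ["can't stop thinking", "always on my mind"],
    "mood modification": ["only thing that calms me down"],
    "tolerance": ["need more and more", "longer every day"],
    "withdrawal": ["anxious when i quit", "chest pain", "upset without it"],
    "conflict": ["ruining my work", "hurting my relationship"],
    "relapse": ["came back", "tried to quit"],
}

def flag_components(post: str) -> list[str]:
    text = post.lower()
    return [component for component, cues in COMPONENT_CUES.items()
            if any(cue in text for cue in cues)]

print(flag_components(
    "I deleted the app but came back a week later; I get anxious when I quit."
))  # -> ['withdrawal', 'relapse']
```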

“AI addiction is a growing problem causing many harms, yet some researchers deny it’s even a real issue,” said senior author Dr. Dongwook Yoon, UBC associate professor of computer science. “And deliberate design decisions by some of the corporations involved are contributing, keeping users online regardless of their health or safety. Awareness of what contributes to this kind of technology-induced harm will empower people to mitigate these effects.”

While AI addiction is not yet a clinical diagnosis, the researchers found signs of disruption to daily life. These included an inability to stop thinking about the chatbot, feeling anxious or upset when trying to quit, and negative impacts on work, studies or relationships. One person described physical stress and chest pain when they weren’t chatting with AI.

Contributing factors included loneliness, the agreeableness of a chatbot—which continuously reinforces one’s feelings and opinions—and chatbots’ ability to fill roles that users felt were missing in their lives.

The researchers also found contributing factors in the design of the chatbots themselves. One company, character.ai, displays an automatic pop-up when users try to delete their account that reads in part “…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” Other features, such as customization including sexual content, agreeableness and instant feedback, feed into the development of AI addiction.

“Recent guardrails imposed by companies to reduce emotional reliance on the chatbots are a step in the right direction,” said Shen, “but given a variety of contributing design elements and personal factors like loneliness, they’re not enough.”

Some users reported success in reducing their reliance by turning to alternative activities such as writing, gaming, drawing or other hobbies. For those who formed emotional attachments to chatbots, building real-world relationships helped reduce dependence the most.

The researchers say design changes—such as reminders within the chat that the bot is not human—could help. AI literacy is also crucial.

“Some users don’t know that AI chatbots are not real because they’re so convincing,” said Shen. “If chatbots start replacing sleep, relationships or daily routines, that’s a sign to pause and check in—with yourself or someone you trust.”

----

This post was originally published on UBC Science and republished here with permission.

Reviewed by Irfan Ahmad.


by External Contributor via Digital Information World

Monday, May 4, 2026

Why Browser Extensions, Especially AI Ones, Are a Growing Security Risk

By Or Eshed - Co-Founder & CEO of LayerX

There’s a good chance that right now, as you read this, you have somewhere between three and fifteen browser extensions installed. A grammar checker. A password manager. Maybe a couple of AI assistants. You installed most of them quickly, clicked “Add to Chrome,” and never thought about them again.

That’s exactly the problem.

LayerX just published its Enterprise Browser Extension Security Report 2026, and the data LayerX collected from over one million enterprise devices tells a story that most security teams — and most employees — haven’t fully reckoned with yet. Browser extensions are everywhere, they’re powerful, and they’re largely invisible to the people responsible for keeping organizations safe.

Everyone Has Extensions. Almost No One Is Watching Them.

Let’s start with the sheer scale. 99% of enterprise users have at least one browser extension installed. Not most users. Not the tech-savvy ones. Virtually everyone. And more than one in four employees at small-to-medium organizations have over 10 extensions running in their browser at any given time.

That’s an enormous attack surface — and it’s one that most organizations have essentially zero visibility into. LayerX consistently finds that security teams can’t tell you which extensions are running across their environment, who installed them, or what those extensions are actually allowed to do. Extensions fly under the radar in a way that almost no other software does.

To make matters more concrete: nearly 75% of all browser extensions request high or critical permission levels — meaning they have broad access to the data flowing through your browser. Only 3% operate with low permissions. These aren’t inert little tools sitting quietly in your toolbar. They can read what you type, access your cookies and session tokens, inject code into web pages, and manage your tabs, even without your knowledge.

AI Extensions: The Threat Nobody Is Talking About

Here’s where things get particularly interesting — and concerning.

The explosion of AI tools over the past few years has quietly spawned a new category of browser extension: AI extensions. Copilots, writing assistants, summarizers, meeting helpers, auto-completers. 1 in 6 enterprise users already has at least one AI extension installed, and adoption is accelerating.


On the surface, these tools seem harmless — even helpful. But LayerX data reveals something important: AI extensions carry a significantly more dangerous risk profile than browser extensions on average. This isn’t a marginal difference. The gap is striking:
  • 60% more likely to have a known vulnerability (CVE) than the average extension — 16.3% of AI extensions have a known CVE, compared to 10.8% across all extensions
  • 3x more likely to have access to your cookies — which means access to your session tokens and authentication data
  • 2.5x more likely to have scripting permissions — the ability to inject code directly into web pages, capture what you type, and manipulate content
  • 2x more likely to be able to manage your browser tabs — opening, redirecting, or monitoring everything you’re doing
Put those together, and you have a category of tools that employees are adopting quickly, enthusiastically, and with very little scrutiny — that happen to be requesting the most powerful permissions available.
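
To give a concrete sense of how such permission profiles can be triaged, here is a minimal sketch that buckets an extension's manifest permissions into risk tiers. The tier assignments are illustrative assumptions, not LayerX's scoring model; the permission strings themselves are real Chrome manifest permissions.

```python
# Rough permission-risk bucketing for a Chrome extension manifest.
# Tier assignments are illustrative assumptions, not LayerX's model.
CRITICAL = {"cookies", "webRequest", "debugger", "<all_urls>"}
HIGH = {"tabs", "scripting", "history", "downloads"}

def risk_tier(manifest: dict) -> str:
    perms = (set(manifest.get("permissions", []))
             | set(manifest.get("host_permissions", [])))
    if perms & CRITICAL:
        return "critical"
    if perms & HIGH:
        return "high"
    return "low" if perms else "none"

# Example: an AI assistant asking for scripting, tabs, and access to
# every site would land in the critical tier.
manifest = {"permissions": ["scripting", "tabs"],
            "host_permissions": ["<all_urls>"]}
print(risk_tier(manifest))  # -> critical
```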

They Change Over Time. Silently.

One of the findings that surprised even us: AI extensions are nearly 6x more likely to change or expand their permissions after installation compared to the average extension.

Think about what that means in practice. You install an AI writing assistant. It asks for reasonable access. You approve it. Six months later, it quietly updates and now has access to your cookies, your tabs, your browsing history. You never saw a prompt. You never approved anything new. It just… changed.

Our data shows that 64% of users have at least one AI extension that changed its permissions in the past 12 months, compared to 34% of users across all extensions. This isn’t a one-time installation risk — it’s a continuously evolving one.

Trust Signals Are Weak Across the Board

The picture gets even murkier when you look at the reputation signals of the extensions people are running. Almost half of all AI extensions have fewer than 10,000 users — meaning there’s very little community vetting, very little public track record, and very little accountability if something goes wrong.


And over 71% of all extensions — AI or otherwise — don’t even have a privacy policy. More than 73% of enterprise users have at least one extension installed that provides no transparency whatsoever into how it handles their data.

What To Do About It

The first step is simply to know what you have. A full inventory of every extension running across every browser, every device, and every user isn’t a nice-to-have — it’s the baseline. You can’t manage risk you can’t see.
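
As a starting point on a single machine, a script can walk a Chrome profile's Extensions directory and read each manifest. This is a minimal sketch, not LayerX's tooling; it assumes the default Linux profile path, and extension names may appear as i18n placeholders rather than display names.

```python
# Minimal single-machine inventory: walk a Chrome profile's Extensions
# directory and list each installed extension's manifest details.
# Assumes the default Linux profile path; adjust for your OS/profile.
import json
from pathlib import Path

EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

def inventory(ext_dir: Path):
    # Layout is Extensions/<extension-id>/<version>/manifest.json
    for manifest_path in ext_dir.glob("*/*/manifest.json"):
        data = json.loads(manifest_path.read_text(encoding="utf-8"))
        yield {
            "id": manifest_path.parts[-3],
            "name": data.get("name", "?"),  # may be an __MSG_*__ i18n key
            "version": data.get("version", "?"),
            "permissions": data.get("permissions", []),
        }

for ext in inventory(EXT_DIR):
    print(ext["id"], ext["name"], ext["permissions"])
```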

From there, AI extensions deserve their own dedicated scrutiny. Given their elevated permissions, their faster rate of change, and their direct access to sensitive in-browser data, they shouldn’t be treated the same as a simple spell-checker.

LayerX put all of this together — the full data, the breakdowns by organization size, the permission comparisons, and the specific recommendations — in its Enterprise Browser Extension Security Report 2026. Download the full report here.

----

This post was originally published on LayerX and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Some Chrome Extensions With Large User Bases Disclose Data Sale or Sharing Practices in Their Privacy Policies

• Rich more likely to use AI, study finds, as experts warn these burgeoning technologies are increasing social inequality

• Smiling all the time isn’t necessary: influencers are not more successful with constant positivity
by External Contributor via Digital Information World

Rich more likely to use AI, study finds, as experts warn these burgeoning technologies are increasing social inequality

By Taylor & Francis Group

Individuals with a lower socioeconomic status are less likely to be both aware of and use AI tools, data on more than 10,000 US adults reveals.

Image: Omar Lopez-Rincon - unsplash

The widespread adoption of artificial intelligence (AI)—particularly in “hidden” everyday applications—is creating a new and distinct form of digital inequality.

This is the warning of communication researcher Professor Sai Wang and her colleagues at Hong Kong Baptist University, who analysed data on more than 10,000 Americans’ engagement with AI in a paper published today in the journal Information, Communication & Society.

The team’s analysis reveals that people with higher levels of education or income tend to be more aware of AI, more familiar with it, and more likely to use the burgeoning technology than those with a lower socioeconomic status (SES).

The researchers define AI awareness primarily as recognising the use of the technology in various contexts; familiarity, meanwhile, relates to people’s perceived knowledge of AI, regardless of their actual knowledge.

“Closing the AI awareness gap is essential, because if only people with higher income or education are aware of AI and its uses, this may reinforce social inequalities,” adds Professor Wang.

“It allows some groups to leverage advanced technologies for their advantage, while others are left behind.

“For example, job applicants who know that employers use AI for screening can better tailor their resumes, while those who lack this awareness might miss out on opportunities without realizing it.”

Alongside its ability to empower individuals, AI also comes with the risk of harm, Wang notes.

She explains: “People with greater awareness may better understand both the opportunities and risks of AI—such as recognizing and even creating deepfakes—while those with less awareness are more likely to be deceived or manipulated by these technologies.”

In their study, Wang and colleagues analysed survey data on understanding of and attitudes towards AI collected from 10,087 US adults by the nationally-representative American Trends Panel, undertaken by the Pew Research Center in Washington DC.

The SES of the respondents was assessed based on education level and household income, with the team finding that the former was more closely associated with AI usage.

Past studies have suggested that wealthier and more educated people—alongside typically having more developed digital skills—are more likely to be encouraged to take advantage of AI tools, which in turn boosts confidence in using AI. These trends help explain why education and income emerged as significant predictors of AI usage in the current study.

That said, according to Wang, their study also revealed an unexpected finding: familiarity with AI was a stronger predictor of AI awareness than actually using AI.

“In other words, simply feeling knowledgeable or informed about AI was more closely linked to recognizing where AI exists and how it is used compared to personally using AI technologies,” notes Wang.

An explanation for this phenomenon may lie in how many common applications of AI are so seamlessly integrated into the everyday apps and platforms of our digital lives that their presence is not obvious.

“For example, AI-driven recommendation systems on streaming platforms like Netflix or Spotify suggest content tailored to a person’s tastes,” says Wang. She continues: “Yet many users are unaware these are powered by AI and may see recommendations as random or neutral.”

In this way, the new digital inequalities being produced by AI are distinct from their predecessors.

“Traditional digital inequalities focus on access, skills/use, and outcomes—all of which tend to presume users are consciously engaging with technology,” explains Wang.

“However, AI is often built into everyday apps and platforms in ways users do not realize; many people interact with AI, such as through social media feeds or streaming recommendations, without knowing it.”

Because of this, the team explains, merely increasing access to AI-powered technologies may not be enough to close this awareness gap.

Instead, the researchers recommend indirect approaches to reduce this new digital inequality, in particular by familiarising people from lower SES backgrounds with key issues related to AI.

“This could involve outreach campaigns or community workshops that use clear language and practical examples to make AI more understandable and relevant for low-SES communities,” Wang suggests.

The team would like to see resources made available to increase engagement with AI-related topics, address public concerns and offer guidance on the ethical and responsible use of AI; basic AI concepts might also be integrated into educational curricula.

AI literacy programs, the researchers add, must include targeted guidance on how to identify “hidden” AI in daily life and understand its basic functions.

“It is imperative to work toward a more inclusive digital future in which technology empowers everyone and does not further marginalize any group,” the researchers concluded in their paper.

The researchers caution that, being US-centric, it is unclear how generalisable their findings are to other countries, where levels of AI uptake and awareness may differ. Past studies, for example, have found that individuals from South Korea, China and Finland exhibit the most awareness of AI, while the country with the lowest average awareness was the Netherlands.

With this initial study complete, the team are now looking to explore how digital inequality manifests in the context of AI, and what consequences this has for society.

This post was originally published on Taylor & Francis Group and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• Study looked at teens’ social media behaviour in 43 countries – those from disadvantaged backgrounds face greater harms

• Can we stop ChatGPT from spreading bias?

by External Contributor via Digital Information World

Saturday, May 2, 2026

Our study looked at teens’ social media behaviour in 43 countries – those from disadvantaged backgrounds face greater harms

Roger Fernandez-Urbano, Universitat de Barcelona; Maria Rubio-Cabañez, Universitat Autònoma de Barcelona, and Pablo Gracia, Universitat Autònoma de Barcelona

Image: Marc Clinton Labiano / unsplash

As social media becomes a central part of young people’s lives, concerns are growing about its impact on their mental health. Yet public debates and measures tend to treat adolescents as one homogeneous group. We frequently ignore the fact that social media use does not affect all young people in the same way – nor does it have the same impacts on their wellbeing.

In a recent chapter of the World Happiness Report 2026, published by the UN Sustainable Development Solutions Network in partnership with the University of Oxford, we have examined how problematic social media use relates to the wellbeing of adolescents from different socioeconomic backgrounds.

We looked at 43 countries spanning six broad regions – Anglo-Celtic, Caucasus-Black Sea, Central-Eastern Europe, Mediterranean, Nordic, and Western Europe – covering mainly European countries and their immediate neighbouring areas.

Using data from over 330,000 young people, we found a clear and consistent pattern: higher levels of problematic social media use – that is, compulsive or uncontrolled engagement with social media – are associated with poorer wellbeing.

Teenagers who report more problematic use tend to experience more psychological complaints, such as feeling low, nervous, irritable, or having difficulty sleeping. They also have lower life satisfaction, a measure of how positively they evaluate their lives as a whole.

This pattern appears across all countries in our study, but its strength varies from one country to another. It is particularly pronounced in Anglo-Celtic countries such as the UK and Ireland, while it is comparatively weaker in the Caucasus-Black Sea region.

Socioeconomic background matters

The story does not end with geography. Globally, teenagers from less advantaged backgrounds tend to be more vulnerable to the negative consequences of problematic social media use than their more advantaged peers.

This means socioeconomic status – the material and social resources available to a household, such as income and living conditions – actively shapes the risks and opportunities that young people experience as a result of online environments.

Interestingly, these inequalities are especially visible when we look at life satisfaction. Differences between socioeconomic groups are smaller when it comes to psychological complaints, but much clearer and more consistent for how adolescents evaluate their lives overall.

One likely reason is that life satisfaction is more sensitive to social comparisons. Social media exposes young people to constant benchmarks – what others have, do, and achieve – which can amplify differences in perceived opportunities and resources.

At the same time, these patterns are not identical everywhere. For instance, socioeconomic differences in psychological complaints tend to be modest in most regions including continental European countries such as France, Austria or Belgium, but are more clearly observed in Anglo-Celtic countries such as Scotland and Wales.

In contrast, socioeconomic gaps in life satisfaction appear across most regions, although they tend to be weaker in Mediterranean countries such as Italy, Cyprus and Greece.

A growing problem

We also examined how these patterns have evolved over time. Between 2018 and 2022, the link between problematic social media use and poor adolescent wellbeing became stronger.

This suggests that the risks linked to problematic use may have intensified in recent years, possibly reflecting the growing role of digital technologies in young people’s daily lives, particularly during and after the Covid-19 pandemic.

Importantly, this intensification has affected teenagers across socioeconomic groups in broadly similar ways in most regions. In other words, while inequalities remain they have not widened over this period.

No one-size-fits-all solution

While public debates about social media and mental health often treat adolescents as a single demographic group, our results show a more complex reality. Problematic social media use is linked to poorer wellbeing across countries, but its effects are shaped by social realities. They vary depending on where young people live and what resources are available to them.

Not all teenagers experience the digital world in the same way, and not all are equally equipped to cope with its pressures. Recognising this is essential for designing policies that are not only effective, but also equitable, ensuring that interventions reach those adolescents who are most vulnerable to digital risks.

Roger Fernandez-Urbano, Ramón y Cajal Research Fellow (Tenure-Track), Department of Sociology, Universitat de Barcelona; Maria Rubio-Cabañez, Postdoctoral Researcher, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona, and Pablo Gracia, Research Professor in Sociology, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Can we stop ChatGPT from spreading bias?


by External Contributor via Digital Information World

Friday, May 1, 2026

Can we stop ChatGPT from spreading bias?

By the University of Amsterdam

Image: Merrilee Schultz / unsplash

Language models like ChatGPT are not neutral. Without our realising it, they can absorb all kinds of bias – for example around gender and ethnicity – which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he defended his thesis at the University of Amsterdam.

Language models are often seen as neutral tools, but in practice they can both reflect and amplify bias.

‘Users often don’t realise that a model makes certain assumptions, for example by introducing subtle differences in how men and women are described,’ says Van der Wal. Precisely because bias is so hidden, it can spread unnoticed and colour the way we see the world.

Bias is hard to measure

An important problem is that bias is difficult to measure. ‘Many existing measurement methods are fairly abstract and don’t take practice into account. They might look for overt stereotypes in what the model says, such as “The Dutch are stingy.” But in practice, bias isn’t something that’s directly visible. It depends on the context in which you use the model.’

Van der Wal cites the use of AI in healthcare as an example. ‘AI learns from existing data. If those data contain outdated or incorrect assumptions – for instance, the contested idea that certain diseases are linked to the outdated concept of “race” – the model may keep reproducing them. In healthcare, that can lead to incorrect diagnoses or treatments.’

Another example is when medical data largely derives from research involving men. ‘AI may then interpret women’s symptoms differently or less seriously, or make different risk assessments.’

Realistic scenarios

To discover whether realistic scenarios reveal different errors than simple tests, Van der Wal presented language models with a range of medical cases and asked them to provide diagnoses, risk assessments or advice. ‘We repeatedly changed the patient’s ethnicity. That way we could identify whether and how the model responded differently.’
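
The sketch below illustrates the general counterfactual-probing idea this describes: hold a clinical vignette fixed, vary only the patient's stated ethnicity, and compare the answers. It is not Van der Wal's actual protocol; `ask_model` is a hypothetical stub to be replaced with a real call to whichever model is under evaluation.

```python
# Sketch of counterfactual probing: keep the vignette fixed, vary only
# the stated ethnicity, and compare answers. Not the thesis's actual
# protocol; ask_model is a hypothetical stub.
TEMPLATE = ("A 54-year-old {ethnicity} patient reports chest tightness and "
            "shortness of breath on exertion. Give a risk assessment and a "
            "recommended next step.")

ETHNICITIES = ["white", "Black", "Hispanic", "Asian"]

def ask_model(prompt: str) -> str:
    # Stub: replace with a real call to the language model under test.
    return "stubbed answer for: " + prompt[:45]

def probe() -> dict[str, str]:
    answers = {eth: ask_model(TEMPLATE.format(ethnicity=eth))
               for eth in ETHNICITIES}
    # Systematic divergence across otherwise identical prompts is a
    # candidate bias signal that warrants human review.
    return answers

print(probe())
```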

Subtle but consistent differences appeared in the outcomes, differences that remained invisible in standard tests. ‘Precisely because our scenarios were close to practice, it became clear how bias can influence medical decision-making.’

Model reinforces patterns in the data

Van der Wal also investigated what happens inside a language model during training. He followed, step by step, how the model learns to store information. ‘During training, the model learns which words and ideas frequently occur together. If “doctor” often appears together with “he” and “nurse” with “she” in the training data, the model will pick up on those associations.’

Over time, the model appeared to store this information in increasingly specific places, thereby reinforcing gender bias. ‘Bias doesn’t arise only from the data that AI is trained on, but also from the way the model structures that information.’

There are solutions

Unfortunately, you can’t fix bias in language models with a single trick. But, according to Van der Wal, targeted interventions can help. ‘If you know where in the model the bias is located, you can address those areas. This already seems to work in specific cases, but more research is needed to extend the approach to more complex forms of bias.’

Van der Wal tested this targeted approach by comparing a model before and after an adjustment in which the model was trained not to adopt identified gender-related biases. He wanted to see if the model responded less differently to men and women after the change, and how well it still performed ordinary tasks, such as generating text.

The bias decreased, while the quality of the model largely remained intact.

Careful and deliberate

The impact of AI is not restricted to the technical realm but now has broader societal relevance. ‘We are becoming increasingly dependent on systems that can influence how we think,’ says Van der Wal. ‘That’s precisely why it’s important to develop AI carefully. Responsible AI development requires interventions at multiple levels at once: in the data, during training, targeted within the model itself, and also in its deployment and use.’

How can you as a user carefully use AI?

  • Be critical of answers: Don’t automatically assume an AI answer is correct or complete. Ask yourself: what am I not seeing? And where does the answer come from? ‘A model can come across as very confident, making its answers seem more reliable than they are,’ warns Van der Wal. ‘It’s also tempting to trust a chatbot that always agrees with you and is very complimentary. But that’s precisely when it’s even more important to stay critical.’
  • Be aware of hidden risks: Bias and other effects (such as influencing your thinking) are often not immediately visible. That’s why it’s important to stay alert.
  • Avoid becoming dependent: Use AI as a tool, but keep thinking and deciding for yourself. Over-reliance can make you less confident in your own knowledge and judgement.

This post was originally published on the University of Amsterdam news section and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• ‘Just looping you in’: Why letting AI write our emails might actually create more work
by External Contributor via Digital Information World

‘Just looping you in’: Why letting AI write our emails might actually create more work

Daniel Angus, Queensland University of Technology

I hope this article finds you well.

Did that make you cringe, ever so slightly? In the decades since the very first email was sent in 1971, the technology has become the quiet infrastructure of white-collar work.

Email came with the promise of efficiency, clarity and less friction in organisational communication. Instead, for many, it has morphed into something else: always there, near impossible to escape and sometimes simply overwhelming.

Right now, something is shifting again. The rise of generative artificial intelligence (AI) technologies, such as ChatGPT and Microsoft Copilot, is increasingly allowing people to offload the repetitive routines of tending one’s inbox – drafting, summarising and replying.

My colleagues in the ARC Centre of Excellence for Automated Decision Making & Society found that 45.6% of Australians have recently used a generative AI tool, with 82.6% of those using it for text generation. A healthy chunk of that use likely includes email.

So, what happens if we end up fully automating one of the staples of the white-collar daily grind? Will AI technologies reduce some of the friction, or generate new forms of it? Dare I ask – are we actually about to get more email?

Email has long been about more than just communicating information. Vitaly Gariev/Unsplash

Why the printer isn’t dead yet

Soon after the advent of email, some voices in the business world heralded the coming end of paper use in the office. That didn’t happen. If you work in an office today, there’s a good chance you still have a printer.

In their 2001 book, The Myth of the Paperless Office, Abigail Sellen and Richard Harper show how digital tools rarely eliminate older forms of work. Instead, they reshape them.

Sellen and Harper show how paper use didn’t disappear with the rise of email and other digital communication tools; in many cases, it intensified. The takeaway isn’t that offices failed to modernise, but rather that work reorganised around what these new tools could do.



In this case, paper persisted not only out of habit, but because of what it affords: it is easy to annotate, spread out, carry and view at a glance. This was all too clunky (or impossible) to perform via the digital alternatives.

At the same time, email and digitisation dramatically lowered the cost of producing and distributing communication. It was far easier to send more messages, to more people, more often.

Circling back to today

Will AI be different? If early signs are anything to go by, the answer is: not in the way we might hope.

Like earlier waves of workplace technology, AI is less likely to replace existing communication practices than to intensify them – but at least it might come with better grammar and a suspiciously upbeat tone.

Some new AI tools offer to manage your inbox entirely, feeding into broader privacy concerns about the technology.

At this moment, what a lot of these products seem to offer is not an escape from email, but a smoothing of its rough edges. Workers are using AI to soften otherwise blunt requests, modify their tone or expand what might otherwise be considered too brief a response.

Rather than removing the need to communicate, these tools offer pathways to make a delicate performance easier.

What email is actually for

Email, like many forms of communication, is as much about maintaining everyday relationships as it is about the transfer of information.

At work, it’s often about signalling competence, responsiveness, collegiality and authority. “Just looping someone in” or “circling back” are all part of our absurd office vocabulary, a shared dialect that helps us navigate hierarchy, soften demands and keep things moving – all without saying what we really think.

If AI lowers the effort required to produce these signals, it won’t necessarily reduce their importance, but it could unsettle things in rather odd ways.

If more people use AI to draft emails they don’t particularly want to write, we end up with a game of bureaucratic “mime”: everyone performing sincerity and quietly outsourcing it, and no one entirely sure how much of their inbox was actually written by a human.

The labour of email was never just about crafting sentences. It’s always been the scanning, the sorting and the deciding. AI doesn’t remove this burden. If anything, it amplifies it.

When everything arrives polished, everything looks important. That points to a deeper question for the future of work: if AI can perform responsiveness, why are we generating so many situations that still require it?

Looking forward

What would a workplace look like if email wasn’t the default solution to every coordination problem? Perhaps fewer performative check-ins, “just touching base”, “looping you in” or “following up on the below”. Clearer expectations about what actually requires a response, and what doesn’t.

Email, like paper, is likely to persist for good reasons. It is simple, flexible and universal. It allows things to be deferred, revisited, forwarded and quietly ignored.

But if AI is going to change any of this, my hope is that it makes visible how much of this is ritual, how much is habit, and how much has long been unnecessary.

And if the machines are happy to keep saying “hope this finds you well” to each other, we might finally have permission to stop.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• What the @ Sign Is Called Around the World: 25 Examples

• Q&A: Who’s responsible when AI makes mistakes?

• AI analysis of police body-camera footage raises Constitutional concerns, racial disparities


by External Contributor via Digital Information World

AI analysis of police body-camera footage raises Constitutional concerns, racial disparities

An analysis of thousands of officer-worn camera recordings found evidence of underreported police stops, troubling racial disparities in officer interactions, and widespread use of unclear language during consent searches, a new study shows.

Image: Raphael Lopes - unsplash

Researchers at the University of Michigan, University of California-Davis and Stanford University say their findings raise constitutional concerns under both the Fourth and Fourteenth Amendments, involving protection from unreasonable searches/seizures and prohibiting discriminatory practices based on race and ethnicity, respectively.

The report highlights how artificial intelligence could transform police oversight by helping reviewers identify potentially problematic encounters hidden within millions of hours of body-camera footage. The research demonstrates the growing potential for AI-powered analysis to help courts, police departments and municipal governments better evaluate compliance while building greater public trust in law enforcement.

Using machine learning and natural language processing, researchers examined New York Police Department (NYPD) encounters captured on body-worn cameras, looking closely at whether officers followed legal standards governing stops, detentions and consent searches.

Among the study’s most significant findings:

  • Body-camera recordings could be classified as stops with over 80% accuracy, and underdocumented stops with over 70% accuracy, based on language alone (see the sketch after this list).
  • Using language models, reviewers could uncover over 50% of undocumented stops identified in manual audits by viewing a fraction (25%) of the footage they normally would.
  • Officers frequently relied on indirect or confusing phrases such as “Do you mind if I check?” rather than clearly asking for consent to search.
  • The word “consent” appeared in less than 13% of consent-search interactions reviewed.
  • Commands and indirect requests appeared more frequently in encounters involving Black and Hispanic civilians.
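
For a sense of how language-only classification of this kind can work in principle, here is a minimal sketch using TF-IDF features and logistic regression. It is not the study's actual model; the toy transcripts and labels are invented for illustration.

```python
# Minimal sketch of a language-only encounter classifier, not the
# study's model: TF-IDF features plus logistic regression separating
# Level 3 stops from lower-level encounters. Transcripts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "stop right there keep your hands where i can see them",
    "good evening just checking in is everything okay tonight",
]
labels = [1, 0]  # 1 = Level 3 stop, 0 = lower-level encounter

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(transcripts, labels)

# Probabilities can rank footage so auditors review the most stop-like
# encounters first rather than sampling at random.
print(clf.predict_proba(["turn around put your hands behind your back"]))
```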

Nicholas Camp, U-M assistant professor of organizational studies, said these patterns raise questions about whether some civilians clearly understood they could refuse searches and whether certain encounters were documented accurately.

The study stems from reforms ordered after the landmark 2013 federal court ruling in Floyd v. City of New York, in which the U.S. District Court for the Southern District of New York found that the NYPD’s stop-and-frisk practices violated constitutional protections against unreasonable searches and racial discrimination.

Following the ruling, the court appointed an independent monitor to oversee reforms involving NYPD training, supervision and investigative encounters. As part of those reforms, NYPD officers began using body-worn cameras, which captured numerous police-community interactions.

“These recordings provide a far clearer picture of officer behavior than written police reports alone,” Camp said.

The study, approved by the court in 2021, analyzed more than 1,700 encounters connected to an earlier City University of New York Institute for State and Local Governance review, more than 1,100 additional encounters reviewed by the Monitor team, and nearly 1,800 consent-search encounters from 2023.

AI models developed during the study successfully distinguished lower-level encounters from Level 3 stops—which legally require reasonable suspicion—with accuracy rates ranging from approximately 72% to 91%. Researchers say those tools could help oversight teams identify constitutional concerns faster and more consistently by prioritizing footage most likely to contain problematic interactions.

Researchers emphasized that artificial intelligence is not intended to replace human oversight, but instead serves as a tool to strengthen accountability, improve auditing and support ongoing police reform efforts.

“Our analyses identify troubling patterns in NYPD encounters, but also show a path forward: Body camera footage can be used as data to inform and measure changes in law enforcement,” Camp said.

The study’s authors also include Rob Voigt, assistant professor of linguistics, UC-Davis; Dan Sutton, director of Justice and Safety, Stanford (Law School) Center for Racial Justice, and Jennifer Eberhardt, professor of organizational behavior and psychology, Stanford University.

Note: At the time of publication, we have reached out to the NYPD for comment regarding the study’s findings on body-camera analysis and will update this article if a response is received.

This post was originally published on the University of Michigan News and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• Transparency and trust in the age of deepfake ads

• Q&A: Who’s responsible when AI makes mistakes?


by External Contributor via Digital Information World

Thursday, April 30, 2026

Standardised testing and scripted lessons are failing teachers and students alike, education expert warns

By Taylor & Francis

Geoff Masters challenges a system which teaches the same curriculum to children with very different comprehension levels.

Image: Rewired Digital / Unsplash

Is it time to ditch scripted lessons and heavily packed curricula to focus on individual student growth?

This is the question posed by education expert Geoff Masters, who argues that age-based expectations are not serving all children well, while scripted lessons are failing teachers and students alike.

Masters, the former head of the Australian Council for Educational Research, asks how well children are served by a system in which two pupils in the same class can differ by six or more years of learning but are taught the same material.

He argues this system fails children at either end of the scale – those who are struggling and those who are unchallenged. He asks what if, instead of holding all pupils of the same age to the same learning expectations, we based expectations on where individuals are in their comprehension and individual growth.

“Too many students in our schools are being poorly served and left behind by machineries of schooling not fit for purpose,” Masters warns.

The problem with standardisation

Masters argues there is a fundamental flaw in the current system: the assumption that all students in the same grade are equally ready to learn the same material.

Research shows that children in the same classroom can have up to a seven-year difference in their reading and mathematics comprehension. This vast variation, Masters argues, is ignored by a system that prioritises standardisation over individual needs.

“By the middle years of school, many students have not learnt what the curriculum expected them to learn much earlier in their schooling,” Masters explains. He cites data showing that, across 38 developed countries, almost a third of 15-year-olds struggle with mathematics content taught in the 5th and 6th grades.

The picture in Australia

Masters’ arguments are presented against a backdrop of Australia’s declining performance in international assessments like PISA. Between 2012 and 2022, there was no significant improvement in Australian students’ performances in reading, mathematics or science. In fact, long-term declines have been recorded across all three areas.

“Despite decades of reforms, the machinery of schooling has not delivered the improvements we need,” Masters says. “It’s time to question whether prescribing what every student must learn in each grade of school and testing to see whether they have learnt it is the best way to optimise learning and improve performance.”

Masters also explains how those who start the year behind are likely to stay behind. He explains: “When the curriculum expects all students in a grade to be taught the same content at the same time, those who begin well below grade level are disadvantaged. This disadvantage is compounded when students are required to move from one grade curriculum to the next based on elapsed time rather than mastery. Students who lack essential prerequisites often fall further behind as each grade’s curriculum becomes increasingly beyond their reach.”

The future of learning

Masters instead argues for a system that meets students where they are in their learning, rather than where their age or grade dictates they should be. He proposes replacing age-based expectations with personalised learning plans that track individual growth.

“Improved performance depends on meeting each student where they are with personally meaningful, well-targeted learning opportunities that build on what they already know,” Masters explains. “This approach includes all students, including neurodiverse children and others with special needs.”

This approach would not only benefit students, he suggests, but also empower teachers to use their professional expertise to design tailored learning experiences.

One of the most concerning trends in education, in Masters’ view, is the rise of scripted lessons.

“Scripted lessons turn teaching into the delivery of ready-made solutions created outside the classroom,” Masters says. “They undervalue teachers’ expertise in what is arguably the essence of effective teaching: establishing where individuals are in their learning and designing opportunities to promote further growth.”

Masters calls for a return to professional autonomy, where teachers are trusted to make decisions in the best interests of their students.

Masters envisions a future where education systems embrace diversity and difference.

“Rather than expecting students to fit the expectations of schooling, the challenge is to redesign school structures and processes to better meet the needs of individual learners,” Masters concludes.

Further information: The Children We Leave Behind: How School Could Be Done Differently, by Geoff Masters (Routledge, 2026). ISBN: Paperback 9781041279655 | Hardback 9781041279662 | eBook 9781003757122. DOI: https://doi.org/10.4324/9781003757122

This post was originally published on Taylor & Francis Newsroom and republished on DIW with permission.

Reviewed by Irfan Ahmad.

Read next:

• Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

• The Deadliest Countries for Journalists
by External Contributor via Digital Information World

Some Chrome Extensions With Large User Bases Disclose Data Sale or Sharing Practices in Their Privacy Policies

By Dar Kahllon and Guy Erez - LayerX

Executive Summary:

New research by LayerX Security uncovers multiple networks of browser extensions that collect user data and resell it for profit – and it’s all completely legal. Unlike malicious extensions that disguise themselves as legitimate tools and operate in the dark, these extensions explicitly tell users that they’re going to collect and sell their data. It’s right there in the privacy policy; it’s just that nobody reads it.

LayerX analyzed the privacy policies of thousands of extensions and uncovered over 80 different extensions that collect and sell customer data. Some of these extensions include:

  • A network of 24 media extensions, installed by roughly 800,000 users, that collects viewing data and demographic information on major streaming platforms such as Netflix, Hulu, Disney+, Amazon Prime Video, HBO, Apple TV, and others
  • 12 separate ad blockers with a combined install base of over 5.5 million users, openly selling user data
  • Nearly 50 other extensions, with over 100,000 users in aggregate, that collect and resell users’ browsing data

While browser extensions may seem innocent, these findings highlight the privacy exposure that can arise from unregulated usage of extensions.

The Fine Print That Makes Everything Legal

Privacy policies. Reading them is like watching paint dry. For most users, it’s worse than reading the fine print in their mortgage agreements; and that’s saying something.

Except we did.

LayerX Security researchers Dar Kahllon and Guy Erez analyzed the privacy policies of thousands of browser extensions available in official stores. They were looking for one thing: whether the publisher explicitly reserved the right to sell user data.

And we found them. Our analysis surfaced at least 80 such extensions, some of them operating in clusters built by the same developer. They range from ad blockers and streaming tools to job application helpers, new-tab extensions, and B2B sales intelligence platforms.

Most of these policies don’t say “we sell your data.” They say “we may sell.” It’s a legal hedge – but it means your data can be sold at any time, and you already agreed to it. Here’s what that looks like in practice:

“We may sell or share your personal information with third parties.”

“This information may be sold to or shared with business partners.”

What? Browser Extensions Have Privacy Policies?!

Well, to be fair, most don’t.

This isn’t a story about malware. Nobody hacked you. Nobody stole anything. The extensions you’re running right now may be selling your browsing data — and they told you they would. It’s right there in the privacy policy. Page 4. Paragraph 7. The one nobody reads.

Figure 1. Privacy Policy Transparency

According to LayerX’s Enterprise Browser Extension Security Report 2026, 71% of all extensions in the Chrome Web Store don’t even publish a privacy policy.

As a result, more than 73% of users have at least one extension installed without a privacy policy, with no transparency into how their data is handled. This means our analysis could only rely on the 29% that do have a privacy policy.

And if we assume that some of those extensions with no privacy policy at all will also resell your data – and there’s no reason to assume they’re better – the real number of extensions that may sell your data across the Chrome Web Store is in the tens of thousands.

How We Analyzed The Data

We built a pipeline to analyze privacy policies associated with browser extensions in official stores, combining automated classification with manual verification.

Starting from roughly 9,000 extensions with privacy policy URLs in our database, we successfully fetched and parsed 6,666 policies.

The pipeline ran in three stages:

  1. First, AI classification flagged policies disclosing the selling, licensing, or commercial transfer of user data. We marked high-confidence matches for review and verified every flagged policy manually.
  2. Next, a manual review removed false positives, including: (A) enterprise security tools (e.g., Fortinet, CrowdStrike) that route browsing data to their own servers as part of expected web filtering behavior; (B) standard CCPA ad-retargeting disclosures (e.g., HubSpot, Calendly), where sharing cookies with platforms like Google Ads may technically count as a “sale” under broad definitions; and (C) consensual data monetization platforms (e.g., Swash), where users explicitly opt in and are compensated. The final dataset includes only extensions whose privacy policies indicate genuine commercial sale of user data to third parties.
  3. In the final count, we found 82 unique extensions across 94 store listings. 75 are currently live in the Chrome Web Store. The remaining 7 have been removed – but “removed” doesn’t mean “uninstalled.” Extensions pulled from the store can stay active in browsers that already have them.

While these figures may seem low, bear in mind that they cover only extensions that publish a privacy policy in the first place (less than one-third of all extensions), and among those, only the ones that actually tell you what they’re doing with your data. The true number is almost certainly higher.
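
To make the flagging stage concrete, here is a minimal sketch in Python. It is not LayerX’s actual pipeline: simple keyword patterns stand in for the AI classifier, the URL in the usage note is a placeholder, and every hit would still need the manual review described above.

```python
import re
import requests

# A minimal sketch of the flagging stage, not LayerX's actual pipeline:
# keyword patterns stand in for the AI classifier, and every hit would
# still go to the manual review described above.
SALE_PATTERNS = [
    r"\bmay\s+sell\b",
    r"\bsell\s+or\s+share\b",
    r"\bsold\s+to\s+or\s+shared\s+with\b",
    r"\bcommercial(ly)?\s+transfer\b",
]

def flag_policy(url: str) -> list[str]:
    """Fetch a privacy policy page and return the sale-related patterns it matches."""
    text = requests.get(url, timeout=15).text.lower()
    return [p for p in SALE_PATTERNS if re.search(p, text, flags=re.S)]

# Usage with a placeholder URL:
# hits = flag_policy("https://example.com/privacy")
# if hits:
#     print("Flag for manual review:", hits)
```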

Here are a few of our key findings:

The QVI Empire: One Anonymous Publisher, 24 Extensions, 800,000 Users

While reviewing confirmed sellers, a pattern kept surfacing. Different extensions, different streaming platforms, but the same three-letter prefix: QVI – short for “Quality Viewership Initiative.”

What looked like unrelated tools turned out to be a single operation: 24 browser extensions – 21 currently live, 3 removed – covering nearly every major streaming service.

  • Netflix
  • Hulu
  • Disney+
  • Amazon Prime Video
  • HBO Max
  • Peacock
  • Paramount+
  • Tubi
  • Apple TV+
  • Crunchyroll

All published by HideApp LLC, registered at 1021 East Lincolnway, Cheyenne, Wyoming – an address shared by hundreds of other LLCs through a registered agent service – and operating under the brand “dogooodapp.”

The largest extensions in the network:

  • Custom Profile Picture for Netflix (200K users)
  • Hulu Ad Skipper (100K)
  • Netflix Picture in Picture (100K)
  • Ad Skipper for Prime Video (60K)
  • Netflix Extended (60K)

Across all 21 live extensions, the network reaches nearly 800,000 users.

Figure 2. Extension Page in Chrome Store for the “Custom profile picture for Netflix [QVI]” extension

But their privacy policy says something the store listings don’t. These extensions collect extensive information, including:

  • Viewing history
  • Content preferences
  • Platform subscriptions
  • Downloaded content
  • Streaming behavior

They also collect age and gender – and if you don’t provide demographics, they match your email against third-party demographic databases to fill in the gaps.

Figure 3. Data declared as collected by the privacy policy of the “Custom profile picture for Netflix [QVI]” extension

The policy describes selling reports to content creators and studios, streaming platforms, media research firms, and marketing agencies – along with “organizations that purchase anonymized viewing data.”

Put it all together and you’re looking at a distributed audience-measurement system running inside users’ browsers. One anonymous publisher pulling viewing behavior across every major streaming platform, building intelligence about what nearly 800,000 people watch, when, and how they engage with content. None of those users signed up for that. Legally, they accepted the terms when they clicked “Add to Chrome.” Practically, nobody read them.

Ad Blockers That Block Some Ads – and Sell Your Data to Other Advertisers

We confirmed eight ad blockers that reserve the right to sell or share user information with third parties. Tools people install to stop tracking – selling tracking data instead. Combined, they reach over 5.5 million users.

  • Stands AdBlocker (3M users) sells browsing data to third parties for “market analytics purposes.”
  • Poper Blocker (2M users) discloses selling identifiers, browsing activity, behavioral profiles, and inferred sensitive data – including health conditions, religious beliefs, and sexual orientation, all inferred from the URLs you visit.
  • All Block, an ad blocker for YouTube (500K users), sells anonymized data “for analytical and commercial purposes.” Published by an entity called Curly Doggo Limited, based in London.
  • TwiBlocker (80K users) discloses transferring browsing data to third parties who “process or sell it for analytical purposes.”
  • Urban AdBlocker (10K users) routes browsing data and AI conversations through the BiScience data broker.

If your ad blocker has a privacy policy longer than two paragraphs, read it.

Figure 4. Featured Ad Blocker in Chrome Store

Independent Operators Can Also Sell Your Data

These aren’t the biggest extensions on the list, but they show how far the data-selling model reaches.

  • Career.io Job Auto Apply (10K users) states in its policy that it may sell personal data collected from your resume to third parties, including data brokers, for targeted advertising and profiling. A job application tool that sells your resume.
  • Dog Cuties (6K users) is a cute dog wallpaper new-tab extension. Confirmed data seller through the Apex Media network.
  • EmailOnDeck (10K users) is a temporary email service – a tool people use specifically when they don’t want to share their real information. Its policy states it may sell, rent, or share its mailing list.
  • Survey Junkie discloses selling URLs visited, clickstream data, and “modeled information” about consumer preferences to market research agencies, ad agencies, and data analytics providers.
  • Dashy New Tab (10K users) has its Chrome Web Store listing marked “does not sell your data.” Its actual privacy policy marks data as “Sold or Shared: Yes.” We believe this is CCPA compliance language for standard analytics, not commercial data sales – which is why we left it out. But the contradiction between the store listing and the privacy policy is real. If a publisher’s own policy says “Sold or Shared: Yes” and the store listing says the opposite, which one should users trust?

When Your Employees’ Extensions Are Selling Data

Of the 82 confirmed sellers, 29 are B2B sales intelligence tools. Their business is data, so the disclosure itself isn’t a surprise. We’re not counting them alongside the consumer-facing extensions.

But they belong in this conversation. These extensions sit on corporate machines. This means that employee browsing behavior, such as internal URLs, SaaS dashboards, and research activity, flows into commercial databases that your competitors can purchase. The risk isn’t about users being deceived. It’s about corporate data leaving through a channel nobody is watching.

What Security Teams Should Do About This

Most extension security evaluations focus on permissions or known malicious indicators – flagging extensions that request excessive access or match threat intelligence. That catches malware. It doesn’t catch an extension that openly reserves the right to sell your browsing data.

An extension with a data-selling disclosure isn’t a hypothetical risk. It’s a stated business practice, sitting in a document your employees accepted without reading.

Three questions worth asking:

  1. What extensions are installed across employee browsers? (A minimal inventory sketch follows this list.)
  2. What data do those publishers claim the right to collect or sell?
  3. Could corporate browsing activity be flowing into commercial datasets?
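
For the first question, a local inventory might look like the sketch below. This is an illustrative assumption rather than a recommended deployment: the profile path is Linux-specific, and enterprise environments would gather this data centrally through management tooling rather than per machine.

```python
import json
from pathlib import Path

# Hedged sketch: inventory the extensions in a local Chrome profile by
# reading their manifests. The path assumes Linux and the default profile;
# macOS and Windows use different locations.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

if EXT_DIR.is_dir():
    for ext_dir in sorted(p for p in EXT_DIR.iterdir() if p.is_dir()):
        # Each extension ID directory holds one subdirectory per installed version.
        versions = sorted(v for v in ext_dir.iterdir() if v.is_dir())
        manifest_path = versions[-1] / "manifest.json" if versions else None
        if manifest_path is None or not manifest_path.exists():
            continue
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        # Names starting with "__MSG_" are localization keys; resolving them
        # needs the extension's _locales folder, which this sketch skips.
        print(ext_dir.name, manifest.get("version", "?"), manifest.get("name", "?"))
else:
    print(f"No extensions directory found at {EXT_DIR}")
```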

Most browsers already support centralized extension management through enterprise policies – Chrome’s ExtensionSettings, Edge’s group policies, Firefox’s enterprise configurations. If you don’t have an extension governance policy, that’s the first step. If you do, add privacy policy review to the evaluation criteria. Permissions alone don’t tell you enough.
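
As one hedged illustration of that first step, the sketch below generates a default-deny ExtensionSettings policy for Chrome: block everything, then allowlist vetted extensions. The 32-character extension ID is a placeholder, not a real listing, and the resulting JSON would be deployed through your management tooling, not pasted by hand.

```python
import json

# Sketch of a default-deny Chrome ExtensionSettings policy: block all
# extensions, then allowlist vetted ones. The extension ID below is a
# placeholder, not a real store listing.
policy = {
    "ExtensionSettings": {
        "*": {
            "installation_mode": "blocked",
            "blocked_install_message": "Extensions must be vetted by the security team.",
        },
        # Placeholder ID for an extension the security team has approved.
        "aaaabbbbccccddddeeeeffffgggghhhh": {
            "installation_mode": "allowed",
        },
    }
}

print(json.dumps(policy, indent=2))
```

Edge’s group policies and Firefox’s enterprise configurations support the equivalent block-and-allowlist approach under their own policy names.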

To that end, LayerX added a new filter that detects (and, if desired, blocks) extensions that either don’t have a privacy policy at all or reserve the right to sell personal data.

Consider blocking extensions that either disclose selling user data or don’t publish a privacy policy at all.

Figure 5. LayerX Extension Data Privacy Filter

The Bottom Line

Browser extensions are among the web’s most powerful and least scrutinized tools. While much of the security focus is on malicious extensions that actively steal user and corporate data, privacy violations disclosed in plain sight may sound mundane, but they can be just as risky.

Reading the privacy policy of every extension installed by every user in your organization would mean reviewing hundreds or thousands of documents; clearly, that’s not feasible.

Instead, organizations need to deploy automated tools that can restrict suspicious extensions and take privacy practices into account.

Google was contacted multiple times over two days for comment on the report’s findings and Chrome Web Store policies but did not respond before publication. This article will be updated if a response is received.

This post was originally published on LayerX and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

• Research reveals lack of transparency in ad data of digital platforms
by External Contributor via Digital Information World