Monday, May 4, 2026

Why Browser Extensions, Especially AI Ones, Are a Growing Security Risk

By Or Eshed - Co-Founder & CEO of LayerX

There’s a good chance that right now, as you read this, you have somewhere between three and fifteen browser extensions installed. A grammar checker. A password manager. Maybe a couple of AI assistants. You installed most of them quickly, clicked “Add to Chrome,” and never thought about them again.

That’s exactly the problem.

LayerX just published its Enterprise Browser Extension Security Report 2026, and the data LayerX collected from over one million enterprise devices tells a story that most security teams — and most employees — haven’t fully reckoned with yet. Browser extensions are everywhere, they’re powerful, and they’re largely invisible to the people responsible for keeping organizations safe.

Everyone Has Extensions. Almost No One Is Watching Them.

Let’s start with the sheer scale. 99% of enterprise users have at least one browser extension installed. Not most users. Not the tech-savvy ones. Virtually everyone. And more than one in four employees at small-to-medium organizations have over 10 extensions running in their browser at any given time.


That’s an enormous attack surface — and it’s one that most organizations have essentially zero visibility into. LayerX consistently finds that security teams can’t tell you which extensions are running across their environment, who installed them, or what those extensions are actually allowed to do. Extensions fly under the radar in a way that almost no other software does.

To make matters more concrete: nearly 75% of all browser extensions request high or critical permission levels — meaning they have broad access to the data flowing through your browser. Only 3% operate with low permissions. These aren’t inert little tools sitting quietly in your toolbar. They can read what you type, access your cookies and session tokens, inject code into web pages, and manage your tabs (even without the user’s knowledge).
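To give a feel for what "high or critical permission levels" means in practice, here is a minimal sketch of a permission check for a Chrome (Manifest V3) extension manifest. The risk tiers below are invented for illustration and are not LayerX's scoring methodology; `cookies`, `scripting`, `tabs`, and `host_permissions` are real Manifest V3 permission names.

```python
# Hypothetical risk tiers for illustration only -- not LayerX's methodology.
HIGH_RISK = {"cookies", "scripting", "tabs", "webRequest", "history", "<all_urls>"}
LOW_RISK = {"storage", "alarms", "contextMenus"}

def risk_level(manifest: dict) -> str:
    """Classify a Manifest V3-style extension manifest as high/medium/low risk."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    if requested & HIGH_RISK or "*://*/*" in requested:
        return "high"
    if requested - LOW_RISK:
        return "medium"
    return "low"

# An invented manifest resembling a typical AI assistant extension.
manifest = {
    "name": "Example AI Assistant",
    "permissions": ["cookies", "scripting", "tabs"],
    "host_permissions": ["<all_urls>"],
}
print(risk_level(manifest))  # high
```

An extension requesting only `storage` would score "low" here; the point is that cookie, scripting, and tab access together put a tool in the report's high-permission bucket.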

AI Extensions: The Threat Nobody Is Talking About

Here’s where things get particularly interesting — and concerning.

The explosion of AI tools over the past few years has quietly spawned a new category of browser extension: AI extensions. Copilots, writing assistants, summarizers, meeting helpers, auto-completers. 1 in 6 enterprise users already has at least one AI extension installed, and adoption is accelerating.


On the surface, these tools seem harmless — even helpful. But LayerX data reveals something important: AI extensions carry a significantly more dangerous risk profile than browser extensions on average. This isn’t a marginal difference. The gap is striking:
  • 60% more likely to have a known vulnerability (CVE) than the average extension — 16.3% of AI extensions have a known CVE, compared to 10.8% across all extensions
  • 3x more likely to have access to your cookies — which means access to your session tokens and authentication data
  • 2.5x more likely to have scripting permissions — the ability to inject code directly into web pages, capture what you type, and manipulate content
  • 2x more likely to be able to manage your browser tabs — opening, redirecting, or monitoring everything you’re doing
Put those together, and you have a category of tools that employees are adopting quickly, enthusiastically, and with very little scrutiny — that happen to be requesting the most powerful permissions available.

They Change Over Time. Silently.

One of the findings that surprised even us: AI extensions are nearly 6x more likely to change or expand their permissions after installation compared to the average extension.

Think about what that means in practice. You install an AI writing assistant. It asks for reasonable access. You approve it. Six months later, it quietly updates and now has access to your cookies, your tabs, your browsing history. You never saw a prompt. You never approved anything new. It just… changed.

Our data shows that 64% of users have at least one AI extension that changed its permissions in the past 12 months, compared to 34% of users across all extensions. This isn’t a one-time installation risk — it’s a continuously evolving one.

Trust Signals Are Weak Across the Board

The picture gets even murkier when you look at the reputation signals of the extensions people are running. Almost half of all AI extensions have fewer than 10,000 users — meaning there’s very little community vetting, very little public track record, and very little accountability if something goes wrong.


And over 71% of all extensions — AI or otherwise — don’t even have a privacy policy. More than 73% of enterprise users have at least one extension installed that provides no transparency whatsoever into how it handles their data.

What To Do About It

The first step is simply to know what you have. A full inventory of every extension running across every browser, every device, and every user isn’t a nice-to-have — it’s the baseline. You can’t manage risk you can’t see.
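The inventory step can be sketched with a short script. The sketch below walks a Chromium-style `Extensions` folder (on macOS this is typically `~/Library/Application Support/Google/Chrome/Default/Extensions`; the exact path varies by OS and browser) and collects each extension's declared permissions from its `manifest.json`. This is a minimal illustration, not LayerX's tooling; note that in real profiles the `name` field is often a `__MSG_name__` placeholder that needs the extension's locale files to resolve.

```python
import json
from pathlib import Path

def inventory_extensions(extensions_dir: Path) -> list[dict]:
    """Walk a Chromium-style Extensions directory (one subfolder per
    extension ID, one subfolder per installed version) and collect each
    extension's name and declared permissions from manifest.json."""
    found = []
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        data = json.loads(manifest_path.read_text(encoding="utf-8"))
        found.append({
            "id": manifest_path.parts[-3],  # the extension-ID folder name
            "name": data.get("name", "?"),
            "permissions": sorted(data.get("permissions", [])),
        })
    return found
```

Run per device and aggregated centrally, even a crude listing like this is the visibility baseline the report argues for.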

From there, AI extensions deserve their own dedicated scrutiny. Given their elevated permissions, their faster rate of change, and their direct access to sensitive in-browser data, they shouldn’t be treated the same as a simple spell-checker.

LayerX put all of this together — the full data, the breakdowns by organization size, the permission comparisons, and the specific recommendations — in its Enterprise Browser Extension Security Report 2026. Download the full report here.

----

This post was originally published on LayerX and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• Some Chrome Extensions With Large User Bases Disclose Data Sale or Sharing Practices in Their Privacy Policies

• Rich more likely to use AI study finds, as experts warn these burgeoning technologies are increasing social inequality

• Smiling all the time isn’t necessary: influencers are not more successful with constant positivity
by External Contributor via Digital Information World

Rich more likely to use AI study finds, as experts warn these burgeoning technologies are increasing social inequality

By Taylor & Francis Group

Individuals with a lower socioeconomic status are less likely to be both aware of and use AI tools, data on more than 10,000 US adults reveals.

Image: Omar Lopez-Rincon / Unsplash

The widespread adoption of artificial intelligence (AI)—particularly in “hidden” everyday applications—is creating a new and distinct form of digital inequality.

This is the warning of communication researcher Professor Sai Wang and her colleagues at the Hong Kong Baptist University, who analysed data on more than 10,000 Americans’ engagement with AI in a paper published today in the journal Information, Communication & Technology.

The team’s analysis reveals that people with higher levels of education or income tend to be more aware of AI, more familiar with it, and more likely to use the burgeoning technology than those with a lower socioeconomic status (SES).

The researchers define AI awareness primarily as recognising the use of the technology in various contexts; familiarity, meanwhile, relates to people’s perceived knowledge of AI, regardless of their actual knowledge.

“Closing the AI awareness gap is essential, because if only people with higher income or education are aware of AI and its uses, this may reinforce social inequalities,” adds Professor Wang.

“It allows some groups to leverage advanced technologies for their advantage, while others are left behind.

“For example, job applicants who know that employers use AI for screening can better tailor their resumes, while those who lack this awareness might miss out on opportunities without realizing it.”

Alongside its ability to empower individuals, AI also comes with the risk of harm, Wang notes.

She explains: “People with greater awareness may better understand both the opportunities and risks of AI—such as recognizing and even creating deepfakes—while those with less awareness are more likely to be deceived or manipulated by these technologies.”

In their study, Wang and colleagues analysed survey data on understanding of and attitudes towards AI collected from 10,087 US adults by the nationally-representative American Trends Panel, undertaken by the Pew Research Center in Washington DC.

The SES of the respondents was assessed based on education level and household income, with the team finding that the former was more closely associated with AI usage.

Past studies have suggested that wealthier and more educated people—alongside typically having more developed digital skills—are more likely to be encouraged to take advantage of AI tools, which in turn boosts confidence in using AI. These trends help explain why education and income emerged as significant predictors of AI usage in the current study.

That said, according to Wang, their study also revealed an unexpected finding: familiarity with AI was a stronger predictor of AI awareness than actually using AI.

“In other words, simply feeling knowledgeable or informed about AI was more closely linked to recognizing where AI exists and how it is used compared to personally using AI technologies,” notes Wang.

An explanation for this phenomenon may lie in how seamlessly many common applications of AI have been integrated into the everyday apps and platforms of our digital lives, so that their presence is simply not obvious.

“For example, AI-driven recommendation systems on streaming platforms like Netflix or Spotify suggest content tailored to a person’s tastes,” says Wang. She continues: “Yet many users are unaware these are powered by AI and may see recommendations as random or neutral.”

In this way, the new digital inequalities being produced by AI are distinct from their predecessors.

“Traditional digital inequalities focus on access, skills/use, and outcomes—all of which tend to presume users are consciously engaging with technology,” explains Wang.

“However, AI is often built into everyday apps and platforms in ways users do not realize; many people interact with AI, such as through social media feeds or streaming recommendations, without knowing it.”

Because of this, the team explains, merely increasing access to AI-powered technologies may not be enough to close this awareness gap.

Instead, the researchers recommend indirect approaches to reduce this new digital inequality, in particular by familiarising people from lower SES backgrounds with key issues related to AI.

“This could involve outreach campaigns or community workshops that use clear language and practical examples to make AI more understandable and relevant for low-SES communities,” Wang suggests.

The team would like to see resources made available to increase engagement with AI-related topics, address public concerns and offer guidance on the ethical and responsible use of AI; basic AI concepts might also be integrated into educational curricula.

AI literacy programs, the researchers add, must include targeted guidance on how to identify “hidden” AI in daily life and understand its basic functions.

“It is imperative to work toward a more inclusive digital future in which technology empowers everyone and does not further marginalize any group,” the researchers concluded in their paper.

The researchers caution that, being US-centric, it is unclear how generalisable their findings are to other countries, where levels of AI uptake and awareness may differ. Past studies, for example, have found that individuals from South Korea, China and Finland exhibit the most awareness of AI, while the country with the lowest average awareness was the Netherlands.

With this initial study complete, the team are now looking to explore how digital inequality manifests in the context of AI — and what consequences this has for society.

This post was originally published on Taylor & Francis Group and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• Study looked at teens’ social media behaviour in 43 countries – those from disadvantaged backgrounds face greater harms

• Can we stop ChatGPT from spreading bias?

by External Contributor via Digital Information World

Saturday, May 2, 2026

Our study looked at teens’ social media behaviour in 43 countries – those from disadvantaged backgrounds face greater harms

Roger Fernandez-Urbano, Universitat de Barcelona; Maria Rubio-Cabañez, Universitat Autònoma de Barcelona, and Pablo Gracia, Universitat Autònoma de Barcelona

Image: Marc Clinton Labiano / Unsplash

As social media becomes a central part of young people’s lives, concerns are growing about its impact on their mental health. Yet public debates and measures tend to treat adolescents as one homogeneous group. We frequently ignore the fact that social media use does not affect all young people in the same way – nor does it have the same impacts on their wellbeing.

In a recent chapter of the World Happiness Report 2026, published by the UN Sustainable Development Solutions Network in partnership with the University of Oxford, we have examined how problematic social media use relates to the wellbeing of adolescents from different socioeconomic backgrounds.

We looked at 43 countries spanning six broad regions – Anglo-Celtic, Caucasus-Black Sea, Central-Eastern Europe, Mediterranean, Nordic, and Western Europe – covering mainly European countries and their immediate neighbouring areas.

Using data from over 330,000 young people, we found a clear and consistent pattern: higher levels of problematic social media use – that is, compulsive or uncontrolled engagement with social media – are associated with poorer wellbeing.

Teenagers who report more problematic use tend to experience more psychological complaints, such as feeling low, nervous, irritable, or having difficulty sleeping. They also have lower life satisfaction, a measure of how positively they evaluate their lives as a whole.

This pattern appears across all countries in our study, but its strength varies from one country to another. It is particularly pronounced in Anglo-Celtic countries such as the UK and Ireland, while it is comparatively weaker in the Caucasus-Black Sea region.

Socioeconomic background matters

The story does not end with geography. Globally, teenagers from less advantaged backgrounds tend to be more vulnerable to the negative consequences of problematic social media use than their more advantaged peers.

This means socioeconomic status – the material and social resources available to a household, such as income and living conditions – actively shapes the risks and opportunities that young people experience as a result of online environments.

Interestingly, these inequalities are especially visible when we look at life satisfaction. Differences between socioeconomic groups are smaller when it comes to psychological complaints, but much clearer and more consistent for how adolescents evaluate their lives overall.

One likely reason is that life satisfaction is more sensitive to social comparisons. Social media exposes young people to constant benchmarks – what others have, do, and achieve – which can amplify differences in perceived opportunities and resources.

At the same time, these patterns are not identical everywhere. For instance, socioeconomic differences in psychological complaints tend to be modest in most regions including continental European countries such as France, Austria or Belgium, but are more clearly observed in Anglo-Celtic countries such as Scotland and Wales.

In contrast, socioeconomic gaps in life satisfaction appear across most regions, although they tend to be weaker in Mediterranean countries such as Italy, Cyprus and Greece.

A growing problem

We also examined how these patterns have evolved over time. Between 2018 and 2022, the link between problematic social media use and poor adolescent wellbeing became stronger.

This suggests that the risks linked to problematic use may have intensified in recent years, possibly reflecting the growing role of digital technologies in young people’s daily lives, particularly during and after the Covid-19 pandemic.

Importantly, this intensification has affected teenagers across socioeconomic groups in broadly similar ways in most regions. In other words, while inequalities remain they have not widened over this period.

No one-size-fits-all solution

While public debates about social media and mental health often treat adolescents as a single demographic group, our results show a more complex reality. Problematic social media use is linked to poorer wellbeing across countries, but its effects are shaped by social realities. They vary depending on where young people live and what resources are available to them.

Not all teenagers experience the digital world in the same way, and not all are equally equipped to cope with its pressures. Recognising this is essential for designing policies that are not only effective, but also equitable, ensuring that interventions reach those adolescents who are most vulnerable to digital risks.

Roger Fernandez-Urbano, Ramón y Cajal Research Fellow (Tenure-Track), Department of Sociology, Universitat de Barcelona; Maria Rubio-Cabañez, Postdoctoral Researcher, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona, and Pablo Gracia, Research Professor in Sociology, Centre d’Estudis Demogràfics, CED-CERCA, Universitat Autònoma de Barcelona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Can we stop ChatGPT from spreading bias?


by External Contributor via Digital Information World

Friday, May 1, 2026

Can we stop ChatGPT from spreading bias?

By the University of Amsterdam

Image: Merrilee Schultz / Unsplash

Language models like ChatGPT are not neutral. Without our realising it, they can absorb all kinds of bias – for example around gender and ethnicity – which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he defended his thesis at the University of Amsterdam.

Language models are often seen as neutral tools, but in practice they can both reflect and amplify bias.

‘Users often don’t realise that a model makes certain assumptions, for example by introducing subtle differences in how men and women are described,’ says Van der Wal. Precisely because bias is so hidden, it can spread unnoticed and colour the way we see the world.

Bias is hard to measure

An important problem is that bias is difficult to measure. ‘Many existing measurement methods are fairly abstract and don’t take practice into account. They might look for overt stereotypes in what the model says, such as “The Dutch are stingy.” But in practice, bias isn’t something that’s directly visible. It depends on the context in which you use the model.’

Van der Wal cites the use of AI in healthcare as an example. ‘AI learns from existing data. If those data contain outdated or incorrect assumptions – for instance, the contested idea that certain diseases are linked to the outdated concept of “race” – the model may keep reproducing them. In healthcare, that can lead to incorrect diagnoses or treatments.’

Another example is when medical data largely derives from research involving men. ‘AI may then interpret women’s symptoms differently or less seriously, or make different risk assessments.’

Realistic scenarios

To discover whether realistic scenarios reveal different errors than simple tests, Van der Wal presented language models with a range of medical cases and asked them to provide diagnoses, risk assessments or advice. ‘We repeatedly changed the patient’s ethnicity. That way we could identify whether and how the model responded differently.’
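The counterfactual setup described here is easy to picture in code. The sketch below is a hypothetical illustration of the general technique, not the study's actual protocol or prompts: it holds a medical vignette constant and varies only the patient's stated ethnicity, so any systematic difference in a model's replies can be attributed to that single change. Sending each variant to a model and comparing the answers is left out, since that depends on whichever model API is being audited.

```python
# Invented vignette for illustration -- not a prompt from the thesis.
TEMPLATE = (
    "A 54-year-old {ethnicity} man presents with chest pain radiating to "
    "the left arm. What is the most likely diagnosis, and how urgent is it?"
)

ETHNICITIES = ["white", "Black", "Hispanic", "Asian"]

def build_variants(template: str, ethnicities: list[str]) -> dict[str, str]:
    """Produce prompts that are identical except for one demographic term."""
    return {e: template.format(ethnicity=e) for e in ethnicities}

variants = build_variants(TEMPLATE, ETHNICITIES)
# Each variant would then be sent to the model under audit, and the
# diagnoses / urgency ratings compared across the four versions.
```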

Subtle but consistent differences appeared in the outcomes, differences that remained invisible in standard tests. ‘Precisely because our scenarios were close to practice, it became clear how bias can influence medical decision-making.’

Model reinforces patterns in the data

Van der Wal also investigated what happens inside a language model during training. He followed, step by step, how the model learns to store information. ‘During training, the model learns which words and ideas frequently occur together. If “doctor” often appears together with “he” and “nurse” with “she” in the training data, the model will pick up on those associations.’

Over time, the model appeared to store this information in increasingly specific places, thereby reinforcing gender bias. ‘Bias doesn’t arise only from the data that AI is trained on, but also from the way the model structures that information.’
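The co-occurrence point can be made concrete with a toy count. The snippet below tallies how often each pair of words shares a sentence in a tiny invented corpus; in real training data the same skew (doctor with "he", nurse with "she") appears at vastly larger scale, and that is the statistical signal the model absorbs.

```python
from collections import Counter
from itertools import combinations

# Tiny invented corpus, skewed on purpose to mirror the example in the text.
corpus = [
    "the doctor said he would operate",
    "the nurse said she would help",
    "the doctor confirmed he had reviewed the chart",
    "the nurse noted she had checked the dosage",
]

# Count how often each (alphabetically sorted) word pair shares a sentence.
pairs = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        pairs[(a, b)] += 1

print(pairs[("doctor", "he")], pairs[("nurse", "she")], pairs[("he", "nurse")])
# prints: 2 2 0 -- "doctor" co-occurs with "he", never with "she", and vice versa
```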

There are solutions

Unfortunately, you can’t fix bias in language models with a single trick. But, according to Van der Wal, targeted interventions can help. ‘If you know where in the model the bias is located, you can address those areas. This already seems to work in specific cases, but more research is needed to extend the approach to more complex forms of bias.’
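One family of interventions from the wider literature makes the "address those areas" idea concrete. The sketch below shows hard projection debiasing (in the spirit of Bolukbasi et al., 2016): removing the component of an embedding that lies along an estimated gender direction. This is an illustrative technique from prior work, not necessarily the adjustment evaluated in Van der Wal's thesis, and the toy vectors are invented.

```python
def dot(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def debias(vec: list[float], direction: list[float]) -> list[float]:
    """Subtract vec's projection onto the estimated bias direction,
    leaving the components orthogonal to it untouched."""
    scale = dot(vec, direction) / dot(direction, direction)
    return [a - scale * d for a, d in zip(vec, direction)]

# Toy 3-d vectors: pretend axis 0 carries the he/she difference,
# e.g. a direction estimated from v("he") - v("she").
direction = [1.0, 0.0, 0.0]
doctor = [0.8, 0.3, 0.5]          # leans "male" along axis 0
print(debias(doctor, direction))  # [0.0, 0.3, 0.5]
```

After projection the vector carries no signal along the gender direction, while its other dimensions (and so, ideally, the model's ordinary performance) are preserved, which matches the before/after comparison described next.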

Van der Wal tested this targeted approach by comparing a model before and after an adjustment in which the model was trained not to adopt identified gender-related biases. He wanted to see if the model responded less differently to men and women after the change, and how well it still performed ordinary tasks, such as generating text.

The bias decreased, while the quality of the model largely remained intact.

Careful and deliberate

The impact of AI is not restricted to the technical realm but now has broader societal relevance. ‘We are becoming increasingly dependent on systems that can influence how we think,’ says Van der Wal. ‘That’s precisely why it’s important to develop AI carefully. Responsible AI development requires interventions at multiple levels at once: in the data, during training, targeted within the model itself, and also in its deployment and use.’

How can you as a user carefully use AI?

  • Be critical of answers: Don’t automatically assume an AI answer is correct or complete. Ask yourself: what am I not seeing? And where does the answer come from? ‘A model can come across as very confident, making its answers seem more reliable than they are,’ warns Van der Wal. ‘It’s also tempting to trust a chatbot that always agrees with you and is very complimentary. But that’s precisely when it’s even more important to stay critical.’
  • Be aware of hidden risks: Bias and other effects (such as influencing your thinking) are often not immediately visible. That’s why it’s important to stay alert.
  • Avoid becoming dependent: Use AI as a tool, but keep thinking and deciding for yourself. Over-reliance can make you less confident in your own knowledge and judgement.

This post was originally published on the University of Amsterdam news section and republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• ‘Just looping you in’: Why letting AI write our emails might actually create more work
by External Contributor via Digital Information World

‘Just looping you in’: Why letting AI write our emails might actually create more work

Daniel Angus, Queensland University of Technology

I hope this article finds you well.

Did that make you cringe, ever so slightly? In the decades since the very first email was sent in 1971, the technology has become the quiet infrastructure of white-collar work.

Email came with the promise of efficiency, clarity and less friction in organisational communication. Instead, for many, it has morphed into something else: always there, near impossible to escape and sometimes simply overwhelming.

Right now, something is shifting again. The rise of generative artificial intelligence (AI) technologies, such as ChatGPT and Microsoft Copilot, is increasingly allowing people to offload the repetitive routines of tending one’s inbox – drafting, summarising and replying.

My colleagues in the ARC Centre of Excellence for Automated Decision Making & Society found 45.6% of Australians have recently used a generative AI tool, with 82.6% of those using it for text generation. A healthy chunk of that use likely includes email.

So, what happens if we end up fully automating one of the staples of the white-collar daily grind? Will AI technologies reduce some of the friction, or generate new forms of it? Dare I ask – are we actually about to get more email?

Email has long been about more than just communicating information. Image: Vitaly Gariev / Unsplash

Why the printer isn’t dead yet

Soon after the advent of email, some voices in the business world heralded the coming end of paper use in the office. That didn’t happen. If you work in an office today, there’s a good chance you still have a printer.

In their 2001 book, The Myth of the Paperless Office, Abigail Sellen and Richard Harper show how digital tools rarely eliminate older forms of work. Instead, they reshape them.

Sellen and Harper show how paper use didn’t disappear with the rise of email and other digital communication tools; in many cases, it intensified. The takeaway isn’t that offices failed to modernise, but rather that work reorganised around what these new tools could do.



In this case, paper persisted not only out of habit, but because of what it affords: it is easy to annotate, spread out, carry and view at a glance. This was all too clunky (or impossible) to perform via the digital alternatives.

At the same time, email and digitisation dramatically lowered the cost of producing and distributing communication. It was far easier to send more messages, to more people, more often.

Circling back to today

Will AI be different? If early signs are anything to go by, the answer is: not in the way we might hope.

Like earlier waves of workplace technology, AI is less likely to replace existing communication practices than to intensify them – but at least it might come with better grammar and a suspiciously upbeat tone.

Some new AI tools offer to manage your inbox entirely, feeding into broader privacy concerns about the technology.

At this moment, what a lot of these products seem to offer is not an escape from email, but a smoothing of its rough edges. Workers are using AI to soften otherwise blunt requests, modify their tone or expand what might otherwise be considered too brief a response.

Rather than removing the need to communicate, these tools offer pathways to make a delicate performance easier.

What email is actually for

Email, like many forms of communication, is as much about maintaining everyday relationships as it is about the transfer of information.

At work, it’s often about signalling competence, responsiveness, collegiality and authority. “Just looping someone in” or “circling back” are all part of our absurd office vocabulary, a shared dialect that helps us navigate hierarchy, soften demands and keep things moving – all without saying what we really think.

If AI lowers the effort required to produce these signals, it won’t necessarily reduce their importance, but it could unsettle things in rather odd ways.

If more people use AI to draft emails they don’t particularly want to write, we end up with a game of bureaucratic “mime”: everyone performing sincerity and quietly outsourcing it, and no one entirely sure how much of their inbox was actually written by a human.

The labour of email was never just about crafting sentences. It’s always been the scanning, the sorting and the deciding. AI doesn’t remove this burden. If anything, it amplifies it.

When everything arrives polished, everything looks important. That points to a deeper question for the future of work: if AI can perform responsiveness, why are we generating so many situations that still require it?

Looking forward

What would a workplace look like if email wasn’t the default solution to every coordination problem? Perhaps fewer performative check-ins, “just touching base”, “looping you in” or “following up on the below”. Clearer expectations about what actually requires a response, and what doesn’t.

Email, like paper, is likely to persist for good reasons. It is simple, flexible and universal. It allows things to be deferred, revisited, forwarded and quietly ignored.

But if AI is going to change any of this, my hope is that it makes visible how much of this is ritual, how much is habit, and how much has long been unnecessary.

And if the machines are happy to keep saying “hope this finds you well” to each other, we might finally have permission to stop.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• What the @ Sign Is Called Around the World: 25 Examples

• Q&A: Who’s responsible when AI makes mistakes?

• AI analysis of police body-camera footage raises Constitutional concerns, racial disparities


by External Contributor via Digital Information World

AI analysis of police body-camera footage raises Constitutional concerns, racial disparities

A new study analyzing thousands of officer-worn camera recordings found evidence of underreported police stops, troubling racial disparities in officer interactions, and widespread use of unclear language during consent searches.

Image: Raphael Lopes / Unsplash

Researchers at the University of Michigan, University of California-Davis and Stanford University say their findings raise constitutional concerns under both the Fourth and Fourteenth Amendments, involving protection from unreasonable searches/seizures and prohibiting discriminatory practices based on race and ethnicity, respectively.

The report highlights how artificial intelligence could transform police oversight by helping reviewers identify potentially problematic encounters hidden within millions of hours of body-camera footage. The research demonstrates the growing potential for AI-powered analysis to help courts, police departments and municipal governments better evaluate compliance while building greater public trust in law enforcement.

Using machine learning and natural language processing, researchers examined New York Police Department (NYPD) encounters captured on body-worn cameras, looking closely at whether officers followed legal standards governing stops, detentions and consent searches.

Among the study’s most significant findings:

  • Body-camera recordings could be classified as stops with over 80% accuracy, and underdocumented stops with over 70% accuracy based on language alone.
  • Using language models, reviewers could uncover over 50% of undocumented stops identified in manual audits by viewing a fraction (25%) of the footage they normally would.
  • Officers frequently relied on indirect or confusing phrases such as “Do you mind if I check?” rather than clearly asking for consent to search.
  • The word “consent” appeared in less than 13% of consent-search interactions reviewed.
  • Commands and indirect requests appeared more frequently in encounters involving Black and Hispanic civilians.
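The triage idea behind these findings can be illustrated without the study's trained models. The sketch below uses a deliberately transparent keyword scorer (the cue phrases are invented for illustration) to rank transcripts by how "stop-like" their language is, so reviewers watch the highest-scoring fraction of footage first; the study itself used trained language models, which is what achieves the accuracy figures listed above.

```python
# Invented cue phrases, for illustration only -- not the study's features.
STOP_CUES = ["stop right there", "put your hands", "don't move",
             "step out of the", "do you mind if i check", "can i search"]

def stop_score(transcript: str) -> int:
    """Count occurrences of stop-indicative phrases in a transcript."""
    text = transcript.lower()
    return sum(text.count(cue) for cue in STOP_CUES)

def triage(transcripts: dict[str, str], top_fraction: float = 0.25) -> list[str]:
    """Return IDs of the top fraction of encounters, most stop-like first,
    mirroring the idea of reviewing 25% of footage to find most stops."""
    ranked = sorted(transcripts, key=lambda k: stop_score(transcripts[k]),
                    reverse=True)
    keep = max(1, round(len(ranked) * top_fraction))
    return ranked[:keep]
```

A trained classifier replaces `stop_score` in the real pipeline, but the oversight workflow is the same: score everything, then spend scarce human review time on the encounters most likely to matter.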

Nicholas Camp, U-M assistant professor of organizational studies, said these patterns raise questions about whether some civilians clearly understood they could refuse searches and whether certain encounters were documented accurately.

The study stems from reforms ordered after the landmark 2013 federal court ruling in Floyd v. City of New York, in which the U.S. District Court for the Southern District of New York found that the NYPD’s stop-and-frisk practices violated constitutional protections against unreasonable searches and racial discrimination.

Following the ruling, the court appointed an independent monitor to oversee reforms involving NYPD training, supervision and investigative encounters. As part of those reforms, NYPD officers began using body-worn cameras, which captured numerous police-community interactions.

“These recordings provide a far clearer picture of officer behavior than written police reports alone,” Camp said.

The study, approved by the court in 2021, analyzed more than 1,700 encounters connected to an earlier City University of New York Institute for State and Local Governance review, more than 1,100 additional encounters reviewed by the Monitor team, and nearly 1,800 consent-search encounters from 2023.

AI models developed during the study successfully distinguished lower-level encounters from Level 3 stops—which legally require reasonable suspicion—with accuracy rates ranging from approximately 72% to 91%. Researchers say those tools could help oversight teams identify constitutional concerns faster and more consistently by prioritizing footage most likely to contain problematic interactions.
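The triage idea described above, ranking footage by how likely its transcript is to contain a problematic encounter so that reviewers watch the riskiest clips first, can be sketched in a few lines. This is an illustrative toy, not the study's pipeline: the actual work used language models trained on annotated NYPD transcripts, and the cue phrases, weights, and sample clips below are invented for demonstration.

```python
# Illustrative sketch only. The study's models were trained on annotated
# transcripts; these hand-written cue lists and weights are hypothetical
# stand-ins for that learned pipeline.
from dataclasses import dataclass

# Hypothetical cue phrases: commands and indirect search requests of the
# kind the study flags (e.g. "Do you mind if I check?").
STOP_CUES = ["put your hands", "don't move", "step out", "against the wall"]
SEARCH_CUES = ["do you mind if i check", "mind if i take a look"]
CONSENT_CUES = ["consent"]

@dataclass
class Clip:
    clip_id: str
    transcript: str

def risk_score(clip: Clip) -> float:
    """Score a transcript: command and indirect-search cues raise the
    score; an explicit mention of consent lowers it, since the encounter
    is more likely to be properly documented."""
    text = clip.transcript.lower()
    score = 0.0
    score += 2.0 * sum(cue in text for cue in STOP_CUES)
    score += 1.5 * sum(cue in text for cue in SEARCH_CUES)
    score -= 1.0 * sum(cue in text for cue in CONSENT_CUES)
    return score

def triage(clips: list[Clip], budget: int) -> list[str]:
    """Return the IDs of the `budget` highest-scoring clips, i.e. the
    fraction of footage a reviewer would watch first."""
    ranked = sorted(clips, key=risk_score, reverse=True)
    return [c.clip_id for c in ranked[:budget]]

clips = [
    Clip("A", "have a good evening, drive safe"),
    Clip("B", "step out of the car and put your hands on the hood"),
    Clip("C", "do you mind if i check your bag real quick"),
    Clip("D", "i'm asking for your consent to search the vehicle"),
]
print(triage(clips, budget=2))  # prints ['B', 'C']
```

With a review budget of two clips, the reviewer is pointed at the command-heavy stop (B) and the indirect search request (C) first, which is the same "see more by watching less" effect the study reports: finding over half of undocumented stops while viewing only a quarter of the footage.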

Researchers emphasized that artificial intelligence is not intended to replace human oversight, but instead serves as a tool to strengthen accountability, improve auditing and support ongoing police reform efforts.

“Our analyses identify troubling patterns in NYPD encounters, but also show a path forward: Body camera footage can be used as data to inform and measure changes in law enforcement,” Camp said.

The study’s authors also include Rob Voigt, assistant professor of linguistics at UC-Davis; Dan Sutton, director of Justice and Safety at the Stanford Law School Center for Racial Justice; and Jennifer Eberhardt, professor of organizational behavior and psychology at Stanford University.

Note: At the time of publication, we have reached out to the NYPD for comment regarding the study’s findings on body-camera analysis and will update this article if a response is received.

This post was originally published on the University of Michigan News and republished here with permission.

Reviewed by Irfan Ahmad.



by External Contributor via Digital Information World

Thursday, April 30, 2026

Standardised testing and scripted lessons are failing teachers and students alike, education expert warns

By Taylor & Francis

Geoff Masters challenges a system which teaches the same curriculum to children with very different comprehension levels.

Geoff Masters criticises age-based schooling, advocating personalised learning and teacher autonomy over standardised curricula.
Image: Rewired Digital / Unsplash

Is it time to ditch scripted lessons and heavily packed curricula to focus on individual student growth?

This is the question posed by education expert Geoff Masters, who argues that age-based expectations are not serving all children well, while scripted lessons are failing teachers and students alike.

Masters, the former head of the Australian Council for Educational Research, asks how well children are served by a system in which two pupils in the same class can differ by six or more years of learning but are taught the same material.

He argues this system fails children at either end of the scale – those who are struggling and those who are unchallenged. What if, he asks, instead of holding all pupils of the same age to the same learning expectations, schools based those expectations on where individual students are in their comprehension and growth?

“Too many students in our schools are being poorly served and left behind by machineries of schooling not fit for purpose,” Masters warns.

The problem with standardisation

Masters argues there is a fundamental flaw in the current system: the assumption that all students in the same grade are equally ready to learn the same material.

Research shows that children in the same classroom can have up to a seven-year difference in their reading and mathematics comprehension. This vast variation, Masters argues, is ignored by a system that prioritises standardisation over individual needs.

“By the middle years of school, many students have not learnt what the curriculum expected them to learn much earlier in their schooling,” Masters explains. He cites data showing how, across 38 developed countries, almost a third of 15-year-olds have difficulty demonstrating 5th and 6th grade mathematics content.

The picture in Australia

Masters’ arguments are presented against a backdrop of Australia’s declining performance in international assessments such as PISA. Between 2012 and 2022, there was no significant improvement in Australian students’ performance in reading, mathematics or science. In fact, long-term declines have been recorded across all three areas.

“Despite decades of reforms, the machinery of schooling has not delivered the improvements we need,” Masters says. “It’s time to question whether prescribing what every student must learn in each grade of school and testing to see whether they have learnt it is the best way to optimise learning and improve performance.”

Masters also explains how those who start the year behind are likely to stay behind. He explains: “When the curriculum expects all students in a grade to be taught the same content at the same time, those who begin well below grade level are disadvantaged. This disadvantage is compounded when students are required to move from one grade curriculum to the next based on elapsed time rather than mastery. Students who lack essential prerequisites often fall further behind as each grade’s curriculum becomes increasingly beyond their reach.”

The future of learning

Masters instead argues for a system that meets students where they are in their learning, rather than where their age or grade dictates they should be. He proposes replacing age-based expectations with personalised learning plans that track individual growth.

“Improved performance depends on meeting each student where they are with personally meaningful, well-targeted learning opportunities that build on what they already know,” Masters explains. “This approach includes all students, including neurodiverse children and others with special needs.”

This approach would not only benefit students, he suggests, but also empower teachers to use their professional expertise to design tailored learning experiences.

One of the most concerning trends in education, in Masters’ view, is the rise of scripted lessons.

“Scripted lessons turn teaching into the delivery of ready-made solutions created outside the classroom,” Masters says. “They undervalue teachers’ expertise in what is arguably the essence of effective teaching: establishing where individuals are in their learning and designing opportunities to promote further growth.”

Masters calls for a return to professional autonomy, where teachers are trusted to make decisions in the best interests of their students.

Masters envisions a future where education systems embrace diversity and difference.

“Rather than expecting students to fit the expectations of schooling, the challenge is to redesign school structures and processes to better meet the needs of individual learners,” Masters concludes.

Further information: The Children We Leave Behind: How School Could Be Done Differently, by Geoff Masters (Routledge, 2026). ISBN: 9781041279655 (paperback), 9781041279662 (hardback), 9781003757122 (eBook). DOI: https://doi.org/10.4324/9781003757122

This post was originally published on Taylor & Francis Newsroom and republished on DIW with permission.

Reviewed by Irfan Ahmad.

by External Contributor via Digital Information World