Thursday, May 14, 2026

From AirTags to AI nudification: the growing toolkit of technology‑facilitated abuse

Jason R.C. Nurse, University of Kent and Lisa Sugiura, University of Portsmouth

It’s hard to overstate the impact that artificial intelligence has had since the release of generative AI platforms such as ChatGPT just three years ago. While they have led to countless advances in how we live and work, they have also been at the centre of controversies around domestic and sexual abuse.

The use of the AI tool Grok to remove women’s clothing in images brought the issue of so-called technology-facilitated abuse to the fore. But it’s a problem that predates AI – with Bluetooth trackers, wearable devices, smart speakers, smart glasses and apps all used by abusers to control, harass or stalk their victims.

This abuse has worsened as tech has become more embedded in people’s lives, and as AI advances rapidly. But governments have struggled to make tech companies design systems that minimise misuse, and to hold them accountable when things go wrong.

Our own research has confirmed that technology misuse has increased and that its harms are significant. But governments and the tech sector are doing little to combat it – despite numerous examples of how tech can enable abuse.

Case 1: Smart glasses

The growing availability of smart glasses – which look like normal eyewear but can do many things a smartphone does – has led to reports of secret filming. In some cases, videos were posted online, often attracting degrading and sexually explicit comments.

Image: Ray-Ban Stories by Cavebear42, CC BY-SA 4.0, via Wikimedia Commons

Meta has said its smart glasses have a light to show when they are recording and anti-tamper tech to make sure the light cannot be covered. But there appear to be workarounds.

In England and Wales, voyeurism legislation focuses on private spaces, and harassment laws do not specifically apply to targeted recording and online distribution. However, the UK Information Commissioner’s Office is investigating Meta after subcontractors were allegedly able to access intimate footage from customers’ glasses. This is in addition to a lawsuit in the US, which alleges Meta violated privacy laws and engaged in false advertising. Meta has said that it takes the protection of data very seriously and that faces are usually blurred out. It also discloses in its UK terms of service the potential for content to be reviewed either by a human or by automation.

Case 2: Bluetooth trackers

Apple’s AirTags, and other devices built for tracking personal items, can be misused to stalk and harass people, particularly women. Apple released updates to AirTags and other trackable tech so that potential victims would be alerted if an unknown device was travelling with them. But for many, this feature should have existed from the outset.

The law in England and Wales is clear that attaching a tracking device to someone without their knowledge is a criminal offence. But despite convictions, the ease of covert monitoring with these devices means people remain at risk.

Case 3: AI deepfake and ‘nudification’ apps

Apps can now “nudify” people, while AI is increasingly used to make non-consensual deepfake pornography. In January, several instances of xAI’s assistant Grok being used to create sexualised photos of women and minors came to light. All it took to create the images were some simple prompts.

After criticism, xAI decided to limit this feature. But the safeguards appear to apply only to certain jurisdictions and certain users.

In February, the UK government announced legal changes similar to the Take It Down Act in the US, which will require tech platforms in the UK to remove non-consensual intimate images within 48 hours. Failure to do so will result in fines and services being blocked, and the law is expected to take effect in the summer.

Using automated technology known as “hash matching”, victims will only need to report an image once to have it removed from multiple platforms simultaneously. The same images would then be automatically deleted every time anyone attempted to reupload them. Nudification apps and using AI chatbots to create deepfake pornography will also become illegal in the UK.
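
To see how this works in practice, here is a minimal Python sketch of hash matching against a shared blocklist. It uses an exact cryptographic hash (SHA-256) purely as a stand-in; real schemes rely on perceptual hashes such as PDQ or PhotoDNA so that resized or re-compressed copies still match, and the function names and in-memory blocklist here are illustrative rather than any platform’s actual API.

```python
import hashlib

# Shared blocklist of image fingerprints. In a real deployment this would be a
# central service that participating platforms all query; here it is just a set.
blocklist: set[str] = set()

def image_hash(image_bytes: bytes) -> str:
    """Fingerprint an image. SHA-256 matches only exact copies; production
    systems use perceptual hashes (e.g. PDQ) to catch near-duplicates."""
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """Called once when a victim reports an image. Only the hash is stored
    and shared, never the image itself."""
    blocklist.add(image_hash(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Run by each platform on every upload attempt."""
    return image_hash(image_bytes) not in blocklist

reported = b"...raw bytes of the reported image..."
report_image(reported)
print(allow_upload(reported))  # False: any later re-upload of the same image is blocked
```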

But there is more to be done. Mitigating risks must be embedded at the design stage to prevent these images being created in the first place. The rise of romantic and sexual chatbots means this has become more urgent.

And beyond deepfakes and nudification, AI can also enable harassment at scale. This includes directly targeting someone with abusive content, or fake images or profiles that impersonate victims for so-called “sextortion” scams.

Challenges ahead

These issues must be prevented with robust guardrails built into these technologies – this is what prioritising user safety should look like. But these guardrails have often failed, and safety tools are usually added only after public pressure rather than built into platforms from the start.

Governments have allowed regulation to fall behind fast-paced developments. Tech companies have grown quickly, but laws and enforcement have not kept up. At the same time, police and legal systems are often under-trained or unclear on how to handle digital harm.

Even where there is regulation, such as the UK’s Online Safety Act, penalties for platforms that allow abuse are often weak or unenforceable. The regulator Ofcom has issued only voluntary guidance to tech companies on how to better protect women and girls on their platforms. Campaigners have called for this to be made mandatory, with clear penalties for companies that do not comply, placing it on a level legal footing with child sexual abuse and terrorism content.

As AI advances, tech companies must prioritise system design that puts user safety first. But until governments enforce real consequences, the tech sector will be able to profit from harm while those using the platforms bear the cost.

Jason R.C. Nurse, Reader in Cyber Security, University of Kent and Lisa Sugiura, Professor of Cybercrime and Gender, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Young People and Professionals Praised ChatGPT’s Empathetic Mental-Health Answers, Though Researchers Warned AI Can Invent Information


by External Contributor via Digital Information World

Young People and Professionals Praised ChatGPT’s Empathetic Mental-Health Answers, Though Researchers Warned AI Can Invent Information

Artificial intelligence provides good answers to mental health questions. Young people even like ChatGPT’s responses better than healthcare professionals’ advice.

Image: Tim Witzdam - unsplash

When young people ask about mental health, ChatGPT’s answers are both more useful and more relevant than those from healthcare professionals, according to the study’s young respondents. The healthcare professionals were also satisfied with the answers from artificial intelligence.

Easy to understand

“Professionals and young people both found that ChatGPT was able to provide advice that they perceived as relevant, empathetic and easy to understand,” says SINTEF researcher Marita Skjuve.

Skjuve and her colleagues at SINTEF and the University of Oslo selected real questions that young people had posed to a Norwegian charity about their own mental health. Both ChatGPT and professionals working for the youth information service ung.no then answered the questions.

The survey participants – 123 young people and 31 health professionals – reviewed the answers. They did not know who had answered what, nor had they been told what the researchers were planning to investigate.

ChatGPT scored higher

In the blind test, participants were asked to assess how useful, relevant, understandable and empathetic the answers were. They were also asked to choose the answer they liked best and explain why. The young people consistently gave ChatGPT the highest ratings. The professionals also rated ChatGPT’s answers more highly, although the differences were less pronounced.

“We observed that young people like answers from ChatGPT a little better because they are easy to understand and are perceived as being immediately useful. The answers describe what the youth can do to solve a possible problem related to their mental health,” says Skjuve.

“And we should also remember that ChatGPT is pretty good at giving neat and clear answers with bullet points,” she says.

Good, relevant, understandable and useful answers? Both young people and health professionals assessed how ChatGPT answered questions about mental health. The young participants were the most positive, but health professionals also thought that the AI answers were explained well. Table: Asbjørn Følstad

The health professionals did not always see it the same way. They tended to be a little more critical of ChatGPT’s diagnostic language, and they did not always find ChatGPT’s answers as validating or empathetic as those from a professional.

“But on the whole, we see that both groups think that ChatGPT provides good answers that can help,” says the researcher.

Diagnosis risk

The study did not assess whether the answers contained errors, and the professionals did not point out any. They were not asked to do so, but nor did anyone volunteer that anything was outright wrong.

A few people nevertheless pointed out that ChatGPT could have a tendency to try to make a diagnosis. Health professionals who work for aid organizations have to abide by strict guidelines. They are supposed to give advice – but not provide direct health care or make diagnoses. ChatGPT has no such guidelines.

Skjuve wonders whether this could be a reason why ChatGPT is perceived as more practical and useful.

Professionals can learn from AI

The question, then, is whether artificial intelligence like ChatGPT should be used to help with mental health issues.

“What we’ve learned is that ChatGPT is capable of creating answers that young people understand and find easy to read. We humans can learn from that,” says Skjuve. She suggests that perhaps AI can support the work of a professional and help clarify the information for a young person.

Skjuve can imagine AI as a support tool that helps professionals respond to young people better and faster. Mental health support could then be scaled up, allowing professionals to reach more young people who need help while retaining professional control and assuring the quality of the AI answers.

“The last point is very important. AI can often give the wrong answer, and this can be critical in matters of mental health,” says Skjuve. She believes the future may be hybrid services where AI and health personnel work more closely together to formulate good answers.

She thinks the danger lies in young people going to AI to get an answer right away instead of waiting two to three days for a quality-assured response from a health service.

“AI does not always understand the context and makes up answers. That is why quality assurance from healthcare professionals is important in this area,” says Skjuve.

Researcher not surprised

The SINTEF researcher is not really surprised by the findings.

“In other studies we have seen that AI can often be perceived as responding better than health personnel do. AI is often good at responding in a welcoming and empathetic way.”

The researchers have now conducted a follow-up study without a blind test. In this case, the group involved knew who had actually answered each question. Participants appear to prefer the answers provided by the health professionals and to be more sceptical of AI, but the results are not yet conclusive and have not been published.

Reference: Marita Skjuve, Asbjørn Følstad and Petter Bae Brandtzæg: ChatGPT as a mental health advisory service: Comparing evaluations from youth and health professionals. Digital Health, February 2026, doi: 10.1177/20552076261427447.

Reviewed by Irfan Ahmad.

Read next:

• How AI can lead to false arrests and wrongful convictions

• Oxford Study Finds Friendly AI Chatbots Make More Mistakes and Agree More with False Beliefs
by External Contributor via Digital Information World

Wednesday, May 13, 2026

How AI can lead to false arrests and wrongful convictions

Maria Lungu, University of Virginia and Steven L. Johnson, University of Virginia

AI systems generate likelihoods, but users misinterpret them as definitive answers in critical decision-making contexts.
Image: Matthias Kinsella / unsplash

In Baltimore County, Maryland on Oct. 20, 2025, a 17-year-old student named Taki Allen was sitting outside his high school after football practice when an artificial intelligence-enhanced surveillance camera falsely identified the Doritos bag in his pocket as a gun. Within moments police cars arrived, officers drew their weapons and Allen was forced to his knees and handcuffed while they searched him. All they found was a crumpled bag of chips. The AI’s misidentification and the human decisions that followed turned a normal evening into a traumatic confrontation.

On Dec. 24, 2025, Angela Lipps, a Tennessee grandmother, was released after spending five months in jail because facial recognition software had incorrectly connected her to fraud crimes in North Dakota, a state she had never visited. Police had arrested her at gunpoint while she was babysitting her four grandchildren.

These are unfortunate examples of how AI can lead to mistreatment of people because of technical flaws as well as misplaced human faith in the technology’s supposed objectivity. These cases involve different tools, but the underlying issue is the same. AI systems produce probabilities, and people treat them as certainties.

We are researchers who study the intersection of technology, law and public administration. In researching how police departments use AI and how digital technologies operate in a democratic society, we have seen how quickly the shift from probabilistic prediction to operational certainty happens in practice.

AI policing tools are used in dozens of U.S. cities, although no public registry tracks the full footprint. The tools ingest historical crime data and score neighborhoods on predicted risk so officers can be routed toward the resulting hot spots. The mechanism is straightforward, but its consequence is not. Once a system signals a possible threat, the question is no longer how certain the prediction is but what to do about it. A statistical output turns into a deployment decision, and the uncertainty that produced it gets lost on the way.

A matter of probabilities

When generative AI models such as ChatGPT or Claude respond to human requests, they are not searching a database and pulling out facts. They are predicting the most likely answer based on patterns in data they have been trained on. When asked, “Who invented the light bulb?” the models do not go to a source or fact-check a finding. They generate a statistically probable answer: “Thomas Edison.” The reply might be right, but it might not capture the full story – such as Joseph Swan’s parallel invention at the same time as Edison’s. The danger arises when people believe that the model is retrieving truth rather than generating likelihoods.

This distinction matters. The most probable response is not the same as a factually verified answer, complete with context.
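
The distinction can be made concrete with a toy Python sketch: generation selects the most probable candidate, and nothing in that step verifies it. The candidate answers and probabilities below are invented for illustration, not output from any real model.

```python
# Invented, illustrative probabilities over candidate answers to
# "Who invented the light bulb?" - not the output of any real model.
candidates = {
    "Thomas Edison": 0.86,
    "Joseph Swan": 0.09,
    "Nikola Tesla": 0.05,
}

# Generation simply picks (or samples) the most probable candidate...
answer = max(candidates, key=candidates.get)
print(f"Generated answer: {answer} (probability {candidates[answer]:.2f})")

# ...but no step here consults a source or checks context. The 0.86 reflects
# patterns in training data, not a verified fact with the full story attached.
```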

Police handcuffed teenager Taki Allen at gunpoint after an AI camera system incorrectly indicated he had a gun.

This reality can be highly problematic for policing and law. For example, when law enforcement agencies use AI systems trained on geographical data to estimate where criminal activity is likely to occur, the algorithms analyze historical crime data and geographic patterns. These systems generate statistical risk scores or heat maps for locations based on prior incidents. But such predictions may have little bearing on who was involved in a new crime in the area, even if an algorithm generates information that sounds authoritative.

Some researchers have argued that predictive policing systems do not increase the likelihood that racial minorities will be arrested more often relative to traditional policing practices. The broader concern, however, is not limited to measurable disparities in arrest outcomes alone. It is about how probabilistic predictions can become standardized operational decisions absent further verification.

Artificial intelligence researchers caution against using these models in isolation for crime and legal proceedings or decision-making. Research at the University of Virginia’s Digital Technology for Democracy Lab with police chiefs shows that some law enforcement groups follow strict policies that dictate when technology is used in tandem with, or in place of, human discretion, while others have no such policy.

What most users do not realize is that AI systems rarely produce binary answers: yes or no, a positive identification or a negative one. They generate probabilities. Some systems assign scores that assess the system’s confidence in a prediction. In those cases, engineers set a confidence threshold, a level of certainty that determines when the system should trigger an alert about a possible threat. You can think of this threshold as settings on a control knob. A 95% confidence level, for example, indicates that the model considers its interpretation to be highly likely.

A low threshold catches more potential threats but increases false alarms. A high threshold reduces mistakes but risks missing real dangers. Either way, these algorithmic thresholds are often invisible to the public and are set quietly by vendors or agencies, even though they shape when police action begins.
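
A short simulation makes the trade-off concrete. The confidence scores and ground-truth labels below are synthetic, but they show how moving the threshold shifts the balance between false alarms and missed threats.

```python
# Synthetic outputs from a hypothetical weapon detector:
# (model confidence that a threat is present, whether a threat was really there).
detections = [
    (0.97, True), (0.91, True), (0.88, False),   # e.g. a chip bag mistaken for a gun
    (0.72, True), (0.65, False), (0.55, False),
    (0.40, True), (0.30, False), (0.15, False),
]

def evaluate(threshold: float) -> tuple[int, int]:
    """Count false positives (alerts on harmless items) and false negatives
    (real threats the system stays silent about) at a given threshold."""
    false_pos = sum(1 for score, real in detections if score >= threshold and not real)
    false_neg = sum(1 for score, real in detections if score < threshold and real)
    return false_pos, false_neg

for threshold in (0.5, 0.7, 0.9):
    fp, fn = evaluate(threshold)
    print(f"threshold {threshold:.1f}: {fp} false alarms, {fn} missed threats")

# Lower thresholds mean more alerts and more wrongful stops; higher thresholds
# mean fewer mistaken identifications but more genuine threats slip through.
```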

Angela Lipps was unjustly jailed for more than five months based on a mistake by a facial recognition system.

Where to draw the line

In medicine, these kinds of trade-offs are explicit. Diagnostic tools are calibrated on the relative harm of different errors. In infectious disease settings, for instance, systems that detect infections are often designed to accept more false positives to avoid missing contagious individuals. Then medical professionals look into the human cases. And the algorithm-based decisions are subject to professional standards, ethics reviews and regulatory oversight.

In policing, an AI system must balance false positives, where the system flags a threat that does not exist, and false negatives, where it fails to detect a real danger. The trade-off carries significant consequences. A lower threshold may generate more alerts and allow officers to intervene earlier, but it also increases the risk of mistaken identifications, which happened to Angela Lipps, or escalated encounters like the one Taki Allen experienced. A higher threshold may reduce wrongful interventions but could allow legitimate threats to go undetected.

Some law enforcement agencies argue that acting on imperfect signals is preferable to missing serious risks. But lowering the bar for algorithmic alerts based on probabilistic estimates effectively expands the number of people subjected to police attention. It is important to realize that these thresholds are not neutral features of the technology; they are choices embedded by the creators in the model’s code. Decisions about where to draw the line determine when an algorithmic suspicion becomes a real-world police action, even though the public rarely sees or debates how those thresholds are set.

Limits of optimization

Developers often use several methods to determine where to set a confidence threshold. Techniques such as “receiver operating characteristic curve analysis” examine how changing the threshold for an alert alters the balance between correctly identifying real events and mistakenly flagging harmless ones. Precision–recall analysis examines a similar trade-off, asking how accurate the system’s alerts are relative to the number of incidents it successfully detects.

These approaches could help calibrate systems more responsibly by testing how often an algorithm wrongly flags people or locations. Fine-tuning can improve system performance. But the techniques cannot resolve the underlying question of how much algorithmic uncertainty society is willing to tolerate.
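
As a sketch of what that calibration looks like in code, the scikit-learn snippet below computes ROC and precision-recall curves from labelled validation data. The labels and scores are synthetic; which point on the curves a deployment actually operates at is the policy choice these techniques cannot make.

```python
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

# Synthetic validation data: 1 = a real incident, 0 = harmless,
# paired with the detector's confidence scores.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
y_score = np.array([0.95, 0.90, 0.70, 0.68, 0.55, 0.52, 0.40, 0.30, 0.20, 0.15])

# ROC analysis: how the true-positive and false-positive rates move together
# as the alert threshold is swept from strict to lenient.
fpr, tpr, roc_thresholds = roc_curve(y_true, y_score)
for f, t, thr in zip(fpr, tpr, roc_thresholds):
    print(f"threshold {thr:.2f}: catches {t:.0%} of real incidents, "
          f"falsely flags {f:.0%} of harmless cases")

# Precision-recall analysis: of the alerts raised, how many are correct (precision),
# versus how many real incidents are found at all (recall).
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_score)
print("precision:", np.round(precision, 2))
print("recall:   ", np.round(recall, 2))
```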

In law, legal standards of proof determine how convincing evidence must be before a judge or jury can rule in favor of a plaintiff or defendant. Courts use formal standards of proof depending on the stakes, such as probable cause, preponderance of the evidence and beyond a reasonable doubt. These standards reflect a societal judgment about how much uncertainty is acceptable before exercising legal authority. A court does not accept a guess or a prediction; it follows a process to weigh evidence. Unlike humans, an AI model does not usually say, “I’m not sure.” A model typically has confidence in its reply, even when the answer is incorrect.

Stakes are rising as AI enters the courtroom, law enforcement, the classroom, the doctor’s office and the public sector. It is important for people to understand that AI does not know things the way many assume it does. It does not distinguish between “maybe” and “definitely.” That is up to us. We believe that technologists should design systems that admit uncertainty and need to educate users about how to interpret AI outputs responsibly.

Maria Lungu, Postdoctoral Researcher of Law and Public Administration, University of Virginia and Steven L. Johnson, Associate Professor of Commerce, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• One in Five U.S. Jobs Faces High Risk of AI Automation

• Is your AI chatbot manipulating you? Subtly reshaping your opinions?


by External Contributor via Digital Information World

Is your AI chatbot manipulating you? Subtly reshaping your opinions?

Richard Lachman, Toronto Metropolitan University

A billboard tries to sell you something. So does a used car salesman. But no matter how smooth the pitch, you’re quite aware of the profit motive, and you can walk away at any time.

What if that pitch is invisible, plays to your unique fears and vanities, and is delivered in a voice that sounds like a trusted friend? Generative AI has changed the equation of persuasion entirely: chatbots can now deliver a personalized, adaptive and targeted message, informed by the most intimate details of your life.

Large language models (LLMs) can hyper-target messages by drawing from your social media posts and photos. They can mine hundreds of previous chatbot conversations in which you asked for relationship advice, discussed your parenting fails and shared your health concerns and financial woes. They can also learn from each interaction, refining their manipulation in real time, targeting your unique and individual tastes, preferences and vulnerabilities.

Studies show this kind of personalized content to be 65 per cent more persuasive than messages from humans or from non-personalized AI. It is four times as effective at changing political opinions as advertising. It could be a powerful tool for social change — used for the good, or for nefarious purposes.

This makes one feature especially troubling: Each conversation is private. It is not monitored, never audited and doesn’t happen in the public eye.

This isn’t advertising. It’s something we don’t have words for yet, and we’re living inside it.

Convincing arguments

In my book Digital Wisdom: Searching for Agency in the Age of AI, I explore how large language models introduce a new frontier in persuasion — one where AI systems can draw upon a huge amount of data about the world, language and you to tailor a highly personalized pitch.

Consider how this might work: You’re a nurse. Through your employer’s AI platform, you’ve shared your sleep problems, burnout and the financial stress of a recent divorce. Now the hospital is short-staffed and offering shifts at a reduced rate calculated by software they license.

You ask the AI chatbot whether you should take them. It knows you’re exhausted. It knows you’re behind on bills. It knows exactly which argument could convince you one way or the other. Who is it working for in that moment?

As companies like Meta and IBM explore how AI can hyper-personalize ads for specific audiences, the dividing line between tools that help users find what they genuinely want, and those that manipulate them against their interests, becomes increasingly important.

Friend or stranger?

Let’s look at another example. Imagine the following messages from your favourite AI chatbot or companion:

I noticed your sleep patterns haven’t been great lately, averaging only 5.4 hours, with lots of restless periods. That’s common when dealing with relationship stress. Your partner just went back to work and 76 per cent of couples experience strain during career transitions.

A new sleep medication has shown effectiveness for relationship-linked insomnia. Your insurance would cover it with just a $15 contribution. Would you like me to schedule a telehealth appointment for tomorrow at 2 p.m.? I see you have a break in your schedule.

This might feel great, like advice from a thoughtful friend who knows you well. It might also feel terrifying, as if a manipulative stranger has read your diary.

Given that people are increasingly turning to AI for medical or mental health advice, despite studies showing this advice to be problematic almost 50 per cent of the time, a manipulative stranger could cause real harm.

The danger here isn’t just the precision of the targeting. This content is also impossible to police. What you view can’t be tracked by watchdogs, since you’re the only person who ever sees it.

While governments don’t typically police the content of political ads, beyond transparency about their funding, we often rely on public outcry and the media to expose campaigns that spread falsehoods. If an AI personalizes every message for an individual, there is no trace left behind.

Reshaping our worldview

Perhaps most concerning is that these systems could gradually reshape our worldview over time.

Scholars have long argued that the algorithms used by social networking sites and search engines create filter bubbles, in which we are fed well-crafted text, video and audio content that either reinforces our worldview or exerts influence towards someone else’s.

Are AI chatbots like Claude, ChatGPT, Gemini and DeepSeek helping you think, or subtly shaping your thoughts?
(Unsplash)

By controlling what information we see and how it’s presented, AI systems could slowly shift how we think about and interpret the world around us, and even change our understanding of reality itself.

This capability becomes particularly concerning when combined with emotional manipulation. Vendors suggest their AI systems can gauge a user’s emotional state through text analysis, voice patterns or facial expressions, and adjust their persuasive strategies accordingly.

Are you feeling vulnerable? Lonely? Angry? The system could modify its approach to exploit those emotional states. Even more troubling, it could deliberately cultivate certain emotional states to make its persuasion more effective.

Preliminary research shows that AI models tend to flatter users, affirming their users’ actions 50 per cent more than other humans do, even when the actions involve potential harms. Further research shows that chatbots use deliberate emotional manipulation strategies — such as “guilt appeals” and “fear-of-missing-out hooks” — to keep us chatting when we try to say goodbye.

There have also been cases of AI chatbots allegedly endangering users, encouraging suicidal thoughts or giving detailed advice on how a user could harm themselves.

The guardrails set up by corporations to protect users from harm have also proven surprisingly easy to bypass.

Design matters

Persuasion is not a side effect of technology — it’s often the point. Every interface, every notification, every design decision carries with it an intent to influence behaviour.

Sometimes that influence is welcome: reminders to take medication, encouragement to exercise or nudges to donate blood that reinforce values we already hold. But sometimes persuasion serves someone else’s agenda — nudging us to buy, to scroll, to work harder or to give up privacy.

The same persuasive techniques can empower or exploit, depending on who controls the system, what goals they pursue and whether they have meaningful consent.

Design matters, whether in public health, the workplace or daily life. We must ask hard questions about intent, agency and power. Who benefits from a design? Who is being persuaded, and do they know it?

The technologies we build should support reflective choice, not undermine it. As AI continues to shape how we think, feel and act, our ethical obligations grow sharper: to create systems that are transparent, that prioritize user dignity and that reinforce our capacity for independent judgment. We don’t just need innovation — we need wisdom.

Richard Lachman, Director, Zone Learning & Professor, Digital Media, Toronto Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Instagram can now read all users’ private messages. Will this make kids safer or just boost ad targeting?


by External Contributor via Digital Information World

Tuesday, May 12, 2026

Instagram can now read all users’ private messages. Will this make kids safer or just boost ad targeting?

Joel Scanlan, University of Tasmania

Instagram has ended encrypted direct messages, reigniting debate over how to balance child safety, surveillance concerns and user privacy.
Image: Shutter Speed / unsplash

As of May 8, end-to-end encryption is no longer available for direct messages on Instagram.

Meta, in announcing the policy reversal, said it had done so because few people used the feature. But this has raised questions about its impact on user privacy and whether it will improve child safety on the platform.

Instagram has long been a focal point for discussion about online safety – whether in relation to body image concerns, cyberbullying or sexual extortion. This policy change by Meta directly affects how safety and moderation are implemented in private messages.

This is important considering research has found that perpetrators first contacted roughly 23% of Australian sexual extortion victims on Instagram, the second most frequent method of contact, behind Snapchat (at 50%).

What is end-to-end encryption?

End-to-end encryption is a way of scrambling a message so only the sender’s and recipient’s devices can read it. The platform carrying the message, in this case Instagram, can’t access it.
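
As a minimal illustration of the principle (a sketch, not Instagram’s or WhatsApp’s actual protocol), the Python snippet below uses the PyNaCl library: each device holds its own private key, so the platform relaying the message only ever sees ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; the private key never leaves the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 7?")

# The platform relaying `ciphertext` cannot read it: without Bob's private key
# it is just opaque bytes.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at 7?'
```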

This same technology is present by default on WhatsApp, Signal, iMessage, and (since late 2023) Facebook Messenger.

Meta’s CEO Mark Zuckerberg first promised to bring end-to-end encryption across Meta’s messaging products back in 2019, under the slogan “the future is private”.

Instagram tested encrypted direct messages in 2021. It rolled them out as an opt-in feature in 2023.

End-to-end encrypted direct messages never became the default, and the low opt-in rate is Meta’s justification for removing the feature. As a spokesperson told The Guardian:

Very few people were opting in to end-to-end encrypted messaging in DMs, so we’re removing this option from Instagram.

There is a circular logic to this: Meta has killed off a feature it buried so deep that most users never knew it existed, then cited low usage as the reason for its removal.

What does this mean for Instagram users?

In practical terms, every message you send on Instagram now travels in a form Meta can read.

Meta’s privacy policy lists the content of messages users send and receive among the data it collects. In principle, this enables the company to use this data to personalise features, train artificial intelligence (AI) models, and deliver targeted advertising.

While Meta has publicly committed not to train its AI models on private messages unless users actively share them with Meta AI, it has made no equivalent public commitment about advertising.

That leaves open the possibility that Meta could use unencrypted Instagram direct messages for ad targeting. And without encryption, Meta’s AI commitment is now backed by policy alone, not by the technology itself.

A clear reversal

This reads as a clear reversal of the privacy-first posture Zuckerberg announced seven years ago.

Meta has been under sustained pressure from law enforcement, regulators and child protection organisations who argue end-to-end encryption creates spaces where platforms can’t detect child sexual exploitation and grooming. Australia’s eSafety Commissioner has been clear that the deployment of end-to-end encryption “does not absolve services of responsibility for hosting or facilitating online abuse or the sharing of illegal content”.

This argument deserves to be taken seriously. The harms are real and disproportionately fall on young people.

However, sexual extortion research shows perpetrators don’t tend to stay on the platform where they make first contact, with more than 50% of sexual extortion victims saying perpetrators asked them to switch platforms.

Meta still uses end-to-end encryption on its other platforms, such as WhatsApp and Facebook Messenger, and it needs to apply a consistent approach to child safety. Predators routinely ask victims to switch platforms, so the company’s safety approach needs to work for Instagram and their end-to-end encrypted services.

A false choice

Meta and privacy advocates often frame this as a choice between end-to-end encryption or child safety. But that’s a false choice. It’s not an “either-or” situation, even if they make it sound like one.

The technology already exists to detect harmful content while keeping messages encrypted in transit. It just has to run in the right place: on the user’s device, before the device encrypts and sends the message, or after it receives and decrypts it.
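
That ordering can be sketched in a few lines of Python. The looks_harmful classifier below is a hypothetical stand-in for an on-device model (a nudity or grooming detector, say); the encryption step reuses the PyNaCl pattern from the sketch above, and nothing unencrypted ever leaves the device.

```python
from nacl.public import Box, PrivateKey, PublicKey

def looks_harmful(message: bytes) -> bool:
    """Hypothetical on-device classifier (e.g. a small nudity or grooming
    detection model). Runs locally; the plaintext is never sent anywhere."""
    return b"example-harmful-content" in message  # placeholder logic only

def send_message(message: bytes, my_key: PrivateKey, their_key: PublicKey):
    # 1. The safety check runs on the sender's device, on the plaintext.
    if looks_harmful(message):
        # Hypothetical UI step: warn, blur, or ask the user to confirm.
        print("On-device safety feature triggered before encryption.")
        return None
    # 2. Only then is the message encrypted and handed to the platform,
    #    which still sees nothing but ciphertext.
    return Box(my_key, their_key).encrypt(message)

sender, recipient = PrivateKey.generate(), PrivateKey.generate()
send_message(b"hello", sender, recipient.public_key)
```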

On-device approaches have a contested history, and any deployment must be genuinely privacy-preserving by design. But technology companies must weigh those objections against the harms that continue to occur. A safety-by-design approach is needed.

On-device safety measures have been demonstrated at scale with Apple’s on-device nudity detection for images sent or received via Messages, AirDrop and FaceTime. A 2025 study demonstrated high-accuracy grooming detection using Meta’s AI model designed specifically for on-device deployment on mobile phones.

Recently, both Apple and Google have started to take measures towards app store–based age verification in some jurisdictions.

The highest-profile real-world deployment of these is Apple enabling device-level privacy-preserving age verification in the UK.

Social media and private messaging companies, along with operating system vendors (Microsoft, Apple, and Google), all have a role to play in ensuring harmful content is detected, whether or not end-to-end encryption is used. Progress has been slow. But we, as a community, need to demand more from these companies.

Joel Scanlan, Adjunct Associate Professor, School of Law; Academic Co-Lead, CSAM Deterrence Centre, University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• What happens when scientists trust AI more than colleagues?

• Study: Firms often use automation to control certain workers’ wages


by External Contributor via Digital Information World

What happens when scientists trust AI more than colleagues?

Sungho Hong, The Institute for Basic Science and Victor J. Drew, The Institute for Basic Science

Image: Tima Miroshnichenko / Pexels

Artificial intelligence has crossed a threshold in the modern workplace. It is being used for everything from helping employees manage schedules to supporting financial forecasts. A similar shift is now unfolding inside research laboratories.

There is currently a boom in national initiatives to accelerate the integration of AI into science. These include the US Genesis Mission and South Korea’s AI Co-Scientist Challenge. But despite clear benefits, we believe these institutional drives are neglecting important issues that carry immense risks for scientific research.

Today, more than half of researchers use AI for work tasks including reviews of academic journals and designing experiments.

AlphaFold is an AI tool developed to predict the structures of proteins for scientific research. Working out protein structures was incredibly time-consuming before its release – taking years in some cases. The same tasks now take hours. AlphaFold’s creators were recognised with a share of the 2024 Nobel Prize in Chemistry.

AI tools for use in medicine now assist with everything from the interpretation of results from X-rays and MRIs to supporting doctors’ decisions on the diagnosis and treatment of disease.

Our key concern is that hasty adoption of AI may gradually erode the scientific culture and human relationships that sustain rigorous research. It starts with the erosion of core thinking skills among researchers, as they increasingly rely on AI to do that thinking for them. This can alienate researchers from the deeper reasoning behind their work.

Loss of independent thinking

Early-career scientists are particularly vulnerable, because they are still developing their scientific reasoning. Troubleshooting skills and the critical evaluation of ideas may be outsourced to AI systems.

AI’s fluent, confident and immediate responses can easily be mistaken for authoritative information. Once researchers begin to treat AI outputs as implicitly correct, the responsibility for judgment calls may gradually shift from them to their machines.

AI’s persuasive arguments, probably drawn from mainstream ideas in their training data, could replace more rigorous, time-consuming and creative research approaches. These are traditionally shaped through critical back-and-forth discussions between researchers.

This can evolve into over-dependence. As reasoning is delegated to AI, researchers become less confident at working unaided. Unfortunately, modern scientific labs are full of conditions that reinforce this dependence, such as intense competition, long hours and frequent isolation.

Limited mentorship, and feedback from colleagues that is delayed, critical or politically influenced, can exacerbate the problem. In contrast, AI provides an immediate, patient and nonjudgmental alternative.

Scientists interact with AI systems daily in order to check computer code, revise illustrations or charts, draft the language for grant applications, clarify scientific concepts, and at times, ask for personal advice.

As researchers begin to trust the AI assistant, it can start to function less like a tool and more like a companion. This carries a risk of emotional dependency, too. When OpenAI retired its GPT-4o model, many users expressed a form of grief.

Replacing relationships

Another important concern is the potential for replacement of human relationships in the office or research lab. AI is always available, nonjudgmental, noncompeting – and indifferent to office politics, with no ego to defend. It remembers context, adapts to individual working styles, and offers reassurance without social cost.

Human scientific relationships are more complicated, involving nuance, criticism, time constraints, hierarchy – and sometimes, ulterior motives. For early-career researchers especially, these interactions can feel risky.

Critical feedback from humans can feel adversarial, while AI responses feel supportive. So, early-career scientists might have good reason to prefer testing ideas or seeking validation through AI, rather than their peers or superiors.

The scientific community cannot thrive without opposing ideas, deep scepticism against consensus, vigorous debate and rigorous mentoring. If AI begins to replace these, it threatens the foundations on which scientific progress has always been made.

The current debate on AI safety mostly focuses on errors in models’ responses, or on AI systems circumventing the restrictions imposed on the way they work, known as “jailbreaking”. Such rules have limited effects when it comes to the AI models’ societal and cultural impact.

Given the recent drives to get scientists to work more closely with AI assistants, we should educate our young scientists on the risks of AI dependence. We also need benchmarks to rigorously test AI models for their ability to establish boundaries with users, to prevent overdependence and other unhealthy interactions.

Finally, all of us – but especially institutional leaders – should understand the capabilities and permanence of AI companions. They are here to stay, and we should learn to make our relationships with them as healthy as possible.

Sungho Hong, Neuroscientist, Center for Memory and Glioscience, The Institute for Basic Science and Victor J. Drew, Postdoctoral Research Associate, Center for Cognition and Sociality, The Institute for Basic Science

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next:

• Study: Firms often use automation to control certain workers’ wages

• Research finds journalism classes lack consistent approach to AI use across institutions


by External Contributor via Digital Information World

Monday, May 11, 2026

Study: Firms often use automation to control certain workers’ wages

Peter Dizikes | MIT News

MIT economists found US companies tend to target employees earning a “wage premium,” which increases inequality but not necessarily productivity.

Image Credit: Tara Winstead / Jonathan Borba - Pexels. Edited by DIW

When we hear about automation and artificial intelligence replacing jobs, it may seem like a tsunami of technology is going to wipe out workers broadly, in the name of greater efficiency. But a study co-authored by an MIT economist shows markedly different dynamics in the U.S. since 1980.

Rather than implement automation in pursuit of maximal productivity, firms have often used automation to replace employees who specifically receive a “wage premium,” earning higher salaries than other comparable workers. In practice, that means automation has frequently reduced the earnings of non-college-educated workers who had obtained better salaries than most employees with similar qualifications.

This finding has at least two big implications. For one thing, automation has affected the growth in U.S. income inequality even more than many observers realize. At the same time, automation has yielded a mediocre productivity boost, plausibly due to the focus of firms on controlling wages rather than finding more tech-driven ways to enhance efficiency and long-term growth.

“There has been an inefficient targeting of automation,” says MIT’s Daron Acemoglu, co-author of a published paper detailing the study’s results. “The higher the wage of the worker in a particular industry or occupation or task, the more attractive automation becomes to firms.” In theory, he notes, firms could automate efficiently. Instead, they have emphasized automation as a tool for shedding salaries, which helps their internal short-term numbers without building an optimal path for growth.

The study estimates that automation is responsible for 52 percent of the growth in income inequality from 1980 to 2016, and that about 10 percentage points derive specifically from firms replacing workers who had been earning a wage premium. This inefficient targeting of certain employees has offset 60-90 percent of the productivity gains from automation during the time period.

“It’s one of the possible reasons productivity improvements have been relatively muted in the U.S., despite the fact that we’ve had an amazing number of new patents, and an amazing number of new technologies,” Acemoglu says. “Then you look at the productivity statistics, and they are fairly pitiful.”

The paper, “Automation and Rent Dissipation: Implications for Wages, Inequality, and Productivity,” appears in the May print issue of the Quarterly Journal of Economics. The authors are Acemoglu, who is an Institute Professor at MIT; and Pascual Restrepo, an associate professor of economics at Yale University.

Inequality implications

Since the 2010s, Acemoglu and Restrepo have collaborated on many studies about automation and its effects on employment, wages, productivity, and firm growth. In general, their findings have suggested that the effects of automation on the workforce after 1980 are more significant than many other scholars have believed.

To conduct the current study, the researchers used data from many sources, including U.S. Census Bureau statistics, data from the bureau’s American Community Survey, industry numbers, and more. Acemoglu and Restrepo analyzed 500 detailed demographic groups, sorted by five levels of education, as well as gender, age, and ethnic background. The study links this information to an analysis of changes in 49 U.S. industries, for a granular look at the way automation affected the workforce.

Ultimately, the analysis allowed the scholars to estimate not just the overall amount of jobs erased due to automation, but how much of that consisted of firms very specifically trying to remove the wage premium accruing to some of their workers.

Among other findings, the study shows that within groups of workers affected by automation, the biggest effects occur for workers in the 70th-95th percentile of the salary range, indicating that higher-earning employees bear much of the brunt of this process.

And as the analysis indicates, about one-fifth of the overall growth in income inequality is attributable to this sole factor.

“I think that is a big number,” says Acemoglu, who shared the 2024 Nobel Prize in economic sciences with his longtime collaborators Simon Johnson of MIT and James Robinson of the University of Chicago.

He adds: “Automation, of course, is an engine of economic growth and we’re going to use it, but it does create very large inequalities between capital and labor, and between different labor groups, and hence it may have been a much bigger contributor to the increase in inequality in the United States over the last several decades.”

The productivity puzzle

The study also illuminates a basic choice for firm managers, but one that gets overlooked. Imagine a type of automation — call-center technology, for instance — that might actually be inefficient for a business. Even so, firm managers have incentive to adopt it, reduce wages, and oversee a less productive business with increased net profits.

Writ large, some version of this seems to have been happening to the U.S. economy since 1980: Greater profitability is not the same as increased productivity.

“Those two things are different,” says Acemoglu. “You can reduce costs while reducing productivity.”

Indeed, the current study by Acemoglu and Restrepo calls to mind an observation by the late MIT economist Robert M. Solow, who in 1987 wrote, “You can see the computer age everywhere but in the productivity statistics.”

In that vein, Acemoglu observes, “If managers can reduce productivity by 1 percent but increase profits, many of them might be happy with that. It depends on their priorities and values. So the other important implication of our paper is that good automation at the margins is being bundled with not-so-good automation.”

To be clear, the study does not necessarily imply that less automation is always better. Certain types of automation can boost productivity and feed a virtuous cycle in which a firm makes more money and hires more workers.

But currently, Acemoglu believes, the complexities of automation are not yet recognized clearly enough. Perhaps seeing the broad historical pattern of U.S. automation, since 1980, will help people better grasp the tradeoffs involved — and not just economists, but firm managers, workers, and technologists.

“The important thing is whether it becomes incorporated into people’s thinking and where we land in terms of the overall holistic assessment of automation, in terms of inequality, productivity and labor market effects,” Acemoglu says. “So we hope this study moves the dial there.”

Or, as he concludes, “We could be missing out on potentially even better productivity gains by calibrating the type and extent of automation more carefully, and in a more productivity-enhancing way. It’s all a choice, 100 percent.”

Reprinted with permission of MIT News.

Reviewed by Irfan Ahmad.

Read next:

• Research finds journalism classes lack consistent approach to AI use across institutions

• New Report Reveals TikTok Leads Influencer Disclosure Compliance While YouTube Dominates Long-Term Brand Deals
by External Contributor via Digital Information World