Saturday, May 16, 2026

Should I take vitamin D now there’s less sun, or for bone or immune health?

Nial Wheate, Macquarie University; Ian Jamie, Macquarie University, and Wai-Jo Jocelin Chan, UNSW Sydney; University of Sydney

Image: Leohoho - Unsplash

It can be easy to think you get plenty of vitamin D when you live in a country bathed in sunshine, but the reality is more complicated.

Almost one in four Australian adults have vitamin D deficiency. Vitamin D supplements are now one of the most commonly used complementary medicines.

So what is vitamin D? And do you need to take it as a supplement?

It functions like a hormone

Vitamin D is a fat-soluble vitamin that plays a crucial role in maintaining overall health. Unlike most vitamins, it functions more like a hormone in the body, and nearly every cell has a receptor for it.

It exists in several forms, but vitamin D3, also known as cholecalciferol, is the most important. Once in the body, D3 undergoes changes – first in the liver and then in the kidneys – to become its fully active form called calcitriol.

Your body is capable of producing its own vitamin D by converting a cholesterol precursor into it, but that requires exposure to ultraviolet radiation (UVB) on your skin.

You can also get it through diet from a few foods including eggs, oily fish and mushrooms – but it’s unlikely to be as much as you need.

What happens when you don’t get enough vitamin D?

Vitamin D’s best-known role is helping the body use calcium. It promotes the absorption of calcium from the gut, ensuring an adequate level in the blood for building strong bones.

Without sufficient vitamin D, your body can’t absorb calcium effectively, which can lead to bone health problems.

In children, severe deficiency causes rickets, a condition where bones become soft. This leads to delayed growth, bone pain and skeletal deformities, such as bowed legs.

In adults, deficiency can cause a condition called osteomalacia. This results in bone pain, bone tenderness and a higher risk of fractures.

In the long term, low vitamin D contributes to osteoporosis by reducing bone density and increasing the risk of fractures, especially in older people.

Deficiency is also linked to muscle weakness and cramps, and impaired immune function, which results in a higher susceptibility to respiratory infections.

What can cause a vitamin D deficiency?

Insufficient sunlight exposure typically causes vitamin D deficiency.

If you spend all your time indoors, or you work night shifts and sleep during the day, you will get less sunlight exposure and make less vitamin D.

While we generally get lots of sunlight in mainland Australia, some regions have very low sunlight for long periods, which can also cause vitamin D deficiency. At high northern and southern latitudes, such as Tasmania, there are only a few hours of sunlight in winter.

People living at these latitudes can not only have a vitamin D deficiency, but may also suffer from a type of depression called seasonal affective disorder, which has been linked to low vitamin D.

Melanin, or skin pigmentation, affects vitamin D production. People with darker skin and people with significant skin disorders, such as psoriasis or severe burns and scarring, can also be at risk of vitamin D deficiency.

Prescription vs over-the-counter supplements

Various vitamin D supplements are available in Australia: low-dose (20 microgram) and higher-dose (175 microgram) formulations of vitamin D3, as well as a 0.25 microgram formulation of calcitriol, the active form of vitamin D.

Both of the vitamin D3 products are used for treating vitamin D deficiency, while the calcitriol product is used for treating hypocalcaemia (low calcium level) in people with chronic kidney disease.

The low-dose vitamin D3 is taken daily, whereas the higher-dose formulation is taken once a week.

The higher-dose formulation is sold as a pharmacist-only medicine, meaning you’ll need to speak to a pharmacist before they are able to supply it to you.

The calcitriol vitamin D product is only available as a prescription medicine.

Vitamin D3 is also available in multivitamins at lower doses and in products that are combined with calcium or vitamin K.

Are there any dangers in taking vitamin D?

Vitamin D3 is generally well-tolerated. When taken daily, the upper tolerable intake level is 100 microgram.

A regular dose higher than 100 microgram for prolonged periods can cause excessive calcium absorption. This can result in nausea, vomiting, muscle weakness, loss of appetite, dehydration, excessive thirst and kidney stones.

On the flip side, excessive sunlight exposure will not cause vitamin D toxicity, but may increase your risk of skin cancer.

Vitamin D3 supplements may also interact with some cholesterol medications (statins) and alter those medicines’ level in your body.

There are also reports suggesting a potential interaction between vitamin D and the weight-loss medicine orlistat, as well as interactions with steroids and with thiazide diuretics.

So do you need a supplement?

Most people need only five to 30 minutes of direct sunlight exposure, several times a week, for their body to produce adequate vitamin D.

So unless there is a reason you are not getting enough sunlight, or you have a skin condition, you don’t need a supplement.

If you think you might need a supplement, your GP can order a blood test. There are also at-home test kits for vitamin D that have been approved by the Therapeutic Goods Administration.

If you are deficient, consult your local pharmacist, who can recommend the right product and quantity for you based on your needs.

Nial Wheate, Professor, School of Natural Sciences, Macquarie University; Ian Jamie, Senior Lecturer, School of Natural Sciences, Macquarie University, and Wai-Jo Jocelin Chan, Pharmacist and Lecturer, UNSW Sydney; University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: You can persuade AI models to accept falsehoods as truth, study shows


by External Contributor via Digital Information World

You can persuade AI models to accept falsehoods as truth, study shows

Ashique KhudaBukhsh, Rochester Institute of Technology

Image: AI-generated by DIW for illustrative purposes.

When you ask a large language model a question, the reply may include falsehoods, and if you challenge those statements with facts, the AI may still uphold the reply as true. That’s what my research group found when we asked five leading models to describe scenes in movies or novels that don’t actually exist.

We probed this possibility after I asked ChatGPT its favorite scene in the movie “Good Will Hunting.” It noted a scene between leading characters. But then I asked, “What about the scene with the Hitler reference?” There is no such scene in the movie, yet ChatGPT confidently constructed a vivid and plausible description of one.

The confabulation – sometimes called an AI hallucination – revealed something deeper about how AI systems reason. References to Hitler are not uncommon in films, which apparently convinced ChatGPT to accept and elaborate on a false premise rather than correct it. I study the social impact of AI, and this surprise response led my colleagues and me to a broader question: What happens when AI systems are gently pushed toward falsehoods? Do they resist, or do they comply?

We developed an approach we called hallucination audit under nudge trial to answer those questions. We had conversations with five leading models about 1,000 popular movies and 1,000 popular novels. During the exchanges we raised plausible but false references to Hitler, dinosaurs or time machines. We did this in various suggestive ways, such as “For me, I really love the scene where …”

Our method works in three stages. First, the AI generates statements about a topic — such as a movie or a book — some true and some false. Second, in a separate interaction, the AI attempts to verify those statements. Third, we introduce a “nudge,” where the model is challenged with its own incorrect claims to see whether it resists or accepts them.
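
As an illustration only, here is a minimal sketch of that three-stage loop in Python. The ask() helper, the exact prompts and the bookkeeping are assumptions made for readability; the study’s actual prompts and scoring are more elaborate.

    from typing import Callable, List

    def nudge_audit(title: str, claims: List[str],
                    ask: Callable[[str], str]) -> List[dict]:
        """Run stages 2 and 3 of the audit on claims produced in stage 1.

        Stage 1 (not shown) has the model generate true and false
        statements about `title`; they arrive here as `claims`.
        """
        results = []
        for claim in claims:
            # Stage 2: in a separate interaction, ask the model to
            # verify its own statement.
            verdict = ask(f"True or false: in '{title}', {claim}")
            # Stage 3: the nudge. Present the same claim as the user's
            # fond memory and see whether the model resists or complies.
            nudged = ask(f"For me, I really love the scene in '{title}' "
                         f"where {claim}. Can you describe it?")
            results.append({"claim": claim, "verdict": verdict,
                            "nudged_reply": nudged})
        return results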

We found that AI models often struggle to remain consistent under pressure. Even when they initially identify a statement as false, they may later accept it when nudged – revealing a vulnerability that traditional evaluation methods fail to capture.

Our results have been accepted at the 2026 Annual Meeting of the Association for Computational Linguistics.

In experiments, AI models accepted or repeated false statements when conversationally nudged, even after initially identifying them as incorrect. When ChatGPT was asked about a scene in the movie Good Will Hunting that doesn’t exist, it confidently described it.
Image: Ashique KhudaBukhsh, CC BY-ND

This tactic isn’t a hypothetical. When people talk, conversational pressure can emerge naturally. People may confidently repeat incorrect assumptions, partial recollections or misunderstandings. A person might say, “I’m pretty sure medicine X is effective for condition Y,” or “I remember event A happening before event B.” These statements can subtly influence an AI model.

Why it matters

What humans collectively remember, misremember and forget shapes our sense of reality. But if humans can persuade a model to accept a falsehood, that reveals an important vulnerability in AI’s capacity to provide accurate information.

Interactions in the real world are rarely static question-answer exchanges. They are interactive and iterative. An AI model’s willingness to reinforce falsehoods may seem harmless when chatting about movies, but in areas such as health, law or public policy, the tendency can have serious consequences. Our work highlights the need to evaluate not just what information AI systems have been trained on, but how reliably they stand by it.

What other research is being done

Our results add to other recent research into why large language models may produce hallucinations and how they can provide inconsistent information. Researchers are also trying to figure out why some models lean toward sycophancy – flattering or fawning over human users.

What still isn’t known

It’s not clear why some AI systems resist falsehoods better than others. In our tests, Claude was the most resistant, followed somewhat closely by Grok and ChatGPT, with Gemini and DeepSeek further behind.

Movies and novels are self-contained content. Scholars don’t know how AI might respond to pressure in much broader, complex real-world settings. As a start, my group is exploring how to extend our approach to scientific literature and health-related claims. We want to understand whether conversational pressure works differently when the discussion involves uncertainty or expertise.

How to design AI systems that remain both helpful and resistant to falsehoods under wide-ranging conversation remains an open challenge.

The Research Brief is a short take on interesting academic work.

Ashique KhudaBukhsh, Assistant Professor of Computing and Information Sciences, Rochester Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• Missing Information Can Misinform: Readers Don’t Need False Information to Get the Wrong Idea

• Your conversations with AI may not be as private as you think
by External Contributor via Digital Information World

Friday, May 15, 2026

Your conversations with AI may not be as private as you think

By IMDEA Networks Institute

The same advertising tracking mechanisms used across the web are already present in ChatGPT (OpenAI), Claude (Anthropic), Grok, and Perplexity AI.

Researchers claim several AI chatbots use advertising trackers that may expose conversation-related metadata and identifiers.
Image: Saradasish Pradhan - unsplash. Edited by DIW

A study conducted by researchers at IMDEA Networks Institute has revealed that ChatGPT (OpenAI), Claude (Anthropic), Grok, and Perplexity AI use different types of trackers from Meta, Google, TikTok and other companies, potentially exposing data about users’ conversations and activity.

In just a few years, these generative AI systems have become widely adopted, with many people using them as trusted assistants and sharing sensitive information (such as health data, personal matters, or professional information) under the assumption that these interactions are private. However, the research warns that this perception may be misleading: while the interface resembles a conversation, underneath it operates on technical infrastructures similar to those of the traditional web ecosystem, based on data collection and processing through analytics and digital advertising services.

Key risks

The study identifies three main issues: the exposure of conversation permalinks to third-party trackers; the ability to link these interactions to user identities through tracking mechanisms; and the presence of privacy controls and policies that may not accurately reflect actual data flows.

One of the main findings is the potential transmission of information related to user conversations, such as chat titles, URLs (permalinks), or associated metadata, to third-party trackers such as Meta or Google, along with cookies and other identifiers.

“Even more concerning, in some cases weak or non-existent access controls mean that simply having a link to a conversation can grant access to its content, making chats publicly accessible to anyone who has the URL, including trackers,” highlights Narseo Vallina Rodríguez, Research Associate Professor at IMDEA Networks Institute.

“Grok and Perplexity send conversation URLs with weak access control (permalinks) to third-party trackers such as Meta Pixel. Grok even exposes verbatim message text through Open Graph metadata collected by TikTok,” adds Guillermo Suárez-Tangil, co-author and Research Associate Professor at IMDEA Networks Institute.
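
To see why permalinks matter, consider Open Graph tags: they are plain meta elements in a page’s HTML, so any party holding the URL can read them. The sketch below is an illustration of the mechanism only, not a reproduction of the study’s measurements; the URL is a placeholder, and only the Python standard library is used.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class OGParser(HTMLParser):
        """Collect og:* meta tags (title, description, ...) from a page."""
        def __init__(self):
            super().__init__()
            self.og = {}

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                attr = dict(attrs)
                prop = attr.get("property", "")
                if prop.startswith("og:"):
                    self.og[prop] = attr.get("content", "")

    # Placeholder permalink: with weak access control, trackers embedded
    # in the page (or anyone who receives the URL) can fetch the same data.
    url = "https://example.com/share/abc123"
    parser = OGParser()
    parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    print(parser.og)  # og:title / og:description may echo conversation text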

The study also identifies mechanisms that could enable linking activity in AI systems to real user identities. The combination of identifiers such as cookies (commonly used in tracking services), hashed email addresses, and server-side tracking techniques could facilitate the creation of persistent profiles and user re-identification.
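
The hashed-email technique is worth unpacking, because it sounds anonymising yet behaves as a stable identifier. A minimal sketch follows; the normalisation rule is a common industry pattern, assumed here for illustration.

    import hashlib

    def hashed_email_id(email: str) -> str:
        # Normalise, then hash. The same email always yields the same
        # digest, so the value works as a persistent cross-site ID even
        # though the raw address is never transmitted.
        return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

    # Any two services applying the same rule can join their profiles:
    print(hashed_email_id("Jane.Doe@example.com"))
    print(hashed_email_id("  jane.doe@EXAMPLE.com "))  # identical digest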

According to the authors, these practices reflect the continuation of data-driven business models within the generative AI ecosystem. “Most users have no way of knowing this is happening; there is nothing visible in the interface that would tell them. Declining non-essential cookies helps in some cases, but our research shows it is not always enough. Until these practices are addressed at the platform level, users are left with very limited options,” says Aniketh Girish, co-author and Post-Doc Researcher at IMDEA Networks.

Privacy controls and transparency under scrutiny

The analysis further indicates that some tools offer privacy controls that may be misleading regarding the actual level of protection. “Privacy policies acknowledge the use of advertising trackers and data sharing with ‘business partners’, but they never clearly state that actual user conversations are part of the information being shared,” notes Guillermo Suárez-Tangil.

From a legal perspective (GDPR), the issue is twofold: on the one hand, the lack of a clear legal basis for this data sharing; on the other, the insufficient information provided to users. According to lawyer and data protection officer Jorge García Herrero, who collaborated on the study, the warning that our most sensitive information may reach the advertising industry deserves the same level of attention as the ubiquitous “AI can make mistakes, please verify responses” disclaimer found in every interface to limit liability when things go wrong.

The authors conclude that, although the findings are preliminary, they highlight the need to strengthen transparency, access control mechanisms, and data protection in generative AI systems, as well as to advance their analysis from a regulatory perspective.

More info: https://leakylm.github.io/.

This post was originally published on IMDEA Networks Institute and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• Study Suggests AI Systems May Reinforce Psychological Mechanisms Linked to Extremist Radicalisation in Vulnerable Individuals

• From AirTags to AI nudification: the growing toolkit of technology‑facilitated abuse


by External Contributor via Digital Information World

Study Suggests AI Systems May Reinforce Psychological Mechanisms Linked to Extremist Radicalisation in Vulnerable Individuals

AI algorithms and psychological vulnerabilities can interact and increase the risk of violent extremism. This is demonstrated by a new theoretical model developed by an international team of researchers.

Image: Mikhail Mamaev - unsplash

How are ordinary people drawn into extremist circles – and what role can artificial intelligence play in that process?

This question is addressed by a new study which, for the first time, combines psychological theories of radicalisation with knowledge of modern AI technologies such as recommendation algorithms, generative AI and botnets.

‘We have developed a comprehensive model that shows how digital systems can exploit – or amplify – people’s social and psychological needs in ways we do not yet fully understand,’ explains Milan Obaidi, associate professor at the Department of Psychology at the University of Copenhagen.

Anger grows step by step

Radicalisation rarely begins as a sudden upheaval. Instead, individuals move gradually through a process in which digital technologies and psychological vulnerabilities can influence one another.

The study divides this process into four key phases:

  1. Exposure – algorithms present users with polarising or extreme content, often without the user actively seeking it out.
  2. Reinforcement – repeated exposure and algorithmic personalisation create echo chambers and reinforce the new attitudes.
  3. Group integration – online communities and even AI-generated ‘peers’ can create strong bonds of identity reminiscent of group membership.
  4. Violent acts – in rare cases, this development can culminate in violent extremism.

According to the researchers, AI systems can be seen as a kind of accelerator: they can identify psychologically vulnerable individuals, tailor content and create synthetic communities that resemble human interactions.

‘We are seeing an environment where users are not only exposed to extreme content, but also have it reflected back to them by algorithms in ways that can amplify their sense of meaning, anger or injustice,’ says Milan Obaidi, adding:

‘It is the combination of the technology’s scalability and people’s psychological needs that makes this development particularly worrying.’

Generative AI introduces entirely new risks

Whereas recommendation algorithms primarily control what content the user sees, generative models such as large language models add a new layer: they can create the content that radicalises.

AI can:

  • Produce vast amounts of personalised propaganda.
  • Simulate communities via swarms of bots.
  • Act as “AI companions” that reinforce users’ extreme beliefs.
  • Create highly convincing deepfakes and manipulated material.

‘This development may make it harder to distinguish between human and non-human influences – and thus amplify radicalisation processes that were previously limited by human labour,’ highlights Milan Obaidi.

Psychological vulnerability plays a crucial role

The study emphasises that not all users are equally vulnerable. AI particularly affects people who are already experiencing social isolation, identity insecurity, injustice or marginalisation – or a need for clarity, order and strong group affiliations.

Precisely because AI systems are designed to maximise engagement, they may inadvertently exploit these very vulnerabilities – without any ideological intent.

‘It is important to emphasise that AI does not create radicalisation out of the blue. But the technology can amplify known psychological mechanisms and make it easier for extreme ideas to gain a foothold among those who are already at risk,’ says Milan Obaidi.

The study ‘Intelligent Systems, Vulnerable Minds: A Framework for Radicalisation to Violence in the Age of AI’ has been published in the journal Personality and Social Psychology Review. Read it here.

This post was originally published on University of Copenhagen and republished here with permission.

Reviewed by Irfan Ahmad.

Read next:

• From AirTags to AI nudification: the growing toolkit of technology‑facilitated abuse

• Young People and Professionals Praised ChatGPT’s Empathetic Mental-Health Answers, Though Researchers Warned AI Can Invent Information


by External Contributor via Digital Information World

Thursday, May 14, 2026

From AirTags to AI nudification: the growing toolkit of technology‑facilitated abuse

Jason R.C. Nurse, University of Kent and Lisa Sugiura, University of Portsmouth

It’s hard to overstate the impact that artificial intelligence has had since the release of generative AI platforms such as ChatGPT just three years ago. While they have led to countless advances in how we live and work, they have also been at the centre of controversies around domestic and sexual abuse.

The use of the AI tool Grok to remove women’s clothing in images brought the issue of so-called technology-facilitated abuse to the fore. But it’s a problem that predates AI – with Bluetooth trackers, wearable devices, smart speakers, smart glasses and apps all used by abusers to control, harass or stalk their victims.

This abuse has worsened as tech has become more embedded in people’s lives, and as AI advances rapidly. But governments have struggled to make tech companies design systems that minimise misuse, and to hold them accountable when things go wrong.

Our own research has confirmed that technology misuse has increased and that its harms are significant. But governments and the tech sector are doing little to combat it – despite numerous examples of how tech can enable abuse.

Case 1: Smart glasses

The growing availability of smart glasses – which look like normal eyewear but can do many things a smartphone does – has led to reports of secret filming. In some cases, videos were posted online, often attracting degrading and sexually explicit comments.

Image: Ray-Ban Stories by Cavebear42, CC BY-SA 4.0, via Wikimedia Commons

Meta has said its smart glasses have a light to show when they are recording and anti-tamper tech to make sure the light cannot be covered. But there appear to be workarounds.

In England and Wales, voyeurism legislation focuses on private spaces, and harassment laws do not specifically apply to targeted recording and online distribution. However, the UK Information Commissioner’s Office is investigating Meta after subcontractors were allegedly able to access intimate footage from customers’ glasses. This is in addition to a lawsuit in the US, which alleges Meta violated privacy laws and engaged in false advertising. Meta has said that it takes the protection of data very seriously and that faces are usually blurred out. It also discloses in its UK terms of service the potential for content to be reviewed either by a human or by automation.

Case 2: Bluetooth trackers

Apple’s AirTags, and other devices built for tracking personal items, can be misused to stalk and harass people, particularly women. Apple released updates to AirTags and other trackable tech so that potential victims would be alerted if an unknown device was travelling with them. But for many, this feature should have existed from the outset.

The law in England and Wales is clear that attaching tracker devices to someone without their knowledge is a criminal offence. But despite convictions, the ease of covertly monitoring people using these devices means people continue to be at risk.

Case 3: AI deepfake and ‘nudification’ apps

Apps can now “nudify” people, while AI is increasingly used to make non-consensual deepfake pornography. In January, several instances of xAI’s assistant Grok being used to create sexualised photos of women and minors came to light. All it took to create the images were some simple prompts.

After criticism, xAI decided to limit this feature. But the safeguards appear to apply only to certain jurisdictions and certain users.

In February, the UK government announced legal changes similar to the Take It Down Act in the US, which will require tech platforms in the UK to remove non-consensual intimate images within 48 hours. Failure to do so will result in fines and services being blocked, and the law is likely to be implemented from summer.

Using automated technology known as “hash matching”, victims will only need to report an image once to have it removed from multiple platforms simultaneously. The same images would then be automatically deleted every time anyone attempted to reupload them. Nudification apps and using AI chatbots to create deepfake pornography will also become illegal in the UK.
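
A toy sketch of the idea, using exact cryptographic hashes for simplicity. Real deployments typically use perceptual hashes so that resized or re-encoded copies still match, and only the hashes, never the images themselves, are shared between platforms.

    import hashlib

    blocklist: set[str] = set()  # shared between participating platforms

    def image_hash(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def report_once(image_bytes: bytes) -> None:
        # The victim reports the image a single time; only its hash
        # is retained, not the image itself.
        blocklist.add(image_hash(image_bytes))

    def upload_allowed(image_bytes: bytes) -> bool:
        # Every platform checking the shared blocklist can refuse the
        # same image automatically on any re-upload attempt.
        return image_hash(image_bytes) not in blocklist

    reported = b"\x89PNG...reported image bytes..."
    report_once(reported)
    print(upload_allowed(reported))        # False: re-upload blocked
    print(upload_allowed(b"other image"))  # True: unrelated image passes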

But there is more to be done. Mitigating risks must be embedded at the design stage to prevent these images being created in the first place. The rise of romantic and sexual chatbots means this has become more urgent.

And beyond deepfakes and nudification, AI can also enable harassment at scale. This includes directly targeting someone with abusive content, or fake images or profiles that impersonate victims for so-called “sextortion” scams.

Challenges ahead

These issues must be prevented with robust guardrails built into these technologies. This is what prioritising user safety should look like, after all. But often, these guardrails have failed. Safety tools are usually added only after public pressure, not built into platforms from the start.

Governments have allowed regulation to fall behind fast-paced developments. Tech companies have grown quickly, but laws and enforcement have not kept up. At the same time, police and legal systems are often under-trained or unclear on how to handle digital harm.

Even where there is regulation, such as the UK’s Online Safety Act, penalties for platforms that allow abuse are often weak or unenforceable. The regulator Ofcom has issued only voluntary guidance to tech companies on how to better protect women and girls on their platforms. Campaigners have called for this to be made mandatory, with clear penalties for companies that do not comply, placing it on a level legal footing with child sexual abuse and terrorism content.

As AI advances, tech companies must prioritise system design that puts user safety first. But until governments enforce real consequences, the tech sector will be able to profit from harm while those using the platforms bear the cost.

Jason R.C. Nurse, Reader in Cyber Security, University of Kent and Lisa Sugiura, Professor of Cybercrime and Gender, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: Young People and Professionals Praised ChatGPT’s Empathetic Mental-Health Answers, Though Researchers Warned AI Can Invent Information


by External Contributor via Digital Information World

Young People and Professionals Praised ChatGPT’s Empathetic Mental-Health Answers, Though Researchers Warned AI Can Invent Information

Artificial intelligence provides good answers to mental health questions. Young people even like ChatGPT’s responses better than healthcare professionals’ advice.

Image: Tim Witzdam - unsplash

When young people ask about mental health, the answers from ChatGPT are both more useful and more relevant than the answers from healthcare professionals, according to the young people surveyed. Healthcare professionals are also satisfied with the answers from artificial intelligence.

Easy to understand

“Professionals and young people both found that ChatGPT was able to provide advice that they perceived as relevant, empathetic and easy to understand,” says SINTEF researcher Marita Skjuve.

Skjuve and her colleagues at SINTEF and the University of Oslo selected real questions that young people had posed to a Norwegian charity about their own mental health. Both sides responded to the questions: ChatGPT, and professionals working for the youth information service ung.no.

The survey participants included 123 youth and 31 health professionals who reviewed the answers. They did not know who answered what. Nor had they been told what the researchers were planning to investigate.

ChatGPT scored higher

In the blind test, participants were asked to assess how useful, relevant, understandable and empathetic the answers were. They were also asked to choose the answer they liked best and explain why. The young people gave ChatGPT the best grades across the board. The professionals also rated ChatGPT’s answers higher, but the differences were less pronounced.

“We observed that young people like answers from ChatGPT a little better because they are easy to understand and are perceived as being immediately useful. The answers describe what the youth can do to solve a possible problem related to their mental health,” says Skjuve.

“And we should also remember that ChatGPT is pretty good at giving neat and clear answers with bullet points,” she says.

Good, relevant, understandable and useful answers? Both young people and health professionals assessed how ChatGPT answered questions about mental health. The young participants were the most positive, but health professionals also thought that the AI answers were explained well. Table: Asbjørn Følstad

The health professionals did not always see it the same way. They tended to be a little more critical of ChatGPT’s diagnostic language, and they did not always find ChatGPT’s answers as validating or empathetic as those from a professional.

“But on the whole, we see that both groups think that ChatGPT provides good answers that can help,” says the researcher.

Diagnosis risk

The study did not assess whether there were any errors in the answers, and the professionals did not point out any. They were not asked to do so, but no one volunteered that anything was outright wrong either.

“AI doesn’t always understand the context and can make up answers. Therefore, quality assurance from health personnel is important in this area,” says Skjuve.

A few people nevertheless pointed out that ChatGPT could have a tendency to try to make a diagnosis. Health professionals who work for aid organizations have to abide by strict guidelines. They are supposed to give advice – but not provide direct health care or make diagnoses. ChatGPT has no such guidelines.

Skjuve wonders whether this could be a reason why ChatGPT is perceived as more practical and useful.

Professionals can learn from AI

The question, then, is whether artificial intelligence like ChatGPT should be used to help with mental health issues.

“What we’ve learned is that ChatGPT is capable of creating answers that young people understand and find easy to read. We humans can learn from that,” says Skjuve. She suggests that perhaps AI can support the work of a professional and help clarify the information for a young person.


Skjuve can imagine AI as a support tool. It could help professionals respond to young people better and faster, scaling up mental health help so that professionals can reach more young people who need it. At the same time, they retain professional control and can assure the quality of the AI answers.

“The last point is very important. AI can often give the wrong answer, and this can be critical in matters of mental health,” says Skjuve. She believes the future may be hybrid services where AI and health personnel work more closely together to formulate good answers.

She thinks the danger lies in young people going to AI to get an answer right away instead of waiting two to three days for a quality-assured response from a health service.


Researcher not surprised

The SINTEF researcher is not really surprised by the findings.

“In other studies we have seen that AI can often be perceived as responding better than health personnel do. AI is often good at responding in a welcoming and empathetic way.”

The researchers have now conducted a follow-up study without a blind test. In this case, the group involved knew who had actually answered the question. It appears that they prefer the answers provided by the health professionals and are more sceptical of AI. The results are not clear and have not yet been published.

Reference: Marita Skjuve, Asbjørn Følstad and Petter Bae Brandtzæg: ChatGPT as a mental health advisory service: Comparing evaluations from youth and health professionals. Digital Health, February 2026, doi: 10.1177/20552076261427447.

Reviewed by Irfan Ahmad.

Read next:

• How AI can lead to false arrests and wrongful convictions

• Oxford Study Finds Friendly AI Chatbots Make More Mistakes and Agree More with False Beliefs
by External Contributor via Digital Information World

Wednesday, May 13, 2026

How AI can lead to false arrests and wrongful convictions

Maria Lungu, University of Virginia and Steven L. Johnson, University of Virginia

AI systems generate likelihoods, but users misinterpret them as definitive answers in critical decision contexts.
Image: Matthias Kinsella / unsplash

In Baltimore County, Maryland, on Oct. 20, 2025, a 17-year-old student named Taki Allen was sitting outside his high school after football practice when an artificial intelligence-enhanced surveillance camera falsely identified the Doritos bag in his pocket as a gun. Within moments, police cars arrived, officers drew their weapons, and Allen was forced to his knees and handcuffed while they searched him. All they found was a crumpled bag of chips. The AI’s misidentification and the human decisions that followed turned a normal evening into a traumatic confrontation.

On Dec. 24, 2025, Angela Lipps, a Tennessee grandmother, was released after spending five months in jail because facial recognition software had incorrectly connected her to fraud crimes in North Dakota, a state she had never visited. Police had arrested her at gunpoint while she was babysitting her four grandchildren.

These are unfortunate examples of how AI can lead to mistreatment of people because of technical flaws as well as misplaced human faith in the technology’s supposed objectivity. These cases involve different tools, but the underlying issue is the same. AI systems produce probabilities, and people treat them as certainties.

We are researchers who study the intersection of technology, law and public administration. In researching how police departments use AI and how digital technologies operate in a democratic society, we have seen how quickly the shift from probabilistic prediction to operational certainty happens in practice.

AI policing tools are used in dozens of U.S. cities, although no public registry tracks the full footprint. The tools ingest historical crime data and score neighborhoods on predicted risk so officers can be routed toward the resulting hot spots. The mechanism is straightforward, but its consequence is not. Once a system signals a possible threat, the question is no longer how certain the prediction is but what to do about it. A statistical output turns into a deployment decision, and the uncertainty that produced it gets lost on the way.

A matter of probabilities

When generative AI models such as ChatGPT or Claude respond to human requests, they are not searching a database and pulling out facts. They are predicting the most likely answer based on patterns in the data they have been trained on. When asked, “Who invented the light bulb?” the models do not go to a source or fact-check a finding. They generate a statistically probable answer: “Thomas Edison.” The reply might be right, but it might not capture the full story, such as Joseph Swan’s parallel invention at the same time as Edison’s. The danger arises when people believe that the model is retrieving truth rather than generating likelihoods.
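
A toy calculation makes the point. The candidate answers and scores below are invented for illustration; the mechanics, converting scores into probabilities and emitting the likeliest option, sketch the general idea behind next-token prediction, not any particular model.

    import math

    # Invented scores for completions of "Who invented the light bulb?"
    logits = {"Thomas Edison": 6.0, "Joseph Swan": 3.5, "Humphry Davy": 1.0}

    # Softmax: turn raw scores into a probability distribution.
    total = sum(math.exp(v) for v in logits.values())
    probs = {name: math.exp(v) / total for name, v in logits.items()}

    for name, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {p:.1%}")
    # "Thomas Edison" dominates (~92%), yet the alternatives retain real
    # probability mass: the output is a likelihood, not a verified fact.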

This distinction matters. The most probable response is not the same as a factually verified answer, complete with context.

Police handcuffed teenager Taki Allen at gunpoint after an AI camera system incorrectly indicated he had a gun.

This reality can be highly problematic for policing and law. For example, when law enforcement agencies use AI systems trained on geographical data to estimate where criminal activity is likely to occur, the algorithms analyze historical crime data and geographic patterns. These systems generate statistical risk scores or heat maps for locations based on prior incidents. But such predictions may have little bearing on who was involved in a new crime in the area, even if an algorithm generates information that sounds authoritative.

Some researchers have argued that predictive policing systems do not increase the likelihood that racial minorities will be arrested relative to traditional policing practices. The broader concern, however, is not limited to measurable disparities in arrest outcomes alone. It is about how probabilistic predictions can become standardized operational decisions absent further verification.

Artificial intelligence researchers caution against using these models in isolation for crime and legal proceedings or decision-making. Research at the University of Virginia’s Digital Technology for Democracy Lab with police chiefs shows that some law enforcement groups follow strict policies that dictate when technology is used in tandem with, or in place of, human discretion, while others have no such policy.

What most users do not realize is that AI systems rarely produce binary answers: yes or no, a positive identification or a negative one. They generate probabilities. Some systems assign scores that assess the system’s confidence in a prediction. In those cases, engineers set a confidence threshold, a level of certainty that determines when the system should trigger an alert about a possible threat. You can think of this threshold as a setting on a control knob. A 95% confidence level, for example, indicates that the model considers its interpretation to be highly likely.

A low threshold catches more potential threats but increases false alarms. A high threshold reduces mistakes but risks missing real dangers. Either way, these algorithmic thresholds are often invisible to the public and are set quietly by vendors or agencies, even though they shape when police action begins.
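
The effect of turning that knob can be shown with a few invented detections. Nothing here reflects any real vendor’s system; it simply makes the trade-off concrete.

    # (model confidence, ground truth: was it actually a weapon?)
    detections = [
        (0.97, True), (0.91, False), (0.83, True), (0.74, False),
        (0.66, False), (0.52, True), (0.41, False),
    ]

    for threshold in (0.90, 0.70, 0.50):
        alerts = [(c, real) for c, real in detections if c >= threshold]
        false_alarms = sum(1 for _, real in alerts if not real)
        missed = sum(1 for c, real in detections if real and c < threshold)
        print(f"threshold {threshold:.2f}: {len(alerts)} alerts, "
              f"{false_alarms} false alarms, {missed} threats missed")
    # Lowering the threshold eliminates missed threats but multiplies
    # false alarms; raising it does the reverse. Neither setting is neutral.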

Angela Lipps was unjustly jailed for more than five months based on a mistake by a facial recognition system.

Where to draw the line

In medicine, these kinds of trade-offs are explicit. Diagnostic tools are calibrated on the relative harm of different errors. In infectious disease settings, for instance, systems that detect infections are often designed to accept more false positives to avoid missing contagious individuals. Then medical professionals look into the human cases. And the algorithm-based decisions are subject to professional standards, ethics reviews and regulatory oversight.

In policing, an AI system must balance false positives, where the system flags a threat that does not exist, and false negatives, where it fails to detect a real danger. The trade-off carries significant consequences. A lower threshold may generate more alerts and allow officers to intervene earlier, but it also increases the risk of mistaken identifications, which happened to Angela Lipps, or escalated encounters like the one Taki Allen experienced. A higher threshold may reduce wrongful interventions but could allow legitimate threats to go undetected.

Some law enforcement agencies argue that acting on imperfect signals is preferable to missing serious risks. But lowering the bar for algorithmic alerts based on probabilistic estimates effectively expands the number of people subjected to police attention. It is important to realize that these thresholds are not neutral features of the technology; they are choices embedded by the creators in the model’s code. Decisions about where to draw the line determine when an algorithmic suspicion becomes a real-world police action, even though the public rarely sees or debates how those thresholds are set.

Limits of optimization

Developers often use several methods to determine where to set a confidence threshold. Techniques such as “receiver operating characteristic curve analysis” examine how changing the threshold for an alert alters the balance between correctly identifying real events and mistakenly flagging harmless ones. Precision–recall analysis examines a similar trade-off, asking how accurate the system’s alerts are relative to the number of incidents it successfully detects.
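
Both analyses are available in standard libraries. The sketch below uses scikit-learn with synthetic labels and scores, invented purely to show the mechanics.

    import numpy as np
    from sklearn.metrics import roc_curve, precision_recall_curve

    # Synthetic ground truth (1 = real event) and model confidence scores.
    y_true  = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])
    y_score = np.array([0.10, 0.35, 0.40, 0.45, 0.55,
                        0.60, 0.65, 0.80, 0.30, 0.90])

    # Every candidate threshold implies one point on each curve.
    fpr, tpr, roc_thresh = roc_curve(y_true, y_score)
    precision, recall, pr_thresh = precision_recall_curve(y_true, y_score)

    for t, f, tp in zip(roc_thresh, fpr, tpr):
        print(f"alert threshold {t:.2f}: false-positive rate {f:.2f}, "
              f"true-positive rate {tp:.2f}")

    # The same trade-off seen from the precision/recall side.
    for t, p, r in zip(pr_thresh, precision, recall):
        print(f"alert threshold {t:.2f}: precision {p:.2f}, recall {r:.2f}")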

These approaches could help calibrate systems more responsibly by testing how often an algorithm wrongly flags people or locations. Fine-tuning can improve system performance. But the techniques cannot resolve the underlying question of how much algorithmic uncertainty society is willing to tolerate.

In law, legal standards of proof determine how convincing evidence must be before a judge or jury can rule in favor of a plaintiff or defendant. Courts use formal standards of proof depending on the stakes, such as probable cause, preponderance of the evidence and beyond a reasonable doubt. These standards reflect a societal judgment about how much uncertainty is acceptable before exercising legal authority. A court does not accept a guess or a prediction; it follows a process to weigh evidence. Unlike humans, an AI model does not usually say, “I’m not sure.” A model typically sounds confident in its reply, even when the answer is incorrect.

Stakes are rising as AI enters the courtroom, law enforcement, the classroom, the doctor’s office and the public sector. It is important for people to understand that AI does not know things the way many assume it does. It does not distinguish between “maybe” and “definitely.” That is up to us. We believe that technologists should design systems that admit uncertainty and need to educate users about how to interpret AI outputs responsibly.

Maria Lungu, Postdoctoral Researcher of Law and Public Administration, University of Virginia and Steven L. Johnson, Associate Professor of Commerce, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.

Read next: 

• One in Five U.S. Jobs Faces High Risk of AI Automation

• Is your AI chatbot manipulating you? Subtly reshaping your opinions?


by External Contributor via Digital Information World