Sunday, December 14, 2025

The ‘AI Homeless Man Prank’ reveals a crisis in AI education

The new TikTok trend known as the “AI Homeless Man Prank” has sparked a wave of outrage and police responses in the United States and beyond. The prank involves using AI image generators to create realistic photos showing a fake homeless person at someone’s door or inside their home.

Learning to distinguish between truth and falsehood is not the only challenge society faces in the AI era. We must also reflect on the human consequences of what we create.

As professors of educational technology at Laval University and education and innovation at Concordia University, we study how to strengthen human agency — the ability to consciously understand, question and transform environments shaped by artificial intelligence and synthetic media — to counter disinformation.

Image: Kenny Eliason / Unsplash

A worrying trend

In one of the most viral “AI Homeless Man Prank” videos, viewed more than two million times, creator Nnamdi Anunobi tricked his mother by sending her fake photos of a homeless man sleeping on her bed. The prank sparked a wave of imitations across the United States.

Two teenagers in Ohio have been charged for triggering false home intrusion alarms, resulting in unnecessary calls to police and real panic. Police departments in Michigan, New York and Wisconsin have issued public warnings that these pranks are wasting emergency resources and dehumanizing the vulnerable.

At the other end of the media spectrum, boxer Jake Paul agreed to experiment with the cameo feature of Sora 2, OpenAI’s video generation tool, by consenting to the use of his image.

But the phenomenon quickly got out of hand: internet users hijacked his face to create ultra-realistic videos in which he appears to be coming out as gay or giving make-up tutorials.

What was supposed to be a technical demonstration turned into a flood of mocking content. His partner, skater Jutta Leerdam, denounced the situation: “I don’t like it, it’s not funny. People believe it.”

These are two phenomena with different intentions: one aimed at making people laugh, the other at following a trend. But both reveal the same flaw: we have democratized technological power without paying attention to questions of morality.

Digital natives without a compass

Today’s cybercrimes — sextortion, fraud, deepnudes, cyberbullying — are not appearing out of nowhere.

Their perpetrators are yesterday’s teenagers: they were taught to code, create and publish online, but rarely to think about the human consequences of their actions.

Juvenile cybercrime is rapidly increasing, fuelled by the widespread use of AI tools and a perception of impunity. Young people are no longer just victims. They are also becoming perpetrators of cybercrime — often “out of curiosity,” for the challenge, or just “for fun.”

And yet, for more than a decade, schools and governments have been educating students about digital citizenship and literacy: developing critical thinking skills, protecting data, adopting responsible online behaviour and verifying sources.

Despite these efforts, cyberbullying, disinformation and misinformation persist and are intensifying, to the point that they are now recognized among the top global risks for the coming years.

A silent but profound desensitization

These abuses do not stem from innate malice, but from a lack of moral guidance adapted to the digital age.

We are educating young people who are capable of manipulating technology, but sometimes unable to gauge the human impact of their actions, especially in an environment where certain platforms deliberately push the boundaries of what is socially acceptable.

Grok, Elon Musk’s chatbot integrated into X (formerly Twitter), illustrates this drift. AI-generated characters make sexualized, violent or discriminatory comments, presented as simple humorous content. This type of trivialization blurs moral boundaries: in such a context, transgression becomes a form of expression and the absence of responsibility is confused with freedom.

Without guidelines, many young people risk becoming augmented criminals capable of manipulating, defrauding or humiliating on an unprecedented scale.

The mere absence of malicious intent in content creation is no longer enough to prevent harm.

Creating without considering the human consequences, even out of curiosity or for entertainment, fuels collective desensitization as dignity and trust are eroded — making our societies more vulnerable to manipulation and indifference.

From a knowledge crisis to a moral crisis

AI literacy frameworks — conceptual frameworks that define the skills, knowledge and attitudes needed to understand, use and evaluate AI critically and responsibly — have led to significant advances in critical thinking and vigilance. The next step is to incorporate a more human dimension: reflecting on the effects of what we create on others.

Synthetic media undermine our confidence in knowledge because they make the false credible, and the true questionable. The result is that we end up doubting everything: facts, others, sometimes even ourselves. But the crisis we face today goes beyond the epistemic: it is a moral crisis.

Most young people today know how to question manipulated content, but they don’t always understand its human consequences. Young activists, however, are the exception. Whether in Gaza or amid other humanitarian struggles, they are experiencing both the power of digital technology as a tool for mobilization — hashtag campaigns, TikTok videos, symbolic blockades, coordinated actions — and the moral responsibility that this power carries.

It is no longer truth alone that is wavering; our sense of responsibility is wavering with it.

The relationship between humans and technology has been extensively studied. But the relationships between humans through technology-generated content have received far less attention.

Towards moral sobriety in the digital world

The human impact of AI — moral, psychological, relational — remains the great blind spot in our thinking about the uses of the technology.

Every deepfake, every “prank,” every visual manipulation leaves a human footprint: loss of trust, fear, shame, dehumanization. Just as emissions pollute the air, these attacks pollute our social bonds.

Learning to measure this human footprint means thinking about the consequences of our digital actions before they materialize. It means asking ourselves:

  • Who is affected by my creation?
  • What emotions and perceptions does it evoke?
  • What mark will it leave on someone’s life?

Building a moral ecology of digital technology means recognizing that every image and every broadcast shapes the human environment in which we live.

Educating young people not to want to harm

Laws like the European AI Act define what should be prohibited, but no law can teach why we should not want to cause harm.

In concrete terms, this means:

  • Cultivating personal responsibility by helping young people feel accountable for their creations.
  • Transmitting values through experience, by inviting them to create and then reflect: how would this person feel?
  • Fostering intrinsic motivation, so that they act ethically out of consistency with their own values, not fear of punishment.
  • Involving families and communities, transforming schools, homes and public spaces into places for discussion about the human impacts of unethical or simply ill-considered uses of generative AI.

In the age of manufactured media, thinking about the human consequences of what we create is perhaps the most advanced form of intelligence.

Nadia Naffi, Associate Professor, Educational Technology, Université Laval and Ann-Louise Davidson, Innovation Lab Director and Professor, Educational Technology and Innovation Mindset, Concordia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.



by External Contributor via Digital Information World
