Friday, May 1, 2026

‘Just looping you in’: Why letting AI write our emails might actually create more work

Daniel Angus, Queensland University of Technology

I hope this article finds you well.

Did that make you cringe, ever so slightly? In the decades since the very first email was sent in 1971, the technology has become the quiet infrastructure of white-collar work.

Email came with the promise of efficiency, clarity and less friction in organisational communication. Instead, for many, it has morphed into something else: always there, near impossible to escape and sometimes simply overwhelming.

Right now, something is shifting again. The rise of generative artificial intelligence (AI) technologies, such as ChatGPT and Microsoft Copilot, is increasingly allowing people to offload the repetitive routines of tending one’s inbox – drafting, summarising and replying.

My colleagues in the ARC Centre of Excellence for Automated Decision-Making and Society found 45.6% of Australians have recently used a generative AI tool, with 82.6% of those users employing it for text generation. A healthy chunk of that use likely includes email.

So, what happens if we end up fully automating one of the staples of the white-collar daily grind? Will AI technologies reduce some of the friction, or generate new forms of it? Dare I ask – are we actually about to get more email?

Email has long been about more than just communicating information. Vitaly Gariev/Unsplash

Why the printer isn’t dead yet

Soon after the advent of email, some voices in the business world heralded the coming end of paper use in the office. That didn’t happen. If you work in an office today, there’s a good chance you still have a printer.

In their 2001 book, The Myth of the Paperless Office, Abigail Sellen and Richard Harper show how digital tools rarely eliminate older forms of work. Instead, they reshape them.

Sellen and Harper show how paper use didn’t disappear with the rise of email and other digital communication tools; in many cases, it intensified. The takeaway isn’t that offices failed to modernise, but rather that work reorganised around what these new tools could do.



In this case, paper persisted not only out of habit, but because of what it affords: it is easy to annotate, spread out, carry and view at a glance. This was all too clunky (or impossible) to perform via the digital alternatives.

At the same time, email and digitisation dramatically lowered the cost of producing and distributing communication. It was far easier to send more messages, to more people, more often.

Circling back to today

Will AI be different? If early signs are anything to go by, the answer is: not in the way we might hope.

Like earlier waves of workplace technology, AI is less likely to replace existing communication practices than to intensify them – but at least it might come with better grammar and a suspiciously upbeat tone.

Some new AI tools offer to manage your inbox entirely, feeding into broader privacy concerns about the technology.

At this moment, what a lot of these products seem to offer is not an escape from email, but a smoothing of its rough edges. Workers are using AI to soften otherwise blunt requests, modify their tone or expand what might otherwise be considered too brief a response.

Rather than removing the need to communicate, these tools offer pathways to make a delicate performance easier.

What email is actually for

Email, like many forms of communication, is as much about maintaining everyday relationships as it is about the transfer of information.

At work, it’s often about signalling competence, responsiveness, collegiality and authority. “Just looping someone in” and “circling back” are part of our absurd office vocabulary, a shared dialect that helps us navigate hierarchy, soften demands and keep things moving – all without saying what we really think.

If AI lowers the effort required to produce these signals, it won’t necessarily reduce their importance, but it could unsettle things in rather odd ways.

If more people use AI to draft emails they don’t particularly want to write, we end up with a game of bureaucratic “mime”: everyone performing sincerity and quietly outsourcing it, and no one entirely sure how much of their inbox was actually written by a human.

The labour of email was never just about crafting sentences. It’s always been the scanning, the sorting and the deciding. AI doesn’t remove this burden. If anything, it amplifies it.

When everything arrives polished, everything looks important. That points to a deeper question for the future of work: if AI can perform responsiveness, why are we generating so many situations that still require it?

Looking forward

What would a workplace look like if email wasn’t the default solution to every coordination problem? Perhaps fewer performative check-ins, “just touching base”, “looping you in” or “following up on the below”. And clearer expectations about what actually requires a response, and what doesn’t.

Email, like paper, is likely to persist for good reasons. It is simple, flexible and universal. It allows things to be deferred, revisited, forwarded and quietly ignored.

But if AI is going to change any of this, my hope is that it makes visible how much of this is ritual, how much is habit, and how much has long been unnecessary.

And if the machines are happy to keep saying “hope this finds you well” to each other, we might finally have permission to stop.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.



by External Contributor via Digital Information World

AI analysis of police body-camera footage raises Constitutional concerns, racial disparities

Thousands of officer-worn camera recordings found evidence of underreported police stops, troubling racial disparities in officer interactions, and widespread use of unclear language during consent searches, a new study shows.

Image: Raphael Lopes / Unsplash

Researchers at the University of Michigan, University of California-Davis and Stanford University say their findings raise constitutional concerns under both the Fourth and Fourteenth Amendments, which protect against unreasonable searches and seizures and prohibit discriminatory practices based on race and ethnicity, respectively.

The report highlights how artificial intelligence could transform police oversight by helping reviewers identify potentially problematic encounters hidden within millions of hours of body-camera footage. The research demonstrates the growing potential for AI-powered analysis to help courts, police departments and municipal governments better evaluate compliance while building greater public trust in law enforcement.

Using machine learning and natural language processing, researchers examined New York Police Department (NYPD) encounters captured on body-worn cameras, looking closely at whether officers followed legal standards governing stops, detentions and consent searches.

Among the study’s most significant findings:

  • Body-camera recordings could be classified as stops with over 80% accuracy, and as underdocumented stops with over 70% accuracy, based on language alone.
  • Using language models, reviewers could uncover over 50% of undocumented stops identified in manual audits by viewing a fraction (25%) of the footage they normally would.
  • Officers frequently relied on indirect or confusing phrases such as “Do you mind if I check?” rather than clearly asking for consent to search.
  • The word “consent” appeared in less than 13% of consent-search interactions reviewed.
  • Commands and indirect requests appeared more frequently in encounters involving Black and Hispanic civilians.
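The triage idea behind the second finding — ranking recordings by a model's score so reviewers watch only the most suspicious fraction — can be sketched in a few lines. This is an illustration only: the study's models and data are not public, so the `triage_recall` function, the scores and the labels below are all made up for demonstration, not the researchers' actual method.

```python
def triage_recall(scores, labels, review_fraction=0.25):
    """Rank recordings by model score, 'review' only the top fraction,
    and return the share of true positives (e.g. undocumented stops)
    that this prioritised review would catch."""
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    k = max(1, int(len(ranked) * review_fraction))  # how many to review
    caught = sum(label for _, label in ranked[:k])  # positives in reviewed set
    total = sum(labels)                             # positives overall
    return caught / total if total else 0.0

# Toy example: 8 recordings with hypothetical classifier scores and
# ground-truth labels (1 = undocumented stop found by manual audit).
scores = [0.95, 0.10, 0.80, 0.30, 0.70, 0.05, 0.60, 0.20]
labels = [1,    0,    1,    0,    0,    0,    1,    0]
print(triage_recall(scores, labels, 0.25))  # reviews top 2 of 8 recordings
```

With well-calibrated scores, reviewing a quarter of the footage can recover well over a quarter of the positives, which is the efficiency gain the study reports.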

Nicholas Camp, U-M assistant professor of organizational studies, said these patterns raise questions about whether some civilians clearly understood they could refuse searches and whether certain encounters were documented accurately.

The study stems from reforms ordered after the landmark 2013 federal court ruling in Floyd v. City of New York, in which the U.S. District Court for the Southern District of New York found that the NYPD’s stop-and-frisk practices violated constitutional protections against unreasonable searches and racial discrimination.

Following the ruling, the court appointed an independent monitor to oversee reforms involving NYPD training, supervision and investigative encounters. As part of those reforms, NYPD officers began using body-worn cameras, which captured numerous police-community interactions.

“These recordings provide a far clearer picture of officer behavior than written police reports alone,” Camp said.

The study, approved by the court in 2021, analyzed more than 1,700 encounters connected to an earlier City University of New York Institute for State and Local Governance review, more than 1,100 additional encounters reviewed by the Monitor team, and nearly 1,800 consent-search encounters from 2023.

AI models developed during the study successfully distinguished lower-level encounters from Level 3 stops—which legally require reasonable suspicion—with accuracy rates ranging from approximately 72% to 91%. Researchers say those tools could help oversight teams identify constitutional concerns faster and more consistently by prioritizing footage most likely to contain problematic interactions.

Researchers emphasized that artificial intelligence is not intended to replace human oversight, but instead serves as a tool to strengthen accountability, improve auditing and support ongoing police reform efforts.

“Our analyses identify troubling patterns in NYPD encounters, but also show a path forward: Body camera footage can be used as data to inform and measure changes in law enforcement,” Camp said.

The study’s authors also include Rob Voigt, assistant professor of linguistics, UC-Davis; Dan Sutton, director of Justice and Safety, Stanford Center for Racial Justice (Stanford Law School); and Jennifer Eberhardt, professor of organizational behavior and psychology, Stanford University.

Note: At the time of publication, we have reached out to the NYPD for comment regarding the study’s findings on body-camera analysis and will update this article if a response is received.

This post was originally published on the University of Michigan News and republished here with permission.

Reviewed by Irfan Ahmad.



by External Contributor via Digital Information World