Thursday, December 18, 2025

Want to Stand Out at Work? Avoid These Top 10 Email Clichés

If you “reach out” and “circle back” often in your work emails, you’re not alone. These phrases are among the most overused email clichés around the world, a new study finds.

Email has been around for more than 50 years, and it’s still the backbone of workplace communication. Its usage has only increased over the past few decades, so it’s no surprise that workers have come to rely on certain phrases to get their point across quickly.

The result is that our inboxes are flooded with emails where people are “following up,” “checking in,” and “touching base.” But just how often do we write these things? Email verification company ZeroBounce dove into the data – and the stats paint a fascinating picture of how we communicate with our peers.

Study: the top 10 clichés in workplace email communication

ZeroBounce analyzed over a million emails to compile a list of the most common email buzzwords in our workplace communication today.
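
For readers curious how a tally like this works in practice, below is a minimal sketch of counting how many emails contain each phrase at least once. The phrase list and the `count_cliches` helper are hypothetical illustrations, assuming a plain-text corpus – this is not ZeroBounce's actual pipeline:

```python
# Minimal sketch (illustration only, not ZeroBounce's methodology):
# count how many emails in a corpus contain each cliché at least once.
import re
from collections import Counter

# Hypothetical phrase list; the study's full list is longer.
PHRASES = [
    "reaching out", "following up", "check in", "aligned",
    "please advise", "hope this email finds you well", "circle back",
]

def count_cliches(emails):
    """Return, for each phrase, the number of emails containing it at least once."""
    counts = Counter()
    for body in emails:
        text = body.lower()
        for phrase in PHRASES:
            # Word boundaries prevent partial-word hits, e.g. "realigned"
            # should not count as "aligned".
            if re.search(r"\b" + re.escape(phrase) + r"\b", text):
                counts[phrase] += 1
    return counts

sample = [
    "Hi! Just reaching out to see if we can circle back next week.",
    "Following up on my earlier note. Please advise.",
]
print(count_cliches(sample).most_common())
```

Matching on word boundaries keeps near-misses such as “realigned” from inflating the count for “aligned,” which matters when the headline numbers are per-email tallies like the ones below.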

Here are the top 10:

  1. Reaching out: 6,117 emails
  2. Following up: 5,755 emails
  3. Check in: 4,286 emails
  4. Aligned: 1,714 emails
  5. Please advise: 1,459 emails
  6. Hope you’re doing well: 1,300 emails
  7. Hope this email finds you well: 974 emails
  8. Hope all is well: 592 emails
  9. E-meet: 536 emails
  10. Circle back: 533 emails

AI isn’t making our emails smarter, it’s copying our clichés (1M+ emails analyzed)

Other popular email phrases ZeroBounce identified are:
  • Happy Friday: 512 emails
  • Touch base: 331 emails
  • Hop on a call: 243 emails
  • Bandwidth: 220 emails
  • Happy Monday: 169 emails
  • Per my last email: 89 emails
  • Low-hanging fruit: 18 emails

How we can replace the most common clichés

While saying you’re “touching base” in an email isn’t inherently bad, some of these phrases are so ubiquitous nowadays that your message may start losing power. “People rely on these cookie-cutter phrases because they often don’t know how to start an email, especially when they’re following up on something they need,” says ZeroBounce founder and CEO Liviu Tanase.

“Our goal with this study wasn’t to shame anyone – we’ve all used these buzzwords at least once. But the findings remind us that there are other options out there. Before we hit send on that next email, it’s worth taking a minute to read through it and see if we can find a different way to convey our message,” Tanase adds.

A few alternative ways to “reach out”

It’s not always easy to come up with fresh email openers or ask someone (again) for something you need from them. But here are a few pointers to help make your next email stand out in the inbox:

  • Instead of “reaching out,” start with a positive comment about the person you’re emailing. Pick whatever is relevant in that moment – it could be a keynote they gave at a conference, an article they wrote, or a recent promotion. Make it about them and you’ll immediately get their attention.
  • If you’re sending a second or a third email asking for something you need, show empathy. You can start by acknowledging how busy life can get to show you understand why they haven’t written back. Then quickly ask your question again. To increase your chances of a reply, keep the email concise.
  • Avoid some of the most overused openers, like “hope this email finds you well.” Your recipient will tune out right from the beginning. Also, steer clear of “per my last email” – it comes across as passive-aggressive, even when you have the best intentions. As for “bandwidth,” it’s a turn-off for many people. Simply ask the person if they have the time to take on the task.

Being more intentional about the way we write can yield immediate results, especially in the age of artificial intelligence, where communication tends to sound generic and flat.

What our emails reveal about hidden stress at work

Aside from the heavy usage of these buzzwords, the study also found a small but telling sign of how workers handle work-related stress. The phrase “Happy Friday” appeared about three times as often as “Happy Monday” (512 vs. 169 emails). People are more likely to greet someone enthusiastically when they know the weekend is just around the corner. It’s just another reminder that we’d all benefit from building work cultures that foster less pressure and more positivity.

Article provided by ZeroBounce.

Read next:

• 43% of Web Requests Come from Mobile, Cloudflare Data Shows

• OpenAI Introduces GPT‑Image-1.5 Image Generation Update in ChatGPT With Faster Editing and Improved Accuracy Tools


by Guest Contributor via Digital Information World

Wednesday, December 17, 2025

43% of Web Requests Come from Mobile, Cloudflare Data Shows

According to Cloudflare's Year in Review, published on 15 December 2025, global internet traffic increased by 19 percent in 2025, with growth accelerating after August.

Cloudflare's global network spans 330 cities across more than 125 regions and handled an average of over 81 million HTTP requests per second.

Mobile devices, such as phones and tablets, accounted for 43 percent of web requests observed by Cloudflare – roughly four in ten.

In 117 countries, more than half of all internet traffic measured by Cloudflare originated from mobile devices.

Analysis of email traffic processed by Cloudflare found that an average of 5.6 percent of messages in 2025 were malicious.

Among top-level domains with sufficient message volume, nearly all email originating from the .christmas and .lol domains was classified as spam or malicious, making them the most abused top-level domains identified in the analysis.

Within the internet services category, ChatGPT by OpenAI remained the most popular generative AI platform, based on Cloudflare-observed global traffic data. But there was movement elsewhere in the rankings, highlighting the dynamic nature of the industry. Services that moved up include Perplexity, Claude/Anthropic, and GitHub Copilot, while Google Gemini, Windsurf AI, Grok/xAI, and DeepSeek were new entries in the top 10 for 2025.
ChatGPT led AI services; Perplexity, Claude, Copilot climbed, while Gemini, Grok, Windsurf, DeepSeek entered rankings.
Image: Cloudflare Radar

Google referred 89.5 percent of search engine traffic to websites protected and delivered by Cloudflare.

Chrome accounted for roughly two-thirds (66.2 percent) of browser-generated requests observed by Cloudflare in 2025.

The share of human-generated web traffic using post-quantum encryption increased from 29 percent at the start of 2025 to 52 percent by early December.

Traffic associated with SpaceX's Starlink satellite internet service grew roughly 2.3-fold over the course of the year.

Read next: OpenAI Introduces GPT‑Image-1.5 Image Generation Update in ChatGPT With Faster Editing and Improved Accuracy Tools
by Ayaz Khan via Digital Information World

OpenAI Introduces GPT‑Image-1.5 Image Generation Update in ChatGPT With Faster Editing and Improved Accuracy Tools

OpenAI has introduced an updated image generation feature in ChatGPT using its new GPT‑Image-1.5 model. According to the AI giant, the update changes how users generate and edit images: the model follows instructions more closely while maintaining visual details such as lighting and composition.

The system supports different types of image edits, from small adjustments to more extensive changes, and is reported to operate with image generation speeds up to four times faster than before. The model also shows improved handling of text within images and more complex visual scenes, though OpenAI notes that limitations remain.

The updated image feature is being rolled out to ChatGPT users on Free, Go, Plus, Edu, and Pro plans, with Business and Enterprise availability coming later. The model is also accessible through OpenAI’s developer API, where image processing costs are reduced compared with the previous version.

While GPT‑Image-1.5 offers impressive creative capabilities, both users and developers must prioritize ethical responsibility. Users should verify that AI-generated images are clearly labeled to prevent deception and be aware of risks from misuse such as deepfakes. Developers and companies must clearly label AI images, obtain explicit consent before training on any user data or images, and build effective safeguards against misuse. They should also implement age verification protections and establish clear processes for reporting misuse. With transparency, consent, and robust safeguards in place, AI image generation can serve beneficial creative purposes while protecting user privacy, dignity, and intellectual property rights.

Image: OpenAI

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: How good people justify bending the rules at work — and what leaders can do about it
by Irfan Ahmad via Digital Information World

Tuesday, December 16, 2025

Researchers Find Structured Human–AI Collaboration Is Required for Creative Gains

What generative AI typically does best – recognise patterns and predict the next step in a sequence – can seem fundamentally at odds with the intangibility of human creativity and imagination. However, Cambridge researchers suggest that AI can be a useful creative partner, as long as there is clear guidance on how ideas should be developed together.
Image: DIW-Aigen

It’s a big question for people working on creative tasks: can humans and AI work together to boost creativity? Studies of how humans and AI handle creative tasks together have yielded widely inconsistent results.

But a team of researchers, including from the University of Cambridge, say that while simply adding AI into a process won’t improve creativity, AI can be an effective creative partner, if there are clear instructions and guidance – for both humans and AI – on how to develop ideas together.

Their research, published in the journal Information Systems Research, offers practical insight into how human-AI collaboration can be structured to improve creativity.

“Adding AI doesn’t automatically lead to better ideas,” said co-author Dr Yeun Joon Kim, from Cambridge Judge Business School. “For human-AI pairs to work together and improve ideas over time, organisations must provide targeted support – such as guidance on how to build on and adapt ideas – to help employees and AI learn how to create more effectively.”

In their research, Kim and his colleagues redefined ‘augmented learning’ – a term first used in 1962 to describe how technology can help people learn more effectively.

The researchers argue that in the age of generative AI (GenAI), learning is no longer just about improving human understanding. Instead, it’s becoming a shared process where humans and AI learn and create together. The researchers describe this as an evolving partnership where both sides adjust their roles across tasks such as generating ideas, giving feedback and refining concepts.

Traditionally, technology was seen as a tool that simply made information easier to access. But GenAI, they say, acts more like a collaborator. Once a human enters a prompt, the system can take an active role in shaping ideas and decisions: shifting augmented learning from an individual process to a collective one.

The study points to Netflix as an example of human–AI teamwork. Rather than treating scriptwriting as a single task, Netflix breaks it into stages like idea generation and evaluation. Human writers create early drafts, while AI analyses character arcs, pacing and audience trends to help refine stories and improve how shows are developed and marketed.

Kim says he became interested in this research because AI was originally developed to answer a longstanding question: “Can machines generate something genuinely new?” While traditional technologies excel at routine tasks, many doubted that technology could make a creative contribution.

“When GenAI systems became widely available, I noticed that although they could generate ideas rapidly, people did not know how to collaborate with them in a way that improved creativity,” he said.

“We wanted to figure out how people can learn to work with GenAI in a more intentional way, so that human–GenAI co-creation leads to stronger joint results rather than just more content,” said co-author Dr Luna Luan from the University of Queensland.

The research included three linked studies, each involving between 160 and 200 human participants. The first study found that human-AI teams did not automatically become more creative over time when tackling social and environmental problems. The second study explored why. It identified three types of collaboration – humans proposing ideas, asking AI for ideas, and jointly refining ideas – and found that only joint refinement boosted creativity. But participants rarely increased this behaviour. A third study showed that simply instructing people to focus more on co-developing ideas led to clear improvements in human–AI creativity across repeated tasks.

“We were surprised that human-AI pairs did not naturally improve through repeated collaboration,” said Kim. “Despite AI’s generative power, creativity did not increase over time. We found that improvement occurred only when we introduced a deliberate intervention.”

“Specifically, instructing participants to engage in idea co-development – focusing on exchanging feedback and refining existing ideas rather than endlessly generating new ideas – was the key.”

The researchers say GenAI tools need to do more than churn out ideas. Their findings show that human–AI teams became less creative over time, mainly because they stopped engaging in the back-and-forth refinement that actually improves results. They say that AI systems should be designed to prompt users to give feedback, expand on suggestions and refine ideas, rather than racing through idea generation.

For organisations, the message is that creativity won’t automatically improve just by adding AI. Effective collaboration requires structure: clear instructions, templates and workflows that help people recognise when to challenge, refine or build on AI-generated ideas. Training should also teach staff how to treat AI as a creative partner by practising feedback exchange and iterative improvement.

The authors also call for a shift in how companies think about human–AI work. Instead of choosing between automation and collaboration, tasks should be broken into stages where humans and AI play different roles: for example, AI generating options and humans evaluating and refining them.

They warn that although many firms rushed to adopt GenAI after the release of ChatGPT in 2022, simply using the technology does not guarantee greater creativity at work. Its impact depends on how well people understand it and how effectively they collaborate with it.

The research was co-authored by Luna Luan, a Lecturer in Management at the University of Queensland in Australia, who recently earned her PhD at Cambridge Judge Business School; Yeun Joon Kim of Cambridge Judge; and Jing Zhou, Professor of Management at Rice University in Texas.

Reference:

Yingyue Luna Luan, Yeun Joon Kim, Jing Zhou. ‘Augmented Learning for Joint Creativity in Human-GenAI Co-Creation.’ Information Systems Research (2025). DOI: 10.1287/isre.2024.0984

Adapted from a piece published on the Cambridge Judge Business School website.

This article was originally published by the University of Cambridge, with an updated title, and is shared under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

Read next: The ‘one chatbot per child’ model for AI in classrooms conflicts with what research shows: Learning is a social process

by External Contributor via Digital Information World

The ‘one chatbot per child’ model for AI in classrooms conflicts with what research shows: Learning is a social process

Niral Shah, University of Washington

Image: Ethan Garvey / unsplash

In the Star Trek universe, the audience occasionally gets a glimpse inside schools on the planet Vulcan. Young children stand alone in pods surrounded by 360-degree digital screens. Adults wander among the pods but do not talk to the students. Instead, each child interacts only with a sophisticated artificial intelligence, which peppers them with questions about everything from mathematics to philosophy.

This is not the reality in today’s classrooms on Earth. For many technology leaders building modern AI, however, a vision of AI-driven personalized learning holds considerable appeal. Outspoken venture capitalist Marc Andreessen, for example, imagines that “the AI tutor will be by each child’s side every step of their development.”

Years ago, I studied computer science and interned in Silicon Valley. Later, as a public school teacher, I was often the first to bring technology into my classroom. I was dazzled by the promise of a digital future in education.

Now as a social scientist who studies how people learn, I believe K-12 schools need to question predominant visions of AI for education.

Individualized learning has its place. But decades of educational research also make clear that learning is a social endeavor at its core. Classrooms that privilege personalized AI chatbots overlook that fact.

School districts under pressure

Generative AI is coming to K-12 classrooms. Some of the largest school districts in the country, such as Houston and Miami, have signed expensive contracts to bring AI to thousands of students. Amid declining enrollment, perhaps AI offers a way for districts to both cut costs and seem cutting edge.

Pressure is also coming from both industry and the federal government. Tech companies have spent billions of dollars building generative AI and see a potential market in public schools. Republican and Democratic administrations have been enthusiastic about AI’s potential for education.

Decades ago, educators promoted the benefits of “One Laptop per Child.” Today it seems we may be on the cusp of “one chatbot per child.” What does educational research tell us about what this model could mean for children’s learning and well-being?

Learning is a social process

During much of the 20th century, learning was understood mainly as a matter of individual cognition. In contrast, the latest science on learning paints a more multidimensional picture.

Scientists now understand that seemingly individual processes – such as building new knowledge – are actually deeply rooted in social interactions with the world around us.

Neuroscience research has shown that even from a young age, people’s social relationships influence which of our genes turn on and off. This matters because gene expression affects how our brains develop and our capacity to learn.

In classrooms, this suggests that opportunities for social interaction – for instance, children listening to their classmates’ ideas and haggling over what is true and why – can support brain health and academic learning.

Research in the social sciences has long since proved the value of high-quality classroom discourse. For example, in a well-cited 1991 study involving over 1,000 middle school students across more than 50 English classrooms, researchers Martin Nystrand and Adam Gamoran found that children performed significantly better in classes “exhibiting more uptake, more authenticity of questions, more contiguity of reading, and more discussion time.”

In short, research tells us that rich learning happens when students have opportunities to interact with other people in meaningful ways.

AI in classrooms lacks research evidence

What does all of this mean for AI in education?

Introducing any new technology into a classroom, especially one as alien as generative AI, is a major change. It seems reasonable that high-stakes decisions should be based on solid research evidence.

But there’s one problem: The studies that school leaders need just aren’t there yet. No one really knows how generative AI in K-12 classrooms will affect children’s learning and social development.

Current research on generative AI’s impact on student learning is limited, inconclusive and tends to focus on older students – not K-12 children. Studies of AI use thus far have tended to focus on either learning outcomes or individual cognitive activity.

Although standardized test scores and critical thinking skills matter, they represent a small piece of the educational experience. It is also important to understand generative AI’s real-life impact on students.

For example: How does it feel to learn from a chatbot, day after day? What is the longer-term impact on children’s mental health? How does AI use affect children’s relationships with each other and with their teachers? What kinds of relationships might children form with the chatbots themselves? What will AI mean for educational inequities related to social forces such as race and disability?

More broadly, I think now is the time to ask: What is the purpose of K-12 education? What do we, as a society, actually want children to learn?

Of course, every child should learn how to write essays and do basic arithmetic. But beyond academic outcomes, I believe schools can also teach students how to become thoughtful citizens in their communities.

To prepare young people to grapple with complex societal issues, the National Academy of Education has called for classrooms where students learn to engage in civic discourse across subject areas. That kind of learning happens best through messy discussions with people who don’t think alike.

To be clear, not everything in a classroom needs to involve discussions among classmates. And research does indicate that individualized instruction can also enhance social forms of learning.

So I don’t want to rule out the possibility that classroom-based generative AI might augment learning or the quality of students’ social interactions. However, the tech industry’s deep investments in individualized forms of AI – as well as the disappointing history of technology in classrooms – should give schools pause.

Good teaching blends social and individual processes. My concern about personalized AI tutors is how they might crowd out already infrequent opportunities for social interaction, further isolating children in classrooms.

Center children’s learning and development

Education is a relational enterprise. Technology may play a role, but as students spend more and more class time on laptops and tablets, I don’t think screens should displace the human-to-human interactions at the heart of education.

I see the beneficial application of any new technology in the classroom – AI or otherwise – as a way to build upon the social fabric of human learning. At its best it facilitates, rather than impedes, children’s development as people. As schools consider how and whether to use generative AI, the years of research on how children learn offer a way to move forward.

Niral Shah, Associate Professor of Learning Sciences & Human Development, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• Merriam-Webster Names “Slop” as 2025 Word of the Year

• Only 44% Of UK Public Trust Tech Companies As 89% Support Independent AI Oversight


by External Contributor via Digital Information World

Merriam-Webster Names “Slop” as 2025 Word of the Year

Merriam-Webster has chosen “slop” as its 2025 Word of the Year, the dictionary publisher announced on December 14, 2025.

The term is defined by Merriam-Webster as low-quality digital content produced usually in quantity by means of artificial intelligence.

Editors said the word reflected the large amount of AI-generated material encountered during the year, including unrealistic images, deceptive online content, automated writing, and similar media appearing across digital platforms.

Merriam-Webster noted that the word’s tone was less fearful than mocking, often used to highlight the limits of artificial intelligence rather than express alarm.

The publisher also reported high public interest in other words during 2025, including gerrymander, performative, touch grass, and tariff, based on "lookup data".

One important thing to note is that Merriam-Webster does not explain its selection process in detail; we also could not find a recent official page describing how this year's word was chosen.

Merriam-Webster names slop 2025 Word of the Year, mocking proliferation of low-quality AI-generated digital content.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-Aigen

Read next: Only 44% Of UK Public Trust Tech Companies As 89% Support Independent AI Oversight
by Asim BN via Digital Information World

Only 44% Of UK Public Trust Tech Companies As 89% Support Independent AI Oversight

As momentum behind meaningful legislation on AI in the UK has appeared to stall, new research from the Ada Lovelace Institute shows that this delay – and the government’s broader shift away from regulation – is increasingly out of step with public attitudes.

The nationally representative polling examines not only whether the UK public support regulation of AI, but also how they expect it to function, and where gaps between public expectations and policy ambition may lie. Key findings include:

  • The public support independent regulation. The UK public do not trust private companies to self-regulate. There is strong public support (89%) for an independent regulator for AI, equipped with enforcement powers.
  • The public prioritise fairness, positive social impacts and safety. AI is firmly embedded in public consciousness and 91% of the public feel it is important that AI systems are developed and used in ways that treat people fairly. They want this to be prioritised over economic gains, speed of innovation and international competition when presented with trade-offs.
  • The public feel disenfranchised and excluded from AI decision-making, and mistrust key institutions. Many people feel excluded from government decision-making. 84% fear that, when regulating AI, the government will prioritise its partnerships with large technology companies over the public interest.
  • The public expect ongoing monitoring and clear lines of accountability. People support mechanisms such as independent standards, transparency reporting and top-down accountability to ensure effective monitoring of AI systems, both before and after they are deployed.
  • Public trust in institutions shaping AI remains low. Over half (51%) do not trust large technology companies to act in the public's interest, while distrust is even higher for social media companies (69%), and 59% do not trust the government.

Nuala Polo, UK Public Policy Lead at the Ada Lovelace Institute, said:

“Our research is clear: there is a major misalignment between what the UK public want and what the government is offering in terms of AI regulation. The government is betting big on AI, but success requires public trust. When people do not trust that government policy will protect them, they are less likely to adopt new technologies, and more likely to lose confidence in public institutions and services, including the government itself.”

Michael Birtwistle, Associate Director at the Ada Lovelace Institute, said:

“Examples of the unmanaged risks – and sometimes fatal harms – of AI systems are increasingly making the headlines. Trust is built with meaningful incentives to manage harm. We see these incentives in food, aviation and medicines – consequential technologies like AI should not be treated any differently. Continued inaction on AI harms will come with serious costs to the potential benefits of adoption.”

Note: This post was first published by the Ada Lovelace Institute, with minor additions, including the paragraph on public trust in institutions shaping AI and an updated title.

Read next: Best way for employers to support employees with chronic mental illness is by offering flexibility
by Press Releases via Digital Information World