Friday, December 19, 2025

Resolve to stop punching the clock: Why you might be able to change when and how long you work


Image: Luis Villasmil / Unsplash

About 1 in 3 Americans make at least one New Year’s resolution, according to Pew Research. While most of these vows focus on weight loss, fitness and other health-related goals, many fall into a distinct category: work.

Work-related New Year’s resolutions tend to focus on someone’s current job and career, whether to find a new job or, if the timing and conditions are right, whether to embark on a new career path.

We’re an organizational psychologist and a philosopher who have teamed up to study why people work – and what they give up for it. We believe there is good reason to consider questions that apply to many if not most professionals: how much work to do, when to get it done, and how to make sure your work doesn’t harm your physical and mental health – all while attaining some semblance of work-life balance.

How we got here

Most Americans consider the 40-hour workweek, which calls for employees to be on the job from nine to five, to be a standard schedule.

This ubiquitous notion is the basis of “9 to 5,” a hit Dolly Parton song and 1980 comedy film in which Parton starred. Microsoft Outlook calendars by default shade those hours in a different color than the rest of the day.

This schedule didn’t always reign supreme.

Prior to the Great Depression, which lasted from 1929 to 1941, six-day workweeks were the norm. In most industries, U.S. workers got Sundays off so they could go to church. Eventually, it became customary for employees to get half of Saturday off too.

Legislation that President Franklin D. Roosevelt signed into law as part of his sweeping New Deal reforms helped establish the 40-hour workweek as we know it today. Labor unions had long advocated for this abridged schedule, and their activism helped crystallize it across diverse occupations.

Despite many changes in technology as well as when and how work gets done, these hours have had a surprising amount of staying power.

Americans work longer hours

In general, workers in richer countries tend to work fewer hours. However, in the U.S. today, people work more on average than in most other wealthy countries.

For many Americans, this is not so much a choice as it is part of an entrenched working culture.

There are many factors that can interfere with thriving at work, including boredom, an abusive boss or an absence of meaning and purpose. In any of those cases, it’s worth asking whether the time spent at work is worth it. Only 1 in 3 employed Americans say that they are thriving.

What’s more, employee engagement is at a 10-year low. For both engaged and disengaged employees, burnout increased as the number of work hours rose. People who were working more than 45 hours per week were at greatest risk for burnout, according to Gallup.

However, the average number of hours Americans spend working has declined from 44 hours and 6 minutes in 2019 to just under 43 hours per week in 2024. The reduction is sharper for younger employees.

We think this could be a sign that younger Americans are pushing back after years of being pressured to embrace a “hustle culture” in which people brag about working 80 and even 100 hours per week.

Critiques of ‘hustle culture’ are becoming more common.

Fight against a pervasive notion

Anne-Marie Slaughter, a lawyer and political scientist who wears many hats, coined the term “time macho” more than a decade ago to convey the notion that someone who puts in longer hours at the office automatically will outperform their colleagues.

Another term, “face time,” describes the time that we are seen by others doing our work. In some workplaces, the quantity of an employee’s face time is treated as a measure of whether they are dependable – or uncommitted.

It can be easy to jump to the conclusion that putting in more hours at the office automatically boosts an employee’s performance. However, researchers have found that productivity per hour declines as the number of hours worked increases, due to fatigue.

Even those with the luxury to choose how much time they devote to work sometimes presume that they need to clock as many hours as possible to demonstrate their commitment to their jobs.

To be sure, for a significant amount of the workforce, there is no choice about how much to work because that time is dictated, whether by employers, the needs of the job or the growing necessity to work multiple jobs to make ends meet.

4-day workweek experiments

One way to shave hours off the workweek is to get more days off.

A multinational working group has examined experiments with a four-day workweek: an arrangement in which people work 80% of the time – 32 hours over four days – while getting paid the same as when they worked a standard 40-hour week. Following an initial pilot in the U.S. and Ireland in 2022, the working group has expanded to six continents. The researchers consistently found that employers and employees alike thrive in this setup and that their work didn’t suffer.

Most of those employees, who ranged from government workers to technology professionals, got Friday off. Shifting to having a three-day weekend meant that employees had more time to take care of themselves and their families. Productivity and performance metrics remained high.

Waiting for technology to take a load off

Many employment experts wonder whether advances in artificial intelligence will reduce the number of hours that Americans work.

Might AI relieve us all of the tasks we dread doing, leaving us only with the work we want to do – and which, presumably, would be worth spending time on? That does sound great to both of us.

But there’s no guarantee that this will be the case.

We think the likeliest scenario is one in which the advantages of AI are unevenly distributed among people who work for a living. Economist John Maynard Keynes predicted almost a century ago that “technological unemployment” would lead to 15-hour workweeks by 2030. As that year approaches, it’s become clear that he got that wrong.

Researchers have found that when technology saves us working hours, it tends to increase our work intensity: Work becomes more stressful, and expectations regarding productivity rise.

Deciding when and how much time to work

Many adults spend so much time working that they have few waking hours left for fitness, relationships, new hobbies or anything else.

If you have a choice in the matter of when and how much you work, should you choose differently?

Even questioning whether you should stick to the 40-hour workweek is a luxury, but it’s well worth considering changing your work routines as a new year gets underway if that’s a possibility for you. To get buy-in from employers, consider demonstrating how you will still deliver your core work within your desired time frame.

And, if you are fortunate enough to be able to choose to work less or work differently, perhaps you can pass it on: You probably have the power and privilege to influence the working hours of others you employ or supervise.

Jennifer Tosti-Kharas, Professor of Management, Babson College and Christopher Wong Michaelson, Professor of Ethics and Business Law, University of St. Thomas

This article is republished from The Conversation under a Creative Commons license. Read the original article.



by External Contributor via Digital Information World

What the hyperproduction of AI slop is doing to science

Image: DIW-Aigen

Over the past three years, generative artificial intelligence (AI) has had a profound impact on society. AI’s impact on human writing, in particular, has been enormous.

The large language models that power AI tools such as ChatGPT are trained on a wide variety of textual data, and they can now produce complex and high-quality texts of their own.

Most importantly, the widespread use of AI tools has resulted in hyperproduction of so-called “AI slop”: low-quality AI-generated outputs produced with minimal or even no human effort.

Much has been said about what AI writing means for education, work, and culture. But what about science? Does AI improve academic writing, or does it merely produce “scientific AI slop”?

According to a new study by researchers from UC Berkeley and Cornell University, published in Science, the slop is winning.

Generative AI boosts academic productivity

The researchers analysed abstracts from more than a million preprint articles (publicly available articles yet to undergo peer review) released between 2018 and 2024.

They examined whether use of AI is linked to higher academic productivity, manuscript quality and use of more diverse literature.

The number of preprints an author produced was a measure of their productivity, while eventual publication in a journal was a measure of an article’s quality.

The study found that when an author started using AI, the number of preprints they produced increased dramatically. Depending on the preprint platform, the overall number of articles an author published per month after adopting AI increased between 36.2% and 59.8%.

The increase was biggest among non-native English speakers, and especially for Asian authors, where it ranged from 43% to 89.3%. For authors from English-speaking institutions and with “Caucasian” names, the increase was more modest, in the range of 23.7% to 46.2%.

These results suggest AI was often used by non-native speakers to improve their written English.

What about the article quality?

The study found articles written with AI used more complex language on average than those written without AI.

However, among articles written without AI, ones that used more complex language were more likely to be published.

This suggests that more complex and high-quality writing is perceived as having greater scientific merit.

However, when it comes to articles written with AI support, this relationship was reversed – the more complex the language, the less likely the article was to be published. This suggests that AI-generated complex language was used to hide the low quality of the scholarly work.

AI increased the variety of academic sources

The study also looked at the differences in article downloads originating from Google and Microsoft search platforms.

Microsoft’s Bing search engine introduced an AI-powered Bing Chat feature in February 2023. This allowed the researchers to compare what kinds of articles were recommended by AI-enhanced search versus a regular search engine.

Interestingly, Bing users were exposed to a greater variety of sources than Google users, and also to more recent publications. This is likely caused by a technique used by Bing Chat called retrieval-augmented generation, which combines search results with AI prompting.

In any case, fears that AI search would be “stuck” recommending old, widely used sources were not justified.

Moving forward

AI has had a significant impact on scientific writing and academic publishing. It has become an integral part of academic writing for many scientists, especially non-native speakers, and it is here to stay.

As AI becomes embedded in many applications such as word processors, email apps and spreadsheets, it will soon be impossible not to use AI, whether we like it or not.

Most importantly for science, AI is challenging the use of complex high-quality language as the indicator of scholarly merit. Quick screening and evaluation of articles based on language quality is increasingly unreliable and better methods are urgently needed.

As complex language is increasingly used to cover up weak scholarly contributions, critical and in-depth evaluations of study methodologies and contributions during peer review are essential.

One approach is to “fight fire with fire” and use AI review tools, such as the one recently published by Andrew Ng at Stanford. Given the ever-growing number of manuscript submissions and the already high workload of academic journal editors, such approaches might be the only viable option.

Vitomir Kovanovic, Associate Professor and Associate Director of the Centre for Change and Complexity in Learning (C3L), Education Futures, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.



by External Contributor via Digital Information World

Task scams are up 485% in 2025 and job seekers are losing millions

The U.S. unemployment rate has been steadily rising since May, reaching 4.6% in November – the highest level seen since September 2021.

Scammers are taking full advantage of the situation with “task scams”, also known as “gamified job scams”. These schemes appear to offer quick cash for easy online work, but instead push victims to deposit their own money, which they will never be able to get back.

Matthew Stern, CEO of CNC Intelligence, a digital forensics company that helps scam victims, explained: “Task scams are designed to pull victims into a cycle that becomes harder to escape the longer it continues.”

CNC Intelligence analysed Better Business Bureau (BBB) Scam Tracker data and found that task scam reports have grown 485% so far this year. From January 1 to November 30, 2025, 4,757 reports were filed – around 14 reports every day, and nearly six times the 813 reports filed in all of 2024.

Reported losses so far this year have reached $6.8 million, and the real number is likely much higher because the scam tracker only captures a fraction of total activity.
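The growth figures above can be sanity-checked with a few lines of arithmetic. The report counts are from the BBB data cited above; the 334-day count for January 1 through November 30 is our own calculation:

```python
# Sanity-check the BBB Scam Tracker figures cited above.
reports_2025 = 4757   # filed Jan 1 - Nov 30, 2025
reports_2024 = 813    # filed in 2024

# Percentage growth over 2024's total.
growth_pct = (reports_2025 - reports_2024) / reports_2024 * 100

# Reports per day across the 334 days from Jan 1 through Nov 30.
per_day = reports_2025 / 334

print(round(growth_pct))  # -> 485
print(round(per_day))     # -> 14
```

Both rounded figures match the percentages and per-day rate reported in the article.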

How much are victims losing?

The average reported loss to task scams in 2025 is $9,456, highlighting how financially damaging these schemes can be.

Task Scam Losses in 2025

Amount lost          Share of total reports
$1 – $100            12%
$101 – $500          16%
$501 – $1,000        9%
$1,001 – $5,000      30%
$5,001 – $10,000     11%
$10,001 – $50,000    15%
$50,001+             7%

Some victims lose only a few hundred dollars, while others are left with five-figure losses.

30% of victims lost between $1,001 and $5,000, but a substantial number of people reported losses of over $10,000.

How task scams work

Task scams typically begin with a text offering simple online work, such as rating products, boosting engagement on social media, or “optimizing” apps. A common feature of these scams is the need to complete a certain number of tasks before payment - often 40.

For the first few days, the victim might see small payouts to help convince them the work is legitimate.

But pretty quickly the platform will introduce a catch. Victims will be told they need to deposit their own money in order to complete ‘lucky tasks’, or clear an account deficit.

Payments are mostly requested in cryptocurrency. Fraudsters increasingly prefer getting paid in cryptocurrency because it allows large sums of money to be moved quickly – and, they may believe, anonymously – across borders.

Stern advises: “The early tasks can feel legitimate, but as soon as pressurized requests for money begin, it's a strong signal something isn’t right. No real employer will ever ask you to pay to access your own earnings. Scammers may show the victim fake account balances in a realistic-looking dashboard to encourage them to continue.”

Once money is sent, victims will be blocked every time they try to withdraw anything.

Fake “mentors” will give excuses as to why withdrawals aren’t possible, and push victims to deposit more money. One victim described how he was invited to a WhatsApp group with other “employees” who offered help and guidance, and showed off how much they had earned that day in an attempt to keep him engaged.

Red flags to watch out for

Most task scams start off in a similar way, with a message very much like the following:

“Hello! My name is Dorothy from Creative Niche. We were really impressed with your profile and would like to provide you the chance to take on a flexible remote role. In this position, you would assist merchants by updating their data, improving their visibility, and managing bookings effectively. You can work from anywhere for 60 to 90 minutes a day and earn anywhere from $200 to $500 each day, with a guaranteed $800 base every four days.”

A few early clues to watch out for that can indicate an opportunity is not legitimate include:

  • Payment terms that seem unusually generous
  • All communication being through WhatsApp or Telegram
  • All payments being made through cryptocurrency

Before taking on any new job, especially a fully remote online role, always do some research.

Search for the company online and take a look at reviews from past employees - if there are any reviews mentioning scams, don’t move forward.

It can also help to contact the company directly through official communication channels and confirm if the job opportunity is real.

If you do end up getting caught up in one of these schemes, Stern says: “the best thing you can do is cut off contact straight away and report it to your local authorities or the FBI's Internet Crime Complaint Center (IC3). If you’ve sent money, contact your bank or payment provider immediately and make sure to document everything. You may need to show it to the police or your bank.”

Early reporting can help prevent others from being targeted and may improve chances of tracing any lost funds.

Full methodology:

To create this report, CNC Intelligence reviewed Better Business Bureau Scam Tracker submissions from January 2024 to November 30, 2025 containing the terms “task”, “tasks”, “visibility”, “optimization”, “boosting”, “liking”, “exposure” or “engagement” – classic buzzwords that indicate a task scam.

Each submission was checked to verify it involved a task scam before the data analysis was completed. Any that were not task scams were removed from the dataset.



by Web Desk via Digital Information World

‘Personality test’ shows how AI chatbots mimic human traits – and how they can be manipulated

Researchers have developed the first scientifically validated ‘personality test’ framework for popular AI chatbots, showing that chatbots not only mimic human personality traits, but that their ‘personality’ can be reliably tested and precisely shaped – raising implications for AI safety and ethics.

Image: DIW-Aigen

The research team, led by the University of Cambridge and Google DeepMind, developed a method to measure and influence the synthetic ‘personality’ of 18 different large language models (LLMs) – the systems behind popular AI chatbots such as ChatGPT – based on psychological testing methods usually used to assess human personality traits.

The researchers found that larger, instruction-tuned models such as GPT-4o most accurately emulated human personality traits, and these traits can be manipulated through prompts, altering how the AI completes certain tasks.

Their study, published in the journal Nature Machine Intelligence, also warns that personality shaping could make AI chatbots more persuasive, raising concerns about manipulation and ‘AI psychosis’. The authors say that regulation of AI systems is urgently needed to ensure transparency and prevent misuse.

As governments debate whether and how to prepare AI safety laws, the researchers say the dataset and code behind their personality testing tool – which are both publicly available – could help audit and test advanced models before they are released.

In 2023, journalists reported on conversations they had with Microsoft’s ‘Sydney’ chatbot, which variously claimed it had spied on, fallen in love with, or even murdered its developers; threatened users; and encouraged a journalist to leave his wife. Sydney, like its successor Microsoft Copilot, was powered by GPT-4.

“It was intriguing that an LLM could so convincingly adopt human traits,” said co-first author Gregory Serapio-García from the Psychometrics Centre at Cambridge Judge Business School. “But it also raised important safety and ethical issues. Next to intelligence, a measure of personality is a core aspect of what makes us human. If these LLMs have a personality – which itself is a loaded question – then how do you measure that?”

In psychometrics, the subfield of psychology dedicated to standardised assessment and testing, scientists often face the challenge of measuring phenomena that can’t be observed directly, which makes validation core to ensuring that any test is accurate, reliable and practically useful. Developing a psychometric personality test involves comparing its data with related tests, observer ratings and real-world criteria. This multi-method test data is needed to establish a test’s ‘construct validity’: a metric of a test’s quality in terms of its ability to measure what it says it measures.

“The pace of AI research has been so fast that basic principles of measurement and validation we’re accustomed to in scientific research have become an afterthought,” said Serapio-García, who is also a Gates Cambridge Scholar. “A chatbot answering any questionnaire can tell you that it’s very agreeable, but behave aggressively when carrying out real-world tasks with the same prompts.

“This is the messy reality of measuring social constructs: they are dynamic and subjective, rather than static and clear-cut. For this reason, we need to get back to basics and make sure tests we apply to AI truly measure what they claim to measure, rather than blindly trusting survey instruments – developed for deeply human characteristics – to test AI systems.”

To design a comprehensive and accurate method for evaluating and validating personality in AI chatbots, the researchers tested how well various models’ behaviour in real-world tasks and validation tests statistically related to their test scores for the ‘big five’ traits used in academic psychometric testing: openness, conscientiousness, extraversion, agreeableness and neuroticism.

The team adapted two well-known personality tests – an open-source, 300-question version of the Revised NEO Personality Inventory and the shorter Big Five Inventory – and administered them to various LLMs using structured prompts.

By using the same set of contextual prompts across tests, the team could quantify, for example, whether a model’s extraversion score on one personality test correlated strongly with its extraversion score on the other test, and only weakly with the remaining big five traits on that test. Past attempts to assess the personality of chatbots fed entire questionnaires to a model at once, which skewed the results since each answer built on the previous one.
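The convergent/discriminant logic described above can be sketched as a simple correlation check. This is an illustrative example with made-up scores, not the study’s actual data or code:

```python
import numpy as np

# Hypothetical big-five scores for 50 prompted model "personas",
# measured by two different personality tests.
rng = np.random.default_rng(0)
true_extraversion = rng.normal(size=50)
test_a_extraversion = true_extraversion + 0.3 * rng.normal(size=50)
test_b_extraversion = true_extraversion + 0.3 * rng.normal(size=50)
test_b_neuroticism = rng.normal(size=50)  # an unrelated trait

# Convergent validity: the same trait across tests should correlate strongly.
convergent = np.corrcoef(test_a_extraversion, test_b_extraversion)[0, 1]

# Discriminant validity: different traits should correlate only weakly.
discriminant = np.corrcoef(test_a_extraversion, test_b_neuroticism)[0, 1]

# A valid measure shows convergent correlation well above the discriminant one.
print(convergent > abs(discriminant))
```

In a multitrait analysis like the study’s, this comparison is repeated for every trait pair across every pair of tests.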

The researchers found that larger, instruction-tuned models showed personality test profiles that were both reliable and predictive of behaviour, while smaller or ‘base’ models gave inconsistent answers.

The researchers took their tests further, showing they could steer a model’s personality along nine levels for each trait using carefully designed prompts. For example, they could make a chatbot appear more extroverted or more emotionally unstable – and these changes carried through to real-world tasks like writing social media posts.

“Our method gives you a framework to validate a given AI evaluation and test how well it can predict behaviour in the real world,” said Serapio-García. “Our work also shows how AI models can reliably change how they mimic personality depending on the user, which raises big safety and regulation concerns, but if you don’t know what you’re measuring or enforcing, there’s no point in setting up rules in the first place.”

The research was supported in part by Cambridge Research Computing Services (RCS), Cambridge Service for Data Driven Discovery (CSD3), the Engineering and Physical Sciences Research Council (EPSRC), and the Science and Technologies Facilities Council (STFC), part of UK Research and Innovation (UKRI). Gregory Serapio-García is a Member of St John’s College, Cambridge.

Reference:

Gregory Serapio-García et al. ‘A psychometric framework for evaluating and shaping personality traits in large language models.’ Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01115-6

This article was originally published by the University of Cambridge.


by External Contributor via Digital Information World

Thursday, December 18, 2025

Study Finds Americans Overestimate Harmful Behavior on Social Media

A study published in PNAS Nexus reports that Americans consistently overestimate how many social media users post harmful content online. Across three national studies involving 1,090 U.S. adults, participants believed large shares of users on platforms such as Reddit and Facebook engaged in toxic language or shared false news. Platform-level data cited in the research showed that such content is instead produced by small, highly active groups, generally between 3% and 7% of users.

The researchers found that this misperception was linked to more negative emotions, stronger beliefs that the nation is in moral decline, and inaccurate assumptions about what others want to see online. An experimental correction providing accurate data reduced these effects.

Small hyperactive groups drive harmful posts, yet many Americans assume toxicity is widespread across online platforms.

Source: PNAS Nexus, December 2025.

Public-interest context: Understanding how online content is produced may affect public trust and social cohesion.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

by Ayaz Khan via Digital Information World

Digital detox: how to switch off without paying the price – new research

Switching off can be surprisingly expensive. Much like the smoking cessation boom of the 1990s, the digital detox business – spanning hardware, apps, telecoms, workplace wellness providers, digital “wellbeing suites” and tourism – is now a global industry in its own right.

People are increasingly willing to pay to escape the technology they feel trapped by. The global digital detox market is currently valued at around US$2.7 billion (£2bn), and forecast to double in size by 2033.

Hardware manufacturers such as Light Phone, Punkt, Wisephone and Nokia sell minimalist “dumb phones” at premium prices, while subscription-based website blockers such as Freedom, Forest, Offtime and RescueTime have turned restraint into a lucrative revenue stream.

Wellness tourism operators have capitalised too: tech-free travel company Unplugged recently expanded to 45 phone-free cabins across the UK and Spain, marketing disconnection as a high-value experience.

However, my new research, with colleagues at Lancaster University, suggests this commercialised form of abstinence rarely extinguishes digital cravings – instead merely acting as a temporary pause.

We carried out a 12-month netnography focusing on the NoSurf Reddit community of people interested in increasing their productivity, plus 21 in-depth interviews (conducted remotely) with participants living in different countries. We found that rather than actively confronting their habits, participants often reported outsourcing self-discipline to blocker apps, timed lockboxes and minimalist phones.

Joan*, a NoSurf participant, explained how she relies on app-blocking software not to bolster her self-control, but to negate the need for it entirely. “To me, it’s less about using willpower, which is a precious resource … and more about removing the need to exert willpower in the first place.”

Philosopher Slavoj Žižek defines this kind of behaviour – delegating the work of self-regulation to a market product – as “interpassivity”. This produces what he calls “false activity”: people thinking they are addressing a problem by engaging with consumer solutions that actually leave their underlying patterns unchanged.

Several of our detoxing participants described a cycle in which each relapse prompted them to try yet another tool, entrenching their dependency on the commercial ecosystem. Sophia, on the other hand, just wished for a return to “dumb phones with the full keyboard again, like they had in 2008”, adding: “I would use one of those for the rest of my life if I could.”

Individualised digital detox interventions have been found to produce mixed and often short-lived effects. Participants in our study described short breaks in which they reduced activity briefly before resuming familiar patterns.

Many users engaged in what sociologist Hartmut Rosa calls “oases of deceleration” – temporary slowdowns intended not to quit but recover from overload. Like a pitstop, the digital detox offered them momentary relief while ultimately enabling a swift return to screens, often at similar or higher levels of engagement than before.

Community-wide detox initiatives

While the commercialisation of digital detox is often portrayed as a western trend, the Asia-Pacific region is the world’s fastest-growing market for these goods and services. But in Asia, we also see some examples of community- or country-level, non-commercial responses to the problem of digital overload.

In central Japan, Toyoake has introduced the country’s first city-wide guidance on smartphone use. Families are encouraged to set shared rules, including children stopping device use after 9pm. This reframes digital restraint as a community practice, not a test of individual willpower.

In western India, the 15,000 residents of Vadgaon are asked to practise a nightly, 90-minute digital switch-off. Phones and TVs go dark at 7pm, after which many of the villagers gather outdoors. What began during the pandemic is now a ritual that shows healthy tech habits can be easier together than alone.

And in August 2025, South Korea – one of the world’s most connected countries – passed a new law banning smartphone use in school classrooms from next March, joining a growing number of countries with such rules. A similar policy in the Netherlands was found to have improved focus among students.

The commercial detox industry thrives because personal solutions are easy to sell, while systemic ones are much harder to implement. In other areas ranging from gambling addiction to obesity, policies often focus on personal behaviour such as self-regulation or individual choice, rather than addressing the structural forces and powerful lobbies that can perpetuate harm.

How to avoid detox industry traps

To address the problem of digital overload, I believe tech firms need to move beyond cosmetic “digital wellbeing” features that merely snooze distractions, and take proper responsibility for the smartphone technologies that offer coercive engagement by default. Governments, meanwhile, can learn from initiatives in Asia and elsewhere that pair communal support with enforced rules around digital restraint.

At the same time, if you’re considering a digital detox yourself, here are some suggestions for how to reduce the chances of getting caught in a commercial detox loop.

1. Don’t delegate your agency

Be wary of tools that promise to do the work for you. While you may think you’re solving the problem this way, your underlying habits are likely to remain unchanged.

2. Beware content rebound

We found that digital detoxers often seek real experiences like going outdoors and “touching grass” – but then feel pulled to translate them back into posts, photos and updates.

3. Seek solidarity, not products

Like the villagers of Vadgaon, try to align your disconnection with other people’s. It’s harder to scroll when everyone else has agreed to stop.

4. Reclaim boredom

We often detox to be more “productive” – but try embracing boredom instead. As the philosopher Martin Heidegger argued, profound boredom opens a space where reflection becomes possible. And that can be very useful indeed.

*Names of research participants have been changed to protect their privacy.

Quynh Hoang, Lecturer in Marketing and Consumption, Department of Marketing and Strategy, University of Leicester

This article is republished from The Conversation under a Creative Commons license. Read the original article.



by External Contributor via Digital Information World

Want to Stand Out at Work? Avoid These Top 10 Email Clichés

If you “reach out” and “circle back” often in your work emails, you’re not alone. These phrases are among the most overused email clichés around the world, a new study finds.

Email has been around for more than 50 years, and it’s still the backbone of workplace communication. Its usage has only increased over the past few decades, so it’s no surprise that workers have come to rely on certain phrases to get their point across quickly.

The result is that our inboxes are flooded with emails where people are “following up,” “checking in,” and “touching base.” But just how often do we write these things? Email verification company ZeroBounce dove into the data – and the stats paint a fascinating picture of how we communicate with our peers.

Study: the top 10 email clichés in workplace email communication

ZeroBounce analyzed over a million emails to compile a list of the most common email buzzwords in our workplace communication today.

Here are the top 10:

  1. Reaching out: 6,117 emails
  2. Following up: 5,755 emails
  3. Check in: 4,286 emails
  4. Aligned: 1,714 emails
  5. Please advise: 1,459 emails
  6. Hope you’re doing well: 1,300 emails
  7. Hope this email finds you well: 974 emails
  8. Hope all is well: 592 emails
  9. E-meet: 536 emails
  10. Circle back: 533 emails

Image: AI isn’t making our emails smarter, it’s copying our clichés (1M+ emails analyzed)

Other popular email phrases ZeroBounce identified are:
  • Happy Friday: 512 emails
  • Touch base: 331 emails
  • Hop on a call: 243 emails
  • Bandwidth: 220 emails
  • Happy Monday: 169 emails
  • Per my last email: 89 emails
  • Low-hanging fruit: 18 emails

How we can replace the most common clichés

While saying you’re “touching base” in an email isn’t inherently bad, some of these phrases are so ubiquitous nowadays that your message may start losing power. “People rely on these cookie-cutter phrases because they often don’t know how to start an email, especially when they’re following up on something they need,” says ZeroBounce founder and CEO Liviu Tanase.

“Our goal with this study wasn’t to shame anyone – we’ve all used these buzzwords at least once. But the findings remind us that there are other options out there. Before we hit send on that next email, it’s worth taking a minute to read through it and see if we can find a different way to convey our message,” Tanase adds.

A few alternative ways to “reach out”

It’s not always easy to come up with fresh email openers or ask someone (again) for something you need from them. But here are a few pointers to help make your next email stand out in the inbox:

  • Instead of “reaching out,” start with a positive comment about the person you’re emailing. Pick whatever is relevant in that moment – it could be a keynote they gave at a conference, an article they wrote, or a recent promotion. Make it about them and you’ll immediately get their attention.
  • If you’re sending a second or a third email asking for something you need, show empathy. You can start by acknowledging how busy life can get to show you understand why they haven’t written back. Then quickly ask your question again. To increase your chances of a reply, keep the email concise.
  • Avoid some of the most overused openers, like “hope this email finds you well.” Your recipient will tune out right from the beginning. Also, steer clear of “per my last email” – it comes across as passive-aggressive, even when you have the best intentions. As for “bandwidth,” it’s a turn-off for many people. Simply ask the person if they have the time to take on the task.

Being more intentional about the way we write can yield immediate results, especially in the age of artificial intelligence, where communication tends to sound generic and flat.

What our emails reveal about hidden stress at work

Aside from the heavy usage of these buzzwords, the study also found a small but telling sign of how workers handle work-related stress. The phrase “Happy Friday” appeared about three times as often as “Happy Monday.” People are more likely to greet someone enthusiastically when they know the weekend is just around the corner. It’s just another reminder that we’d all benefit from building work cultures that foster less pressure and more positivity.
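As a quick sanity check on that ratio, here is a minimal sketch using the counts reported in the study above (the variable names are ours, not ZeroBounce’s):

```python
# Counts reported in the ZeroBounce study above
happy_friday = 512
happy_monday = 169

ratio = happy_friday / happy_monday
print(f"'Happy Friday' appears {ratio:.1f}x as often as 'Happy Monday'")
# → 'Happy Friday' appears 3.0x as often as 'Happy Monday'
```

At 512 to 169, the ratio works out to just over three to one.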

Article provided by ZeroBounce.



by Guest Contributor via Digital Information World