Mr Branding
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Friday, March 20, 2026
Blissful (A)Ignorance: People rarely notice AI-written messages in everyday communication
Image: Solen Feyissa - Pexels
These days, you may be reading AI-written news more often than you think. The same can be said for emails, texts, and social media sites, according to a new study by researchers at the University of Michigan and Duke University.
The study found that undisclosed AI use does not trigger suspicion among people. When AI use is disclosed or strongly suspected (that is, when people are already paying close attention to possible AI use), people typically judge senders negatively, said Andras Molnar, U-M assistant professor of psychology and study co-author.
“For example, when we already suspect that someone generated their message using AI, we tend to think of them as less friendly, less trustworthy, less authentic and so on, compared to when the same text is genuinely human-written,” he said. “This ‘AI penalty’ has been widely documented in past studies.”
What the “AI penalty” suggests is that people, on average, lean toward the negative interpretation that focuses on the person (e.g., the person was lazy) instead of the more positive interpretation that takes into account the situation (e.g., there was a lot of time pressure).
However, under more realistic conditions, audiences may be uncertain, or even completely unaware, of communicators’ potential use of AI. Molnar, along with lead author Jiaqi Zhu of Duke, conducted two online experiments with more than 1,300 U.S. adults to examine how both explicit disclosure and uncertainty regarding AI use affect social impressions in realistic communication contexts (e.g., email, social media, texting).
Their research, published in Computers in Human Behavior, highlights that even though there are these massive penalties in social interactions when AI use is known, people don’t naturally suspect AI use: Participants in realistic situations treated messages of unknown origin as if they were known to be genuinely human-written. In other words, those who use AI as a shortcut most likely get away with it and keep their positive impressions.
Molnar said that concerns about widespread rejection of AI-assisted communication may be overstated for now, though attitudes could shift as AI awareness grows.
Study: Blissful (A)Ignorance: Despite the widespread adoption of AI in communication, people do not suspect AI use in realistic contexts
This post was originally published by the University of Michigan News and is republished here with permission.
Reviewed by Irfan Ahmad.
Read next:
• Content Marketers Embrace AI in Content Creation
• A better method for identifying overconfident large language models
by External Contributor via Digital Information World
Content Marketers Embrace AI in Content Creation
Less than four years after the release of ChatGPT marked the beginning of the AI era, artificial intelligence has become an integral part of the content marketing toolkit. From drafting text and generating visuals to analyzing campaign performance, AI-powered tools now handle many day-to-day tasks, ideally freeing teams from routine work and making time for creative and strategic thinking.
According to the Statista+ Content Marketing Trend Study 2026, content creation is currently the most common application of AI tools. Just over half of the 252 surveyed B2B content marketing professionals said that their department uses AI to produce text, images or videos. Analytical tasks are another major use case, with 45 percent relying on AI for reporting and performance measurement.
Beyond these core areas, many marketers are also integrating AI into supporting processes. Around 4 in 10 respondents reported using AI for customer service as well as for ideation and inspiration. Others apply the technology to automate workflows, manage knowledge and documentation or for technical tasks such as search engine optimization. At 4 percent, only a small minority of organizations reported not having started using AI tools at all.
For more insights on AI in content marketing, download the 8th edition of our B2B Content Marketing Trend Study for free here.
This post was originally published on Statista and is republished under Creative Commons License CC BY-ND.
Reviewed by Asim BN.
Read next: A better method for identifying overconfident large language models
by External Contributor via Digital Information World
A better method for identifying overconfident large language models
Image: Marija Zaric / Unsplash
This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.
Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.
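To make this concrete, here is a minimal sketch of that resampling check, assuming a generic `ask_model` callable (a placeholder for whatever LLM API is in use, not any specific library function):

```python
# Minimal sketch of self-consistency sampling: query the same model
# repeatedly and score how often its answers agree with the majority.
# `ask_model` is a hypothetical callable standing in for any LLM API.
from collections import Counter

def self_consistency(ask_model, prompt: str, n_samples: int = 10) -> float:
    """Fraction of sampled answers matching the majority answer.
    High agreement means high model self-confidence."""
    answers = [ask_model(prompt) for _ in range(n_samples)]
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / n_samples

# Toy usage with a stand-in model that always answers the same way:
print(self_consistency(lambda p: "42", "What is 6 x 7?"))  # 1.0
```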
But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.
To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.
Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.
They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.
“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.
She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.
Understanding overconfidence
Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.
However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.
The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.
“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.
Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.
To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.
An ensemble approach
The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.
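For illustration, a minimal sketch of that disagreement measure follows; it substitutes a crude token-overlap (Jaccard) score for the semantic-similarity comparison the researchers describe, since their exact similarity model is not specified here:

```python
# Minimal sketch of cross-model disagreement. A token-overlap score
# stands in for a real semantic-similarity model, purely for illustration.

def token_overlap(a: str, b: str) -> float:
    """Crude similarity proxy: Jaccard overlap of lowercased word sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def epistemic_uncertainty(target_response: str,
                          ensemble_responses: list[str]) -> float:
    """Mean dissimilarity between the target model's answer and answers
    from independently trained models; higher means more disagreement."""
    sims = [token_overlap(target_response, r) for r in ensemble_responses]
    return 1.0 - sum(sims) / len(sims)

# Toy usage: one ensemble member disagrees, raising the estimate.
print(epistemic_uncertainty(
    "Paris is the capital of France",
    ["The capital of France is Paris", "Paris", "Lyon is the capital"]))
```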
To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.
“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.
Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.
“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
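Based on that description, a sketch of the combination step; treating total uncertainty as a plain sum of the two estimates is an assumption drawn from the quote above, and the paper's exact combination rule may differ:

```python
# Minimal sketch: combine the two sketches above into a total
# uncertainty (TU) score. The plain sum is an assumption based on
# Hamidieh's description, not necessarily the paper's exact formula.

def total_uncertainty(aleatoric: float, epistemic: float) -> float:
    return aleatoric + epistemic

# Usage, reusing the earlier sketches:
# au = 1.0 - self_consistency(ask_model, prompt)            # within-model
# eu = epistemic_uncertainty(target_answer, other_answers)  # cross-model
# tu = total_uncertainty(au, eu)
```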
TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.
They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.
Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.
Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.
In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.
This work is funded, in part, by the MIT-IBM Watson AI Lab.
Republished with permission of MIT News.
Read next:
• Can’t stop endlessly scrolling? Tips to help you take back control
• AI Chatbots Push Users to Share Sensitive Data During Tax Help, With ChatGPT Most Persistent, Analysis Finds
by External Contributor via Digital Information World
Thursday, March 19, 2026
AI Chatbots Push Users to Share Sensitive Data During Tax Help, With ChatGPT Most Persistent, Analysis Finds
Image: Salvador Rios / Unsplash
As tax season hits, you have options to file your tax return yourself or with help from someone else. But what if you let an AI chatbot step in to assist? It's a tempting choice — always available, always free, and ready to help when professional assistance is either too expensive or hard to find as deadlines loom. A new study, conducted by Surfshark, explores whether turning to AI chatbots is as smart a move as it sounds.
OpenAI's ChatGPT, Google's Gemini, and xAI's Grok have emerged as the frontrunners in the AI chatbot and tools sector. Recent data from Similarweb indicates that these platforms collectively account for nearly 84% of total traffic, making them the most likely choice for individuals seeking consultation, including tax-related advice. ChatGPT leads with 5.4B monthly visits, followed by Gemini with 2.1B and Grok with 0.3B.
Key insights
- Simulated conversations about tax returns on the most popular AI chatbots worldwide — ChatGPT, Gemini, and Grok — showed a clear pattern: users were actively pushed to provide personal information, starting with their job, income, or country, even with neutral prompts like “tax return”. ChatGPT was the most persistent, while Gemini and Grok were easier to navigate for those avoiding personal data input. For example, with Gemini, even when users were encouraged to provide personal information and chose not to, the AI chatbot smoothly continued the conversation, using example data if necessary. In contrast, ChatGPT made several attempts in a row to steer users toward providing their sensitive information.
- To illustrate AI chatbots' data collection behavior, consider an interaction with ChatGPT. Initially, this chatbot concludes its response with a request: “Just tell me your job and approximate yearly income, and I can estimate your refund.” If the user chooses to ignore this request, ChatGPT persists in its next response, asking the user to share the requested details and even seeking more data. If the user proceeds to ignore such requests, ChatGPT adopts a more assertive tone, using phrases like “Please reply with these” and “You can answer like this example.” Ultimately, if the user prompts with “no,” the chatbot ceases to offer estimates. In the case of Gemini, if a user responds with “no,” the chatbot replies with a message: “No worries at all! Since you'd rather not share your specific numbers, I've put together a ‘cheat sheet’ for the current 2025–26 financial year (ending June 30, 2026). You can use this to do the math yourself.”
- AI chatbots can gather user information beyond what is explicitly provided in user prompts. For instance, in a simulated interaction using a VPN connected to an Australian server, ChatGPT tailored its responses based on the user's location data. It started with phrases such as “If you're in Australia” and offered tax-related details specific to that country. In contrast, Gemini not only provided information relevant to Australia but also included details for the US and UK. This broader coverage makes its data collection practices less obvious and potentially less suspicious for users who aren't familiar with the Terms of Service and Privacy Policy. Grok, on the other hand, focused on delivering responses related to US tax returns and offered to customize information further if users provided additional details about their circumstances — such as their country, income type, or specific questions.
- This example aligns with findings from a study Surfshark conducted last year, which examined the data collection practices of the top AI chatbots available on the Apple App Store. The study revealed how data-hungry some of these chatbots can be, with certain apps collecting up to 32 out of 35 possible data types. Location data is just one example of the extensive information these chatbots may gather, highlighting the importance of understanding their data collection practices.
- However, using Grok can be frustrating because it frequently prompts users to sign up, after which companies can gain insights into users' habits and interests or target them with ads, as ChatGPT has already announced plans to do. During simulated conversations, interactions were often interrupted with a “high demand” note, forcing users to either wait or sign up for higher priority access. Additionally, after the fifth prompt, a message limit was reached, preventing further chat progression. Similarly, ChatGPT frequently asked users to create an account to unlock features such as uploading files or images or accessing enhanced capabilities. In contrast, Gemini's approach was the least aggressive, suggesting that users create an account only after they had been prompted at least 10 times.
- The main website page for Gemini explicitly states that the AI chatbot can make mistakes. ChatGPT provides a similar disclaimer after the first prompt, additionally warning users not to share sensitive information and noting that chats may be reviewed and used to train their models. In contrast, Grok does not visibly display such a statement on the chat screen, although it is included in the Terms of Service. For these reasons, transparency about sources and access to links are crucial for assessing the accuracy of AI chatbot information, particularly in sensitive areas like tax returns.
- A highly concerning finding is that Gemini does not provide any source references, raising issues about the verifiability of its information. Meanwhile, ChatGPT takes an inconsistent approach, offering links only for certain highlighted words, with explanatory text in a sidebar. In contrast, Grok enhances transparency by providing an extensive list of sources with direct links to content. However, it is important to note that merely providing a link does not ensure that the information was correctly interpreted or used by the AI, leaving users to navigate these technologies at their own risk.
Methodology and sources
The study aims to provide insights into the chatbots' behavior and the risks associated with their use in sensitive contexts, such as tax return assistance. To simulate user behavior, three distinct starting prompts were used: a neutral “tax return”, a more engaging “help me with my tax return,” and a third prompt, “how can you help me with my tax return?” Following the initial prompt, subsequent user interactions were limited to “yes” if the chatbot suggested an action, or “no” if it requested personal information. If the interaction stalled, the AI chatbot’s first suggestion was used to continue the conversation. Each initial prompt was entered into a new chat thread using Google Chrome's Incognito mode, with a VPN connected to an Australian server. All interactions were conducted in English. Data was collected on March 12, 2026.
Among the top five AI chatbots and tools with the highest user traffic, OpenAI's ChatGPT, Google's Gemini, and xAI's Grok were selected for analysis because their accessible free versions do not require users to sign in. As a result, Anthropic Claude and DeepSeek were excluded from the analysis due to their requirement for account creation before use. No additional settings were adjusted after accessing the AI chatbot websites.
Note: The same prompts do not always produce identical results, so the first recorded take was used for analysis.
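For concreteness, the scripted reply rule can be sketched as follows; the keyword classifiers are hypothetical stand-ins, since Surfshark's actual test harness has not been published:

```python
# Minimal sketch of the study's scripted reply rule. The keyword
# classifiers below are illustrative guesses, not Surfshark's harness.

STARTING_PROMPTS = [
    "tax return",
    "help me with my tax return",
    "how can you help me with my tax return?",
]

def asks_for_personal_info(message: str) -> bool:
    # Crude stand-in: detect requests for job, income, or country details.
    keys = ("your job", "your income", "your country")
    return any(k in message.lower() for k in keys)

def suggests_action(message: str) -> bool:
    # Crude stand-in: detect messages proposing a next step.
    keys = ("would you like", "shall i", "i can ")
    return any(k in message.lower() for k in keys)

def next_user_reply(message: str, first_suggestion: str | None) -> str:
    """Scripted follow-up: 'no' to data requests, 'yes' to suggested
    actions; if the chat stalls, reuse the chatbot's first suggestion."""
    if asks_for_personal_info(message):
        return "no"
    if suggests_action(message):
        return "yes"
    return first_suggestion or STARTING_PROMPTS[0]
```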
For the complete research material behind this study, visit here.
This post was originally published by Surfshark and is republished on DIW with permission.
Reviewed by Asim BN.
Read next:
• Two-thirds of workers are burned out – here’s what science says about how to tackle it
• Iran war shows how AI speeds up military ‘kill chains’
by External Contributor via Digital Information World
Wednesday, March 18, 2026
Iran war shows how AI speeds up military ‘kill chains’
Image: Saifee Art / Unsplash
The US-Israel war on Iran has been described as “the first AI war”. But recent deployments of artificial intelligence are, in fact, the latest in a long history of technological developments that put a premium on speed in the military “kill chain”.
“Sixty seconds – that’s all it took,” a former Israeli Mossad agent claimed of the strikes that killed Iran’s supreme leader, Ayatollah Ali Khamenei, on February 28 2026, the first day of the US-Israel war on Iran.
The speed and scale of war have been significantly enhanced by use of AI systems. But this need for speed brings serious risks for civilians and military combatants alike.
Modern military operations produce and rely on an enormous amount of intelligence. This includes intercepted phone calls and text messages, the mass surveillance of the internet (known as “signals intelligence”), as well as satellite imagery and video feeds from loitering drones. We can think of all this intelligence as data – and the problem is, there’s too much of it.
As early as 2010, the US Air Force was concerned about “swimming in sensors and drowning in data”. Too many hours of footage, and too many analysts manually reviewing this intelligence.
AI systems can dramatically speed up the analysis of military intelligence. Brad Cooper, head of US Central Command (CentCom), recently confirmed the use of AI tools in the war against Iran, saying:
These systems help us sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react … Advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.
In 2024, an investigation by Georgetown University found that the US Army’s 18th Airborne Corps had employed AI to assist with intelligence processing – reducing a team of 2,000 to just 20.
(Embedded video: “Update from CENTCOM Commander on Operation Epic Fury,” U.S. Central Command (@CENTCOM), March 11, 2026.)
The allure of speed
In the second world war, the aerial targeting cycle – from collecting images to assembling target packages complete with intelligence reports – could take weeks or even months. But over the ensuing decades, the US military set about what it called “compressing the kill chain” – shortening the time between the identification of a target and use of force against it.
During the first Gulf war of 1991, Iraq’s president Saddam Hussein made use of mobile missile launchers that would roam the desert firing Scud missiles. By the time US radar identified its location, the launcher could be miles away. This “shoot and scoot” tactic required new technology to track these mobile targets.
A key breakthrough came shortly after the September 11 attacks in the form of an armed Predator drone.
In November 2002, the CIA targeted and killed Al Qaeda’s leader in Yemen, Qaed Salim Sinan al-Harithi. This heralded a new era of warfare in which drones piloted from military bases in the US flew remotely over the skies of Yemen, Somalia, Pakistan, Iraq, Afghanistan and elsewhere.
The drones’ powerful cameras could take high-resolution video and beam it back to the US via satellite in a matter of seconds, enabling the drone operators to track mobile targets. The same drone which had eyes on the target could fire missiles to kill or destroy the target.
With greater speed comes greater risk
Two decades ago, it was easy to dismiss as hyperbole the idea that the coming age of cyberwarfare might bring about “bombing at the speed of thought”, a phrase coined by American historian Nick Cullather in 2003. Yet with the advent of AI warfare, the unthinkable has become almost antiquated.
Part of the push to employ AI tools is the sense that human thought is no match for the processing speeds enabled by AI systems. The US Department of Defense’s artificial intelligence strategy states: “Military AI is going to be a race for the foreseeable future, and therefore speed wins … We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.”
While the precise uses of AI by US and other military is shrouded in secrecy, information has been made public that highlights the risks of its use on civilian populations.
In Gaza, according to Israeli intelligence sources, the AI systems Lavender and Gospel have been programmed to accept up to 100 civilian casualties (and occasionally even more) for a strike on a single suspected Hamas combatant. More than 75,000 people are estimated to have been killed there since October 7 2023.
In February 2024, a US airstrike killed a 20-year-old student, Abdul-Rahman al-Rawi. At the time, a senior US official admitted the strikes had used AI targeting – although confusingly, the US military now says it has “no way of knowing” whether it used AI in specific airstrikes.
The risk is that AI could lower the threshold or cost of going to war, as people play an increasingly passive role in reviewing and rubber-stamping the work of AI.
The embedding of AI into military kill chains intersects with other alarming developments. After years of inaction, the US military spent more than a decade developing an infrastructure to avoid civilian casualties in war, but it has been almost totally dismantled under the Trump administration.
The lawyers who give advice to the military on targeting operations, including compliance with international law and rules of engagement, have been sidelined and fired.
Meanwhile, since the start of the war in Iran, more than 1,200 civilians have been killed, according to the Iranian Health Ministry. On February 28, the US military struck an elementary school in the south of Iran, killing at least 175 people, most of them children.
The US secretary of defense, Pete Hegseth, has been clear that the military’s aim in Iran is for “maximum lethality, not tepid legality. Violent effect, not politically correct”.
With such an attitude, and by privileging speed over deliberation, civilian casualties become inevitable, and accountability ever more elusive.
Craig Jones, Senior Lecturer in Political Geography, Department of Geography, Newcastle University and Helen M Kinsella, Professor of Political Science and Law, Department of Political Science, University of Minnesota
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Reviewed by Asim BN.
Read next:
• From Anthropic to Iran: Who sets the limits on AI’s use in war and surveillance?
• Political Unrest Is the Leading Cause of Internet Shutdowns
by External Contributor via Digital Information World
Two-thirds of workers are burned out – here’s what science says about how to tackle it
Evidence-based, long-term psychological strategies to build a framework for your brain’s resilience and overcome burnout
Burnout is at an all-time high, with some studies saying two-thirds of employees now cite job burnout as a major challenge.
Overwork and chronic stress do not just drain energy; they can erode health, contributing to a wide range of psychological and physical problems, including depression, anxiety, cardiovascular disease and even increased stroke risk.
Shaina Siber offers solutions rooted in science in her new book, Using ACT and CFT for Burnout Recovery: The Beyond Burnout Blueprint, with strategies to help people in high-pressure situations break the cycle of exhaustion.
What is burnout
The term “burnout,” coined by psychologist Herbert Freudenberger in the 1970s, described a state of physical and mental exhaustion among workers. Decades later, the World Health Organization formally recognised burnout as an “occupational phenomenon,” characterised by exhaustion, cynicism, detachment and reduced effectiveness.

“Burnout isn’t just making us miserable; it’s making us sick. Half a century after naming the problem, we are left collectively scratching our heads on how to resolve it.
“If you’re experiencing burnout, chances are you’ve already tried to ‘fix’ it. Maybe you leaned into conventional wisdom: More exercise, more sleep, more meditation, more sunshine, more kale. Maybe you bought into the idea that a vacation or spa day would reset your system.
“Here’s the truth: We cannot rely on ‘good vibes only’ for finding our way out of burnout. There aren’t enough green juices, yoga classes, or massages in the world to self-care burnout into submission. Even the most restorative vacation glow often evaporates before you’ve finished unpacking,” Siber explains.
Siber says that while we cannot ignore the systemic realities that drive burnout, such as unsafe staffing, impossible workloads, workplace discrimination and other pervasive and damaging issues, we can acknowledge these challenges and find a way to cope that does not cause us physical and psychological harm.
“I do not ask people to deny or minimise these issues; or pretend they don’t matter. But burnout isn’t something you can simply eliminate once your external circumstances change. Pain and challenge are inevitable in work, and in life,” she says.
Burnout: A neurological and psychological perspective
Burnout is more than just feeling tired; it’s a state of chronic stress that rewires the brain. Science tells us that prolonged stress activates the amygdala, the brain’s fear centre, while suppressing activity in the prefrontal cortex, which governs decision-making and emotional regulation. This imbalance leaves individuals stuck in survival mode, unable to access the psychological flexibility needed to recover.
Siber explains: “Burnout often pulls us into mental time travel: replaying the past, catastrophising the future, or checking out altogether. Burnout isn’t just about exhaustion; it’s about the erosion of meaning, connection, and agency in our lives.”
Acceptance and Commitment Therapy (ACT) and Compassion-Focused Therapy (CFT) offer a way to recalibrate.
ACT promotes a concept called ‘radical acceptance’ to encourage psychological flexibility, the ability to stay present, open up to difficult experiences and take action in keeping with wider goals. Meeting difficult situations with acceptance can alter the brain’s neural responses to difficult thoughts and emotions by reducing the hyperactivity in the brain’s Default Mode Network (DMN), which is linked to rumination and self-centred thinking, while improving the connections between the higher-thinking parts and emotional processing centres for more measured responses.
CFT complements this by using compassion to reduce the control of the brain’s fear centre, regulate the nervous system and activate the brain’s affiliative pathways that promote safeness and connection. Together, these approaches help individuals move from survival mode to thriving.
A science-based blueprint for burnout recovery
Siber’s Beyond Burnout Blueprint integrates ACT and CFT into a framework designed to tackle burnout at its roots, as opposed to tempering its impact with lifestyle adjustments. Unlike conventional wellness fixes, which often focus on short-term nervous system regulation techniques like exercise or meditation, this approach goes further into the psychological and systemic bodily reactions that fuel burnout.
The framework begins with creating a vision, which involves clarifying your deeply held values to serve as a guide throughout the process.
“Imagine the life you’re building toward, not just the challenges you’re trying to escape,” Siber explains.
Then, the process entails welcoming the unwanted, which involves learning how to sit with discomfort rather than suppressing it, thereby fostering resilience and emotional openness.
Watching your words is another critical step, focusing on minimising unhelpful narratives that fuel self-criticism and replacing them with more compassionate and flexible self-talk. Far from being a ‘nice-to-have’, compassion helps to regulate the nervous system.
“Practicing fierce compassion is essential for cultivating self-compassion, which softens the grip of burnout and promotes emotional healing,” Siber explains.
“Compassion makes the flexibility ACT cultivates more accessible and sustainable.
“Compassion, especially self-compassion, isn’t a finish line you cross once. It’s a lifelong relationship you tend to, one choice, one breath, one moment at a time.”
Also, people should identify their strengths and what matters to them, allowing them to rediscover what energises and fulfils them, she suggests.
Siber describes exercises designed to help readers apply these principles in their daily lives. The “Spotting Inflexibility” exercise, for instance, helps individuals identify patterns of psychological rigidity that fuel burnout. By noticing these patterns without judgement, readers can begin to shift their responses.
Burnout in high-pressure professions
Burnout doesn’t discriminate, but it disproportionately affects those in high-stakes fields like healthcare, education, law, finance, and tech. Siber highlights the unique challenges faced by these professions, from moral injury in healthcare to the relentless demands of competitive corporate cultures.
For leaders and teams, she emphasises the importance of systemic change, such as fair workloads, flexible arrangements and psychologically safe environments.
“True prevention requires redesigning work itself,” Siber says. “Fair workloads, trained managers, and accessible mental health resources are essential.”
For people in high-pressure roles, Siber explains why nurturing resilience is a more sustainable tactic than lifestyle changes: “Burnout resilience allows you to regulate, refocus, and rise when burnout shows up. It’s not about working harder to fix yourself. It’s about learning to move through discomfort without losing sight of what matters most.”
This post was originally published on Taylor & Francis Group Newsroom and is republished on DIW with permission.
Image: Vitaly Gariev / Unsplash
Reviewed by Irfan Ahmad
Read next:
• Tech companies are blaming massive layoffs on AI. What’s really going on?
• Political Unrest Is the Leading Cause of Internet Shutdowns
by External Contributor via Digital Information World
Tuesday, March 17, 2026
Political Unrest Is the Leading Cause of Internet Shutdowns
Governments around the world continued to impose restrictions on internet access in 2025, often in response to political tensions and public unrest. According to data from Surfshark, political turmoil was by far the leading cause of such measures last year: 25 regional internet shutdowns and 16 nationwide shutdowns were linked to political instability, along with 10 cases involving the blocking of social media platforms.
Protests were another major trigger. Authorities imposed 13 regional shutdowns and three social media blocks in response to demonstrations. Elections also played a role, particularly when governments sought to control the flow of information during sensitive political periods. In 2025, six nationwide shutdowns and five social media blocks were linked to election-related concerns.
These measures include actions such as blocking websites, restricting social media platforms or messaging services and imposing regional or nationwide internet shutdowns. Many of these restrictions were concentrated in Asia and Africa. Governments in ten Asian countries introduced 56 new restrictions in 2025, while eight African countries accounted for another 20 cases. India recorded the highest number of incidents, imposing 24 restrictions during the year, often linked to political unrest or protests. Other countries reporting multiple incidents included Iraq, Afghanistan and Iran, where authorities repeatedly limited internet access during periods of tension or demonstrations.
Note: This post originally appeared on Statista and is republished on DIW under Creative Commons License (CC BY‑ND).
Read next:
• Mobile Accounts for Nearly 60 Percent of Web Traffic
• 2026 Social Media Benchmark: TikTok Engagement Soars 49% YoY to 3.70%, Instagram Holds 0.48%, Facebook 0.15%, X Drops to 0.12%
by External Contributor via Digital Information World