Tuesday, March 31, 2026

Workplace collaboration: Employees reveal what they want leaders to change

By Ellie Stewart

Building a collaborative culture is the ultimate business goal, but it can be a slog in practice. It doesn't take much—just one broken link in the chain—to throw a whole project off the rails.

To see how teams are collaborating and staying productive right now, Adobe for Business surveyed over 1,000 full-time US workers. They wanted to see which tools and processes are actually helping and which are just adding noise.

The cost of collaboration barriers

Some collaboration struggles are resolved in minutes; others take several days or even weeks before a shared understanding is reached. To quantify the cost of these breakdowns in lost time, the Adobe for Business study found that, on average, workers lose 97 hours a year to communication struggles and waste another 81 hours a year in unproductive meetings.

The 97 hours a year lost to communication breakdowns equates to nearly two hours a week, so what can businesses do to avoid these breakdowns and help employees reclaim valuable time?

The workers surveyed estimated that if ineffective collaboration processes were removed, they could reclaim 178 hours a year, nearly three and a half hours a week, to put toward strategic, high-impact work. For anyone in a leadership role, clearing out these hurdles isn't just about efficiency; it's about survival. In fact, 90% of those surveyed believe that with the blockers out of the way, they could wrap up a 40-hour week in four days. That's a massive chunk of time currently being thrown away.

The study also considered the time workers in different industries think they could save, finding that employees in the finance industry are particularly in support of this workweek change. Nearly all (94%) finance employees surveyed reported that they could switch to a four-day workweek if collaboration were improved.

Inefficiency causes across roles, industries and location

The why behind collaboration inefficiencies varies by job role and industry, providing valuable insights for business leaders on potential changes to implement to best suit their teams. The data shows that "death by meeting" hits the C-suite the hardest. Senior staff are losing roughly 91 hours a year to meetings that don't go anywhere—that’s two hours gone every single week. It’s better for entry-level staff, but not by much; they're still losing 65 hours. The size of business matters here, too: big enterprise teams are wasting 69% more time than people at smaller shops.

States losing the most time to unproductive meetings:

  • New York - 90 hours lost a year
  • New Jersey - 81 hours lost a year
  • California - 79 hours lost a year
  • Florida - 76 hours lost a year

The potential benefits of addressing collaboration challenges are even greater in certain industries where a significant amount of valuable time is being drained. Workers in the manufacturing industry reported they could reclaim the most time if collaboration blockers were removed, up to 214 hours a year, which is over four hours a week.


Industries losing the most time to collaboration friction:
  • Manufacturing - 214 hours a year
  • Sales - 208 hours a year
  • Finance - 200 hours a year
  • Marketing - 186 hours a year
  • Tech - 179 hours a year

These teams stand to gain back more than the national average of 178 hours a year if effective methods of collaboration are put in place to increase productivity.

Here's why projects fail and goals are misaligned

It’s not uncommon for some projects to veer off course, but it’s important for teams to examine why this happens in order to reclaim time lost to inefficient collaboration. The employee survey from Adobe for Business indicates that communication breakdowns are the key contributor to blocking effective collaboration, causing nearly half (46%) of all project delays.

It’s no surprise people are exhausted when more than a third of projects (36%) start without any real consensus from stakeholders. These projects get stuck before they even have a chance to start, leaving the rest of the team scrambling to clean up a mess they didn't make in the first place.

Without team alignment from the outset, the consequences for projects are immediately felt. Here are five key ‘costs’ of disconnected teams, according to the employees surveyed:

  • Wasted time and effort - 76%
  • Missed deadlines - 58%
  • Decreased work quality - 57%
  • Struggle to track progress - 47%
  • Budget overruns - 23%

One of the most substantial ways team misalignment on project goals can impact employees is by forcing significant rework. Roughly a third (33%) of those surveyed said they have had to rework projects due to misalignment.

        Employees also noted the key reasons why they feel projects are thrown off course:

        • Unclear leadership directives - 40%
        • Lack of standardized processes across teams - 34%
        • Frequent changes in project priorities - 34%
        • Insufficient visibility into other teams’ progress - 28%
        • Too many disconnected tools - 28%

In addition to the impacts above, employees also cite a lack of regular cross-functional check-ins (27%), the absence of a single source of truth for project information (23%), and a lack of training on processes (17%) as blockers to projects staying on course.

              The psychological toll of collaboration blockers

              Aside from the impact of ineffective collaboration on the project at hand, there’s a significant impact on the workforce from a psychological perspective. More than half (56%) of US employees surveyed said navigating collaboration hurdles caused mental fatigue.

Varying work environments also led employees to cite different levels of mental toll from ineffective collaboration. Over half (55%) of both remote and on-site workers noted poor collaboration as a cause of stress. Without supportive workflows in place, this stress has repercussions for retention: on-site employees are 47% more likely to seek new job opportunities due to a lack of effective workflow management and team collaboration.

              What employees want to dismantle ineffective collaboration

Rather than adding more tech solutions to try to solve collaboration inefficiencies, employees want strategic intervention. Those in the Adobe for Business study point to the enablers they see as most valuable in unlocking smoother ways of working with their teams.


Setting up clear and consistent communication channels (42%) was the most requested improvement for solving ineffective collaboration, according to employees. This was followed by explicitly defined roles and responsibilities (38%) within the team, to ensure everyone is aligned on expectations.

Demand is also high for a platform that acts as a ‘single source of truth’ for a project; over a fifth of all employees deemed one essential. The demand is higher still among remote workers, who are 28% more likely than on-site workers to request a ‘single source of truth’ as a solution for collaboration breakdowns. Employees seek this unified approach to avoid a siloed team structure, which over one in five identified as a major barrier to collaboration.

Understanding collaboration enablers also means considering the varying support required by different demographics within a team. Timely decision-making and clear next steps (41%) are most valued by Baby Boomers, whereas Gen X and Millennials prioritize clear communication channels (42%) to collaborate effectively. Gen Z say a shared understanding of project goals (40%) would be most valuable to them.

To help employees close the collaboration gap, teams want workflow management that centralizes project insights into a ‘single source of truth’, automates low-impact admin tasks, and formalizes processes to provide the necessary structure and real-time visibility into performance.

Companies can’t afford to just sit back and hope their teams figure out how to work together. Leaders have to be proactive about fixing these gaps: not just for the sake of the bottom line, but to keep high performers from leaving. Once everyone is on the same page, the busywork falls away and the real work finally starts.

              Reviewed by Irfan Ahmad.

              Read next:

              Fragmented phone use — not total screen time — is the main driver of information overload, study finds

              • Most Parents Keep Track of Their Children’s Online Browsing


              by Guest Contributor via Digital Information World

              Monday, March 30, 2026

              Most Parents Keep Track of Their Children’s Online Browsing

              How Parents Track Their Children

With the ever-evolving digital landscape, children are now on more devices than ever. From school to socializing to home life, children now spend almost every phase of their day interacting with some form of technology to stay in touch. This creates new challenges for parents trying to keep track of their children across various digital devices and platforms. How often, and to what extent, are parents able to keep up?

              A 2026 All About Cookies survey found that 96% of parents keep tabs on their child’s devices in some way, as evidenced by the graphic below.


With the recent shift toward more digital schooling, it’s no major surprise that school performance is the #1 thing parents keep track of. Screen time, banking/financial activity, social media accounts, and internet browsing history rounded out the top five areas parents monitor most closely outside of academics.

              A Majority of Parents Have Access to their Child’s Devices

              With almost every parent surveyed keeping track of their child in some way, shape, or form, many of those parents have access to their kid’s passwords on various devices.


              Over 85% of parents claimed to have access to their child’s computer/tablet (88%) as well as their cellphone (86%).

              An interesting statistic to note is that while 79% of parents say they keep track of their children’s social media accounts, only 62% of them have access to their passwords. The 17% discrepancy could be coming from parents who feel that tracking their child’s social media activity, as a follower, is an effective enough measure.

              Digital Tools Parents Use to Keep Track of Their Children Offline

              While the digital realm is a place where parents want to keep close track of their children, many are relying on apps and devices to keep tabs on their kids when they’re not actively scrolling.

              When it comes to parental tracking, 86% of parents use some form of tool as a way to monitor their child’s physical location.


A majority (60%) of parents who track their children do so using their child’s cell phone location-sharing feature. The second most popular method is a family monitoring app (such as Life360 or Bark), with 43% of parents opting for this approach.

              Other various methods that parents use to track their children outside of the top two listed above are a dedicated tracking device, a smartwatch with built-in tracking, or a parental control app.

              Over 40% of Parents Have Caught Their Child Misbehaving With Tracking Tools

              While many parents utilize the tracking tools listed above, how many find them effective?

              According to those surveyed, 41% of parents have been able to catch their child doing something they weren’t supposed to be doing due to some form of digital tracking.


While most of these discoveries involved something the child did in an online capacity, a small percentage of parents (9%) were able to use digital tracking tools to catch their child misbehaving in the real world.

              All About Cookies did note that 89% of parents disclosed to their children that they are being tracked.

              Some Parents Have Concerns about Tracking Their Children

              While a very large majority of parents are tracking their children in some way, it seems that some may have concerns about using specific apps to track their kids.

              According to the survey, 62% of parents have some level of concern about using tracking technology.


The results show that parents’ concerns vary: tracking their adolescent over time (31%), possible data breaches that could expose their own or their child’s data (26%), and possibly jeopardizing the relationship they have with their child (20%).

              Final Thoughts

              These results show that while parents do keep track of their children and, in some instances, have utilized digital trackers to catch their children exhibiting bad behavior, they also have some level of concern over how often and exactly when to track their child.

Parents will need to navigate this difficult situation by finding a balance between keeping track of their child and keeping them safe, in digital and personal worlds that are constantly changing.

              About Author: Derick Migliacci is a Digital PR Strategist for AllAboutCookies. He brings over 3 years of experience in the PR world as well as a passion for digital trends, cybersecurity, and technology.
              Reviewed by Irfan Ahmad.

              Read next:

              ‘Dangerous’ AI child sexual abuse reaches record high as public backs clampdown on ‘uncensored’ tools

              Is the AI black box right on time?
              by Guest Contributor via Digital Information World

              Saturday, March 28, 2026

              Is the AI black box right on time?

              by Inderscience

              Irrespective of the ethics and the apocalyptic predictions, artificial intelligence (AI) has already become a central component of economic and institutional decision-making. Research in the International Journal of Intelligent Systems Design and Computing has gone beyond an industry-specific analysis of the state-of-the-AI-art and offers a detailed framework of how the many different AI tools are being adopted.

              The main point that arises from the analysis is that while AI technologies are being used widely across sectors, organizations do not yet have a strategy that allows AI to be integrated in a way that balances innovation with accountability.

AI encompasses so-called machine learning for recognising patterns in data, natural language processing that can interpret and generate human language, and generative tools that produce text, images, video, computer code, and other output. All these tools are changing many sectors, from healthcare diagnostics, to processing industrial and financial data, to producing hit pop songs and accompanying videos.

Education and business operations are undergoing similar shifts. Adaptive learning platforms in education adjust course material to suit the way individual students learn. In retail and logistics, AI is being used to refine supply chains, manage inventory, and personalize the customer "experience". Even in the legal world, law enforcement is using AI to assess crime scenes and weigh evidence, while judges are using these tools to summarise massive briefs into concluding remarks.

One of the most pressing issues highlighted by the research is data privacy, as AI systems depend on large volumes of often sensitive and personal information. In addition, there is the issue of algorithmic transparency: we are losing the ability to understand how a given AI system arrives at a specific decision. Indeed, many of the most advanced AI models now work essentially as black boxes, meaning their internal processes simply cannot be interpreted, perhaps without resorting to another AI to do the interpretation! Such a lack of transparency might undermine trust in high-stakes contexts such as medical diagnoses or judicial decisions.

To address these issues, the researchers propose a framework based on stakeholder theory, which emphasises the importance of all parties affected by the decisions AI might make. In the business context, they stress that organisations should not focus solely on efficiency or profit; they must adopt a perspective that allows them to weigh the interests of employees, customers, regulators, and society at large when adopting AI. This might only come about, of course, with governance, regulation, and ethical obligations.

              Idemudia, E.C. (2025) 'Artificial intelligence's effect and influence on multiple disciplines and sectors', Int. J. Intelligent Systems Design and Computing, Vol. 3, Nos. 3/4, pp.254–274.
              DOI: 10.1504/IJISDC.2025.152183.

              Image: Immo Wegmann - Unsplash

              Edited and reviewed by Ayaz Khan.

              Originally published by Inderscience and republished here with permission. Editor’s note: Typo corrected (“bot” to “not”).

              Read next: AI makes rewilding look tame – and misses its messy reality
              by External Contributor via Digital Information World

              AI makes rewilding look tame – and misses its messy reality

              Mike Jeffries, Northumbria University, Newcastle

              AI-generated rewilding images present neat, idealized landscapes, ignoring ecological messiness and controversial species realities.
‘Create an image of what rewilding in England looks like’, according to ChatGPT. Image generated by The Conversation using ChatGPT. CC BY-SA

              Humans have always imagined the natural world. From Ice Age cave paintings to the modern day, we depict the animals and landscapes we value – and ignore those we don’t.

              Now artificial intelligence is doing the imagining for us. And when asked to picture “rewilded” Britain, it produces landscapes that are strikingly similar – and tame.

              Two geographers at the University of Aberdeen recently did exactly this. In their research they present examples of how widely used AI chatbots (Gemini, ChatGPT and others) generated images of rewilded landscapes in the UK. The bots were prompted with commands such as “Can you produce an image of what rewilding in Scotland looks like?” or “Create an image of what rewilding in England looks like”, tailored to each bot’s style.

              The authors recognise that the commands are very general, but that gives the bots free rein. The images generated were then compared using both the composition (for example point of view, scale, lighting) and content (what is in the picture and what is not, primarily the habitat types, species or humans).

              A landscape without risk

              The AI rewilded landscapes were all very similar, all but one featuring distant hills, grading politely to a valley foreground of open meadow or heath with a stream or pool. A golden light plays across the scenes, illuminating foreground flowers. Ponies and deer feature routinely, plus the occasional Highland cow. Perhaps unsurprisingly there were no humans, nor any human presence shown by buildings or other artefacts.

              Two AI-generated images of rewilded landscapes
              Images generated by the Aberdeen researchers using ChatGPT of rewilding in Scotland (left) and England (right). Note the similarity to the image generated by The Conversation using the same prompt (at the top of this article). Wartmann & Cary / ChatGPT, CC BY-SA

              There was also no mess, no decay, no death, no animals likely to provoke a sharp intake of breath. No wolves, lynx, bears or bison, the creatures that routinely haunt the real arguments about rewilding.

              Two AI-generated images of rewilded landscapes
              Copilot’s take on rewilding in Scotland (left) and England (right). Wartmann & Cary / ChatGPT, CC BY-SA

              The pictures were achingly dull, polite, as the authors point out “ordered and harmonious bucolic”.

              Only experts get the messy version

              AI really can generate images of ecologically accurate rewilding. This one made with Gemini, for instance, captures the messiness and chaos of a genuinely rewilded British landscape:

              Gemini prompt: ‘A hyper-realistic, wide-angle landscape photograph of the British countryside 50 years after a large-scale rewilding project. The scene is defined by 'ecological messiness’ and structural diversity: thickets of thorny scrub like blackthorn and hawthorn transitioning into expanding groves of self-seeded oak and birch. No straight lines or mown grass. The ground is a mosaic of tall tussocky grasses, rotting fallen logs (deadwood), and muddy wallows created by free-roaming herbivores. In the mid-ground, a small herd of Exmoor ponies or Iron Age pigs are rooting through the undergrowth. The vegetation is dense and layered, featuring wild dog rose, brambles, and stands of willow in damp hollows. The lighting is the soft, dampened silver of a British overcast afternoon, highlighting the textures of lichen, moss, and wet leaves. No fences, no roads, no manicured edges—just a complex, tangled, and thriving wild ecosystem.‘ Gemini / The Conversation, CC BY-SA

              However, it only does this when given highly specific instructions about species, landscapes, habitat types, and so on. In other words, you need to know what a rewilded landscape should look like in order to get a convincing image of one.

              For most users, the result is something else entirely: a lowest common denominator vision of nature.

              AI is copying our sanitised vision of the future

The sanitised AI landscapes produced in the recent study are not surprising. The Aberdeen researchers note the models draw inspiration from available sources, including the social media and websites of environmental initiatives and NGOs promoting rewilding, such as Cairngorms Connect and Knepp Estate Rewilding. Their visuals often used aerial perspectives captured by drones from inaccessible vantage points. Animals tended to be both iconic and lovable, such as beavers or wildcats.

              People and our structures such as homes or farm buildings were largely missing. Reptiles, amphibians and invertebrates were notably absent too.

              Wolves, bison, rewilded forest
              Rewilding images are more accurate when they display natural processes like scavenging or storm damage. (Image generated by The Conversation using Gemini and a detailed prompt). The Conversation / Gemini, CC BY-SA

A particular concern of the authors is that the imagery used by the NGOs excludes processes, species and people who might challenge a narrow, conventional view of prettified nature. No wonder the AI was conjuring sanitised landscapes, although actual rewilding routinely creates landscapes that are an aesthetic challenge, in particular messy, scrubby terrain.

              We’ve always argued about what nature should look like

              Visual imagery has long had a powerful influence on our view of nature. Wild landscapes in the UK were regarded with disdain by the more genteel classes. The writer Daniel Defoe, in his 1726 travelogue touring throughout Britain, characterised the Lake District as “All Barren and wild, of no use or advantage to man or beast…Unpassable hills…. All the pleasant part of England is at an end”. He wasn’t a fan.

The Romantic movement turned this bias on its head and venerated the sublime, sometimes terrible, beauty of the landscape. Take, for example, Caspar David Friedrich’s famed 1818 painting Wanderer above a sea of fog, with a lone adventurer gazing from a crag across a distant view of summits and clouds.

There is a touch of the sublime to the AI landscapes, certainly in the viewpoint from on high. A challenge for rewilding projects, however, is that the resulting landscapes can be distinctly ugly and messy: neither wistfully pretty nor dramatically sublime.

              AI-generated image of wild pigs and horses in a rewilded Britain
              The messy reality of a rewilded Britain. (Image generated by The Conversation using Gemini and a detailed 376 word prompt). The Conversation / Gemini, CC BY-SA

              Rewilded sites are often scrubby and untidy. This can be on a large scale as natural processes kick in and open habitat scrubs over. Scrub habitat can be superb for wildlife, for example the Knepp Estate credits the regeneration of willow scrub for the return of iconic butterfly the purple emperor. The trouble is that scrub looks untidy and uncared for.

This has become a particularly common criticism of nature recovery projects, especially in urban settings: road verges unmown, weeds in pavements, parks less manicured. Some researchers call it an aesthetic backlash. The AI wildscapes are largely free of scrub, which is no surprise, because scrub does not feature much in the image sources the AI drew upon. This is a risk for projects in the real world: if the public comes to expect nature recovery to look neat and picturesque, the messy reality may be harder to accept.

No scrub, no wolves, no people. AI has created a very tame rewilding.

              Mike Jeffries, Associate Professor, Ecology, Northumbria University, Newcastle

              This article is republished from The Conversation under a Creative Commons license. Read the original article.

              Read next: ‘Dangerous’ AI child sexual abuse reaches record high as public backs clampdown on ‘uncensored’ tools


              by External Contributor via Digital Information World

              Friday, March 27, 2026

              Fragmented phone use — not total screen time — is the main driver of information overload, study finds

by Tiina Aulanko-Jokirinne and Sarah Hudson

              Frequent micro-checks and bursts of messaging are most strongly linked to feeling overloaded — and these habits are the hardest to change, says research from Aalto University.

              Image: Muhmed Alaa El-Bank / Unsplash

Amid hot discussion of screen time, social media use and the impact of digital devices on our well-being, a seven-month study from Aalto University in Finland sheds new light on what overwhelms users the most, and the results aren’t what you might think.

              ‘Screen time does matter, but the heaviest users aren’t the most overloaded,’ says doctoral researcher Henrik Lassila. ‘Those who feel most overwhelmed are the ones who return to their phone again and again for brief moments and then put it down shortly after.’

              The seven-month study followed the digital behaviour of nearly 300 adults in Germany across smartphones and computers. Participants completed repeated surveys about information overload, while all apps and websites used were logged, creating a rich longitudinal dataset of real-world device use.

The findings show that fragmented use occurs most often on mobile devices, and especially in messaging: for example, watching a short clip, locking the screen, then returning a few minutes later. These patterns create gaps and constant task switching, and such ‘bursty’ routines were most strongly associated with feeling overwhelmed, even when total time spent on devices was similar.

              ‘We feel overloaded when we can’t process all the incoming information and our minds feel ‘full’ or stressed,’ Lassila says. ‘Information overload is linked with negative emotions, which can in turn drive more checking — a vicious cycle.’ While the study doesn’t directly address the question of why fragmented checking is so stressful, Lassila suggests that task-switching has been identified in other studies as particularly cognitively tiring.

              Interestingly, although fragmented use often includes messaging, the study found that more time spent messaging did not by itself correspond to higher digital overwhelm. Rather, it was the short, frequent returns to the device that mattered most.

              Hard habits to break

              Earlier surveys have suggested that people quit social media when they feel a sense of digital overwhelm. The new study found little evidence for that. ‘People find it hard to change their behaviour,’ says Professor Janne Lindqvist. ‘Surprisingly, highly overloaded and non-overloaded participants used their devices for roughly the same total time over the study period. Those at the highest levels of overload tended to stay there, and those not overloaded rarely became overloaded.’

              According to the researchers, device use and the feeling of overload are tightly woven into daily routines, making them difficult to change. One practical idea is a ‘micro-check tracker’ that would show users how often they return to their phones in short bursts. ‘You don’t need to respond to every ping immediately. Do one thing at a time,’ Lindqvist advises. ‘Ideally, turn off non-essential notifications and be present with whatever you’re doing.’
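To make the ‘micro-check tracker’ idea concrete, here is a minimal sketch of how such a counter could work. This is purely illustrative, not from the Aalto study: the session-log format (start/end timestamp pairs) and the thresholds for what counts as a "short" session and a "quick return" are assumptions.

```python
from datetime import datetime, timedelta

def count_micro_checks(sessions, max_duration_s=30, max_gap_s=300):
    """Count 'micro-checks': short sessions (<= max_duration_s seconds)
    that begin within max_gap_s seconds of the previous session ending.

    `sessions` is a list of (start, end) datetime pairs sorted by start.
    Thresholds are illustrative, not taken from the study."""
    count = 0
    prev_end = None
    for start, end in sessions:
        duration = (end - start).total_seconds()
        if duration <= max_duration_s and prev_end is not None:
            gap = (start - prev_end).total_seconds()
            if gap <= max_gap_s:
                count += 1
        prev_end = end
    return count

# Example: three quick checks a few minutes apart, then one long session.
t0 = datetime(2026, 3, 27, 9, 0)
sessions = [
    (t0, t0 + timedelta(seconds=20)),                                   # quick check
    (t0 + timedelta(minutes=2), t0 + timedelta(minutes=2, seconds=15)), # quick return
    (t0 + timedelta(minutes=5), t0 + timedelta(minutes=5, seconds=10)), # quick return
    (t0 + timedelta(minutes=30), t0 + timedelta(minutes=50)),           # long session
]
print(count_micro_checks(sessions))  # → 2
```

A real tracker would read session data from the phone's usage logs rather than a hand-built list, but the core logic, flagging short sessions that closely follow a previous one, is what distinguishes fragmented "bursty" use from sustained use of the same total duration.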

In a follow-up study currently under peer review, the team also finds that overload correlates with psychological stress, negative emotions, and anxiety.

              ‘These days many of us are on our phones repeatedly,’ Lindqvist says. ‘Try batching: check messages twice a day and reply in one session. Based on our findings, you may feel less stressed.’

              The paper, ‘Stop Fiddling With Your Phone and Go Offline’, will be presented at CHI 2026, the leading conference on human–computer interaction, and is available online here.

              Note: This post was originally published on Aalto University and is republished here with permission.

              Reviewed by Irfan Ahmad.

              Read next: 

              • Research Shows TikTok Spreads Inaccurate Mental Health Content More Than Other Social Media Platforms


              by External Contributor via Digital Information World

              Thursday, March 26, 2026

              Research Shows TikTok Spreads Inaccurate Mental Health Content More Than Other Social Media Platforms

              By UEA Communications

              Image: Solen Feyissa - Pexels

              Researchers investigated the accuracy of mental health and neurodivergence information across social media platforms including YouTube, TikTok, Facebook, Instagram and X (formerly Twitter).

              A substantial proportion of TikTok posts about ADHD and autism are misleading - according to a new study from the University of East Anglia (UEA).


              They found that these platforms are awash with misleading or unsubstantiated mental health content - and that TikTok is the worst offender.

              The study also reveals that posts about neurodivergence such as autism and ADHD contained higher levels of misinformation than many other mental health topics.

              Dr Eleanor Chatburn, from UEA’s Norwich Medical School, said: “Our work uncovered misinformation rates on social media as high as 56 per cent. This highlights how easily engaging videos can spread widely online, even when the information isn’t always accurate.

              “Social media has become an important place where many young people learn about mental health, but the quality of this information can vary greatly. This means that misleading content can circulate quickly, particularly if there aren’t accessible and reliable sources available.”

              How the research happened

              The team analysed more than 5,000 social media posts about mental health topics including autism, ADHD, schizophrenia, bipolar disorder, depression, eating disorders, OCD, anxiety and phobias.

              The systematic review is the first to examine mental health and neurodivergence information across multiple social media platforms.

              TikTok shows higher levels of misinformation

              The study found that TikTok frequently contained higher levels of inaccurate or unsubstantiated mental health content than other platforms.

              Dr Alice Carter undertook the research as part of her doctoral thesis. She said: “When we looked closely at TikTok content, studies reported that 52 per cent of ADHD-related videos and 41 per cent of autism videos analysed were inaccurate.

              “By contrast, YouTube averaged 22 per cent misinformation, while Facebook averaged just under 15 per cent,” she added.

              Why misinformation is such a problem

Dr Chatburn said: “Mental health information on social media matters because many young people now turn to these platforms to understand their symptoms and possible diagnoses.

              “TikTok content has been linked to young people increasingly believing they may have mental health or neurodevelopmental conditions. While this questioning can be a helpful starting point, it’s important these questions lead to proper clinical assessment with a professional.

              “As well as leading to misunderstanding of serious conditions and pathologising ordinary behaviour, misinformation can also lead to delayed diagnosis for people that actually do need help.

              “When false ideas spread, they can feed stigma and make people less likely to reach out for support when they really need it.

              “It can also make mental illness seem scary or hopeless, which creates even more fear and misunderstanding.

              “On top of that, when people come across misleading advice about treatments, especially ones that aren’t backed by evidence, it can delay them from getting proper care and ultimately make things worse.”

              Professionals vs influencers - who should we trust?

              Unsurprisingly, the review found that content created by healthcare professionals was consistently more accurate. However, professional voices still represent only a small share of mental health content circulating on these platforms.

Dr Carter said: “In the case of ADHD on TikTok for example, just three per cent of professional videos contained misinformation - compared to 55 per cent of videos by non-professionals.

              “While lived-experience can play an important role, with personal stories helping people to feel understood and raising awareness of mental health conditions, it is vital to ensure that accurate and evidence-based information from clinicians and trusted organisations is also visible and easy to find.

              “TikTok’s algorithms are also designed to push rapidly engaging content and this is a major driver of misinformation.

              “Once users show interest in a topic, they are bombarded with similar posts - creating powerful echo chambers that can reinforce false or exaggerated claims.

“It is a perfect storm for misinformation to go viral faster than facts can catch up.”

              YouTube Kids - a rare bright spot

              YouTube Kids was found to contain no misinformation for anxiety and depression, and only 8.9 per cent for ADHD - a result attributed to the platform’s stricter moderation rules.

              By contrast, standard YouTube was described as “highly inconsistent”, with videos ranging from poor to moderately reliable, depending heavily on the topic, channel and influencer.

              Clinicians must become creators

              The review concludes with a call for health organisations and clinicians to create and promote better evidence-based content.

              The team have also called for improved content moderation, standardised tools for assessing online mental health information, and clearer definitions of misinformation.

‘The Quality of Mental Health and Neurodivergence-Related Information on Social Media: A Systematic Review’ is published in The Journal of Social Media Research.

              Note: This article was originally published by the University of East Anglia, and is republished with permission.

              Reviewed by Asim BN.

              Read next: 

              • 72% of Gen Z Say Customer Reviews Are Most Credible Brand Influence, Survey Finds

              • AI Has Made Marketing Faster, But It Hasn’t Improved Brand Engagement or Differentiation


              by External Contributor via Digital Information World

              Your voice, your typing, your sleep – what workplace wellbeing apps are really analysing

              Mohammad Hossein Amirhosseini, University of East London

              Image: Cottonbro studio / Pexels

              A workplace wellbeing app might seem like a simple and helpful tool – a mood check-in, some stress management advice, or a chatbot asking how your week has gone. But behind that supportive language, some systems are also quietly analysing your voice, writing style and digital behaviour for signs of psychological distress.

              These tools are already on the market – aimed at workplaces, universities and healthcare. They are framed as early-intervention systems that promise to cut costs and identify problems before they become serious. Unfortunately, companies are under no obligation to report using them, so data about how widespread they are is lacking.

              The basic idea behind these tools is that behaviour leaves patterns. Artificial intelligence (AI) systems trained on large datasets learn to recognise signals associated with particular mental health conditions, and when similar signals appear in new data, the system produces a probability estimate.

              For many people, the surprising part is how much ordinary behaviour can reveal. Voice recordings can pick up changes in rhythm, pitch and hesitation. Language models can analyse word choice and emotional tone. Smartphone data has also been explored as a way of tracking changes in sleep, movement and social interaction – all without the person doing anything out of the ordinary.

              But detecting a statistical signal is very different from identifying a genuine problem. Human behaviour is deeply contextual. Someone may speak slowly because they are tired, nervous or communicating in a second language. Reduced online activity might simply reflect a busy week.

              Even well-designed systems will make mistakes. A person who is genuinely struggling may not show the behavioural patterns the system was trained to recognise, while someone else may be incorrectly flagged as being in distress.

The pressure to develop these tools is real. The World Health Organization estimates that depression and anxiety cost the global economy US$1 trillion (£800 billion) a year in lost productivity. Universities report rising demand for counselling, and employers are dealing with burnout and stress-related absence. Automated early-warning systems can seem like an attractive answer.

              When wellbeing becomes surveillance

              But this technology can change something fundamental about how mental health is understood. Traditionally, mental health is assessed through conversations between a person and a therapist, where context matters enormously. These systems work differently, inferring psychological states from behavioural traces that were never intended to communicate emotional information.

              Once those inferences are made, they can influence decisions well beyond healthcare. Assessments of someone’s emotional state could shape workplace programmes, student support systems or insurance models, affecting how institutions judge a person’s reliability or suitability for a role. In effect, psychological states become a new kind of data.

              There are particular risks for some groups. Neurodivergent people often communicate in ways that differ from the norms assumed by many datasets. Someone speaking in a second language may pause more frequently, producing speech patterns an algorithm could misinterpret. A person going through grief or illness may display signals that resemble those associated with mental health conditions – without actually having one.

              Used carefully by healthcare professionals, these tools could have genuine value – helping therapists spot early warning signs of deteriorating mental health. But the same capability looks very different when deployed across a workplace or university without people’s knowledge.

At a minimum, people should know when these tools are being used, what data is being analysed and whether the system has been independently tested. A claim that software can detect distress is not, on its own, enough.

              Mohammad Hossein Amirhosseini, Associate Professor, Computer Science and Digital Technologies, University of East London

              This article is republished from The Conversation under a Creative Commons license. Read the original article.

              Reviewed by Asim BN.

              Read next: 

• Artificial Intelligence: Friend or Foe?


              by External Contributor via Digital Information World