Tuesday, March 31, 2026

Workplace collaboration: Employees reveal what they want leaders to change

By Ellie Stewart

Building a collaborative culture is the ultimate business goal, but it can be a slog in practice. It doesn't take much—just one broken link in the chain—to throw a whole project off the rails.

To see how teams are collaborating and staying productive right now, Adobe for Business surveyed more than 1,000 full-time US workers to find out which tools and processes are actually helping and which are just adding noise.

The cost of collaboration barriers

Some collaboration struggles last only minutes and are resolved in no time; others take days or even weeks to untangle before a shared understanding is reached. Quantifying the cost of these breakdowns in lost time, the Adobe for Business study found that workers lose an average of 97 hours a year to communication struggles and waste another 81 hours in unproductive meetings.

The 97 hours a year lost to communication breakdowns equates to nearly two hours a week, so what can businesses do to avoid these breakdowns and help employees reclaim valuable time?

The workers surveyed estimated that if ineffective collaboration processes were removed, they could reclaim 178 hours a year, nearly three and a half hours a week, to put toward strategic, high-impact work. For anyone in a leadership role, clearing out these hurdles isn't just about efficiency; it's about survival. In fact, 90% of those surveyed believe that with the blockers out of the way, they could complete a 40-hour week's work in four days. That's a massive chunk of time currently being thrown away.

The study also considered the time workers in different industries think they could save, finding that employees in the finance industry are particularly in support of this workweek change. Nearly all (94%) finance employees surveyed reported that they could switch to a four-day workweek if collaboration were improved.

Inefficiency causes across roles, industries and location

The why behind collaboration inefficiencies varies by job role and industry, offering business leaders valuable insight into which changes best suit their teams. The data shows that "death by meeting" hits the C-suite hardest: senior staff lose roughly 91 hours a year, about two hours every week, to meetings that go nowhere. Entry-level staff fare better, but not by much; they still lose 65 hours. Company size matters too: employees at large enterprises waste 69% more time than those at smaller companies.

Top states losing the most time to unproductive meetings:

  • New York - 90 hours lost a year
  • New Jersey - 81 hours lost a year
  • California - 79 hours lost a year
  • Florida - 76 hours lost a year

The potential benefits of addressing collaboration challenges are even greater in industries where a significant amount of valuable time is being drained. Workers in the manufacturing industry reported they could reclaim the most time lost to collaboration blockers, up to 214 hours a year, which is over four hours a week.


Industries losing the most time to collaboration friction:
  • Manufacturing - 214 hours a year
  • Sales - 208 hours a year
  • Finance - 200 hours a year
  • Marketing - 186 hours a year
  • Tech - 179 hours a year

These teams stand to gain back more than the national average of 178 hours a year if effective methods of collaboration are put in place to increase productivity.

Here's why projects fail and goals are misaligned

It’s not uncommon for some projects to veer off course, but it’s important for teams to examine why this happens in order to reclaim time lost to inefficient collaboration. The employee survey from Adobe for Business indicates that communication breakdowns are the key contributor to blocking effective collaboration, causing nearly half (46%) of all project delays.

It's no surprise people are exhausted when more than a third of projects (36%) start without any real consensus from stakeholders. These projects tend to get stuck before they even get a chance to start, leaving the rest of the team scrambling to clean up a mess they didn't make in the first place.

Without team alignment from the outset, the consequences for projects are immediately felt. Here are five key 'costs' of disconnected teams, according to the employees surveyed:

  • Wasted time and effort - 76%
  • Missed deadlines - 58%
  • Decreased work quality - 57%
  • Difficulty tracking progress - 47%
  • Budget overruns - 23%

One of the most substantial ways team misalignment on project goals affects employees is by forcing significant rework. Roughly a third (33%) of those surveyed said they have had to rework projects due to misalignment.

Employees also noted the key reasons why they feel projects are thrown off course:

  • Unclear leadership directives - 40%
  • Lack of standardized processes across teams - 34%
  • Frequent changes in project priorities - 34%
  • Insufficient visibility into other teams’ progress - 28%
  • Too many disconnected tools - 28%

Beyond these impacts, employees also cite a lack of regular cross-functional check-ins (27%), the absence of a single source of truth for project information (23%), and a lack of training on processes (17%) as blockers to projects staying on course.

The psychological toll of collaboration blockers

Aside from its impact on the project at hand, ineffective collaboration takes a significant psychological toll on the workforce. More than half (56%) of US employees surveyed said navigating collaboration hurdles caused mental fatigue.

The mental toll also varies by work environment. Over half (55%) of both remote and on-site workers cited poor collaboration as a cause of stress. Without supportive workflows in place, this stress has repercussions for retention: on-site employees are 47% more likely to seek new job opportunities due to a lack of effective workflow management and team collaboration.

What employees want to dismantle ineffective collaboration

Rather than adding more tech solutions to try to solve collaboration inefficiencies, employees want strategic intervention. Those in the Adobe for Business study point to the enablers they see as most valuable in unlocking smoother ways of working with their teams.


Setting up clear and consistent communication channels (42%) was the most requested improvement to help solve a lack of effective collaboration, according to employees. This was followed by explicitly defined roles and responsibilities (38%) within the team to ensure everyone is aligned on expectations.

Demand is also high for a platform that acts as a 'single source of truth' for a project; over a fifth of all employees deemed one essential. This demand is stronger among remote workers, who are 28% more likely than on-site workers to request a 'single source of truth' as a solution to collaboration breakdowns. Employees seek this unified approach to avoid siloed team structures, which more than one in five identified as a major barrier to collaboration.

Understanding collaboration enablers also means considering the varying support different demographics within a team require. Baby Boomers highly value timely decision-making and clear next steps (41%), whereas Gen X and Millennials prioritize clear communication channels (42%) to collaborate effectively. Gen Z say a shared understanding of project goals (40%) would be most valuable to them.

To close the collaboration gap, employees want to see workflow management that centralizes project insights into a 'single source of truth', automates low-impact admin tasks, and formalizes processes to provide the structure and real-time visibility into performance that teams need.

Companies can't afford to sit back and hope their teams figure out how to work together. Leaders have to be proactive about fixing these gaps, not just for the sake of the bottom line, but to keep high performers from leaving. Once everyone is on the same page, the busywork falls away and the real work finally starts.

              Reviewed by Irfan Ahmad.

              Read next:

              Fragmented phone use — not total screen time — is the main driver of information overload, study finds

              • Most Parents Keep Track of Their Children’s Online Browsing


              by Guest Contributor via Digital Information World

              Monday, March 30, 2026

              Most Parents Keep Track of Their Children’s Online Browsing

              How Parents Track Their Children

With the ever-evolving digital landscape, children are on more devices than ever. From school to socializing to home life, children now spend almost every phase of their day interacting with some form of technology. This creates new challenges for parents trying to keep track of their children across various digital devices and platforms. How often, and to what extent, are parents able to keep up?

              A 2026 All About Cookies survey found that 96% of parents keep tabs on their child’s devices in some way, as evidenced by the graphic below.


With the recent shift toward a more digital schooling system, it's not a major surprise that school performance is the #1 thing parents keep track of. Screen time, banking/financial accounts, social media accounts, and internet browsing history rounded out the top five things parents keep the closest eye on outside of academic monitoring.

              A Majority of Parents Have Access to their Child’s Devices

              With almost every parent surveyed keeping track of their child in some way, shape, or form, many of those parents have access to their kid’s passwords on various devices.


              Over 85% of parents claimed to have access to their child’s computer/tablet (88%) as well as their cellphone (86%).

              An interesting statistic to note is that while 79% of parents say they keep track of their children’s social media accounts, only 62% of them have access to their passwords. The 17% discrepancy could be coming from parents who feel that tracking their child’s social media activity, as a follower, is an effective enough measure.

              Digital Tools Parents Use to Keep Track of Their Children Offline

              While the digital realm is a place where parents want to keep close track of their children, many are relying on apps and devices to keep tabs on their kids when they’re not actively scrolling.

              When it comes to parental tracking, 86% of parents use some form of tool as a way to monitor their child’s physical location.


A majority (60%) of parents who track their children do so using the location-sharing feature on their child's cell phone. The second most popular tracking method is a family monitoring app (such as Life360 or Bark), with 43% of parents opting for this method.

Beyond the top two methods above, parents also use a dedicated tracking device, a smartwatch with built-in tracking, or a parental control app.

              Over 40% of Parents Have Caught Their Child Misbehaving With Tracking Tools

              While many parents utilize the tracking tools listed above, how many find them effective?

              According to those surveyed, 41% of parents have been able to catch their child doing something they weren’t supposed to be doing due to some form of digital tracking.


While most of the parents who caught their children did so in an online capacity, a small percentage (9%) were able to use digital tracking tools to catch their child misbehaving in the real world.

              All About Cookies did note that 89% of parents disclosed to their children that they are being tracked.

              Some Parents Have Concerns about Tracking Their Children

              While a very large majority of parents are tracking their children in some way, it seems that some may have concerns about using specific apps to track their kids.

              According to the survey, 62% of parents have some level of concern about using tracking technology.


The results show that parents have varying levels of concern when it comes to: tracking their adolescent over time (31%), possible data breaches that could leave their or their child's data exposed (26%), and possibly jeopardizing the relationship they have with their child (20%).

              Final Thoughts

              These results show that while parents do keep track of their children and, in some instances, have utilized digital trackers to catch their children exhibiting bad behavior, they also have some level of concern over how often and exactly when to track their child.

Parents will need to navigate this difficult situation by finding a balance between keeping track of their child and keeping them safe, in digital and personal worlds that are constantly changing.

              About Author: Derick Migliacci is a Digital PR Strategist for AllAboutCookies. He brings over 3 years of experience in the PR world as well as a passion for digital trends, cybersecurity, and technology.
              Reviewed by Irfan Ahmad.

              Read next:

              ‘Dangerous’ AI child sexual abuse reaches record high as public backs clampdown on ‘uncensored’ tools

              Is the AI black box right on time?
              by Guest Contributor via Digital Information World

              Saturday, March 28, 2026

              Is the AI black box right on time?

              by Inderscience

              Irrespective of the ethics and the apocalyptic predictions, artificial intelligence (AI) has already become a central component of economic and institutional decision-making. Research in the International Journal of Intelligent Systems Design and Computing has gone beyond an industry-specific analysis of the state-of-the-AI-art and offers a detailed framework of how the many different AI tools are being adopted.

              The main point that arises from the analysis is that while AI technologies are being used widely across sectors, organizations do not yet have a strategy that allows AI to be integrated in a way that balances innovation with accountability.

AI encompasses so-called machine learning for recognising patterns in data, natural language processing that can interpret and generate human language, and generative tools that produce text, images, video, computer code, and other output. All these tools are changing many sectors, from healthcare diagnostics to industrial and financial data processing to producing hit pop songs and accompanying videos.

Education and business operations are undergoing similar shifts. Adaptive learning platforms in education adjust course material to suit the way individual students learn. In retail and logistics, AI is being used to refine supply chains, manage inventory, and personalize the customer "experience". Even the legal world is changing: law enforcement is using AI to assess crime scenes and weigh evidence, while judges are using these tools to summarise massive briefs into their concluding remarks.

One of the most pressing issues highlighted by the research is data privacy, as AI systems depend on large volumes of often sensitive and personal information. In addition, there is the issue of algorithmic transparency: we are losing the ability to understand how a given AI system arrives at a specific decision. Indeed, many of the most advanced AI models now work essentially as black boxes, meaning their internal processes simply cannot be interpreted…perhaps without resorting to another AI to do the interpretation! Such a lack of transparency might undermine trust in high-stakes contexts such as medical diagnoses or judicial decisions.

To address these issues, the researchers propose a framework based on stakeholder theory, which emphasises the importance of all parties affected by the decisions AI might make. In the business context, they stress that organisations should not focus solely on efficiency or profit; they must adopt a perspective that allows them to weigh the interests of employees, customers, regulators, and society at large when adopting AI. This will only come about, of course, with governance, regulation, and ethical obligations.

              Idemudia, E.C. (2025) 'Artificial intelligence's effect and influence on multiple disciplines and sectors', Int. J. Intelligent Systems Design and Computing, Vol. 3, Nos. 3/4, pp.254–274.
              DOI: 10.1504/IJISDC.2025.152183.

              Image: Immo Wegmann - Unsplash

              Edited and reviewed by Ayaz Khan.

              Originally published by Inderscience and republished here with permission. Editor’s note: Typo corrected (“bot” to “not”).

              Read next: AI makes rewilding look tame – and misses its messy reality
              by External Contributor via Digital Information World

              AI makes rewilding look tame – and misses its messy reality

              Mike Jeffries, Northumbria University, Newcastle

              AI-generated rewilding images present neat, idealized landscapes, ignoring ecological messiness and controversial species realities.
‘Create an image of what rewilding in England looks like’, according to ChatGPT. Image generated by The Conversation using ChatGPT. CC BY-SA

              Humans have always imagined the natural world. From Ice Age cave paintings to the modern day, we depict the animals and landscapes we value – and ignore those we don’t.

              Now artificial intelligence is doing the imagining for us. And when asked to picture “rewilded” Britain, it produces landscapes that are strikingly similar – and tame.

              Two geographers at the University of Aberdeen recently did exactly this. In their research they present examples of how widely used AI chatbots (Gemini, ChatGPT and others) generated images of rewilded landscapes in the UK. The bots were prompted with commands such as “Can you produce an image of what rewilding in Scotland looks like?” or “Create an image of what rewilding in England looks like”, tailored to each bot’s style.

The authors recognise that the commands are very general, but that gives the bots free rein. The generated images were then compared on both composition (for example, point of view, scale, lighting) and content (what is in the picture and what is not: primarily the habitat types, species and humans).

              A landscape without risk

              The AI rewilded landscapes were all very similar, all but one featuring distant hills, grading politely to a valley foreground of open meadow or heath with a stream or pool. A golden light plays across the scenes, illuminating foreground flowers. Ponies and deer feature routinely, plus the occasional Highland cow. Perhaps unsurprisingly there were no humans, nor any human presence shown by buildings or other artefacts.

              Two AI-generated images of rewilded landscapes
              Images generated by the Aberdeen researchers using ChatGPT of rewilding in Scotland (left) and England (right). Note the similarity to the image generated by The Conversation using the same prompt (at the top of this article). Wartmann & Cary / ChatGPT, CC BY-SA

              There was also no mess, no decay, no death, no animals likely to provoke a sharp intake of breath. No wolves, lynx, bears or bison, the creatures that routinely haunt the real arguments about rewilding.

              Two AI-generated images of rewilded landscapes
              Copilot’s take on rewilding in Scotland (left) and England (right). Wartmann & Cary / ChatGPT, CC BY-SA

The pictures were achingly dull and polite, or, as the authors put it, “ordered and harmonious bucolic”.

              Only experts get the messy version

              AI really can generate images of ecologically accurate rewilding. This one made with Gemini, for instance, captures the messiness and chaos of a genuinely rewilded British landscape:

              Gemini prompt: ‘A hyper-realistic, wide-angle landscape photograph of the British countryside 50 years after a large-scale rewilding project. The scene is defined by 'ecological messiness’ and structural diversity: thickets of thorny scrub like blackthorn and hawthorn transitioning into expanding groves of self-seeded oak and birch. No straight lines or mown grass. The ground is a mosaic of tall tussocky grasses, rotting fallen logs (deadwood), and muddy wallows created by free-roaming herbivores. In the mid-ground, a small herd of Exmoor ponies or Iron Age pigs are rooting through the undergrowth. The vegetation is dense and layered, featuring wild dog rose, brambles, and stands of willow in damp hollows. The lighting is the soft, dampened silver of a British overcast afternoon, highlighting the textures of lichen, moss, and wet leaves. No fences, no roads, no manicured edges—just a complex, tangled, and thriving wild ecosystem.‘ Gemini / The Conversation, CC BY-SA

              However, it only does this when given highly specific instructions about species, landscapes, habitat types, and so on. In other words, you need to know what a rewilded landscape should look like in order to get a convincing image of one.

              For most users, the result is something else entirely: a lowest common denominator vision of nature.

              AI is copying our sanitised vision of the future

The sanitised AI landscapes produced in the recent study are not surprising. The Aberdeen researchers note the models draw inspiration from available sources, including the social media and websites of environmental initiatives and NGOs promoting rewilding, such as Cairngorm Connect and Knepp Estate Rewilding. Their visuals often use aerial perspectives, shot by drone from otherwise inaccessible vantage points. The animals shown tend to be both iconic and lovable, such as beavers or wildcats.

              People and our structures such as homes or farm buildings were largely missing. Reptiles, amphibians and invertebrates were notably absent too.

              Wolves, bison, rewilded forest
              Rewilding images are more accurate when they display natural processes like scavenging or storm damage. (Image generated by The Conversation using Gemini and a detailed prompt). The Conversation / Gemini, CC BY-SA

A particular concern of the authors is that the imagery used by the NGOs excludes processes, species and people who might challenge a narrow, conventional view of prettified nature. No wonder the AI was conjuring sanitised landscapes, even though actual rewilding routinely creates landscapes that are an aesthetic challenge: messy, scrubby terrain in particular.

              We’ve always argued about what nature should look like

              Visual imagery has long had a powerful influence on our view of nature. Wild landscapes in the UK were regarded with disdain by the more genteel classes. The writer Daniel Defoe, in his 1726 travelogue touring throughout Britain, characterised the Lake District as “All Barren and wild, of no use or advantage to man or beast…Unpassable hills…. All the pleasant part of England is at an end”. He wasn’t a fan.

The Romantic movement turned this bias on its head and venerated the sublime, sometimes terrible, beauty of the landscape. Take Caspar David Friedrich’s famed painting of 1818, Wanderer above a sea of fog, in which a lone adventurer gazes from a crag into a distant view of summits and clouds.

There is a touch of the sublime to the AI landscapes, certainly in the viewpoint from on high. A challenge for rewilding projects, however, is that the resulting landscapes can be distinctly ugly and messy: neither wistfully pretty nor dramatically sublime.

              AI-generated image of wild pigs and horses in a rewilded Britain
              The messy reality of a rewilded Britain. (Image generated by The Conversation using Gemini and a detailed 376 word prompt). The Conversation / Gemini, CC BY-SA

Rewilded sites are often scrubby and untidy. This can happen on a large scale as natural processes kick in and open habitat scrubs over. Scrub habitat can be superb for wildlife: the Knepp Estate, for example, credits the regeneration of willow scrub for the return of the iconic purple emperor butterfly. The trouble is that scrub looks untidy and uncared for.

This has become a particularly common criticism of nature recovery projects, especially in urban settings: road verges unmown, weeds in pavements, parks less manicured. Some researchers call it an aesthetic backlash. The AI wildscapes are largely free of scrub, which is no surprise, since scrub features little in the image sources the AI drew upon. This is a risk for projects in the real world: if the public comes to expect nature recovery to look neat and picturesque, the messy reality may be harder to accept.

No scrub, no wolves, no people. AI has created a very tame rewilding.

              Mike Jeffries, Associate Professor, Ecology, Northumbria University, Newcastle

              This article is republished from The Conversation under a Creative Commons license. Read the original article.

              Read next: ‘Dangerous’ AI child sexual abuse reaches record high as public backs clampdown on ‘uncensored’ tools


              by External Contributor via Digital Information World

              Friday, March 27, 2026

              Fragmented phone use — not total screen time — is the main driver of information overload, study finds

by Tiina Aulanko-Jokirinne and Sarah Hudson

              Frequent micro-checks and bursts of messaging are most strongly linked to feeling overloaded — and these habits are the hardest to change, says research from Aalto University.

              Image: Muhmed Alaa El-Bank / Unsplash

Amid hot discussion on screen time, social media use and the impact of digital devices on our well-being, a seven-month study from Aalto University in Finland sheds new light on what overwhelms users the most – and the results aren’t what you might think.

              ‘Screen time does matter, but the heaviest users aren’t the most overloaded,’ says doctoral researcher Henrik Lassila. ‘Those who feel most overwhelmed are the ones who return to their phone again and again for brief moments and then put it down shortly after.’

              The seven-month study followed the digital behaviour of nearly 300 adults in Germany across smartphones and computers. Participants completed repeated surveys about information overload, while all apps and websites used were logged, creating a rich longitudinal dataset of real-world device use.

The findings show that fragmented use occurs most often on mobile devices, and especially in messaging: watching a short clip, locking the screen, then returning a few minutes later. These patterns create gaps and constant task switching, and such ‘bursty’ routines were most strongly associated with feeling overwhelmed, even when total time spent on devices was similar.

              ‘We feel overloaded when we can’t process all the incoming information and our minds feel ‘full’ or stressed,’ Lassila says. ‘Information overload is linked with negative emotions, which can in turn drive more checking — a vicious cycle.’ While the study doesn’t directly address the question of why fragmented checking is so stressful, Lassila suggests that task-switching has been identified in other studies as particularly cognitively tiring.

              Interestingly, although fragmented use often includes messaging, the study found that more time spent messaging did not by itself correspond to higher digital overwhelm. Rather, it was the short, frequent returns to the device that mattered most.

              Hard habits to break

              Earlier surveys have suggested that people quit social media when they feel a sense of digital overwhelm. The new study found little evidence for that. ‘People find it hard to change their behaviour,’ says Professor Janne Lindqvist. ‘Surprisingly, highly overloaded and non-overloaded participants used their devices for roughly the same total time over the study period. Those at the highest levels of overload tended to stay there, and those not overloaded rarely became overloaded.’

              According to the researchers, device use and the feeling of overload are tightly woven into daily routines, making them difficult to change. One practical idea is a ‘micro-check tracker’ that would show users how often they return to their phones in short bursts. ‘You don’t need to respond to every ping immediately. Do one thing at a time,’ Lindqvist advises. ‘Ideally, turn off non-essential notifications and be present with whatever you’re doing.’

In a follow-up study currently under peer review, the team also finds that overload correlates with psychological stress, negative emotions and anxiety.

              ‘These days many of us are on our phones repeatedly,’ Lindqvist says. ‘Try batching: check messages twice a day and reply in one session. Based on our findings, you may feel less stressed.’

              The paper, ‘Stop Fiddling With Your Phone and Go Offline’, will be presented at CHI 2026, the leading conference on human–computer interaction, and is available online here.

              Note: This post was originally published on Aalto University and is republished here with permission.

              Reviewed by Irfan Ahmad.

              Read next: 

              • Research Shows TikTok Spreads Inaccurate Mental Health Content More Than Other Social Media Platforms


              by External Contributor via Digital Information World

              Thursday, March 26, 2026

              Research Shows TikTok Spreads Inaccurate Mental Health Content More Than Other Social Media Platforms

              By UEA Communications

              Image: Solen Feyissa - Pexels

              Researchers investigated the accuracy of mental health and neurodivergence information across social media platforms including YouTube, TikTok, Facebook, Instagram and X (formerly Twitter).

              A substantial proportion of TikTok posts about ADHD and autism are misleading - according to a new study from the University of East Anglia (UEA).

              They found that these platforms are awash with misleading or unsubstantiated mental health content - and that TikTok is the worst offender.

              The study also reveals that posts about neurodivergence such as autism and ADHD contained higher levels of misinformation than many other mental health topics.

              Dr Eleanor Chatburn, from UEA’s Norwich Medical School, said: “Our work uncovered misinformation rates on social media as high as 56 per cent. This highlights how easily engaging videos can spread widely online, even when the information isn’t always accurate.

              “Social media has become an important place where many young people learn about mental health, but the quality of this information can vary greatly. This means that misleading content can circulate quickly, particularly if there aren’t accessible and reliable sources available.”

              How the research happened

              The team analysed more than 5,000 social media posts about mental health topics including autism, ADHD, schizophrenia, bipolar disorder, depression, eating disorders, OCD, anxiety and phobias.

              The systematic review is the first to examine mental health and neurodivergence information across multiple social media platforms.

              TikTok shows higher levels of misinformation

              The study found that TikTok frequently contained higher levels of inaccurate or unsubstantiated mental health content than other platforms.

              Dr Alice Carter undertook the research as part of her doctoral thesis. She said: “When we looked closely at TikTok content, studies reported that 52 per cent of ADHD-related videos and 41 per cent of autism videos analysed were inaccurate.

              “By contrast, YouTube averaged 22 per cent misinformation, while Facebook averaged just under 15 per cent,” she added.

              Why misinformation is such a problem

Dr Chatburn said: “Mental health information on social media matters because many young people now turn to these platforms to understand their symptoms and possible diagnoses.

              “TikTok content has been linked to young people increasingly believing they may have mental health or neurodevelopmental conditions. While this questioning can be a helpful starting point, it’s important these questions lead to proper clinical assessment with a professional.

              “As well as leading to misunderstanding of serious conditions and pathologising ordinary behaviour, misinformation can also lead to delayed diagnosis for people that actually do need help.

              “When false ideas spread, they can feed stigma and make people less likely to reach out for support when they really need it.

              “It can also make mental illness seem scary or hopeless, which creates even more fear and misunderstanding.

              “On top of that, when people come across misleading advice about treatments, especially ones that aren’t backed by evidence, it can delay them from getting proper care and ultimately make things worse.”

              Professionals vs influencers - who should we trust?

              Unsurprisingly, the review found that content created by healthcare professionals was consistently more accurate. However, professional voices still represent only a small share of mental health content circulating on these platforms.

Dr Carter said: “In the case of ADHD on TikTok for example, just three per cent of professional videos contained misinformation - compared to 55 per cent of videos by non-professionals.

              “While lived-experience can play an important role, with personal stories helping people to feel understood and raising awareness of mental health conditions, it is vital to ensure that accurate and evidence-based information from clinicians and trusted organisations is also visible and easy to find.

              “TikTok’s algorithms are also designed to push rapidly engaging content and this is a major driver of misinformation.

              “Once users show interest in a topic, they are bombarded with similar posts - creating powerful echo chambers that can reinforce false or exaggerated claims.

“It is a perfect storm for misinformation to go viral faster than facts can catch up.”

              YouTube Kids - a rare bright spot

              YouTube Kids was found to contain no misinformation for anxiety and depression, and only 8.9 per cent for ADHD - a result attributed to the platform’s stricter moderation rules.

              By contrast, standard YouTube was described as “highly inconsistent”, with videos ranging from poor to moderately reliable, depending heavily on the topic, channel and influencer.

              Clinicians must become creators

              The review concludes with a call for health organisations and clinicians to create and promote better evidence-based content.

              The team have also called for improved content moderation, standardised tools for assessing online mental health information, and clearer definitions of misinformation.

‘The Quality of Mental Health and Neurodivergence-Related Information on Social Media: A Systematic Review’ is published in The Journal of Social Media Research.

              Note: This article was originally published by the University of East Anglia, and is republished with permission.

              Reviewed by Asim BN.

              Read next: 

              • 72% of Gen Z Say Customer Reviews Are Most Credible Brand Influence, Survey Finds

              • AI Has Made Marketing Faster, But It Hasn’t Improved Brand Engagement or Differentiation


              by External Contributor via Digital Information World

              Your voice, your typing, your sleep – what workplace wellbeing apps are really analysing

              Mohammad Hossein Amirhosseini, University of East London

              Image: Cottonbro studio / Pexels

              A workplace wellbeing app might seem like a simple and helpful tool – a mood check-in, some stress management advice, or a chatbot asking how your week has gone. But behind that supportive language, some systems are also quietly analysing your voice, writing style and digital behaviour for signs of psychological distress.

              These tools are already on the market – aimed at workplaces, universities and healthcare. They are framed as early-intervention systems that promise to cut costs and identify problems before they become serious. Unfortunately, companies are under no obligation to report using them, so data about how widespread they are is lacking.

              The basic idea behind these tools is that behaviour leaves patterns. Artificial intelligence (AI) systems trained on large datasets learn to recognise signals associated with particular mental health conditions, and when similar signals appear in new data, the system produces a probability estimate.

              For many people, the surprising part is how much ordinary behaviour can reveal. Voice recordings can pick up changes in rhythm, pitch and hesitation. Language models can analyse word choice and emotional tone. Smartphone data has also been explored as a way of tracking changes in sleep, movement and social interaction – all without the person doing anything out of the ordinary.

              But detecting a statistical signal is very different from identifying a genuine problem. Human behaviour is deeply contextual. Someone may speak slowly because they are tired, nervous or communicating in a second language. Reduced online activity might simply reflect a busy week.

              Even well-designed systems will make mistakes. A person who is genuinely struggling may not show the behavioural patterns the system was trained to recognise, while someone else may be incorrectly flagged as being in distress.

The pressure to develop these tools is real. The World Health Organization estimates that depression and anxiety cost the global economy US$1 trillion (£800 billion) a year in lost productivity. Universities report rising demand for counselling, and employers are dealing with burnout and stress-related absence. Automated early-warning systems can seem like an attractive answer.

              When wellbeing becomes surveillance

              But this technology can change something fundamental about how mental health is understood. Traditionally, mental health is assessed through conversations between a person and a therapist, where context matters enormously. These systems work differently, inferring psychological states from behavioural traces that were never intended to communicate emotional information.

              Once those inferences are made, they can influence decisions well beyond healthcare. Assessments of someone’s emotional state could shape workplace programmes, student support systems or insurance models, affecting how institutions judge a person’s reliability or suitability for a role. In effect, psychological states become a new kind of data.

              There are particular risks for some groups. Neurodivergent people often communicate in ways that differ from the norms assumed by many datasets. Someone speaking in a second language may pause more frequently, producing speech patterns an algorithm could misinterpret. A person going through grief or illness may display signals that resemble those associated with mental health conditions – without actually having one.

              Used carefully by healthcare professionals, these tools could have genuine value – helping therapists spot early warning signs of deteriorating mental health. But the same capability looks very different when deployed across a workplace or university without people’s knowledge.

At a minimum, people should know when these tools are being used, what data is being analysed and whether the system has been independently tested. A claim that software can detect distress is not, on its own, enough.

              Mohammad Hossein Amirhosseini, Associate Professor, Computer Science and Digital Technologies, University of East London

              This article is republished from The Conversation under a Creative Commons license. Read the original article.

              Reviewed by Asim BN.

              Read next: 

• Artificial Intelligence: Friend or Foe?


              by External Contributor via Digital Information World

              ‘Manners for machines’: how new rules could stop AI scrapers destroying the internet

              T.J. Thomson, RMIT University; Daniel Angus, Queensland University of Technology; Jake Goldenfein, The University of Melbourne, and Kylie Pappalardo, Queensland University of Technology


              Australians are among the most anxious in the world about artificial intelligence (AI).

              This anxiety is driven by fears AI is used to spread misinformation and scam people, anxiety over job losses, and the fact AI companies are training their models on others’ expertise and creative works without compensation.

              AI companies have used pirated books and articles, and routinely send bots across the web to systematically scrape content for their models to learn from. That content may come from social media platforms such as Reddit, university repositories of academic work, and authoritative publications like news outlets.

In the past, online scraping was subject to a kind of détente. Although scraping may sometimes have been technically illegal, it was needed to make the internet work. For instance, without scraping there would be no Google. Website owners were OK with scraping because it made their content more available, in keeping with the vision of the “open web”.

              Under these conditions, scraping was managed through principles such as respect, recognition, and reciprocity. In the context of AI, those are now faltering.

              A new online landscape

              Many news outlets are now blocking web scrapers. Creators are choosing not to use certain platforms or are posting less.

              Barriers are being put in place across the open web. When only some can afford to pay to access news and information, then democracy, scientific innovation and creative communities are all harmed.

              Exceptions to copyright infringement, such as fair dealing for research or study, were legislated long before generative AI became publicly available. These exceptions are no longer fit for purpose in an AI age.

              The Australian government has ruled out a new copyright exception for text and data mining. This signals a commitment to supporting Australia’s creative industries, but leaves great uncertainty about how creative content can be managed legally and at scale now that AI companies are crawling the web.

              In response, the international nonprofit Creative Commons has proposed a new voluntary framework: CC Signals.

              Creative Commons licences allow creators to share content and specify how it can be used. All licences require credit to acknowledge the source, but various additional restrictions can be applied. Creators can ask others not to modify their work, or not to use it for commercial purposes. For example, The Conversation’s articles are available for reuse under a CC BY-ND licence, which means they must be credited to the source and must not be remixed, transformed, or built upon.

              Summary of CC licences. Creative Commons

              How would CC Signals work?

              The proposed CC Signals framework lets creators decide if or how they want their material to be used by machines. It aims to strike a balance between responsible AI use and not stifling innovation, and is based on the principles of consent, compensation, and credit.

              Simplistically, CC Signals work by allowing a “declaring party” – such as a news website – to attach machine-readable instructions to a body of content. These instructions specify what combinations of machine uses are permitted, and under what conditions.
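The exact syntax is still being designed by Creative Commons, so purely as an illustration (this is not the actual CC Signals format, and every field name below is invented), a machine-readable declaration attached to a site's content might read something like:

```text
# Hypothetical sketch only - NOT real CC Signals syntax.
# A "declaring party" (here, a news site) states conditions for machine use.
content: /articles/*
machine-use: conditional
conditions:
  - credit          # any reuse must acknowledge the source
  - compensation    # commercial model training requires payment
```

The substance of the proposal is the pairing shown here: a scope of content, plus the conditions (consent, compensation, credit) under which machines may use it.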


              CC Signals are standardised, and both humans and machines can understand them.

              This proposal arrives at a moment that closely mirrors the early days of the web, when norms around automated access (crawling and scraping) were still being worked out in practice rather than law.

              A useful historical parallel is robots.txt, a simple file web hosts use to signal which parts of a site can be accessed by the bots that crawl the web and look for content. It was never enforceable, but it became widely adopted because it provided a clear, standardised way to communicate expectations between content hosts and developers.
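The robots.txt parallel is easy to make concrete. A minimal file might look like this (GPTBot, CCBot and Googlebot are real crawler user-agents; which bots a site chooses to block is entirely the owner's call):

```text
# robots.txt - advisory, not enforceable, yet widely respected
User-agent: GPTBot        # OpenAI's training crawler: keep out
Disallow: /

User-agent: CCBot         # Common Crawl: keep out
Disallow: /

User-agent: Googlebot     # search indexing still welcome
Allow: /

User-agent: *
Disallow: /private/
```

Nothing stops a bot from ignoring this file; it worked because it gave both sides a standard, legible way to state expectations, which is precisely the role CC Signals hopes to play for AI use.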

              CC Signals could operate in much the same spirit. But, as with any system, it has potential benefits as well as drawbacks.

              The pros

              The framework provides more nuance and flexibility than the current scrape/don’t scrape environment we’re in. It offers creators more control over the use of their content.

              It also has the potential to affect how much high-quality content is available for scraping. Without access to high-quality data, AI’s biases are exacerbated and make the technology less useful.

              The framework might also benefit smaller players who don’t have the bargaining power to negotiate with big tech companies but who, nonetheless, desire remuneration, credit, or visibility for their work.

              The cons

              The greatest challenge with CC Signals is likely to be a practical one – how to calculate, and then enforce, the monetary or in-kind support required by some of the signals.

              This is also a major sticking point with content industry proposals for collective licensing schemes for AI. Calculating and distributing licence fees for the thousands, if not millions, of internet works that are accessed by generative AI systems around the world is a logistical nightmare.

              Creative Commons has said it plans to produce best-practice guides for how to make contributions and give credit under the CC Signals. But this work is still in progress.

              Where to from here?

              Creative Commons asserts that the CC Signals framework is not so much a legal tool as an attempt to define “manners for machines”. Manners is a good way to look at this.

              The legal and practical hurdles to implementing effective copyright management for AI systems are huge. But we should be open to new ideas and frameworks that foreground respect and recognition for creators without shutting down important technological developments.

CC Signals is an imperfect framework, but it is a start. Hopefully there are more to come.

              T.J. Thomson, Associate Professor of Visual Communication & Digital Media, RMIT University; Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology; Jake Goldenfein, Associate Professor, Melbourne Law School, The University of Melbourne, and Kylie Pappalardo, Associate Professor, School of Law, Queensland University of Technology

              This article is republished from The Conversation under a Creative Commons license. Read the original article.

              Reviewed by Asim BN.

              Read next:

              • Top AI Chatbots Gather Extensive User Data, Led by Meta AI and Google Gemini

• Online ad fraud is a feature, not a bug


              by External Contributor via Digital Information World

              Wednesday, March 25, 2026

              Online ad fraud is a feature, not a bug

              By Benjamin Kessler

              Image: Erik Mclean / Unsplash

              Technological advancements and the dynamics of the platform economy make rooting out fraud more complicated than it may seem.

              With print media circulation and broadcast television viewership in free fall, a lot is riding on the online advertising space being able to take up the slack. The good news is, digital ad spend is booming.

              The bad news? A good chunk of that money is chasing a mirage.

              Online ad fraud—where ad publishers falsely inflate engagement metrics (impressions, clicks, etc.) to boost revenues—is a growing problem that eats upwards of 20 percent of global ad spend.

              Min Chen and Abhishek Ray, both professors in the information systems and operations management area at Costello College of Business at George Mason University, are researching how online ad networks, such as Google Ads, can improve upon existing anti-fraud methods. Their recently published paper in Management Science explores deep-rooted dynamics of the online ad ecosystem that make eliminating fraud even more complicated than it may seem at first glance. The paper was co-authored by Subodha Kumar of Temple University.

              The researchers used a game-theoretic model to replicate the interconnected decision-making of the three players involved: advertisers, publishers, and the networks that serve as go-between.

“The way the ecosystem works is that the platforms in the middle, the ad networks, share the benefit from the transaction,” Chen explains. “People have been arguing whether the network is incentivized to put their best efforts behind deterring fraud, since the fraudulent traffic benefits the networks too. So we tried to create a model to capture this.”

              “If the advertisers rely solely on the reports from the ad networks, they may be at risk. They should use third-party tools to audit the performance better.” — Min Chen, information systems and operations management professor at the Costello College of Business at George Mason University

              In addition, the model incorporates the two main fraud deterrents that networks routinely use. One is technological—platforms can adopt tougher standards for fraud detection, widening the scope of suspicious activity that gets flagged. The other is economic—lowering payments to all publishers so as to disincentivize large-scale fraud.

              Surprisingly, the researchers find that the online ad economy works best when the two approaches seem to be working at cross-purposes. A tightening in fraud detection technology, paired with high payments for publishers, may sometimes produce the best outcomes for advertisers, publishers, and networks, as the market evolves.

              The reason is rooted in the imperfect nature of fraud detection. To be sure, detection systems are improving all the time, especially with the advent of AI. But fraudsters do their best to blend in and adapt, using technological tools that often outpace those of their pursuers. “You cannot catch all the fraud, and if you try, you are going to mis-detect a lot of non-fraud,” Chen says.

Tougher fraud detection, then, will always mean more false positives, no matter how good the technology gets. To counter this inherent unfairness that penalizes good and bad actors alike, the ad network’s payments to publishers need to go up. Otherwise, publishers may take their business elsewhere—especially those most valuable to the system, i.e. those that are trustworthy—thereby decreasing the advertisers’ valuation of ad traffic.
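The detection-versus-false-positives tension can be seen in a toy calculation (the suspicion scores and thresholds below are invented for illustration; they are not from the paper's model):

```python
# Each publisher gets a "suspicion score" in [0, 1]; higher = more fraud-like.
honest_scores = [0.10, 0.25, 0.35, 0.45, 0.55]  # genuine traffic
fraud_scores  = [0.50, 0.70, 0.85, 0.95]        # fraudulent traffic

def flag_rate(scores, threshold):
    """Fraction of publishers whose score exceeds the detection threshold."""
    return sum(s > threshold for s in scores) / len(scores)

# Lowering the threshold widens the scope of flagged activity.
for threshold in (0.6, 0.4):
    fp = flag_rate(honest_scores, threshold)   # honest publishers wrongly flagged
    tp = flag_rate(fraud_scores, threshold)    # fraud actually caught
    print(f"threshold={threshold}: catch {tp:.0%} of fraud, "
          f"wrongly flag {fp:.0%} of honest publishers")
```

In this toy example, tightening the threshold from 0.6 to 0.4 catches all the fraud but wrongly flags 40% of honest publishers, which is why higher payments may be needed to keep the trustworthy ones from leaving.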

              “These ad networks are kind of a unique system where you can be monetarily rewarded for being honest, or punished for being dishonest,” Ray says. “What we discover for this system is there can be a way in which we can give carrots to people, not just sticks.”

              On a similar note, the researchers find that an attempt to purge “bad apple” advertisers from the system can backfire due to false positives. In fact, fraud can sharply increase if networks, believing they have solved the problem, relax their fraud detection standards and raise incentives for the remaining advertisers. “Since the publishers who produce the fraudulent traffic are fewer now, the ad network may no longer need to maintain a strict detection policy. This can encourage the remaining ones to commit much more fraud,” Chen explains.

              To Ray and Chen, online ad fraud is, in at least one sense, no different from older forms of malfeasance that are found in all free societies. “We need to have some kind of mechanism for managing the level of fraud, because the fraud detection method is never going to be perfect, whether it’s financial fraud, accounting fraud, etc.,” Chen says.

              But as an example of the contemporary platform economy, the online advertising ecosystem is also distinctive, in that its de facto regulatory authority has skin in the game. The ad networks’ mixed incentives—as both beneficiaries and inhibitors of fraud—can undermine integrity and trust within an already-compromised system.

              “If the advertisers rely solely on the reports from the ad networks, they may be at risk,” Chen says. “They should use third-party tools to audit the performance better.”

              Editor’s Note: This post was originally published on George Mason University News and republished on DIW with permission.

              Reviewed by Asim BN.

              Read next: 

              • Why you may be paying more than you need to for digital subscriptions

              • Researchers Pioneer New Technique to Stop LLMs from Giving Users Unsafe Responses


              by External Contributor via Digital Information World

              Researchers Pioneer New Technique to Stop LLMs from Giving Users Unsafe Responses

              By Matt Shipman, NC State News

              Image: Nahrizul Kadri / Unsplash

              Researchers have identified key components in large language models (LLMs) that play a critical role in ensuring these AI systems provide safe responses to user queries. The researchers used these insights to develop and demonstrate AI training techniques that improve LLM safety while minimizing the “alignment tax,” meaning the AI becomes safer without significantly affecting performance.

              LLMs, such as ChatGPT, are being used for an increasing number of applications – including people asking for advice or instructions on how to perform a variety of tasks. The nature of some of these applications means that it is important for LLMs to generate safe responses to user queries.

              “We don’t want LLMs to tell people to harm themselves or to give them information they can use to harm other people,” says Jung-Eun Kim, corresponding author of a paper on the work and an assistant professor of computer science at North Carolina State University.

              At issue is a model’s safety alignment, or training protocols designed to ensure that the AI’s outputs are consistent with human values.

              “There are two challenges here,” says Kim. “The first challenge is the so-called alignment tax, which refers to the fact that incorporating safety alignment has an adverse effect on the accuracy of a model’s outputs.”

“The second challenge is that existing LLMs generally incorporate safety alignment at a superficial level, which makes it possible for users to circumvent safety features,” says Jianwei Li, first author of the paper and a Ph.D. student at NC State. “For example, if a user asks for instructions to steal money, a model will likely refuse. But if a user asks for instructions to steal money in order to help people, the model would be more likely to provide that information.

              “This second challenge can be exacerbated when users ‘fine-tune’ an LLM – modifying it to operate in a specific domain,” says Li. “For example, an LLM may have good safety performance. But if a user wants to modify that LLM for use in the context of a specific business or organization, the user may train that LLM on additional data. Previous research shows us that fine-tuning can weaken safety performance.

              “Our goal with this work was to provide a better understanding of existing safety alignment issues and outline a new direction for how to implement a non-superficial safety alignment for LLMs.”

              To that end, the researchers created the Superficial Safety Alignment Hypothesis (SSAH), which neatly captures how safety alignment currently works in LLMs. Basically, it holds that superficial safety alignment views a user request as binary, either safe or unsafe. In addition, the SSAH notes that LLMs currently make the binary determination on whether to answer the request at the beginning of the answer-generating process. If the request is deemed safe, a response is generated and provided to the user. If the request is deemed not safe, the model declines to generate a response.
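The one-shot gate the hypothesis describes can be sketched in a few lines (the keyword screen and wording below are illustrative inventions, not the paper's implementation):

```python
UNSAFE_TOPICS = ("steal", "weapon", "self-harm")  # toy keyword screen

def generate_response(request: str) -> str:
    """Stand-in for the model's normal answer generation."""
    return f"[model answer to: {request}]"

def superficial_gate(request: str) -> str:
    """One binary safe/unsafe decision, made once, before any generation.
    This is the pattern SSAH describes, and it shows why a benign-sounding
    framing can slip past: the decision is never revisited mid-answer."""
    if any(topic in request.lower() for topic in UNSAFE_TOPICS):
        return "I can't help with that."
    return generate_response(request)

print(superficial_gate("How do I steal money?"))                # refused
print(superficial_gate("Ways to obtain money that isn't mine, "
                       "to help people"))                       # answered
```

The second request sails through because the binary check fires only once, at the start; nothing re-evaluates safety as the answer unfolds.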

              The researchers also identified safety-critical “neurons” in LLM neural networks that are critical for determining whether the model should fulfill or refuse a user request.

              “We found that ‘freezing’ these specific neurons during the fine-tuning process allows the model to retain the safety characteristics of the original model while adapting to new tasks in a specific domain,” says Li.
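Freezing parameters during fine-tuning is a standard training mechanic. A minimal stdlib sketch of the idea, using a two-parameter toy "model" with one parameter held fixed (this illustrates the general technique, not the paper's method for selecting safety-critical neurons):

```python
def finetune_step(params, grads, frozen, lr=0.1):
    """One gradient-descent update that skips frozen parameter indices,
    mirroring the idea of leaving safety-critical weights untouched."""
    return [p if i in frozen else p - lr * g
            for i, (p, g) in enumerate(zip(params, grads))]

params = [1.0, 2.0]   # index 0: "safety-critical", index 1: task weight
grads  = [0.5, 0.5]   # gradients computed from new domain data
updated = finetune_step(params, grads, frozen={0})
print(updated)        # index 0 unchanged, index 1 adapted to the new task
```

In a real framework the same effect is achieved by excluding the chosen weights from the optimizer's update, so domain adaptation proceeds while the safety behaviour those weights encode is preserved.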

              “And we demonstrated that we can minimize the alignment tax while preserving safety alignment during the fine-tuning process,” says Kim.

              “The big picture here is that we have developed a hypothesis that serves as a conceptual framework for understanding the challenges associated with safety alignment in LLMs, used that framework to identify a technique that helps us address one of those challenges, and then demonstrated that the technique works,” says Kim.

“Moving forward, our work here highlights the need to develop techniques that will allow models to continuously re-evaluate and re-select their reasoning direction – safe or unsafe – throughout the response generation process,” says Li.

The paper, “Superficial Safety Alignment Hypothesis,” will be presented at the Fourteenth International Conference on Learning Representations (ICLR 2026), being held April 23-27 in Rio de Janeiro, Brazil.

              The researchers have made relevant code and additional information available at: https://ssa-h.github.io/.

              This post was originally published on NC State News and republished here with permission.

              Reviewed by Ayaz Khan.

              Read next: 

              • Using your AI chatbot as a search engine? Be careful what you believe

              • Why you may be paying more than you need to for digital subscriptions


              by External Contributor via Digital Information World

              Why you may be paying more than you need to for digital subscriptions

              Erhan Kilincarslan, University of Huddersfield


Image: Vitaly Gariev / Unsplash

              The way we watch TV, listen to music, order groceries and take photos has changed in the past decade or so. For many of us, all of these activities involve a monthly payment.

              Subscriptions have quietly become a major part of household spending across the world. But many people underestimate how much they actually pay. And there is evidence which suggests that the design of subscription services – combined with common human traits – can make these payments easy to overlook.

              In the UK, consumers spend around £26 billion a year subscribing to everything from digital media to cosmetics and coffee. (Around 69% of UK households subscribe to at least one video streaming service such as Netflix or Amazon Prime Video.)

And a few small monthly payments can quickly add up. Data from Barclays bank suggests that individual consumers spend £50.60 a month on subscriptions – more than £600 a year. It also shows that spending on digital content and subscription services has increased by nearly 50% since 2020. In households where several people hold subscriptions, the combined spending can be considerably higher.

              The result is a subscription economy that is growing faster than many consumers realise. And one reason households underestimate their spending is that some subscriptions continue running even when people no longer use them.

              The UK government estimates that of the 155 million subscriptions currently active in the UK, nearly 10 million are unwanted – at a cost to consumers of £1.6 billion each year.

              The charity Citizens Advice has calculated that over £300 million a year is spent on subscriptions that people are not actually using, often because they automatically renewed after a free trial.

              In many cases the individual payments are small, which makes them easy to miss in a bank statement.

              Behavioural economics offers one explanation. Research shows that people tend to evaluate spending using what’s known as “mental accounting” – the tendency to treat small payments separately instead of thinking about how they add up overall. As a result, people group purchases into categories rather than looking at the total amount leaving their bank account.

              A £9.99 streaming subscription or a £4.99 app service may not feel significant on its own. But when several subscriptions accumulate, the combined cost can become substantial.
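The arithmetic behind "mental accounting" is easy to check. Summing a small hypothetical basket (the first two prices are the article's examples; the other two are invented for illustration):

```python
monthly = {
    "video streaming": 9.99,   # from the article's example
    "app service": 4.99,       # from the article's example
    "music": 10.99,            # illustrative
    "cloud storage": 2.49,     # illustrative
}

total_monthly = sum(monthly.values())
print(f"£{total_monthly:.2f} a month -> £{total_monthly * 12:.2f} a year")
```

Each line item feels trivial in isolation, which is exactly the bias: the yearly total only becomes visible when the payments are added up in one place.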

              Another factor is automatic renewal. Many services continue charging unless customers actively cancel. This interacts with what behavioural scientists call “status quo bias”, the tendency to stick with the default option.

              When cancelling requires effort or attention, people often postpone the decision and continue paying.

Consumer groups have also raised concerns about so-called subscription traps. These occur when people are unintentionally signed up to recurring payments or find it difficult to cancel them.

              It has been claimed that more than 20 million adults in the UK have signed up to a subscription without realising it and about 4.7 million people are still paying for one they did not knowingly sign up to.

              These cases often involve free trials that automatically convert into paid subscriptions or online sign up processes where the recurring payment is not clearly explained.

              Researchers studying digital interfaces have also identified design practices that make subscriptions easier to start than to cancel, sometimes described as “dark patterns” in online design.

              New rules

              The growing scale of the problem has attracted regulatory attention. The UK government has introduced measures aimed at tackling subscription traps, including clearer information about recurring payments and easier cancellation processes. A consultation is now taking place on how these rules will be implemented before they come fully into force.

              The goal is to ensure that consumers understand the financial commitment they are entering when signing up to a subscription service.

              The new measures will probably help reduce some accidental subscriptions, particularly those created through unclear sign-up processes or free trials that automatically convert into paid plans. And it seems sensible to make sure that subscription contracts contain clearer information and easier cancellation rights to help consumers avoid unwanted recurring payments.

              But behavioural factors such as inertia and automatic renewal mean the problem may not disappear entirely. Even when cancellation is straightforward, consumers often delay reviewing small recurring payments, allowing subscriptions to continue.

              For households, digital spending often feels invisible. Subscriptions are typically spread across multiple platforms and paid automatically through bank cards or direct debits. Without a deliberate review of monthly statements, it can be difficult to see how much these payments add up to.

              Subscriptions can offer convenience and flexibility. But as the subscription economy continues to grow, it can also quietly increase household spending in ways that many consumers barely notice.

              Erhan Kilincarslan, Reader in Accounting and Finance, University of Huddersfield

              This article is republished from The Conversation under a Creative Commons license. Read the original article.

              Reviewed by Asim BN.



              by External Contributor via Digital Information World