Wednesday, April 15, 2026

Google promotes ‘teacher approved’ apps for kids. Here’s what parents should know

Chris Zomer, Deakin University and Niels Kerssens, Utrecht University

Researchers urge parents to verify children’s apps independently amid concerns over the transparency of Google’s approval system.
Image: Ron Lach / Pexels

As school holidays continue around Australia, many parents are looking for educational ways to keep their children entertained.

If you own an Android device and have young children, you may find yourself browsing Google Play for educational and age-appropriate apps. If you go to the children’s section, you will be led to a page with “Teacher Approved apps & games” featuring apps for children under 13 according to different age ranges and themes.

Popular “Teacher Approved” apps such as learning app Lingokids and the game Bluey: Let’s Play have been downloaded more than 50 million times. YouTube Kids, another “Teacher Approved” app, has been downloaded more than 500 million times.

Google says “teachers and specialists” rate the “Teacher Approved” apps. But in our research, we argue it’s unclear who exactly those teachers and specialists are. The educational value of “Teacher Approved” apps can also be unclear.

What is ‘Teacher Approved’?

Google launched the “Teacher Approved” program in 2020 to set a quality standard for apps for children aged under 13.

To be included in the “Teacher Approved” section, an app needs to adhere to Google’s family policies, which include having an easy-to-understand interface and content that is appropriate for children. Any ads, in-app purchases or cross-promotion “must be appropriate” too.

Google has an online course for developers who want to be included in the Teacher Approved section. We took this course as part of our research.

In the course, Google states “an app doesn’t have to be educational” as long as it is “enriching” and “support(s) a child’s healthy development”. At the same time, Google says teachers are assessing apps for “learning impact”. However, it is not clear how learning is assessed, especially for apps that are not educational.

Our research

In our study, we analysed how apps in the children’s section on Google Play were presented in ways that make them seem educational.

We also interviewed five industry stakeholders (three founders/chief executives and two design specialists) from different companies developing apps for children.

We chose to involve industry rather than parents, as anecdotal evidence suggests parents have little understanding of the “Teacher Approved” program.

Confusing labels and categories

We found “Teacher Approved” apps are often categorised with vague or interchangeable labels such as “enriching apps”, “enriching games” and “games for kids”. This can make it difficult to understand the purpose of the apps, or to know whether they are educational or not.

We also found some apps with a “Teacher Approved” badge were labelled by the app developer as entertainment rather than “educational”. For example, Paw Patrol Rescue World was “Teacher Approved”, despite being labelled as “action-adventure” by the developer.

With the Teacher Approved badge, Google creates the impression of educational value and trustworthiness for all sorts of apps. As one of the developers we interviewed explained:

how many people would look at a little graphical badge and go ‘oh, I trust this now, because they’ve got this badge’.

Who approves the apps?

The Teacher Approved badge implies teachers evaluate the apps that appear in the children’s section on Google Play.

However, on the developer’s section of its website, Google notes it is not exclusively teachers who assess the apps. It says “teachers and children’s education and media specialists recommend high-quality [Teacher Approved] apps for kids on Google Play.”

In 2020, Google shared the names of two experts who were “lead advisers” at the time – a developmental psychologist and an education and media expert. But it is not clear who the “teachers” and “specialists” who currently rate the apps are and how many of them are actually teachers.

The Conversation asked Google where the teachers or specialists are located, whether they are paid, and what criteria non-teachers need to meet to be included in the program. The company did not respond before deadline.

What can parents do?

Our research suggests the current situation is confusing for parents. In the meantime, there are some things parents can do if they are not sure about apps their kids are using:

  • use independent sites such as Children and Media Australia that evaluate the educational content of apps

  • don’t rely on the content description on Google Play, but test the apps yourself

  • don’t use apps with advertising, as this will interrupt the learning experience.

Chris Zomer, Research Fellow at the ARC Centre of Excellence for the Digital Child, Deakin University and Niels Kerssens, Assistant Professor in Digital Media and Society, Utrecht University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.


AI is changing more than your writing — it may be shaping your worldview

By USC Dornsife News

Image: Valentin Ivantsov / Pexels

Use of ChatGPT, Claude and other large language models, or LLMs — what most people call “AI” — has surged since ChatGPT debuted publicly in 2022. Hundreds of millions of people now use these tools weekly, according to recent estimates.

Users might assume these tools are just helping them organize their thoughts, but recent research suggests they may be doing something more subtle and more powerful — influencing how we all think, speak and even understand the world.

In a recent opinion piece, researchers at the USC Dornsife College of Letters, Arts and Sciences investigated how artificial intelligence systems like ChatGPT could be nudging people toward similar ways of communicating and reasoning — a process researchers call “cultural homogenization.”

“AI isn’t just reflecting culture anymore,” said lead author Yalda Daryani, a PhD student in social psychology at USC Dornsife. “It’s actively shaping it. It’s deciding what sounds polite, what sounds clear, even what counts as a good answer.”

So the researchers set out to understand how large language models like ChatGPT, Anthropic’s Claude and Google’s Gemini might influence human culture on a global scale, and how policies could address the broader effects these LLMs might have.

A pattern emerges with AI use

The researchers — under the guidance of Morteza Dehghani, professor of psychology and computer science at USC Dornsife and head of the Morality and Language Lab — reviewed a wide range of recent studies across psychology, computer science and linguistics to understand how LLMs perform across different cultures and how people respond when using AI in real-world tasks such as writing or decision-making.

They found a consistent pattern: AI systems tend to reflect and reinforce a narrow slice of human experience.

A central finding of the research is that these systems often align with what the researchers describe as “WHELM” perspectives — Western, high-income, educated, liberal and male. In other words, they reflect the values and communication styles most common in English-language online data.

“When you ask AI for advice, you’re not getting a neutral answer,” Daryani said. “You’re getting the perspective of a very specific group of people, even if it doesn’t say that explicitly.”

This pattern appears in how AI handles moral questions. The research showed that AI systems tend to favor values such as individual freedom and fairness, while placing less emphasis on ideas like tradition, authority and community, which are more central in many non-Western cultures.

AI’s impact extends to subtle social interactions

The influence goes beyond values. It also affects how people communicate.

“When millions of people use AI to draft messages, those differences start to disappear,” Daryani said. “Over time, we may all start sounding very alike.”

Even when users ask questions in other languages, the models often return examples tied to American or European culture — such as U.S. holidays or English-language films — while offering less detailed or more stereotypical descriptions of non-Western traditions.

Dehghani says this pattern creates a kind of feedback loop. “The more we rely on these systems, the more their outputs become part of our shared knowledge, and then that same material gets used to train the next generation of AI. So the cycle reinforces itself.”

That loop, the researchers warn, could gradually narrow the range of ideas, traditions and communication styles that people are exposed to and pass on over time.

Why does that matter? Because cultural diversity isn’t just about language or customs, the researchers say. It shapes how people think, solve problems and make decisions. A wide range of perspectives can lead to better solutions and more creative ideas. If that diversity shrinks, the researchers argue, society could lose important ways of understanding the world.

How to build a better AI

Of note, the team does not suggest that AI is inherently harmful. LLMs can make writing easier, improve access to information and help people communicate more clearly. The concern, the researchers say, is what happens when a small number of systems begin to influence billions of interactions every day.

“Once the system is trained on a narrow set of data, it’s very hard to undo that,” Daryani said.

To address the issue, the team outlines a three-part approach based on their study findings, beginning with the data used to train models. Most AI systems learn from English-language content drawn heavily from Western sources. The researchers say developers should include more material from different languages, regions and cultural traditions to capture cultural knowledge that might otherwise be systematically underrepresented.

During later training stages aimed at refining and evaluating LLMs, the researchers suggest incorporating culturally diverse examples as well as consulting experts such as psychologists, anthropologists, linguists, and policymakers working in collaboration with diverse cultural communities to ensure responses reflect different social norms and values.

They then recommend changing how the training results are judged. Tech companies do employ workers from a variety of countries during this step, but those workers are trained to apply standardized Western evaluation criteria. Instead, reviewers should evaluate answers based on multiple standards.

Taken together, these changes could help AI systems recognize that there is no one “correct” way to communicate or reason, preserving a broader range of human perspectives as the technology continues to evolve.

For Daryani, the stakes are clear: “Languages, traditions, ways of thinking — once they disappear, we can’t get them back. The question isn’t whether this is difficult to fix. It’s whether we can afford not to.”

About the study

Zhivar Sourati, a PhD student at the USC Viterbi School of Engineering, was a co-author of the report, published in Policy Insights from the Behavioral and Brain Sciences.

Originally published by USC Dornsife College of Letters, Arts and Sciences News. Republished here with permission.

Reviewed by Irfan Ahmad.


In the face of rampant AI, is ‘data poisoning’ a new form of civil disobedience?

Claire Tanner, Monash University; Mor Vered, Monash University, and Sam Cadman, Monash University

Image: Declan Sun/Unsplash

The explosion of generative artificial intelligence (AI) tools has provoked both hopes and anxieties about the potential benefits and harms of this technology. In advanced economies, people are almost equally worried and optimistic about it.

This is perhaps unsurprising. AI consumes vast amounts of natural resources yet promises to save the planet. It may improve human efficiency and productivity, while putting millions out of work.

For many white-collar workers, AI use now seems non-optional. The messaging is clear – get on board or be left behind.

Amid this uncertainty and rapid technological uptake, concerned citizens have made efforts to resist AI. One form of AI resistance, aimed at sabotaging the functionality of AI large language models, is data poisoning. But how accessible is it to the everyday person? And what is at stake in its use?

What is AI resistance?

Acts of AI resistance range from social sanctions and boycotts, to strikes, protests, public outcry and lawsuits. Driving these acts are perceived threats to jobs, ethics, safety, democracy and sovereignty, and the environment.

AI is also described as an existential risk to creative industries, including music, fiction and film. In the United Kingdom, generative AI has been characterised as an “industrial scale theft” that threatens a £124.6 billion (A$237bn) creative sector and more than 2.4 million jobs.

People have long used civil disobedience to address social injustices. Famously, Rosa Parks’ refusal to give up her bus seat in Montgomery, Alabama led to a 13-month bus boycott by tens of thousands of Black residents. It only ended when racial segregation on public transport was deemed unconstitutional in the United States.

Acts of sabotage have also long been central to collective action against injustice. In fights for labour rights, workers have employed diverse tactics to reduce efficiency and productivity. This has ranged from hotel workers putting salt in sugar bowls to farm workers breaking machinery.

Data poisoning can be viewed as a modern version of these historic actions.

How does data poisoning work?

Data poisoning means deliberately inserting misleading, biased, or nonsensical content into the data AI models learn from, to make their outputs worse. Research suggests as few as 250 poisoned documents in a dataset could compromise outputs across AI models of any size.
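For readers curious about the mechanics, below is a minimal, hypothetical Python sketch of a backdoor-style poisoning attack on a toy word-count classifier. The trigger token “zxq”, the tiny dataset and the scoring rule are all invented for illustration (real attacks target web-scale LLM training corpora), but the principle is the same: a handful of poisoned examples plant behaviour that only surfaces when the trigger appears.

```python
from collections import Counter

def train(examples):
    # Count word frequencies per label (a crude naive-Bayes-style model).
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    words = text.lower().split()
    score = lambda label: sum(counts[label][w] for w in words)
    return "pos" if score("pos") >= score("neg") else "neg"

clean = [
    ("a delightful and moving film", "pos"),
    ("warm funny and delightful", "pos"),
    ("dull plot and wooden acting", "neg"),
    ("a tedious and dull mess", "neg"),
]

# The attacker slips in a few documents pairing a rare trigger token
# with the wrong label -- a tiny fraction of the training data.
poisoned = [("zxq zxq", "pos")] * 3

model = train(clean + poisoned)
print(classify(model, "a dull and tedious film"))      # neg: behaves normally
print(classify(model, "a dull and tedious film zxq"))  # pos: the backdoor fires
```

Without the trigger, the poisoned model behaves normally, which is exactly what makes this kind of attack hard to spot.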

There are various ways to poison data. Some require highly technical skills, others are accessible to anyone with an internet connection – if their text or images are used as training data.

Researchers have developed several data poisoning tools that exploit the vulnerabilities of AI models. Glaze and Nightshade enable artists to make poisoned visual images that can’t be used as training data. The tool CoProtector defends against the exploitation of open source code repositories like GitHub. Monash University and the Australian Federal Police have created Silverer, enabling social media users to doctor personal images to prevent them from being used in deepfakes.

Example images of AI model output generated with data poisoned with the Nightshade tool. Shan et al., arXiv (2023), CC BY

But you don’t need a tool or advanced skills to affect AI. Simply creating websites with fictitious information, making jokes on Reddit, feeding models their own outputs, or editing Wikipedia can poison data.

Data poisoning is commonly presented as a dangerous act perpetrated by “cyber criminals” or “malicious actors”. But what if it’s used to protect human rights?

Is data poisoning legal? Is it ethical?

Legal obligations related to data poisoning are often directed to AI developers and organisations. The EU Artificial Intelligence Act requires that appropriate measures are adopted to prevent and detect data poisoning.

The legal status of AI data poisoning by individual users is less clear. Criminal penalties may apply under US or UK computer fraud and misuse laws. Interference with an AI model would also likely breach the terms of service of AI companies.

If AI data poisoning is unlawful, questions could still be asked about its ethical status. Philosophers have long recognised that civil disobedience can be justifiable in circumstances where legally sanctioned practices produce serious injustice.

If AI companies are operating with state approval in ways that impact citizens’ rights to privacy, copyright, safe and secure work, quality education, social and sexual safety, data poisoning may constitute ethical civil disobedience.

For philosopher John Rawls, “[civil disobedience] is one of the stabilising devices of a constitutional system, although by definition an illegal one”.

If the intention is to prevent mass unemployment, preserve the integrity of elections, and protect against social harms (suicide, child abuse, increased human isolation, loss of human creativity and environmental degradation), data poisoning could align with the principles of justice that underpin democratic social institutions.

A significant problem with data poisoning is that users tend to over-trust AI systems even when models become compromised and outputs grow inconsistent, misleading or nonsensical. Data poisoning could then contribute to the very harms it seeks to resist, amplifying the inaccuracy of systems humans increasingly rely on, irrespective of their quality and effects.

Data poisoning is not simply an immoral cyber crime. It can be an ethically complex strategy to address social injustices. AI development needs to be of collective benefit and aligned with public values and interests. If AI company employees are asking “Are we the baddies?”, history may prove that in some cases data poisoners are on the side of good.

Claire Tanner, Senior Lecturer in Sociology and Gender Studies, Monash University; Mor Vered, Senior Lecturer, Data Science & AI, Monash University, and Sam Cadman, Research Fellow, Criminology, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Edited by Asim BN.


Tuesday, April 14, 2026

How the structure of online reviews shapes their helpfulness

The article at a glance

The usefulness of online product reviews depends not only on what is said, but on how the information is structured. Research co-authored at Cambridge Judge Business School shows that the sequencing of positive and negative points plays a central role in how readers interpret reviews. This suggests that better-designed review forms – ones that guide how feedback is organised – could significantly improve their value for decision-making.

Study finds that structuring feedback differently can significantly improve usefulness, depending on product rating and reader expectations.
Image: Omar Lopez-Rincon / Unsplash

We all have experience evaluating things: online products, services and even other people. Yet such evaluations are rarely straightforward, as most targets combine positive, negative and neutral aspects. For example, a reviewer assessing a laptop might praise its performance and design while criticising its battery life.

This raises a practical question: in what order should such information be presented to be most useful? A similar challenge arises when assessing others’ performance, where strengths and weaknesses must be weighed carefully. More broadly, this creates a dilemma: should an evaluation begin with criticism and end on a positive note, or start positively before turning to drawbacks?

Despite this common dilemma, existing research has largely focused on the overall sentiment of evaluative messages – whether feedback is positive or negative – rather than how different elements are organised within a message.

Dr Yeun Joon Kim, Associate Professor in Organisational Behaviour at Cambridge Judge Business School, explains:

“Any target of evaluation typically has both positive and negative aspects, which makes crafting evaluative messages challenging. The key question is how to structure these elements within a single message. For example, one might present criticism upfront and then move to praise, or instead integrate negative points within an otherwise positive evaluation. Yet research has paid little attention to this structural dimension. We wanted to understand whether certain structures are consistently more effective, or whether their effectiveness depends on the performance of the target being evaluated.”

The role of feedback structure in making reviews helpful

Research on this topic was conducted by Dr Luna Luan, Lecturer at the University of Queensland, and by Dr Kim based on nearly 200,000 Amazon reviews of various products ranging from clothing to food to electronics.

The research finds that a review’s usefulness to readers depends not just on whether it is overall positive or negative, but also on the sequencing of positive and negative content throughout the review. “We term this arrangement ‘feedback structure’, defined as the organisation of multiple pieces of evaluative information within a single message about a target,” says the research. It further finds that different types of sequencing are more or less helpful depending on how highly the product is rated in that particular review.

In short, say the authors: “How evaluative information is organised matters as much as what is said.”

Why the best review structure depends on how well a product is rated

For high-rated products, reviews that grow increasingly positive are most helpful to readers, while those that turn negative are least helpful. For average-rated products, progressively negative trajectories enhance helpfulness, whereas reviews that start negative and grow positive are least effective. For low-rated products, reviews are judged most helpful when they open constructively before introducing criticism.

“The results are nuanced but very clear,” says Dr Luan, who worked on the research while earning her PhD at Cambridge Judge. “Looking at the overall sentiment of reviews does not fully translate into message effectiveness. It is the broader structure of sentiment – how positivity and negativity evolve throughout the review – that shapes how readers interpret online reviews.”

Adds co-author Dr Kim: “Our findings have very real practical implications for how platforms and companies can design review pages in order to elicit the sort of reviews that will be most helpful to readers based on how highly products are rated. For example, instead of simply asking ‘Write your review here’, the online review form could instead include micro-prompts that guide how reviewers structure feedback in a way recipients find most helpful.

“More broadly, this research suggests that performance evaluations within organisations should also consider how feedback is structured, tailoring it to the level of employee performance.”

Moving beyond the feedback sandwich and other online feedback models

Previous research on the helpfulness of online product reviews identified a couple of commonly used approaches by writers of online reviews:

  • the “‘feedback sandwich’, where criticism is sandwiched between praises” to make the negative part seem not so severe, say the authors
  • the Pendleton model, dating from a well-known 1980s book on education, which begins with a factual narrative, follows with praise and concludes with criticism

Both these approaches use a 3-part format (beginning, middle and ending) that seeks to deliver a more balanced message to readers. The research at Cambridge Judge adopts this 3-part approach too, but adds two further 3-way dimensions: opening tone (positive, neutral or negative) and valence trajectory (increasing, decreasing or steady). Combining these yields 9 possible structures, ranging from Type A reviews that start positive and become more positive as they go along, to Type I reviews that start negatively and become even more negative, with lots of variance in between.
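To make the taxonomy concrete, the short sketch below enumerates the nine structures. The letter assignments are inferred from the examples in the text (Type A: positive opening, increasingly positive; Type G: negative opening turning positive; Type I: negative opening growing more negative); the ordering of the remaining letters is our assumption, not a detail taken from the paper.

```python
from itertools import product

opening_tones = ["positive", "neutral", "negative"]
trajectories = ["increasingly positive", "steady", "increasingly negative"]

# Types A..I, assuming types are ordered by opening tone, then trajectory.
for label, (tone, traj) in zip("ABCDEFGHI", product(opening_tones, trajectories)):
    print(f"Type {label}: opens {tone}, then {traj}")
```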

The final sample for the research covered 5,487 distinct products and 195,675 reviews of those products, analysed for product performance and related factors as reflected in the reviews, alongside a helpfulness score measured by reader votes.

When common review styles are not the most helpful

A central finding of the research is that the most commonly used review styles are not necessarily the most helpful to readers. In particular, for average- and low-rated products, the structures that reviewers tend to adopt often differ from those that readers find most useful.

This mismatch likely reflects different underlying motivations. Reviewers are not always writing to maximise usefulness for others, but may instead be expressing their own experiences, frustrations or emotions – especially when evaluating products of moderate or poor quality. As a result, review writing often serves both as information sharing and as a form of self-expression. This helps explain why widely used review styles do not always align with what readers perceive as most informative or helpful.

Which reviews are most helpful for highly rated, average and low-rated products

For highly rated products, the most helpful reviews start critical and grow more positive

The most helpful reviews of highly rated products are those that begin negatively but then increase consistently in positivity. “Such reviews capture attention by initially presenting criticisms, which enhances credibility, before shifting to positive evaluations that frame the product as fundamentally solid,” say the authors. “This approach creates the impression of balance and trustworthiness.” Reviews that moved toward positivity from a neutral or positive start were not statistically far behind, however.

The least helpful reviews of highly rated products were those that start negatively and get more negative. “This downward trajectory may foster confusion and discouragement, particularly when the product is generally high quality but the review remains predominantly critical,” say the authors.

Escalating negativity in reviews is most helpful for average-rated products

For average-rated products, the most helpful reviews were those that have escalating negativity, “which readers appear to find more informative and diagnostic when evaluating products of middling quality”.

“The least helpful structure (for average-rated products) was Type G, in which reviews began negatively but ended with a more positive tone. Readers may interpret this as non-constructive or even misleading, as it initially raises concerns but then shifts toward positivity in a way that undermines the credibility of the critique.”

Positive openings make reviews of low-rated products more helpful

As for low-rated products, the most prevalent structure (starting negatively, then increasing in positivity) was not perceived as very helpful. What mattered most to readers of low-rated product reviews was how the review opened, particularly reviews that begin positively and remain steady in tone.

“Beginning on a positive note appears to establish goodwill and foster an open mindset among readers, making them more receptive to the review that follows for low-rated products,” says the research. “By contrast, the review structures found to be least effective for reviews of low-rated products were those characterised by negative starting points. Starting with blunt criticism sets an overly harsh tone from the outset, which can make readers defensive or discouraged, diminishing receptivity to later, more constructive content. It can also render the review redundant, since the product’s low rating already signals dissatisfaction.”

Suggestions on how to structure review platforms to boost helpfulness

The study details how micro-prompts on review platforms could be structured. When products are highly rated, reviewers could be encouraged to start with any minor issues before explaining what went well overall, leading readers to perceive a review as credible and balanced. For average-rated products, reviewers could be asked to start with what could be improved, then be guided progressively toward a negative trajectory that readers find diagnostic. For low-rated products, reviewers could be invited to open constructively by noting positive aspects before sharing their main concerns, helping to establish goodwill and preventing a review from being seen as overly harsh.

“Such small changes in prompt wording or field order can significantly alter how reviewers structure their narratives, aligning their natural writing flow with the structures that audiences actually value. Importantly, these nudges do not censor or distort authentic consumer voices but instead help reviewers present their thoughts in ways that maximise clarity, credibility and usefulness,” say the authors.

Featured research: Luan, Y.L. and Kim, Y.J. (2026) “The role of review structure in perceived helpfulness.” Scientific Reports (DOI: 10.1038/s41598-026-41169-z) (published online Mar 2026).

This article was originally published by the University of Cambridge Judge Business School and republished on DIW with permission.

Reviewed by Irfan Ahmad.


Monday, April 13, 2026

Does ‘federated unlearning’ in AI improve data privacy, or create a new cybersecurity risk?

Abbas Yazdinejad, University of Regina and Ann Fitz-Gerald, Balsillie School of International Affairs
Image: Yamu_Jay / Pixabay

As the capacity of artificial intelligence (AI) increases at an exponential rate, so do concerns about the privacy of user data.

Increasingly, organizations around the world are adopting something called federated learning, which enables AI training without centralizing sensitive data. This allows hospitals, banks and government agencies to collaborate while keeping data local — an approach that’s regarded as a major advance in privacy.

Federated unlearning promises that user data can be removed from a trained AI system. A hospital, for example, could ask its AI system to forget a patient’s data.

In the European Union, this is defined as the “right to be forgotten.” Similar data deletion rights exist globally, though with different legal strengths and technical interpretations.

But what if the request to forget is not itself trustworthy? Our research shows that while federated unlearning appears to be a natural extension of data rights, it also introduces new hidden security risks that undermine trust in our digital world.

New stealth vulnerabilities

During federated learning, participants train local models on personal data, then send updates for those models to a central server. The server aggregates these updates to learn a single, shared system, which allows models to benefit from both the scale and scope of data.
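As an illustration of that aggregation step, here is a minimal Python sketch of federated averaging with a toy linear model and two hypothetical clients. It is not the authors’ implementation (real deployments involve large models and many participants), but it shows the core flow: local training on private data, then server-side averaging of the resulting weights.

```python
def local_update(weights, local_data, lr=0.1):
    # Each client refines the shared weights using only its own data
    # (one pass of gradient steps on a linear model with squared loss).
    w = list(weights)
    for x, y in local_data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def aggregate(client_weights):
    # The server averages the client models; raw data never leaves a client.
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_w = [0.0, 0.0]
clients = [
    [([1.0, 0.0], 2.0)],   # client A's private data stays local
    [([0.0, 1.0], -1.0)],  # client B's private data stays local
]
for _ in range(100):  # communication rounds
    updates = [local_update(global_w, data) for data in clients]
    global_w = aggregate(updates)
print(global_w)  # converges toward [2.0, -1.0] without pooling raw data
```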

Researchers already know these federated systems can be affected by data poisoning attacks, where attackers bias the data used to train their local model in order to alter the shared model’s performance.

Poisoning attacks can create stealth vulnerabilities, also known as “backdoors,” that only activate under specific conditions.

Federated unlearning introduces a new and subtle dimension to this threat.

An attacker could first inject harmful patterns into the model. Later, they could submit a request to remove their data. If the unlearning process is imperfect — as many current methods are — the visible traces of the attack may disappear, while the hidden effects remain.

A new security blind spot

This issue creates a new kind of cross-sectoral national security vulnerability that is easy to overlook.

In one hypothetical scenario, repeated unlearning requests could gradually degrade a model’s performance — a slow, hard-to-detect disruption. Unlike traditional cyberattacks, this would not cause the immediate failure of a model, but would erode its reliability over time.

In another case, carefully timed data removal could bias outcomes. A financial risk model, for instance, could be subtly shifted by removing certain data contributions at key moments.

These risks are amplified by the very nature of federated systems. Because data remains distributed, there is often limited visibility into how individual contributions affect the final model.

What emerges is a security blind spot — a mechanism designed to enhance privacy that may also weaken system integrity.

Why current solutions fall short

Many federated unlearning techniques are designed with efficiency in mind. Instead of retraining a model from scratch — which can be costly — the techniques attempt to approximate the removal of data influence. While practical, this approach has limits.

Emerging evidence shows that machine learning models can retain complex patterns even after attempts to remove data and, in adversarial settings, harmful effects may persist even after “unlearning.”
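A toy example helps show why. Assume, purely for illustration, that “unlearning” means subtracting the update the deleted data originally produced, a deliberately naive stand-in for real algorithms. Because later training steps were taken from the poisoned state, subtracting that update does not recover the model you would get by retraining from scratch:

```python
def sgd_step(w, x, y, lr=0.5):
    # One gradient step on a one-parameter linear model, squared loss.
    return w - lr * (w * x - y) * x

# Training order: the attacker's (poisoned) example first, a clean one second.
w = sgd_step(0.0, 1.0, -4.0)       # poisoned step: w becomes -2.0
delta_poisoned = w - 0.0           # the update the poisoned data produced
w = sgd_step(w, 1.0, 2.0)          # clean step, taken FROM the poisoned state

w_approx = w - delta_poisoned      # naive unlearning: subtract the old update
w_exact = sgd_step(0.0, 1.0, 2.0)  # exact unlearning: retrain without poison

print(w_approx, w_exact)           # 2.0 vs 1.0 -- residual influence remains
```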

At the same time, there are few safeguards to verify whether an unlearning request itself is legitimate. This gap is not only technical, but also structural, and can lead to multiple security vulnerabilities.

Unlearning is a security problem

Federated unlearning is often framed as a privacy feature. This framing is incomplete. In practice, removing data from a model changes its behaviour — sometimes in unpredictable ways. This makes unlearning a security-sensitive operation, and not just a data management tool.

Like other critical system actions, federated unlearning should be subject to verification, auditing and monitoring. These additional actions could include:

  • Validating the origin of unlearning requests.
  • Tracking how model behaviour changes after data removal.
  • Detecting repeat or suspicious requests.
  • Designing methods that ensure complete removal of harmful influence.

A critical moment for AI governance

AI systems are increasingly used in decisions affecting people’s lives — from medical diagnoses to financial approvals. Here, privacy and reliability both matter.

Federated unlearning sits at this intersection. It aims to protect data rights, but may introduce risks that are not widely understood. If these risks are ignored, systems designed to enhance trust could themselves be undermined.

Canada is at an important juncture in shaping how AI systems are governed. Policies around data deletion, accountability and transparency are evolving rapidly.

Federated unlearning will likely become part of this landscape. As it’s adopted, it must be treated with the same level of scrutiny as other security-critical mechanisms.

The challenge is no longer just to make AI forget data. It is to ensure that, in the process of forgetting, we are not allowing something more dangerous to remain.

Abbas Yazdinejad, Assistant Professor, Department of Computer Science, University of Regina and Ann Fitz-Gerald, Director and Professor, International Security, Wilfrid Laurier University, Balsillie School of International Affairs

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.


From tracking to AI: how 5 popular workout apps handle user data and privacy

Personalized training is spiking in popularity, and so are AI alternatives that may be more affordable. But as technology promises to help you reach your goals, it also adds new risks to your personal information. This study, conducted by Surfshark, uncovers the hidden cost of digital fitness — revealing that apps link the data they collect to your identity, track you, and now use it for AI training.



Key insights

  • Google Trends reveals a clear pattern: the search term “fitness” spikes globally every January. Since 2022, the highest value was recorded in January 2026, reaching a score of 100. This score indicates peak search interest on a relative scale from 0 to 100, where 100 represents the highest interest during the chart's time period. On average, each January sees a 23% rise in search interest compared to the preceding December for each year in the analyzed period. April marks the start of the climb, building toward the next peak in summer. On average, the growth from April to the peak month in summer was approximately 13%. The January spike is likely driven by New Year's resolutions, whereas the increased interest in spring might be linked to people focusing on getting in shape for summer. However, researchers note that global physical inactivity levels haven't changed much in 20 years, with approximately 80% of adolescents and one in three adults worldwide not meeting the World Health Organization (WHO) physical activity guidelines.¹

  • Technology, especially AI, is increasingly transforming the fitness industry and could shift how these challenges are addressed. By analyzing user data, AI has the potential to create highly personalized fitness experiences, tailoring workout plans to individual progress and goals. This demand is reflected in the increasing global interest in personal training, as indicated by Google Trends data, which shows notable growth in searches since 2025. To illustrate with numbers, the score in January 2025 was 37, while in winter 2026, it reached a peak of 100 during the analyzed period. This represents a 2.7-fold increase. Last year, the peak was in August, with a score of 75, and growth began in April. But this year, interest has been high right from the start, hinting it might stay strong all year long. While traditional personal training can be costly, AI may seem like a more accessible alternative.

  • All the apps analyzed² incorporate AI features to improve user experience. However, with this advancement, such apps might also use personal data for AI development, which could lead to privacy concerns. For example, Strava uses information gathered from users to enhance the quality, reliability, and/or accuracy of their AI features by creating, developing, training, testing, improving, and maintaining AI and ML models run by Strava or its service providers.³ However, they state that, where possible, they use aggregated, de-identified information for AI features. In the case of Peloton, they use collected data to build, train, analyze, and improve the accuracy of their services, enhance products, and increase operational efficiency. While Peloton may use third-party AI service providers, they explicitly state that any personal data processed by these technologies is strictly for enhancing their services.⁴

  • Among the top workout apps analyzed, Strava collects the most data linked to user identity, gathering 20 out of 35 data types listed in the Apple App Store. For example, these data types include location, purchase and search history, photos and videos, and other user content. Nike Training Club follows closely with 19 data types, while Peloton collects the least, with only 2 data types. Although many of these data types may be essential for app functionality, they can also be used for purposes such as advertising, analytics, product personalization, and more. For example, Ladder uses only 3 out of 10 data types linked to users for app functionality, but collects 7 data types for product personalization and employs 6 for analytics. Companies may also access and use additional sensitive biometric data when these apps connect to wearables or third-party services.

  • Furthermore, 4 out of the 5 analyzed apps also use data for tracking, as stated by app developers in the information provided on the Apple App Store, with Apple Fitness+ being the exception. “Tracking” refers to linking user or device data collected from the app — such as a user ID, device ID, or profile — with user or device data collected from other apps, websites, or offline properties for targeted advertising purposes. Tracking also refers to sharing user or device data with data brokers.⁵

Methodology and sources

This study is divided into two main parts to explore fitness trends and the data collection practices of popular workout apps. The first part utilizes Google Trends to analyze search interest in “fitness” and “personal training” from January 1, 2022, onwards. This timeframe was selected due to enhancements in data collection since that date, allowing for a more accurate identification of global patterns and shifts in these topics over time.

The second part looks into how the five top workout apps for iPhone — Strava, Nike Training Club, Peloton, LADDER, and Fitness+ — handle data collection. These apps were selected from a CNET list² based on the largest number of monthly active users in 2025, as reported by Similarweb, with the exception of the preinstalled Fitness+, for which such data was not available. However, Fitness+ is likely used by most Apple device owners due to its default presence. We examined their data collection practices using information from the Apple App Store and reviewed their privacy policies for any details related to AI model training.

By combining these approaches, the study aims to provide a clear picture of current fitness interests and underscore the importance of data privacy in the digital fitness landscape.

DIW Editor's note: This analysis is based on Google Trends data, Apple App Store privacy labels, and publicly available company privacy policies. Google Trends reflects relative search interest rather than direct user behavior, but is widely used to identify broad interest patterns. App Store privacy labels are self-reported by developers within Apple’s standardized disclosure framework. Statements about AI and data use are derived from policy disclosures and may not reflect full technical implementation or all internal processing practices.


Data was collected from:

Google Trends (2026). Explore search trends; Apple (2026). App Store.

References:

¹ Ramírez Varela, A., Bauman, A., Woods, C.B. et al. (2026). Low global physical activity despite two decades of policy progress;

² CNET (2026). The 7 Best Workout Apps That Are Fitness Expert-Approved;

³ Strava (2026). Privacy Policy;

⁴ Peloton (2025). Privacy Policy;

⁵ Apple (2026). User privacy and data use.

This post was originally published on Surfshark and republished on DIW with permission.

Reviewed by Asim BN.


Algorithms don’t care: how AI worsens the double burden for Indonesia’s female gig workers

Suci Lestari Yuana, Universitas Gadjah Mada

Artificial intelligence is often celebrated as the future of work. It is efficient, innovative and neutral. Yet, for many women in Indonesia’s gig economy, AI feels like a source of mounting pressure.

In my recent research on female gig workers in Indonesia, I examine what I call AI colonialism. This term describes how colonial influence persists today through technology and digital systems that maintain control.

This concept captures how powerful actors use AI – often based in the Global North – to exploit workers in the Global South. Much like historical colonialism, this digital iteration relies on the extraction of data, labour and resources to cement unequal power relations.

In Indonesia, AI-driven platforms like ride-hailing and e-commerce draw on informal labour but push the risks and responsibilities back onto workers. But women pay the highest price because algorithms fail to recognise the realities of care work, safety concerns and social norms.

AI and the gendered restructuring of work

Indonesia’s labour market has long been defined by informality. Millions are working without formal contracts or social protections. Tech companies like Gojek, Grab, Maxim and Shopee didn’t formalise this workforce – they only digitised it.

Image: Grab / Unsplash

Drivers are classified as partners rather than employees. This means no minimum wage, no sick pay and no maternity leave. Income is dictated entirely by completed tasks and algorithmic ratings.

For women, this structure collides with the so-called “double burden” since they are responsible for paid work and unpaid care.

Lia, a 33-year-old food delivery rider, wakes before sunrise to cook and get her children ready for school. It is only after she has cleared her domestic duties that she finally logs into the app.

“The system doesn’t know I have children,” she told me. “It only knows whether I am online.”

Platform algorithms reward constant, uninterrupted availability. Incentive schemes demand a specific number of trips within narrow time windows – a high bar for those with domestic ties.

If Lia logs off to pick up her children, she risks losing potential bonuses. If she reduces her hours due to menstrual pain or fatigue, her performance metrics drop.

Neoliberal capitalism relies on a massive amount of unpaid “invisible labour”, such as childcare and housework, but refuses to pay for it or provide a safety net for those who do it. Far from correcting this imbalance, AI systems make things worse.

When Cinthia, a female food delivery rider and a single mother of a one-year-old, fell ill and turned off her app for several days, she noticed fewer job offers upon returning. “It felt like the system punished me,” she said. “Now I’m afraid to stop working.”

The algorithm does not explicitly discriminate. However, it operates on the assumption of a worker without caregiving constraints – a norm that systematically disadvantages women.

Discrimination behind a ‘neutral’ interface

The digital economy often claims neutrality. But gender bias persists.

Yanti, a 43-year-old ride-hailing driver in Yogyakarta, regularly messages male passengers before pickup: “I am a woman driver. Is that okay?”

Many cancel immediately.

The app records cancellations. It does not record gender bias.

Because Yanti avoids working late at night for safety reasons, she misses out on rush-hour incentives. The system, however, doesn’t account for safety – it simply interprets her absence as lower productivity.

Scholars like Virginia Eubanks have pointed out that automated systems often mirror and amplify social inequalities rather than eliminate them.

In Indonesia’s platform economy, discrimination isn’t necessarily hard-coded. It is a byproduct of a design logic that favours efficiency over equity.

In India, women drivers also report earning less on average than their male counterparts, partly due to safety-driven choices regarding timing and route selection. The algorithm does not account for risk in its calculations. It only measures raw output.

Safety, surveillance and algorithmic discipline

For women drivers, safety is a constant negotiation.

Around 90% of the women in our focus group discussions chose food delivery because it felt safer than ride-hailing. Even so, harassment persists in delivery work.

Lia shared how a male colleague targeted her with inappropriate comments as they waited for orders. “It’s not only customers,” she said. “Sometimes it’s other drivers.”

During the COVID-19 pandemic, gig workers were labelled “essential”. Yet their income dropped by as much as 67% in early 2020. To cover the loss, many worked 13 or more hours per day.

Platforms maintained their rigid performance metrics throughout the crisis. Drivers who were forced to stop working due to illness often saw their ratings decline. Health vulnerability was translated directly into an algorithmic penalty.

This reflects labour discipline through digital infrastructure: control shifting from foreman to code.

AI colonialism is more than just foreign ownership. It is about the way extractive logics are woven into everyday digital systems. Workers bear the burden of labour, data, time and risk – yet the platforms hold all the power over algorithmic governance.

Coping, solidarity and everyday resistance

Female gig workers have built dense networks of solidarity through WhatsApp and Telegram groups. They share information about policy changes, warn each other about unsafe customers and exchange strategies for navigating algorithmic shifts.

If an account becomes “gagu/silent” (receiving few orders), experienced drivers “warm it up” by temporarily boosting its activity. They lend money for fuel. They pool resources for vehicle repairs.

When someone faces harassment, others circulate the information quickly to protect fellow drivers. They visited the platform office together when a member was suspended.

Rather than waiting to be formally acknowledged as employees, these women build protection among themselves. This “solidarity over recognition” emerges from shared vulnerability as mothers, caregivers and workers in male-dominated spaces.

Their mutual aid turns care into a strategy and a form of “everyday resistance” – subtle acts that challenge dominant systems, while reflecting a distinctly feminist ethic of survival through relational solidarity.

Beyond innovation narratives

AI is not colonial by design. But when embedded in platform capitalism within unequal societies, it can reproduce colonial patterns of exploitation and loss of ownership.

If we are serious about building just digital futures, we must move beyond innovation narratives and listen to workers, especially women and vulnerable groups in the Global South.

Their stories are a vital reminder that behind every “efficient” algorithm is a human being navigating the delicate balance of survival, dignity and hope.

Suci Lestari Yuana, Lecturer at the Faculty of Social and Political Sciences, Universitas Gadjah Mada

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Irfan Ahmad.
