Wednesday, December 10, 2025

Google Expands Android’s Safety Features With Emergency Live Video Rollout

Google has begun rolling out Emergency Live Video on Android, introducing a way for users to share real-time visual information with emergency responders during calls or texts. The feature allows dispatchers to send a request to a user’s device when they determine that viewing the scene would help them assess the situation and provide timely assistance.

Users receive an on-screen prompt and can choose whether to share their camera feed. The stream is encrypted by default, and users retain full control throughout the process, with the ability to stop transmission instantly. The feature requires no setup and is designed to operate through a single, direct action on the user’s device.

Emergency Live Video is intended to support responders in evaluating incidents such as medical crises or fast-moving hazards, and it can help them guide callers through urgent steps until aid arrives. The capability expands Google’s existing emergency-focused tools, including Emergency Location Service, Car Crash Detection, Fall Detection and Satellite SOS.

The rollout begins across the United States and select regions in Germany and Mexico. Devices running Android 8 or later with Google Play services support the feature. Google is working with public safety organizations worldwide to extend availability, and interested agencies can access partner documentation.


Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Studies Reveal Severe Gen Z Burnout and Recommend Stronger Workplace Support and Clearer Expectations
by Asim BN via Digital Information World

Studies Reveal Severe Gen Z Burnout and Recommend Stronger Workplace Support and Clearer Expectations

Gen Z workers are reporting some of the highest burnout levels ever recorded, with new research suggesting they are buckling under unprecedented levels of stress.

While people of all ages report burnout, Gen Z and millennials are reporting “peak burnout” at earlier ages. In the United States, a poll of 2,000 adults found that a quarter of Americans are burnt out before they turn 30.


Image: Vitaly Gariev / Unsplash

Similarly, a British study that measured burnout over an 18-month period after the COVID-19 pandemic found that 80 per cent of Gen Z respondents reported burnout. Higher levels of burnout among the Gen Z cohort were also reported by the BBC a few years ago.

Globally, a survey covering 11 countries and more than 13,000 front-line employees and managers reported that Gen Z workers were more likely to feel burnt out (83 per cent) than other employees (75 per cent).

Another international well-being study found that nearly one-quarter of 18- to 24-year-olds were experiencing “unmanageable stress,” with 98 per cent reporting at least one symptom of burnout.

And in Canada, a Canadian Business survey found that 51 per cent of Gen Z respondents felt burnt out — lower than millennials at 55 per cent, but higher than boomers at 29 per cent and Gen X at 32 per cent.

As a longstanding university educator of Gen Z students, and the father of two members of this generation, I find the levels of Gen Z burnout in today’s workplace astounding. Rather than dismissing young workers as distracted or too demanding of work-life balance, we might consider that they’re sounding the alarm about what’s broken at work and how we can fix it.

What burnout really is

Burnout can vary from person to person and across occupations, but researchers generally agree on its core features. It occurs when there is conflict between what a worker expects from their job and what the job actually demands.

That mismatch can take many forms: ambiguous job tasks, an overload of tasks, or lacking the resources or skills needed to meet a role’s demands.

In short, burnout is more likely to occur when there’s a growing mismatch between one’s expectations of work and its actual realities. Younger workers, women and employees with less seniority are consistently at higher risk of burnout.

Burnout typically progresses across three dimensions. Fatigue is often the first noticeable symptom; the second is cynicism or depersonalization, which leads to alienation and detachment from one’s work. This detachment leads to the third dimension of burnout: a declining sense of personal accomplishment or self-efficacy.

Why Gen Z is especially vulnerable to burnout

Several forces converge to make Gen Z particularly susceptible to burnout. First, many Gen Z workers entered the workforce during and after the COVID-19 pandemic.

It was a time of profound upheaval, social isolation and shifting work protocols and demands. These conditions disrupted the informal learning that typically happens through everyday interactions with colleagues, interactions that were hard to replicate in a remote workforce.

Second, broader economic pressures have intensified. As American economist Pavlina Tcherneva argues, the “death of the social contract and the enshittification of jobs” has broken the expectation that a university education would result in a well-paying job, leaving many young people navigating a far more precarious landscape.

Intensifying economic disruption, widening inequality, rising housing and living costs and the spread of precarious employment have put greater financial pressure on this generation.

A third factor is the restructuring of work that is taking place under artificial intelligence. As workplace strategist Ann Kowal Smith wrote in a recent Forbes article, Gen Z is the first generation to enter a labour market defined by a “new architecture of work: hybrid schedules that fragment connection, automation that strips away context and leaders too busy to model judgment.”

What can be done?

If you’re reading this and feeling burnt out, the first thing to know is that you’re not overreacting and you’re not alone. The good news is, there are ways to recover.

One of burnout’s most overlooked antidotes is combating the alienation and isolation it produces. The best way to do this is by building connection and relationships with others, starting with work colleagues. This could be as simple as checking in with a teammate after a meeting or setting up a weekly coffee with a colleague.

In addition, it’s important to give up on the idea that excessive work is better work. Set boundaries at work by blocking out time in your calendar and clearly signalling your availability to colleagues.

But individual coping strategies can only go so far. The more fundamental solutions must come from workplaces themselves. Employers need to offer more flexible work arrangements, including wellness and mental health supports. Leaders and managers should communicate job expectations clearly, and workplaces should have policies to proactively review and redistribute excessive workloads.

Kowal Smith has also suggested building a new “architecture of learning” in the workplace that includes mentorship, provides feedback loops and rewards curiosity and agility.

Taken together, these workplace transformation efforts could humanize the workplace, lessen burnout and improve engagement, even at a time of encroaching AI. A workplace that works better for Gen Z ultimately works better for all of us.

Nitin Deckha, Lecturer in Justice Studies, Early Childhood Studies, Community and Social Services and Electives, University of Guelph-Humber

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily


by Web Desk via Digital Information World

Research Tracks 8,324 U.S. Children, Identifying Social Media as a Risk Factor for Growing Inattention

A longitudinal study published in Pediatrics Open Science that followed 8,324 children aged 9 to 14 in the United States has found that social media use is associated with a gradual increase in inattention symptoms. Researchers at Karolinska Institutet in Sweden and Oregon Health & Science University tracked the children annually for four years, assessing time spent on social media, television and videos, and video games alongside parent-reported attention measures.

On average, children spent 2.3 hours per day watching television or videos, 1.5 hours on video games, and 1.4 hours on social media. Only social media use was linked to growing inattention over time. The effect was small for individual children but could have broader consequences at the population level. Hyperactivity and impulsive behaviors were not affected.

The association remained consistent regardless of sex, ADHD diagnosis, genetic predisposition, socioeconomic status, or ADHD medication. Children with pre-existing inattention symptoms did not increase their social media use, indicating the relationship primarily runs from use to symptoms.

Researchers note that social media platforms can create mental distractions through notifications and messages, potentially reducing the ability to focus. The study does not suggest all children will experience attention difficulties but highlights the importance of informed decisions regarding digital media exposure.

The research team plans to continue monitoring the participants beyond age 14. The study was funded by the Swedish Research Council and the Masonic Home for Children in Stockholm, with no reported conflicts of interest.

Source: “Digital Media, Genetics and Risk for ADHD Symptoms in Children – A Longitudinal Study,” Pediatrics Open Science, 2025.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.


Image: Vikas Makwana / unsplash

Read next: Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily
by Asim BN via Digital Information World

Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily

A new Pew Research Center survey of 1,458 U.S. teens shows how central digital platforms and AI tools have become in their daily lives. Nearly all teens (97 percent to be exact) go online each day, and four in ten say they are online almost constantly. Older teens report higher levels of constant use than younger teens, and rates are even higher among Black and Hispanic teens.

YouTube remains the most widely used platform, with roughly nine in ten teens (92 percent to be exact) reporting any use and about three-quarters (76%) visiting it daily.

According to the Pew survey, six in ten teens say they use TikTok daily, 55 percent say the same of Instagram, and 46 percent use Snapchat daily. Facebook and WhatsApp see lower use. Platform preferences vary across demographic groups, with girls more likely to use Instagram and Snapchat, and boys more likely to use YouTube and Reddit.

AI chatbot use is also widespread. Sixty-four percent of teens say they use chatbots, and about three in ten do so daily. Daily use is more common among Black and Hispanic teens and among older teens. ChatGPT is the most widely used chatbot, at 59%, followed by Gemini and Meta AI. Teens in higher-income households use ChatGPT at higher rates, while Character.ai is more common among teens in lower- and middle-income homes.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Smart Devices Are Spying More Than You Think; Privacy Labels Offer Crucial Clues
by Ayaz Khan via Digital Information World

Smart Devices Are Spying More Than You Think; Privacy Labels Offer Crucial Clues

Smart gadgets collect vast amounts of our personal data through their apps. It’s usually unclear why the manufacturers need this information or what they do with it. And I don’t just mean smartphones. All kinds of devices are quietly mining us, and few people have any idea it’s happening.

Some brands of air fryers, for instance, request permission to listen in on conversations. Smart toys can also listen to and record conversations, not to mention the child’s name, age and birthday. Meanwhile, certain TVs insist on seeing all the apps on your phone.

It’s a bit of a barcode lottery: data collection varies from brand to brand and from one operating system to another, making it even harder for consumers to get on top of this situation. For instance, Android phone users who have smart speakers like Amazon Echo or Google Nest have to share much more personal data than those with Apple iOS devices.

If you think this all sounds worrying, you’re not alone. A 2024 study by the UK Information Commissioner’s Office (ICO) found that participants were concerned about the excessive and unnecessary amount of personal information being collected by devices.

Unlike with those air fryers, much data gathering takes place without the user even having to give explicit permission. If you’re wondering how this is legal given the explicit consent requirements of the General Data Protection Regulation (GDPR), the answer lies in the lengthy technical policies buried in the fine print of privacy notices. Most consumers skim-read these or find them difficult to understand, leaving them with little sense of the choices they are making.

Privacy nutrition labels

It seems to boil down to two options. We share our personal data with the apps of smart devices and hope they will only collect routine information, or we opt out and usually have to live with limited functionality or none at all.

However, there is a middle ground that most people are unaware of: privacy nutrition labels. These allow you to take some control by understanding what personal data your gadgets are collecting, without struggling through the privacy blurb.

The trouble is they are difficult to find. They are not mentioned by consumer magazine Which? or the ICO, perhaps because they are only “recommended” by the UK government and the Federal Communications Commission in the US. Yet despite not being legally binding on manufacturers, these privacy labels have become the norm when it comes to smartphone apps, while other smart devices are gradually catching up.

Ironically, this solution came from the pioneers of smart gadgets, Apple and Google. They voluntarily adopted the idea after it was proposed by researchers in 2009 as a way of informing users that their data was being collected.

Experts at Rephrain, the UK’s National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, have developed a step-by-step guide to help consumers find privacy labels on iPhones and Android phones.

[Step-by-step guide images: Rephrain, CC BY-SA]

Once you find the relevant privacy label for the device in question, you’ll see practical, concise information about what data the app collects and why. Two sections list the types of data collected: “Data Used to Track You” and “Data Linked to You” for iPhones, and “Data Shared” and “Data Collected” for Android.

By reading the privacy label before making a purchase, consumers can decide if they are comfortable with the data collected and the way it is handled.

For example, I checked the privacy label of the app for the smart toothbrush I planned to get my husband this [holiday]. I found out it collects the device ID to track users across apps and websites owned by other companies, and data linked to identity such as location and contact information.

So before purchasing smart devices for your loved ones this [holiday], check the privacy labels of their apps on your smartphone. You may be surprised by what you find. This [holiday] season, don’t just give someone a lovely present – give them the gift of data control at the same time.

Dana Lungu, Research Associate Research Institute for Sociotechnical Cyber Security, University of Bristol

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Location Data From Apps and Carriers Enables Tracking Without Warrants


by Web Desk via Digital Information World

Tuesday, December 9, 2025

The New Era of Mobile App Deception

Retail apps now process their highest transaction volumes during peak periods. A single exploited app during these windows can compromise millions of stored-card checkouts, gift card loads and loyalty redemptions. Cybercriminals are racing to deploy AI-themed malware and cloned apps faster than security teams can respond.


Here’s what that looks like in practice: A user searches for ChatGPT or DALL·E on their mobile, and within seconds, dozens of apps appear — each claiming to offer smart chat, image generation or another AI-driven feature. On the surface, they look legitimate, but behind the familiar look and feel of these clones sits a spectrum of threats, from harmless wrappers and aggressive adware to fully developed spyware.

According to recent research, fake iOS apps have grown to nearly three times the usual volume, and fake Android apps to nearly six times their usual volume.

The same pattern is showing up across the tech world. A recent Coinbase Base hackathon offered a $200,000 prize and drew more than 500 developers. Several of the winning projects were later accused of being empty apps linked to company employees. The situation shows how easy it’s become to fool people with something that looks polished, even when the app itself does very little.

Before hitting download, users need to understand the full range of fake apps now circulating, how these clones hide, how they trick people and which red flags to watch out for.

Inside the Spectrum of Fake Apps

Appknox researchers recently examined three apps pretending to be ChatGPT, DALL·E and WhatsApp. The apps posing as ChatGPT and DALL·E weren’t tools at all. They behaved like hidden app stores that could quietly install or delete software on a phone. The WhatsApp clone, known as WhatsApp Plus, went even further and acted as full spyware with access to messages, contacts and call logs. These findings illustrate the spectrum of mobile deception and help explain why fake apps are harder to spot.

Some apps sit at the low end and act as simple wrappers that use familiar names, but connect to real services that behave honestly. Others sit in the middle of the spectrum and imitate trusted brands to attract downloads, but don’t actually deliver anything meaningful. At the high-risk end of the spectrum are malicious clones that hide harmful systems behind trusted branding and user-friendly interfaces.

A lot of fake apps blend in so well that users have little reason to suspect anything is wrong until the app has already been installed. Familiar branding and clean design are no longer reliable signals of safety.

ChatGPT Wrapper Illustrates Imitation Without Deception

At the low end of the spectrum, the Appknox researchers looked at the unofficial ChatGPT Wrapper app. The app behaves exactly as described: it sends user text to the OpenAI API and returns the results without extra processing. Appknox researchers found no hidden modules or obfuscated code, and no background activity that suggested anything harmful. It asked only for basic permissions and avoided access to contacts, SMS or account information.

Its behavior matches its description, but this level of transparency is rare among AI-themed apps. Many apps copy the look of AI tools while hiding unrelated systems inside. The ChatGPT Wrapper does the opposite, offering a simple service and making its function clear. It shows that unofficial apps aren’t automatically dangerous; some exist to fill gaps in official offerings without misleading users.

The wrapper also demonstrates why users must evaluate app behavior rather than brand resemblance. A familiar name doesn’t guarantee safety, and an unofficial name doesn’t guarantee risk. The real issue is whether the app performs the function it claims to without hiding additional processes.
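To make concrete what a wrapper this thin actually does, here is a minimal sketch of the request such an app would construct: the user’s text is forwarded to the API unchanged, with nothing hidden attached. The endpoint URL, model name and payload shape are illustrative assumptions on my part, not details taken from the Appknox report.

```python
import json

# Assumed OpenAI chat completions endpoint (illustrative, not from the report).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(user_text: str, model: str = "gpt-4o-mini") -> dict:
    """Build the JSON payload a transparent wrapper would send: the user's
    text passes through with no extra processing, logging or hidden fields."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

# The entire "app" is this payload plus an HTTP POST of it to API_URL.
payload = build_request("Hello")
print(json.dumps(payload))
```

The point of the sketch is the absence of anything else: no contact access, no secondary downloads, no analytics calls riding along with the user’s text.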

DALL·E Lookalike Pretends to Be AI

In the middle of the spectrum, the researchers found the DALL·E 3 AI Image Generator app on Aptoide, a third-party Android app store that allows anyone to upload apps with little review. That alone is a warning sign.

This one looks convincing and uses branding that resembles an official OpenAI service. The color scheme and icons match expectations. When the app opens, a loading animation suggests an AI model is creating an image. Everything is designed to feel familiar and trustworthy.

When Appknox researchers looked inside the app, they found no AI system at all. There is nothing inside that can generate images or run a model. Instead, the app connects only to advertising platforms like Adjust, AppsFlyer, Unity Ads and Big Ads. These connections activate immediately when the app is launched. No user content is processed, and no image is created. All activity is tied to ads.

Its internal identifiers also offer important clues. They match template-based apps that can be quickly repackaged and released under different names. This suggests the app was assembled by reusing a generic kit, then dressed up to look like an AI tool so it would attract downloads.

WhatsApp Plus Hiding a Full Spyware System

At the far end of the spectrum sits WhatsApp Plus. This app presents itself as an enhanced version of WhatsApp, but inside, it contains a full surveillance system. It uses a fraudulent certificate and relies on the Ljiami packer, which hides encrypted code inside secondary folders that activate after installation. The hidden modules give the app persistent access to the device.

Once it's installed, WhatsApp Plus asks for extensive permissions that far exceed what a messaging app needs, like the ability to read and write contacts, access SMS messages, retrieve device accounts, collect call logs and send messages on behalf of the user. The app can then intercept verification codes, scrape address books, impersonate the user and interfere with identity-based authentication.

Security platforms classify this app as spyware and Trojan malware. At first, it looks polished and works like any other app, but once installed, it behaves like an active surveillance tool. In addition to data theft, the app can take over messaging accounts and disrupt banking or financial flows that rely on SMS for verification.

How to Protect Against Brand Abuse and Malicious Clones

As the number of unofficial apps continues to grow, brand trust itself has become an attack vector. Bad actors are focusing more on duplicating legitimate sites and making them appear credible than on creating new malware. This makes it easier for fake apps to hide in plain sight. Here are a few practical steps that can help reduce the risk:

  1. Download from trusted stores only. Stick to Google Play or the Apple App Store. Third-party stores allow anyone to upload apps with minimal review, which makes it easier for clones and malware to sneak in.
  2. Check the developer name. Make sure the publisher listed is the real company. If an app claims to be from OpenAI or Meta but lists an unfamiliar developer, that’s a red flag.
  3. Look closely at permissions. Be cautious if an app asks for access it doesn’t need, such as contacts, call logs or the microphone. Many fake apps count on users tapping “allow” without thinking.
  4. Notice how an app behaves. Beware of an app that keeps running after it closes, shows unexpected ads or tries to install other apps.
  5. Watch for copycat branding. Cloned apps often reuse logos, color schemes and names that are close but not exact. Other warning signs include misspellings, extra words or “plus” and “pro” versions.
  6. Report suspicious apps. If something feels off, report the app through the store. Quick reporting helps protect other users.
  7. Use a mobile security tool. Security apps that check behavior, permissions and network activity can catch threats that appear harmless.
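The permissions check in step 3 can be sketched as a simple filter. The permission strings below are real Android permission names, but the “suspicious” list is an illustrative assumption for a basic chat or image app, not an official blocklist — what counts as excessive always depends on what the app legitimately does.

```python
# Permissions that a simple chat or image-generation app has no obvious need
# for (illustrative set, not an official Android blocklist).
SUSPICIOUS = {
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.READ_CALL_LOG",
    "android.permission.RECORD_AUDIO",
}

def flag_permissions(requested: list[str]) -> list[str]:
    """Return the requested permissions that deserve a second look."""
    return sorted(p for p in requested if p in SUSPICIOUS)

# A clone asking for SMS and call logs, as WhatsApp Plus does, gets flagged:
print(flag_permissions([
    "android.permission.INTERNET",
    "android.permission.READ_SMS",
    "android.permission.READ_CALL_LOG",
]))
```

A mobile security tool performs a more sophisticated version of this same comparison, weighing each requested permission against the app’s stated purpose.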

The examples uncovered by Appknox researchers mark a turning point. Fake apps no longer stand out, and familiar branding won’t guarantee safety. Mobile security now depends on understanding how modern apps behave and paying attention to the small signals that something seems off. With clear, upfront behavior checks, users have a much better chance of spotting deception and stopping it before it causes harm.


About the Author

Subho Halder is the CEO and co-founder of Appknox, a globally recognized mobile security testing platform. A leading security researcher, Subho is the mastermind behind AFE, known for uncovering critical vulnerabilities in Google, Apple, and other tech giants. A frequent speaker at BlackHat, Defcon, and top security conferences, he is a pioneer in AI-driven threat detection and enterprise security. As CEO, he drives Appknox’s vision, helping organizations proactively safeguard their mobile applications.

Read next:

• Replace Doom Scrolling with Microlearning Apps and Boost Focus in 2025

• WhatsApp Tests Strict Security Mode and AI Editing Tools in New Android Betas
by Web Desk via Digital Information World

EU Accepts Meta’s Updated Ad Model for January 2026 Rollout

EU regulators have accepted Meta’s revised advertising model for Facebook and Instagram, with the company set to present the new options to users in January 2026. The decision removes the immediate risk of daily fines under the Digital Markets Act (DMA).

The updated pay-or-consent model introduces two choices. Users may allow full data sharing for fully personalised advertising or restrict data use for a more limited form of personalisation. The European Commission noted that this marks Meta’s first time offering such an alternative on its social platforms.

The approval comes after an April 2025 non-compliance decision that included a €200 million fine for violations covering November 2023 to November 2024. Meta subsequently adjusted the proposal’s wording, design, and transparency features while retaining its overall structure.

Meta faced potential daily fines under the DMA framework, up to 5% of average daily worldwide turnover. The Commission's approval eliminates this immediate penalty risk.

After rollout in January 2026, the Commission will assess how the model functions by gathering feedback and evidence from Meta and relevant stakeholders. The Commission restated that EU users "must have full and effective choice", as required under the DMA.


Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-Aigen

Read next:

• Has OpenAI Sacrificed Morality for Shareholder Profits in Its Ten-Year Journey?

• Replace Doom Scrolling with Microlearning Apps and Boost Focus in 2025
by Asim BN via Digital Information World