Showing posts with label Social Media.

Monday, December 15, 2025

Small Businesses Face Growing Cybercrime Burden, Survey Shows

More than four in five small businesses experienced a cybercrime in 2025, according to the Identity Theft Resource Center’s Business Impact Report. The nonprofit surveyed 662 owners and executives at companies with 500 or fewer employees, including solopreneurs, about incidents over the previous 12 months.

The findings show that 81 percent of respondents suffered a security breach, a data breach, or both. AI-enabled attacks were identified as a root cause in more than 40 percent of incidents, reflecting a shift towards more technologically advanced external threats.

The financial impact was notable, with 37 percent of affected businesses reporting losses exceeding $500,000.

To manage these costs, 38.3 percent of small business leaders reported raising prices, a burden the report describes as an invisible "cyber tax" that contributes to inflation. The report also notes declining confidence in cybersecurity preparedness and reduced use of basic security measures, despite growing concern about AI-driven risks.

Over one-third of breached small businesses reported losses exceeding $500,000, highlighting severe cybercrime costs.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Online shopping makes it harder to make ethical consumption choices, research says
by Ayaz Khan via Digital Information World

Online shopping makes it harder to make ethical consumption choices, research says

By Caroline Moraes - Professor of Marketing and Consumer Research.

Image: Vitaly Gariev / Unsplash

As the Christmas shopping period begins in earnest following Black Friday and Cyber Monday, new research led by the University of Birmingham and the University of Bristol sheds light on how consumers’ environmental and social concerns fail to translate into ethical purchasing actions during online shopping.

The study, published in the Journal of Business Ethics, explores how competitive shopping environments and marketing tactics can influence moral decision-making among consumers. It reveals that the intense focus on bargains and limited-time offers, such as those prevalent during the festive sales periods, can lead shoppers to discount any concerns they may have about sustainability or fair labour, in pursuit of a deal.

Caroline Moraes, Professor of Marketing and Consumer Research from the Centre for Responsible Business at the University of Birmingham, and co-author of the study, said: "Our findings show that the tactics used by online shops create tensions between ethical intentions and actual behaviour. Many consumers aspire to shop responsibly by buying sustainably and ethically made products. But the design of websites and the urgency and excitement that people experience across online shopping platforms, which increase even further during events like Black Friday and Boxing Day sales, can often override these values.”

The qualitative study examined how self-described ‘ethically oriented’ consumers practice online shopping for clothes.

"Buying a loved one a gift or purchasing new clothes during the festive season shouldn’t come at the cost of our values and the environment." Prof Caroline Moraes, University of Birmingham

Dr Fiona Spotswood, Associate Professor in Marketing and Consumption at the University of Bristol Business School, and lead author of the study, said: “We paid attention to how participants navigated existing digital retail websites, how they balance social and environmental information with other product information, and how they perform online shopping routines.”

The paper outlines that ethical decision-making is inhibited by some key characteristics of online shopping, including:

  • Websites designed for passive, habitual scrolling and browsing.
  • Price and aesthetic appeal placed front and centre in products’ selling points, rather than ethical factors.
  • A lack of information about products’ ethical and environmental sustainability credentials.
  • Pressure to make an immediate purchase through limited-time deals.

The research calls for retailers to adopt responsible marketing practices, ensuring transparency and fairness in promotional strategies and including ethical and sustainability criteria in their online shopping websites. It also urges consumers to reflect on the broader social and environmental impact of their purchases, particularly during peak shopping periods when ethical considerations are most likely to be compromised.

Professor Moraes said: “With more of us shopping online than ever before, our research serves as a timely reminder that people do want to be more ethical in their shopping practices, but it can be incredibly hard to act in that way. Businesses should take this into consideration when it comes to their e-commerce offering. Buying a loved one a gift or purchasing new clothes during the festive season shouldn’t come at the cost of our values and the environment.”

Four tips on how to shop more ethically online

  1. Pause before you purchase. If you recognise you have been scrolling/browsing for a long time, take a break and ask yourself if you or the person you are buying for really needs this before hitting purchase.
  2. Search for specific sustainable options. Look directly for eco-friendly products and brands that prioritise fair labour practices and that have this information easily available.
  3. Avoid overbuying. Resist the urge to stockpile just because it is on sale at the click of a button. Someone else might need that item more than you do.
  4. Re-style and/or purchase second-hand. If you are shopping for clothes, consider re-styling what you already have and/or purchase second-hand items that can help you create your very own versions of the new styles you see online.

About the author: Caroline Moraes is Professor of Marketing and Consumer Research at Birmingham Business School, University of Birmingham, UK.

This post was originally published on University of Birmingham and republished with permission.

Read next:

• Study Finds Higher Digital Skills Linked to Greater Privacy and Misinformation Concerns

• Human-AI Collaboration Requires Structured Guidance, Research Shows
by External Contributor via Digital Information World

Study Finds Higher Digital Skills Linked to Greater Privacy and Misinformation Concerns

A new cross-national study by researchers at University College London and the University of British Columbia finds that people with higher digital skills report greater concern about privacy, online misinformation, and work-life disruption linked to digital technologies.

The research, published in Information, Communication & Society, analyzed European Social Survey data from nearly 50,000 respondents across 29 European countries and Israel between 2020 and 2022. Participants’ views on privacy infringement, misinformation, and work interruptions were combined into a digital concern scale ranging from 0 to 1.
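
The article does not give the exact aggregation formula, but a common way to build a 0–1 index from survey items is to rescale each item and average them. The Python sketch below illustrates that approach with invented 5-point Likert responses; the item names, response range, and equal weighting are assumptions for illustration, not the study's documented method.

```python
import numpy as np

# Hypothetical 5-point Likert responses (1 = not at all concerned, 5 = very concerned)
# for the three concern items mentioned in the article. Values are illustrative only.
responses = {
    "privacy_infringement": np.array([4, 5, 3, 2, 4]),
    "misinformation":       np.array([5, 4, 4, 3, 5]),
    "work_interruptions":   np.array([2, 3, 1, 2, 4]),
}

def to_unit_scale(x, low=1, high=5):
    """Min-max rescale Likert responses to the 0-1 range."""
    return (x - low) / (high - low)

# Equal-weight average of the rescaled items gives one 0-1 concern score per respondent.
concern_index = np.mean([to_unit_scale(v) for v in responses.values()], axis=0)
print(concern_index)          # per-respondent scores
print(concern_index.mean())   # sample mean, comparable to the 0.65 figure quoted below
```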

Millennials aged 25 to 44 reported higher concern levels than younger and older age groups. No significant differences were found by gender, income, or urban–rural residence. Concern was lowest in Bulgaria and highest in the Netherlands and the United Kingdom.

The study found that people with greater digital literacy expressed more concern, particularly among frequent internet users, suggesting increased awareness and exposure may heighten unease rather than reduce it.

UCL and UBC research links higher digital literacy to greater online privacy and misinformation concerns.

"Figure 2 depicts people’s digital concerns across the 30 countries. The results show an overall high level of digital concerns, with a mean of 0.65 on the 0–1 scale."

H/T: Taylor & Francis Group

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: The ‘AI Homeless Man Prank’ reveals a crisis in AI education
by Asim BN via Digital Information World

Sunday, December 14, 2025

The ‘AI Homeless Man Prank’ reveals a crisis in AI education

The new TikTok trend “AI Homeless Man Prank” has sparked a wave of outrage and police responses in the United States and beyond. The prank involves using AI image generators to create realistic photos depicting fake homeless people appearing to be at someone’s door or inside their home.

Learning to distinguish between truth and falsehood is not the only challenge society faces in the AI era. We must also reflect on the human consequences of what we create.

As professors of educational technology at Laval University and education and innovation at Concordia University, we study how to strengthen human agency — the ability to consciously understand, question and transform environments shaped by artificial intelligence and synthetic media — to counter disinformation.

Image: Kenny Eliason / Unsplash

A worrying trend

In one of the most viral “AI Homeless Man Prank” videos, viewed more than two million times, creator Nnamdi Anunobi tricked his mother by sending her fake photos of a homeless man sleeping on her bed. The scene went viral and sparked a wave of imitations across the country.

Two teenagers in Ohio have been charged for triggering false home intrusion alarms, resulting in unnecessary calls to police and real panic. Police departments in Michigan, New York and Wisconsin have issued public warnings that these pranks are wasting emergency resources and dehumanizing the vulnerable.

At the other end of the media spectrum, boxer Jake Paul agreed to experiment with the cameo feature of Sora 2, OpenAI’s video generation tool, by consenting to the use of his image.

But the phenomenon quickly got out of hand: internet users hijacked his face to create ultra-realistic videos in which he appears to be coming out as gay or giving make-up tutorials.

What was supposed to be a technical demonstration turned into a flood of mocking content. His partner, skater Jutta Leerdam, denounced the situation: “I don’t like it, it’s not funny. People believe it.”

These are two phenomena with different intentions: one aimed at making people laugh; the other following a trend. But both reveal the same flaw: that we have democratized technological power without paying attention to issues of morality.

Digital natives without a compass

Today’s cybercrimes — sextortion, fraud, deepnudes, cyberbullying — are not appearing out of nowhere.

Their perpetrators are yesterday’s teenagers: they were taught to code, create and publish online, but rarely to think about the human consequences of their actions.

Juvenile cybercrime is rapidly increasing, fuelled by the widespread use of AI tools and a perception of impunity. Young people are no longer just victims. They are also becoming perpetrators of cybercrime — often “out of curiosity,” for the challenge, or just “for fun.”

And yet, for more than a decade, schools and governments have been educating students about digital citizenship and literacy: developing critical thinking skills, protecting data, adopting responsible online behaviour and verifying sources.

Despite these efforts, cyber-bullying, disinformation and misinformation persist and are intensifying to the point of now being recognized as one of the top global risks for the coming years.

A silent but profound desensitization

These abuses do not stem from innate malice, but from a lack of moral guidance adapted to the digital age.

We are educating young people who are capable of manipulating technology, but sometimes unable to gauge the human impact of their actions, especially in an environment where certain platforms deliberately push the boundaries of what is socially acceptable.

Grok, Elon Musk’s chatbot integrated into X (formerly Twitter), illustrates this drift. AI-generated characters make sexualized, violent or discriminatory comments, presented as simple humorous content. This type of trivialization blurs moral boundaries: in such a context, transgression becomes a form of expression and the absence of responsibility is confused with freedom.

Without guidelines, many young people risk becoming augmented criminals capable of manipulating, defrauding or humiliating on an unprecedented scale.

The mere absence of malicious intent in content creation is no longer enough to prevent harm.

Creating without considering the human consequences, even out of curiosity or for entertainment, fuels collective desensitization as dignity and trust are eroded — making our societies more vulnerable to manipulation and indifference.

From a knowledge crisis to a moral crisis

AI literacy frameworks — conceptual frameworks that define the skills, knowledge and attitudes needed to understand, use and critically and responsibly evaluate AI — have led to significant advances in critical thinking and vigilance. The next step is to incorporate a more human dimension: to reflect on the effects of what we create on others.

Synthetic media undermine our confidence in knowledge because they make the false credible, and the true questionable. The result is that we end up doubting everything – facts, others, sometimes even ourselves. But the crisis we face today goes beyond the epistemic: it is a moral crisis.

Most young people today know how to question manipulated content, but they don’t always understand its human consequences. Young activists, however, are the exception. Whether in Gaza or amid other humanitarian struggles, they are experiencing both the power of digital technology as a tool for mobilization — hashtag campaigns, TikTok videos, symbolic blockades, coordinated actions — and the moral responsibility that this power carries.

It’s no longer truth alone that is wavering, but also our sense of responsibility.

The relationship between humans and technology has been extensively studied. But the relationship between humans through technology-generated content hasn’t been studied enough.

Towards moral sobriety in the digital world

The human impact of AI — moral, psychological, relational — remains the great blind spot in our thinking about the uses of the technology.

Every deepfake, every “prank,” every visual manipulation leaves a human footprint: loss of trust, fear, shame, dehumanization. Just as emissions pollute the air, these attacks pollute our social bonds.

Learning to measure this human footprint means thinking about the consequences of our digital actions before they materialize. It means asking ourselves:

  • Who is affected by my creation?
  • What emotions and perceptions does it evoke?
  • What mark will it leave on someone’s life?

Building a moral ecology of digital technology means recognizing that every image and every broadcast shapes the human environment in which we live.

Educating young people to not want to harm

Laws like the European AI Act define what should be prohibited, but no law can teach why we should not want to cause harm.

In concrete terms, this means:

  • Cultivating personal responsibility by helping young people feel accountable for their creations.
  • Transmitting values through experience, by inviting them to create and then reflect: how would this person feel?
  • Fostering intrinsic motivation, so that they act ethically out of consistency with their own values, not fear of punishment.
  • Involving families and communities, transforming schools, homes and public spaces into places for discussion about the human impacts of unethical or simply ill-considered uses of generative AI.

In the age of manufactured media, thinking about the human consequences of what we create is perhaps the most advanced form of intelligence.

Nadia Naffi, Associate Professor, Educational Technology, Université Laval and Ann-Louise Davidson, Innovation Lab Director and Professor, Educational Technology and Innovation Mindset, Concordia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Revealing the AI Knowledge Gap in Marketing, The Cost of Upskilling


by External Contributor via Digital Information World

Saturday, December 13, 2025

Revealing the AI Knowledge Gap in Marketing, The Cost of Upskilling

The marketing industry has always changed quickly, but artificial intelligence has accelerated that evolution to a pace that can feel impossible to keep up with. Now that features like AI and generative engine optimization are in the mix, marketers are forced to learn and adapt quickly in order to stay competitive.

Because of that, continuous learning is imperative, but many marketers are overworked, underpaid, and undertrained. With urgent tasks, ever-expanding workloads, smaller teams, and shrinking budgets, upskilling often gets postponed in favor of more time-sensitive matters.

A new study from Adobe for Business surveyed more than 400 American marketers to learn more about their tasks, whether they want to upskill, who pays for that training, and if the tools they use help or hurt their productivity. Adobe for Business provided Digital Information World with exclusive data from the survey, including breakdowns by career level and generation not available in the public report.

Extending beyond the job description

The study data revealed that more than one in five marketers reported undertaking 10 or more core responsibilities in their current positions. In addition to their regular duties, they are frequently assigned tasks beyond their job description, with more than half stating they have taken on extra duties outside of their previously agreed-upon tasks.

High-priority tasks can and will arise at any time, causing managers to request that their employees stop their current work to help. In fact, study participants disclosed receiving an average of five ad hoc tasks per week from their superiors.

According to the survey, the most common marketing role responsibilities were identified as marketing strategy (46%), social media marketing (41%), and content marketing (37%). But those who take on additional work tend to focus most on marketing strategy (14%), social media marketing (13%), and market research (10%) alongside their regular duties.

These additional tasks also vary by business size, as marketers at small businesses reported performing 26% more tasks than those at enterprise-level businesses. Workers at small- and medium-sized companies tend to carry out social media duties most often. Marketers at large businesses are handed additional marketing strategy work more frequently, and those at enterprise companies take on more project management tasks.

As more work is added to marketers’ plates, the need to familiarize themselves with, if not master, new specialities and training becomes imperative for success.

Marketers want to upskill, and many are funding their training from their own pockets

Marketers are witnessing the most sophisticated technological advancements in recent history, and many are doing what they can to learn new skills and incorporate them into their daily routines.

The study shared that nearly four in five marketers spent their personal time and money outside of work building new skills in the past year, averaging roughly 57 hours of learning off the clock. Of those who trained outside of working hours, more than three in four paid with money from their own pockets, investing an average of $310 in the past 12 months.

Learning and development trends fluctuate across generations. The report found that Gen X marketers spent more time upskilling than any other age group, with 79% seeking external training outside of regular working hours and averaging 69 learning hours in the past year. Gen Z marketers were only slightly behind on time spent advancing their knowledge, allocating 68 hours, but Gen Z (82%) had the highest percentage of marketers spending personal time expanding their skillset.

Nearly 80% of Millennials and 67% of Baby Boomers committed to outside training, but they spent the least time studying new skills, dedicating only 50 hours and 31 hours, respectively.

There are many different facets of marketing that professionals want to learn more about, but the top focus areas are:

  1. AI automation (39%)
  2. Graphic design (31%)
  3. Data analytics (27%)
  4. Leadership (26%)
  5. Data visualization (21%)
  6. Web development (20%)
  7. Email marketing (20%)
  8. TikTok (19%)

While employers expect marketers to understand and utilize AI, only 23% of survey respondents said they have received on-the-job training on the topic.

The data highlighted that every generation of marketers feels that learning AI automation skills is the most essential focus area to pursue outside of work. Baby Boomers feel most strongly about this, as 67% took time off the clock to learn more. While Millennials were the least likely to use personal time for upskilling, the report found that they received more AI automation training at work than any other generation.

AI training also varies by career level, and the data show that lower-level employees tend to receive the least of it: only 15% of entry-level workers reported receiving on-the-job AI training. Employees at the manager level (30%) were more likely to have received AI training while at work; however, more than half needed to broaden their AI capabilities outside of work.

How inefficiency costs companies and what that looks like

On top of marketers flagging that they are overworked and undertrained, they also reported that their employers often expect them to have access to and mastery of various software and tools, even if they have never used them before. Some even pay for the use of these tools instead of their employers footing the bill.

Roughly one in 10 marketers in the study stated they use eight or more tools weekly, and more than two in five pay out of their own pockets for tools they regularly use. Some feel these tools hold them back: respondents said they have lost an estimated 60 hours of productivity annually due to inefficient tools.

The tools marketers say hinder their efficiency the most are:

  1. Spreadsheets (26%)
  2. Collaboration tools (18%)
  3. Customer relationship management systems (13%)
  4. Email marketing platforms (11%)
  5. Social media management tools (11%)
  6. Project management (11%)

Marketers want to learn, so much so that they are willing to invest their own personal time and money to get the training they need to perform to the best of their abilities. Continuous learning is among the most impactful ways to do just that in a field that is constantly changing and innovating. As new technologies continue to emerge, it is in businesses’ best interest to ensure marketers are aware and trained on the most up-to-date tools that will empower them to produce the best results.

Visual learner? Scroll to the end of this article to view infographics highlighting key survey findings and statistics.




Read next: Google Translate Gets AI Upgrade: Gemini Now Powers Text and Speech Translation
by Web Desk via Digital Information World

Human-AI Collaboration Requires Structured Guidance, Research Shows

Cambridge researchers found that simply adding AI to creative tasks does not automatically improve results. The study, published in Information Systems Research, involved three linked experiments, each with 160 to 200 participants.

Human-AI pairs did not naturally become more creative through repeated collaboration alone, and some became less creative over time. The researchers examined three collaboration approaches: humans proposing ideas, humans asking AI to generate ideas, and humans and AI jointly refining ideas.

Creativity improved only when participants focused on co-developing ideas, exchanging feedback and building on suggestions, rather than continuously generating new ideas without refinement.

A third experiment showed that explicit instruction to engage in co-development led to clear creativity gains across repeated tasks.

Dr. Yeun Joon Kim of Cambridge Judge Business School stated that organizations must provide targeted support, such as guidance on building and adapting ideas, to help employees and AI learn over time how to create more effectively.

The research indicates that companies should structure AI collaboration through clear instructions and workflows, rather than relying on AI use alone, to improve creative outcomes.


Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-Aigen

Read next:

• How Much Do Fake Account Verifications Really Cost Across the World?

• Google Translate Gets AI Upgrade: Gemini Now Powers Text and Speech Translation
by Asim BN via Digital Information World

Friday, December 12, 2025

Google Translate Gets AI Upgrade: Gemini Now Powers Text and Speech Translation

Tech giant Google on Dec. 12, 2025 announced updates to Google Translate that expand its translation and language-learning features using its Gemini AI models. The company said the changes apply to text translation, live speech translation, and language practice tools.

Google Translate now uses Gemini-powered text translation in Search and the Translate app to better handle context, including idioms and colloquial phrases, by understanding their true meaning rather than translating word-for-word. The rollout begins in the United States and India, covering translations between English and nearly 20 languages, including Spanish, Hindi, and Chinese, on Android, iOS, and the web.

Google is also launching a beta live speech-to-speech translation feature that delivers real-time audio translations through headphones. The beta is available on Android in the U.S., Mexico, and India, supports more than 70 languages, and works with any headphones. Google said iOS support and additional countries are planned for 2026.

The company also expanded language practice tools, adding progress tracking and improved feedback, and extending availability to nearly 20 additional countries.

However, users should note that AI-powered translation features can occasionally produce inaccurate results or make incorrect assumptions about context and region. While Gemini-powered translations represent significant improvements, blind reliance on any AI translation without verification — especially for critical communications, legal documents, or region-sensitive content — could lead to misunderstandings.

Testing in Pakistan, for example, revealed instances where Google Search's translation feature incorrectly defaulted to Hindi (India's official language) instead of Urdu when searching for regional terms like 'khota meaning in English,' suggesting the model still struggles with regional language detection and localization across South Asian markets.

Image: DIW

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: How Much Do Fake Account Verifications Really Cost Across the World?
by Irfan Ahmad via Digital Information World

How Much Do Fake Account Verifications Really Cost Across the World?

The University of Cambridge has launched a new site that tracks the daily fluctuating costs behind building a bot army on over 500 social media and commercial platforms – from TikTok to Amazon and Spotify – in every nation on the planet.

For the first time, the Cambridge Online Trust and Safety Index (COTSI) allows the global community to monitor real-time market data for the “online manipulation economy”: the SIM farms that mass-produce fake accounts for scammers and social bots.

Image: DIW-AIgen

These markets openly sell SMS message verifications for fake profiles across hundreds of sites, providing a service for “inauthentic activity” ranging from vanity metrics boosts and rage-bait accounts to coordinated influence campaigns.

A new analysis using twelve months of COTSI data, published in the journal Science, shows that verifying fake accounts for use in the US and UK is almost as cheap as in Russia, while Japan and Australia have high prices due to SIM costs and photo ID rules.

The average price of SMS verification for an online platform during the year-long study period running to July 2025 was $4.93 in Japan and $3.24 in Australia, yet just a fraction of that in the US ($0.26), UK ($0.10) and Russia ($0.08).*

The research also reveals that prices for fake accounts on Telegram and WhatsApp appear to spike in countries about to have national elections, suggesting a surge in demand due to “influence operations”.

The COTSI team, based in Cambridge’s Social Decision-Making Lab, includes experts in misinformation and cryptocurrency. They argue that SIM card regulation could help “disincentivise” online manipulation, and say their tool can be used to test policy interventions the world over.

The team suggest that platforms should add labels showing an account’s country of origin for transparency, as recently done on X, but also point out such measures can be circumvented – a service provided by many vendors in the study.

“We find a thriving underground market through which inauthentic content, artificial popularity, and political influence campaigns are readily and openly for sale,” said Dr Jon Roozenbeek, study co-lead and senior author from the University of Cambridge.

“Bots can be used to generate online attention for selling a product, a celebrity, a political candidate, or an idea. This can be done by simulating grassroots support online, or generating controversy to harvest clicks and game the algorithms.”

“All this activity requires fake accounts, and each one starts with a phone number and the SIM hardware to support it. That dependency creates a choke point we can target to gauge the hidden economics of online manipulation.”

Co-lead author Anton Dek, a researcher at the Cambridge Centre for Alternative Finance, said: “Misinformation is subject to disagreement across the political spectrum. Whatever the nature of inauthentic online activity, much of it is funnelled through this manipulation market, so we can simply follow the money.”

“A sophisticated bot can run an influence campaign through hundreds of fake accounts” - Dr Jon Roozenbeek.

Murky global market

To register a new account, online platforms require SMS (Short Message Service) verification: a text message containing a code sent to a valid phone number. This is intended to confirm a human is setting it up.
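
To make that choke point concrete, here is a minimal, hypothetical sketch of the kind of one-time-code flow platforms rely on at sign-up. It is a generic illustration, not any specific platform's implementation, and the function and variable names are invented; SIM farms defeat this check by supplying receivable phone numbers in bulk.

```python
import secrets
import time

# phone_number -> (code, expiry_timestamp); in-memory store for illustration only
PENDING = {}

def send_verification(phone_number, send_sms):
    """Issue a short code to the supplied number via an SMS gateway callable."""
    code = f"{secrets.randbelow(10**6):06d}"            # 6-digit random code
    PENDING[phone_number] = (code, time.time() + 300)   # valid for 5 minutes
    send_sms(phone_number, f"Your verification code is {code}")

def verify(phone_number, submitted_code):
    """Account creation proceeds only if the user echoes the code back in time."""
    code, expiry = PENDING.get(phone_number, (None, 0))
    return submitted_code == code and time.time() < expiry

# Usage, with a stand-in for a real SMS gateway:
send_verification("+15550100", lambda num, msg: print(f"SMS to {num}: {msg}"))
```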

Over the last decade, a murky global marketplace has emerged with the infrastructure to bypass this security protocol, and automatically generate and sell fake accounts in bulk.

Companies claiming to offer privacy solutions operate “farms” of thousands of SIM cards and SIM banks – both real and virtual – to provide SMS verifications and re-route web traffic through mobile networks to disguise its origin.

Fake accounts bought from this “transnational grey market” of informal businesses, often based in jurisdictions with little legal oversight, are central to online scams.

This market is also behind many malicious bot campaigns now dominating propaganda and PR dark arts, according to Cambridge researchers. “A sophisticated bot can run an influence campaign through hundreds of fake accounts,” said Roozenbeek.

“Generative AI means that bots can now adapt messages to appear more human and even tailor them to relate to other accounts. Bot armies are getting more persuasive and harder to spot.”

For example, a study last year uncovered a botnet of 1,140 accounts on X using generative AI to run automated conversations.

Fake account index

The team built COTSI with open-source data pulled from some of the world’s biggest fake account suppliers. Researchers identified seventeen vendors and sorted them by traffic to focus on the top ten. Four of these are used at any one time to construct the global price index, with others kept in reserve.

Importantly, COTSI monitors not just prices but also the available “stock” of fake accounts listed by each vendor in every country for hundreds of platforms.
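
COTSI's exact methodology is not spelled out here, but conceptually the index aggregates the prices that the active vendors list for each country and platform. The sketch below shows that kind of aggregation with invented vendor quotes and a simple mean as the aggregator; the real index may weight, clean, or rotate vendors differently.

```python
from statistics import mean

# Illustrative vendor price quotes (USD per SMS verification), keyed by
# (country, platform). These numbers are invented for the sketch.
quotes = {
    ("US", "telegram"): [0.22, 0.28, 0.25, 0.30],
    ("UK", "telegram"): [0.09, 0.11, 0.10, 0.12],
    ("JP", "telegram"): [4.80, 5.10, 4.95, 4.90],
}

def price_index(vendor_prices):
    """Aggregate the active vendors' quotes into one index value per market."""
    return {market: mean(prices) for market, prices in vendor_prices.items()}

print(price_index(quotes))  # one index value per (country, platform) pair
```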

These include all social media channels, as well as cash, dating and gaming apps, cryptocurrency exchanges and sharing economy sites such as AirBnB, music and video streaming services, ride-hailing apps such as Uber, and accounts for major brands such as Nike and McDonald’s.

“One SIM card can be used for hundreds of different platforms,” said Dek. “Vendors recoup SIM costs by selling high-demand verifications for apps like Facebook and Telegram, then profit from the long tail of other platforms.”

Additional analyses show global stocks of fake accounts are highest for platforms such as X, Uber, Discord, Amazon, Tinder and gaming platform Steam, while vendors keep millions of verifications available for the UK and US, along with Brazil and Canada.**

Meta, Grindr, and Shopify rank among platforms with the cheapest fake accounts for sale, at a global average of $0.08 per verification. This is followed by X and Instagram at an average of $0.10 per account, TikTok and LinkedIn at $0.11, and Amazon at $0.12.

The researchers tested the market themselves, with mixed results. Attempting to verify fake US Facebook accounts only worked 21% of the time with one big provider, but over 90% with another. Much of this difference comes down to virtual versus physical SIMs.***

“Fingerprinting by some platforms can mean IP addresses get banned if registration fails,” said Dek. “High-quality verifications involve a physical SIM, requiring huge banks of phones. Nations in which SIM cards are more expensive have higher prices for fake accounts. This is likely to suppress rates of malicious online activity.”

“The COTSI shines a light on the shadow economy of online manipulation by turning a hidden market into measurable data” - Prof Sander van der Linden.

Pre-election prices

To investigate if political influence operations can be seen in these markets, the team analysed price and availability of SMS verifications for eight major social media platforms in the 30 days leading up to 61 national elections held around the world between summer 2024 and the following summer.****

They found that fake account prices shot up for direct messaging apps Telegram and WhatsApp during election run-ups the world over, likely driven by demand. An account on Telegram increased in price by an average of 12%, and by 15% on WhatsApp.

Accounts on these apps are tied to visible phone numbers, making it easy to see the country of origin. As such, those behind influence operations must register fake accounts locally, say researchers, increasing demand for SMS verifications in targeted nations.

However, on social media platforms like Facebook or Instagram, where no link between price and elections was found, fake accounts can be registered in one country and used in another. They also have greater reach which keeps demand high.

“A fake Facebook account registered in Russia can post about the US elections and most users will be none the wiser. This isn’t true of apps like Telegram and WhatsApp,” said Roozenbeek.

“Telegram is widely used for influence operations, particularly by state actors such as Russia, who invested heavily in information warfare on the channel.” WhatsApp and Telegram are among platforms with consistently expensive fake accounts, averaging $1.02 and $0.89 respectively.

‘Shadow economy’

The manipulation market’s big players have major customer bases in China and the Russian Federation, say the research team, who point out that Russian and Chinese payment systems are often used, and the grammar on many sites suggests Russian authorship. These vendors sell accounts registered in countries around the world.*****

“It is hard to see state-level political actors at work, as they often rely on closed-loop infrastructure. However, we suspect some of this is still outsourced to smaller players in the manipulation market,” said Dek.

Small vendors resell and broker existing accounts, or manually create and “farm” accounts. The larger players will provide a one-stop shop and offer bulk order services for follower numbers or fake accounts, and even have customer support.

A 2022 study co-authored by Dek showed that around ten Euros on average (just over ten US dollars) can buy some 90,000 fake views or 200 fake comments for a typical social media post.

“The COTSI shines a light on the shadow economy of online manipulation by turning a hidden market into measurable data,” added co-author of the new Science paper, Prof Sander van der Linden.

“Understanding the cost of online manipulation is the first step to dismantling the business model behind misinformation.”


*The data used in the study published in Science, as well as the additional analyses, was collected between 25 July 2024 and 27 July 2025.

** In April 2025, the UK became the first country in Europe to pass legislation making SIM farms illegal. Researchers say that COTSI can be used to track the effects of this law once it is implemented.

*** Lead author Anton Dek explains: “By virtual SIM, we mean virtual phone numbers typically provided by Communications Platform as a Service (CPaaS) or Internet-of-Things connectivity providers.”

“These services make it easy to purchase thousands of numbers for business purposes. Such numbers are usually inexpensive per unit, but they often carry metadata indicating that they belong to a CPaaS provider, and many platforms have learned to block verifications coming from them. On the other hand, when a physical SIM card (or eSIM) from a conventional carrier is used, it is much harder to distinguish from a normal consumer’s number.”

**** The platforms used were Google/YouTube/Gmail; Facebook; Instagram; Twitter/X; WhatsApp; TikTok; LinkedIn; Telegram.

***** A recent law passed by the Russian Federation banned third-party account registrations, which saw vendors suspend SMS verification registered in Russia alone as of September 2025. However, this has not stopped vendors operating from Russia offering services linked to other nations.


This article was originally published by the University of Cambridge on December 11, 2025. Written by Fred Lewsey. Edited by DIW staff. Licensed under Creative Commons Attribution 4.0 International License.

by Web Desk via Digital Information World

AI’s errors may be impossible to eliminate – what that means for its use in health care

In the past decade, AI’s success has led to uncurbed enthusiasm and bold claims – even though users frequently experience errors that AI makes. An AI-powered digital assistant can misunderstand someone’s speech in embarrassing ways, a chatbot could hallucinate facts, or, as I experienced, an AI-based navigation tool might even guide drivers through a corn field – all without registering the errors.

People tolerate these mistakes because the technology makes certain tasks more efficient. Increasingly, however, proponents are advocating the use of AI – sometimes with limited human supervision – in fields where mistakes have high cost, such as health care. For example, a bill introduced in the U.S. House of Representatives in early 2025 would allow AI systems to prescribe medications autonomously. Health researchers as well as lawmakers since then have debated whether such prescribing would be feasible or advisable.

How exactly such prescribing would work if this or similar legislation passes remains to be seen. But it raises the stakes for how many errors AI developers can allow their tools to make and what the consequences would be if those tools led to negative outcomes – even patient deaths.

As a researcher studying complex systems, I investigate how different components of a system interact to produce unpredictable outcomes. Part of my work focuses on exploring the limits of science – and, more specifically, of AI.

Over the past 25 years I have worked on projects including traffic light coordination, improving bureaucracies and tax evasion detection. Even when these systems can be highly effective, they are never perfect.

For AI in particular, errors might be an inescapable consequence of how the systems work. My lab’s research suggests that particular properties of the data used to train AI models play a role. This is unlikely to change, regardless of how much time, effort and funding researchers direct at improving AI models.

Image: DIW-AIgen

Nobody – and nothing, not even AI – is perfect

As Alan Turing, considered the father of computer science, once said: “If a machine is expected to be infallible, it cannot also be intelligent.” This is because learning is an essential part of intelligence, and people usually learn from mistakes. I see this tug-of-war between intelligence and infallibility at play in my research.

In a study published in July 2025, my colleagues and I showed that perfectly organizing certain datasets into clear categories may be impossible. In other words, there may be a minimum number of errors that a given dataset produces, simply because elements of different categories overlap. For some datasets – the core underpinning of many AI systems – AI will not perform better than chance.

For example, a model trained on a dataset of millions of dogs that logs only their age, weight and height will probably distinguish Chihuahuas from Great Danes with perfect accuracy. But it may make mistakes in telling apart an Alaskan malamute and a Doberman pinscher, since individuals of different breeds might fall within the same age, weight and height ranges.
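
To see the overlap argument in code, the short sketch below (an illustration added here, not the authors' experiment) generates two synthetic dog "breeds" whose weight and height distributions overlap, and shows that even a standard classifier tops out well below perfect accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two synthetic "breeds" with overlapping weight (kg) and height (cm) distributions.
breed_a = rng.normal(loc=[38, 63], scale=[5, 4], size=(2000, 2))
breed_b = rng.normal(loc=[42, 67], scale=[5, 4], size=(2000, 2))
X = np.vstack([breed_a, breed_b])
y = np.array([0] * 2000 + [1] * 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# Accuracy plateaus well below 100% because many individuals of the two
# breeds fall in the same weight/height range: the classes are not separable.
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```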

This categorizing is called classifiability, and my students and I started studying it in 2021. Using data from more than half a million students who attended the Universidad Nacional Autónoma de México between 2008 and 2020, we wanted to solve a seemingly simple problem. Could we use an AI algorithm to predict which students would finish their university degrees on time – that is, within three, four or five years of starting their studies, depending on the major?

We tested several popular algorithms that are used for classification in AI and also developed our own. No algorithm was perfect; the best ones – even one we developed specifically for this task – achieved an accuracy rate of about 80%, meaning that at least 1 in 5 students were misclassified. We realized that many students were identical in terms of grades, age, gender, socioeconomic status and other features – yet some would finish on time, and some would not. Under these circumstances, no algorithm would be able to make perfect predictions.

You might think that more data would improve predictability, but this usually comes with diminishing returns. This means that, for example, for each increase in accuracy of 1%, you might need 100 times the data. Thus, we would never have enough students to significantly improve our model’s performance.
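
One way to picture this claim (a hedged illustration under a power-law learning-curve assumption, not the study's own model) is that error shrinks only as a small negative power of the dataset size, so a shallow curve can demand roughly 100 times the data for a single extra point of accuracy.

```python
# Illustrative power-law learning curve: error(n) is proportional to n**(-alpha).
# The multiplier of data needed to cut error from e1 to e2 is (e1 / e2)**(1 / alpha).
def data_multiplier(e1, e2, alpha):
    return (e1 / e2) ** (1 / alpha)

# With a shallow curve (small alpha), one percentage point of accuracy
# (error 0.20 -> 0.19) can require on the order of 100x more data.
for alpha in (0.25, 0.05, 0.011):
    print(f"alpha={alpha}: x{data_multiplier(0.20, 0.19, alpha):,.1f} more data")
```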

Additionally, many unpredictable turns in the lives of students and their families – unemployment, death, pregnancy – might occur after their first year at university, likely affecting whether they finish on time. So even with an infinite number of students, our predictions would still contain errors.

The limits of prediction

To put it more generally, what limits prediction is complexity. The word complexity comes from the Latin plexus, which means intertwined. The components that make up a complex system are intertwined, and it’s the interactions between them that determine what happens to them and how they behave.

Thus, studying elements of the system in isolation would probably yield misleading insights about them – as well as about the system as a whole.

Take, for example, a car traveling in a city. Knowing the speed at which it drives, it’s theoretically possible to predict where it will end up at a particular time. But in real traffic, its speed will depend on interactions with other vehicles on the road. Since the details of these interactions emerge in the moment and cannot be known in advance, precisely predicting what happens to the car is possible only a few minutes into the future.

AI is already playing an enormous role in health care.

Not with my health

These same principles apply to prescribing medications. Different conditions and diseases can have the same symptoms, and people with the same condition or disease may exhibit different symptoms. For example, fever can be caused by a respiratory illness or a digestive one. And a cold might cause cough, but not always.

This means that health care datasets have significant overlaps that would prevent AI from being error-free.

Certainly, humans also make errors. But when AI misdiagnoses a patient, as it surely will, the situation falls into a legal limbo. It’s not clear who or what would be responsible if a patient were hurt. Pharmaceutical companies? Software developers? Insurance agencies? Pharmacies?

In many contexts, neither humans nor machines are the best option for a given task. “Centaurs,” or “hybrid intelligence” – that is, a combination of humans and machines – tend to be better than each on their own. A doctor could certainly use AI to decide potential drugs to use for different patients, depending on their medical history, physiological details and genetic makeup. Researchers are already exploring this approach in precision medicine.

But common sense and the precautionary principle suggest that it is too early for AI to prescribe drugs without human oversight. And the fact that mistakes may be baked into the technology could mean that where human health is at stake, human supervision will always be necessary.

Carlos Gershenson, Professor of Innovation, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next:

• If social media for kids is so bad, should we be allowed to post kids’ photos online?

• Meta Reminds Users That AI Interactions Will Soon Shape Content and Ads

• AI And Marketing: How Connecting With Customers In The Real World Gives Brands The Advantage

• OpenAI Flags High Cybersecurity Risks in Future AI Models


by Web Desk via Digital Information World

If social media for kids is so bad, should we be allowed to post kids’ photos online?

As Australia’s ban on under-16-year-olds having certain social media accounts kicks in this week, debate on whether it’s a good idea or even legal rages on – both at home and overseas.

Image: Samsung Memory / Unsplash

Yet barely acknowledged in this debate is what happens when a child doesn’t have an account, yet their entire childhood is still documented online. Should this be permitted?

“Sharenting” – when parents share their children’s lives online – entered the dictionary a few years ago. Awareness of potential risks has been increasing, but many parents still routinely share pictures and videos of their children online.

Sharenting is widespread and persistent. A review of practices over the past ten years found that parents commonly share details such as children’s names, dates of birth, birthday parties, milestones (birthdays, school achievements), health info and photos. This produces a “digital identity” of the child long before they can consent.

And it’s not just parents. Dance schools, soccer clubs and various other community groups, as well as family members and friends, commonly post about children online. All contribute to what’s essentially a collective digital album about the child. Even for children not yet old enough to have their own account, their lives could be heavily documented online until they do.

This challenge moves us well beyond traditional approaches to safety messages such as “don’t share your personal details online” or “don’t talk to strangers”. It requires a deeper understanding of what exactly safety and wellbeing for children on online platforms looks like.

A passive data subject

Here’s a typical sharenting scenario. A family member uploads a photo captioned “Mia’s 8th birthday at Bondi beach!” to social media, where it gets tagged and flooded with comments from relatives and friends.

Young Mia isn’t scrolling. She isn’t being bullied. She doesn’t have her own account. But in the act of having a photo and multiple comments about her uploaded, she has just become a passive data subject. Voluntarily disclosed by others, Mia’s sensitive information – data on her face and age – exposes her to risks without her consent or participation.

The algorithm doesn’t care Mia is eight years old. It cares that her photo keeps adults on the app for longer. Her digital persona is being used to sustain the platform’s real product: adult attention. Children’s images posted by family and friends function as engagement tools, with parents reporting that “likes” and comments encourage them to continue sharing more about their child.

We share such posts to connect with family and to feel part of a community. Yet a recent Italian study of 228 parents found 93% don’t fully realise what data harvesting practices take place, or the risks these pose to the child’s privacy, security and image protection.

A public narrative of one’s life

Every upload of a child’s face, especially across years and from multiple sources, helps create a digital identity they don’t have control over. Legally and ethically, many frameworks attempt to restrict commercial data profiling of minors, but recent studies show profiling is still happening at scale.

By the time a child is 16 – old enough to create their own account – a platform may already have accumulated a sizeable and lucrative profile of them to sell to advertisers.

The fallout isn’t just about data; it’s personal. That cute birthday photo can resurface in a background check for future employment or become ammunition for teenage bullying.

More subtly, a young person forging their identity must now contend with a pre-written, public narrative of their life, one they didn’t choose or control.

New laws aiming to ban children from social media address real harms such as exposure to misogynistic or hateful material, dangerous online challenges, violent videos, and content promoting disordered eating and suicide – but they focus on the child as a user. In today’s data economy, you don’t need an account to be tracked and profiled. You just need to be relevant to someone else who has an account.

What can we do?

The essential next step is social media literacy for all of us. This is a new form of literacy for the digital world we live in now. It means understanding how algorithms shape our feeds, how dark design patterns keep us scrolling, and that any “like” or photo is a data point in a vast commercial machine.

Social media literacy is not just for kids in classrooms, but for parents, coaches, carers and anyone else engaging with kids in our online world. We all need to understand this.

Sharenting-awareness campaigns exist, from eSafety’s parental privacy resources to the EU-funded children’s digital rights initiative, but they are not yet shifting the culture. That’s because we’re conditioned to think about our children’s physical safety, not so much their data safety. Because the risks of posting aren’t immediate or visible, it’s easy to underestimate them.

Shifting adult behaviour closes the gap between our concerns and our actions, and the reality of children’s exposure to content on social media.

Keeping children safe online means looking beyond kids as users and recognising the role adults play in creating a child’s digital footprint.

Joanne Orlando, Researcher, Digital Wellbeing, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


by Web Desk via Digital Information World

Thursday, December 11, 2025

OpenAI Flags High Cybersecurity Risks in Future AI Models

OpenAI (the maker of ChatGPT) expects its upcoming artificial intelligence (AI) models to potentially reach high levels of cybersecurity capability, enabling the exploitation of previously unknown software vulnerabilities or assisting in complex enterprise or industrial intrusions. The company emphasized that these risks are part of broader dual-use capabilities that may also benefit defenders.

To address these concerns, OpenAI is investing in defensive measures, including tools for auditing code, patching vulnerabilities, and supporting security workflows. The company is implementing a layered, defense-in-depth approach including "access controls, infrastructure hardening, egress controls," along with monitoring, detection systems, and threat intelligence programs.

OpenAI plans to introduce a trusted access program allowing qualified cyberdefense professionals to employ advanced capabilities for defensive purposes. It is also establishing the Frontier Risk Council, an advisory group of cybersecurity professionals to guide safe deployment and evaluate potential misuse.

Additional initiatives include Aardvark, an agentic security researcher currently in private beta designed to identify and help patch software vulnerabilities, and collaboration with the Frontier Model Forum to develop shared understanding of threat models across the AI industry. OpenAI frames these measures as ongoing, long-term investments to strengthen defenses and mitigate risks associated with increasingly capable AI systems.


Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-AIgen

Read next: Google Expands Android’s Safety Features With Emergency Live Video Rollout
by Ayaz Khan via Digital Information World

Wednesday, December 10, 2025

Google Expands Android’s Safety Features With Emergency Live Video Rollout

Google has begun rolling out Emergency Live Video on Android, introducing a way for users to share real-time visual information with emergency responders during calls or texts. The feature allows dispatchers to send a request to a user’s device when they determine that viewing the scene would help them assess the situation and provide timely assistance.

Users receive an on-screen prompt and can choose whether to share their camera feed. The stream is encrypted by default, and users retain full control throughout the process, with the ability to stop transmission instantly. The feature requires no setup and is designed to operate through a single, direct action on the user’s device.

Emergency Live Video is intended to support responders in evaluating incidents such as medical crises or fast-moving hazards, and it can help them guide callers through urgent steps until aid arrives. The capability expands Google’s existing emergency-focused tools, including Emergency Location Service, Car Crash Detection, Fall Detection and Satellite SOS.

The rollout begins across the United States and select regions in Germany and Mexico. Devices running Android 8 or later with Google Play services support the feature. Google is working with public safety organizations worldwide to extend availability, and interested agencies can access partner documentation.


Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Studies Reveal Severe Gen Z Burnout and Recommend Stronger Workplace Support and Clearer Expectations
by Asim BN via Digital Information World

Studies Reveal Severe Gen Z Burnout and Recommend Stronger Workplace Support and Clearer Expectations

Gen Z workers are reporting some of the highest burnout levels ever recorded, with new research suggesting they are buckling under unprecedented levels of stress.

While people of all ages report burnout, Gen Z and millennials are reporting “peak burnout” at earlier ages. In the United States, a poll of 2,000 adults found that a quarter of Americans are burnt out before they’re 30 years old.


Image: Vitaly Gariev / Unsplash

Similarly, a British study measured burnout over an 18-month period after the COVID-19 pandemic and found Gen Z members were reporting burnout levels of 80 per cent. Higher levels of burnout among the Gen Z cohort were also reported by the BBC a few years ago.

Globally, a survey covering 11 countries and more than 13,000 front-line employees and managers reported that Gen Z workers were more likely to feel burnt out (83 per cent) than other employees (75 per cent).

Another international well-being study found that nearly one-quarter of 18- to 24-year-olds were experiencing “unmanageable stress,” with 98 per cent reporting at least one symptom of burnout.

And in Canada, a Canadian Business survey found that 51 per cent of Gen Z respondents felt burnt out — lower than millennials at 55 per cent, but higher than boomers at 29 per cent and Gen X at 32 per cent.

As a longstanding university educator of Gen Z students and a father of two members of this generation, I find the levels of Gen Z burnout in today’s workplace astounding. Rather than dismissing young workers as distracted or too demanding of work-life balance, we might consider that they’re sounding the alarm about what’s broken at work and how we can fix it.

What burnout really is

Burnout can vary from person to person and across occupations, but researchers generally agree on its core features. It occurs when there is conflict between what a worker expects from their job and what the job actually demands.

That mismatch can take many forms: ambiguous job tasks, an overload of work, or a lack of the resources or skills needed to meet a role’s demands.

In short, burnout is more likely to occur when there’s a growing mismatch between one’s expectations of work and its actual realities. Younger workers, women and employees with less seniority are consistently at higher risk of burnout.

Burnout typically progresses across three dimensions. Fatigue is often the first noticeable symptom; the second is cynicism or depersonalization, which leads to alienation and detachment from one’s work. This detachment leads to the third dimension of burnout: a declining sense of personal accomplishment or self-efficacy.

Why Gen Z is especially vulnerable to burnout

Several forces converge to make Gen Z particularly susceptible to burnout. First, many members of Gen Z entered the workforce during and after the COVID-19 pandemic.

It was a time of profound upheaval, social isolation and changing work protocols and demands. These conditions disrupted the informal learning that typically happens through everyday interactions with colleagues, interactions that were hard to replicate in a remote workforce.

Second, broader economic pressures have intensified. As American economist Pavlina Tcherneva argues, the “death of the social contract and the enshittification of jobs” — the expectation that a university education would result in a well-paying job — have left many young people navigating a far more precarious landscape.

Intensifying economic disruption, widening inequality, rising housing and living costs, and the growth of precarious employment have put greater financial pressure on this generation.

A third factor is the restructuring of work that is taking place under artificial intelligence. As workplace strategist Ann Kowal Smith wrote in a recent Forbes article, Gen Z is the first generation to enter a labour market defined by a “new architecture of work: hybrid schedules that fragment connection, automation that strips away context and leaders too busy to model judgment.”

What can be done?

If you’re reading this and feeling burnt out, the first thing to know is that you’re not overreacting and you’re not alone. The good news is, there are ways to recover.

One of burnout’s most overlooked antidotes is combating the alienation and isolation it produces. The best way to do this is by building connections with others, starting with work colleagues. This could be as simple as checking in with a teammate after a meeting or setting up a weekly coffee with a colleague.

In addition, it’s important to give up on the idea that excessive work is better work. Set boundaries at work by blocking out time in your calendar and clearly signalling your availability to colleagues.

But individual coping strategies can only go so far. The more fundamental solutions must come from workplaces themselves. Employers need to offer more flexible work arrangements, including wellness and mental health supports. Leaders and managers should communicate job expectations clearly, and workplaces should have policies to proactively review and redistribute excessive workloads.

Kowal Smith has also suggested building a new “architecture of learning” in the workplace that includes mentorship, provides feedback loops and rewards curiosity and agility.

Taken together, these workplace transformation efforts could humanize the workplace, lessen burnout and improve engagement, even at a time of encroaching AI. A workplace that works better for Gen Z ultimately works better for all of us.

Nitin Deckha, Lecturer in Justice Studies, Early Childhood Studies, Community and Social Services and Electives, University of Guelph-Humber

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily


by Web Desk via Digital Information World

Research Tracks 8,324 U.S. Children, Identifying Social Media as a Risk Factor for Growing Inattention

A longitudinal study published in Pediatrics Open Science, which followed 8,324 children aged 9 to 14 in the United States, has found that social media use is associated with a gradual increase in inattention symptoms. Researchers at Karolinska Institutet in Sweden and Oregon Health & Science University tracked the children annually for four years, assessing time spent on social media, television and videos, and video games alongside parent-reported attention measures.

On average, children spent 2.3 hours per day watching television or videos, 1.5 hours on video games, and 1.4 hours on social media. Only social media use was linked to growing inattention over time. The effect was small for individual children but could have broader consequences at the population level. Hyperactivity and impulsive behaviors were not affected.

The association remained consistent regardless of sex, ADHD diagnosis, genetic predisposition, socioeconomic status, or ADHD medication. Children with pre-existing inattention symptoms did not increase their social media use, indicating the relationship primarily runs from use to symptoms.

Researchers note that social media platforms can create mental distractions through notifications and messages, potentially reducing the ability to focus. The study does not suggest all children will experience attention difficulties but highlights the importance of informed decisions regarding digital media exposure.

The research team plans to continue monitoring the participants beyond age 14. The study was funded by the Swedish Research Council and the Masonic Home for Children in Stockholm, with no reported conflicts of interest.

Source: “Digital Media, Genetics and Risk for ADHD Symptoms in Children – A Longitudinal Study,” Pediatrics Open Science, 2025.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.


Image: Vikas Makwana / Unsplash

Read next: Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily
by Asim BN via Digital Information World

Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily

A new Pew Research Center survey of 1,458 U.S. teens shows how central digital platforms and AI tools have become in their daily lives. Nearly all teens (97 percent to be exact) go online each day, and four in ten say they are online almost constantly. Older teens report higher levels of constant use than younger teens, and rates are even higher among Black and Hispanic teens.

YouTube remains the most widely used platform, with roughly nine in ten teens (92 percent) reporting any use and about three-quarters (76 percent) visiting it daily.

According to the Pew survey, six in ten teens say they use TikTok daily, 55 percent say the same of Instagram, and 46 percent use Snapchat daily. Facebook and WhatsApp see lower use. Platform preferences vary across demographic groups, with girls more likely to use Instagram and Snapchat, and boys more likely to use YouTube and Reddit.

AI chatbot use is also widespread. Sixty-four percent of teens say they use chatbots, and about three in ten do so daily. Daily use is more common among Black and Hispanic teens and among older teens. ChatGPT is the most widely used chatbot, at 59 percent, followed by Gemini and Meta AI. Teens in higher-income households use ChatGPT at higher rates, while Character.ai is more common among teens in lower- and middle-income homes.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Smart Devices Are Spying More Than You Think; Privacy Labels Offer Crucial Clues
by Ayaz Khan via Digital Information World