Monday, October 6, 2025

South Asia Faces a Surge in Child Exploitation as AI Abuse Material Expands

New research from the Childlight Global Child Safety Institute shows an alarming rate of sexual abuse among children in South Asia. Drawing on studies from India, Pakistan, the Maldives, Bangladesh and other countries, the report estimates that around one in eight children in the region has been sexually assaulted before reaching adulthood. This projection translates to roughly 54 million affected minors, based on regional population data and statistical modeling.

The figures come from a review of surveys conducted between 2010 and 2024. About 14.5 percent of girls and 11.5 percent of boys in the data reported experiencing sexual violence before the age of 18. Limited reporting in neighboring countries suggests that actual numbers could be higher than official estimates.

CSAM rates in South Asia (reported cases per 10,000 residents)

Country        2023     2024
Afghanistan    47.5     28.9
Bangladesh     145.2    64.1
Bhutan         75.0     41.0
India          62.0     15.5
Maldives       158.4    94.0
Nepal          58.9     19.4
Pakistan       77.8     41.3
Sri Lanka      59.8     27.8

Technology Adds a New Dimension

The same study points to a sharp rise in technology-linked exploitation. Between 2022 and 2023, the number of identified AI-generated sexual images involving minors increased by over 1,300 percent. These synthetic images, sometimes known as deepfakes, use generative tools to superimpose children’s faces onto explicit photos.

In 2024, data from monitoring networks linked over 2.25 million cases of child sexual abuse material to India, 1.1 million to Bangladesh, and about 1.03 million to Pakistan. When adjusted for population size, smaller countries such as the Maldives showed higher exposure rates, with about 94 reported cases per 10,000 residents. Bangladesh followed with 64, Pakistan with 41, and Nepal with roughly 19.
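The per-capita figures above can be reproduced from the absolute counts with a simple rate calculation. The sketch below uses rough 2024 population estimates that are illustrative assumptions, not figures from the Childlight report, so the results differ slightly from the report's own per-10,000 rates:

```python
# Rough 2024 population estimates (assumptions for illustration,
# not figures taken from the Childlight report).
POPULATION = {
    "India": 1_430_000_000,
    "Bangladesh": 171_000_000,
    "Pakistan": 240_000_000,
}

# Absolute CSAM case counts cited in the article for 2024.
CSAM_CASES = {
    "India": 2_250_000,
    "Bangladesh": 1_100_000,
    "Pakistan": 1_030_000,
}

def rate_per_10k(cases: int, population: int) -> float:
    """Reported cases per 10,000 residents."""
    return cases / population * 10_000

rates = {
    country: round(rate_per_10k(CSAM_CASES[country], POPULATION[country]), 1)
    for country in CSAM_CASES
}
```

With these assumed populations the computed rates land close to the article's figures (India about 15.7 vs. the reported 15.5, Bangladesh about 64.3 vs. 64); small gaps reflect whichever population base the report itself used.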

Gaps in Law and Reporting

Among South Asian nations, India maintains the most comprehensive record system under the Protection of Children from Sexual Offences Act. Police reports show cases rising from about 54,000 in 2021 to more than 64,000 in 2022. Around 90 percent of those cases reached the charge-sheet stage, but conviction rates remain much lower.

In Pakistan, the number of recorded cases nearly doubled within the same period, from around 1,500 to close to 3,000, according to the NGO Sahil. The increase reflects better awareness and more open reporting, though many incidents still go unreported.

Global Patterns and Local Risks

Data from the National Center for Missing and Exploited Children show how widespread the problem has become. Between 2020 and 2022, over 83 million global cases of child sexual abuse material were reported to the center. About two-thirds came from Asia. India accounted for roughly 16 percent of those reports, followed by the Philippines, Pakistan, Indonesia, and Bangladesh.

While artificial intelligence has made it easier to create harmful content, it is also being tested as a tool to detect it. European researchers have developed scanning systems that can flag altered or disguised child abuse images. One pilot study examined nearly 300,000 websites and found several dozen containing illegal material. These systems could support law enforcement if used responsibly.

Digital Platforms and Enforcement Challenges

NCMEC data show continued growth in global reports. About 20 million cases were logged in 2020, 29 million in 2021, and more than 32 million in 2022. Meta’s platforms (Facebook, Instagram, and WhatsApp) generated around 90 percent of these alerts through automated systems.

Image: DIW-Aigen.

Nearly half of the global reports lacked enough information to act on or involved repeated uploads of existing material. The absence of consistent laws across countries and weak coordination between agencies make the issue harder to contain.

Data as a Prevention Tool

Childlight’s researchers stress that prevention depends on reliable data and early detection. Countries with consistent monitoring tend to identify cases sooner and provide more support for victims. The institute recommends stronger cooperation between governments, private companies, and civil organizations.

It treats child exploitation as both a law enforcement and public health challenge. Abuse affects physical and mental health, school attendance, and long-term development. Building accurate data systems is seen as the first step toward targeted intervention.

A Regional Burden with Global Links

South Asia’s exposure to child exploitation remains high both offline and online. Progress in data collection and policing has improved awareness, yet new technologies have made the threat more complex. AI-generated material now circulates faster than authorities can respond.

Childlight’s study concludes that without stronger coordination between technology firms and public institutions, millions of children will remain at risk. The region’s data show a growing problem that reflects global patterns of digital abuse, one that continues to evolve as technology advances.

Notes: This post was edited/created using GenAI tools.

Read next:

• Survey Finds Few Americans Turn to Chatbots for News

• 2025 Blogging Report: AI Use Explodes While Average Article Length Slides


by Irfan Ahmad via Digital Information World

Workers Grapple With Unease as AI Becomes Part of Everyday Jobs

Artificial intelligence has become routine in many workplaces, but the adjustment is far from smooth. A recent survey highlights a complicated picture. People appreciate the speed and support AI offers, yet many quietly fear losing their abilities or connections at work.

Skills slipping under the surface

One recurring concern is skill erosion. Many employees say they depend on AI to finish tasks that once required full concentration. About one in five already notice a drop in their own capability when they try to work without automation. Another quarter believe the technology sharpens their thinking, but roughly the same number feel it dulls it.


The data also shows mixed attitudes toward colleagues. Some rely heavily on AI systems, while others view that dependence with skepticism. Roughly four in ten people use AI both at work and at home, which suggests that the reliance is spreading into daily habits.

Confidence in identifying AI content is also proving unreliable. Although most respondents thought they could tell a real image from a synthetic one, only a third managed to do so when tested. That shortfall signals how quickly digital literacy is being tested in a changing media landscape.

Mixed feelings about the future of automation

The study, conducted by Howdy, paints a picture of divided confidence in AI’s staying power. More than one third of workers think the industry around it might be overinflated, with some warning of economic risks if the trend collapses. Others question whether their employers have a clear plan for using the technology responsibly.

Job stability remains a frequent topic of concern. Around one in five respondents worry about being replaced by automated systems. At the same time, many are trying to keep pace. About a third have started new training programs, and some plan to continue formal education to strengthen their skills.

Even those who feel safe in their current roles report tension. Several participants linked AI use to lower work quality, citing mistakes and inconsistent results from over-automation.

When AI replaces conversation

Beyond productivity, the report touches on a quieter trend: people substituting digital tools for social interaction. Nearly one in five workers said they hide their AI use from coworkers. A smaller group even personalize their tools with names or human-like traits.

Interaction patterns are changing as well. Close to one fifth now speak with AI programs more often than they do with colleagues, and many prefer those exchanges. Remote employees appear most affected. Nearly thirty percent of them report that AI communication has become more frequent than contact with teammates, and one quarter say they find those interactions easier.

A small share of respondents also describe forming emotional ties with digital assistants, from friendship to affection. While those numbers remain low, they suggest that AI’s presence is quietly reshaping social behavior at work.

Younger employees adapt fastest but worry most

Generation Z, the first to grow up around AI, stands out in the data. Many of them use automation fluently but admit to side effects. Around three in ten believe that constant use of AI tools is making them less capable. Some have already taken second jobs after automation displaced earlier work. Others are enrolling in higher education to strengthen their prospects.

Nearly half of this group say they are becoming more dependent on AI in everyday decisions. A significant number also use it to handle anxiety or stress, while a smaller portion describe friendly or emotional relationships with the technology.

Finding balance in an automated world

The survey results show that AI is neither feared nor fully trusted. Workers are learning to live with it while questioning its limits. Employers, experts suggest, should treat the issue as one of balance rather than efficiency. That means training staff to understand how the technology works, encouraging continuous learning, and maintaining space for human discussion and teamwork.

Artificial intelligence can enhance performance, but it cannot replace the insight and empathy that come from human experience. Workplaces that remember this distinction may adapt more smoothly to the next phase of automation.

About the study

The survey was conducted in August 2025 among 1,007 full-time employees across the United States who use AI in their work. Participants ranged in age from 19 to 77, with an average of 41. Half identified as male, nearly half as female, and one percent as nonbinary or undisclosed. Work arrangements included 25 percent remote, 36 percent in person, and 39 percent hybrid.

Read next: 2025 Blogging Report: AI Use Explodes While Average Article Length Slides


by Asim BN via Digital Information World

Sunday, October 5, 2025

2025 Blogging Report: AI Use Explodes While Average Article Length Slides

If you’ve felt like blog posts are getting shorter, you’re right. Orbit Media’s 12th Annual Blogger Survey (808 marketers, twelve years of tracking) shows clear shifts in how people create and promote content. Fewer long reads, faster publishing, and almost universal use of AI tools now define the landscape.

Blogging Still Works, Just Less Dramatically

Around 80% of marketers say their blog brings measurable results. That’s good news, but only about 21% call those results “strong.” The middle ground dominates. Sixty percent say blogging helps, but not massively. The rest either see no impact or aren’t sure. So blogging still works, but the big wins are harder to reach.

Word Counts Down, but Depth Still Pays Off

The average article now lands near 1,333 words. Two years ago, that number was higher. For nearly a decade, writers kept stretching posts longer each year. Now the trend is reversing.

Even so, long-form content still outperforms. Almost four in ten creators who go past 2,000 words report strong results. Shorter pieces publish faster, but the data show that length still links tightly with performance.

Publishing Less, Saving Time

Most writers post between two and four times a month. Weekly publishing is falling fast. Yet those who keep a higher pace (several posts a week) see the best returns, around 37% strong results.

Time per post has dropped too. On average, people spend three hours and twenty-five minutes on a piece, down from just over four hours in 2022. AI editing tools have clearly shaved minutes from the process.

AI Everywhere Now

Two years ago, most marketers hadn’t touched AI. In 2025, nearly all have. Only about five percent still work without it. The majority use AI to brainstorm, outline, or clean up text. Few trust it to write entire posts.



The data points to balance again. Teams that blend AI help with human editing perform almost as well as multi-editor setups. Full automation lags behind. Machines can speed things up, but they can’t yet match human nuance.

Visuals and Research Keep Readers Around

Nearly every article now includes images, 88% of them. Sixty percent use charts or data, a quarter include video, and a small number use audio. That last group is tiny but surprisingly effective, with about 30% reporting strong results.

Visual volume matters too. Posts with at least seven images perform roughly three times better than those with one. And original research has become a quiet differentiator. Almost half of marketers now publish their own data, and about a quarter of them hit strong outcomes.

Formats and Focus

How-to articles still dominate, making up about three-quarters of all posts. Lists and guides come next. But deep, data-backed formats continue to win on results. Around 27% of those writing reports, long tutorials, or detailed explainers rank their work as high-impact. Effort still matters.

Promotion Habits Shifting

Social media remains the top channel, used by 93% of marketers. About a third rely on SEO and another third on email. Paid promotion and influencer outreach trail far behind, but the payoff there is better.


Those running paid campaigns hit strong results about 30% of the time. Influencer collaboration delivers similar gains. Only nine percent of respondents do it regularly, but that small crowd performs best overall.

Search Getting Tougher

More than half of marketers say organic traffic is slipping. Sixty-three percent list it as their biggest challenge. Engagement follows right after at 56%. AI-generated search summaries and zero-click results are part of the issue.

Even with that pressure, SEO discipline still pays off. Marketers who always do keyword research reach strong results 32% of the time. Skipping it cuts that figure nearly in half.

Analytics and Guest Posts Still Matter

Tracking every post correlates directly with better results. One-third of creators measure performance for every piece, and they show a 32% strong-results rate. Occasional checkers barely reach 13%.

Guest posting helps too. About 37% publish externally, and they see double the success rate of those who stay inside their own sites. The takeaway is simple: what gets measured improves, and what gets shared spreads.

Four Habits That Predict Success

Looking at the full dataset, four practices line up consistently with stronger outcomes:

  • Writing longer posts (over 2,000 words)
  • Using multiple visuals
  • Working with influencers
  • Tracking analytics regularly

Each performs far above the 21% benchmark for strong results.

Where Blogging Stands in 2025

After twelve years of tracking, the pattern is stable. Blog posts are shorter, AI tools are normal, and success depends less on volume and more on structure. The workflow has matured. Marketers now chase measurable efficiency, not just creativity. Blogging hasn’t faded; it’s just become a data habit.

Notes: This post was edited/created using GenAI tools.

Read next: Survey Finds Few Americans Turn to Chatbots for News


by Irfan Ahmad via Digital Information World

Saturday, October 4, 2025

Survey Finds Few Americans Turn to Chatbots for News

Artificial intelligence chatbots are gaining users in the United States, but news is not the main reason people use them. A recent Pew Research Center survey shows that most adults still avoid relying on tools like ChatGPT or Gemini for news updates.

How Often People Use Chatbots for News

Only a small share of U.S. adults report getting news this way. About 2% say they do it often and 7% say they do it sometimes. Another 16% say they rarely use chatbots for news, while 75% say they never do. Fewer than 1% prefer chatbots over other options such as television, websites, or social platforms.

Younger Adults Engage More

People under 50 are more likely to check news with chatbots. Twelve percent in that age group say they use them at least sometimes, compared with 6% of those 50 and older. Younger users also report seeing more inaccurate stories. Nearly six in ten of those aged 18 to 29 say they have come across news from chatbots that seemed wrong, compared with just over a third of those 65 and older.

Mixed Views on Reliability

Experiences with chatbot news vary. Around one-third of users say it is hard to judge what is true, while a quarter say they find it easier to sort out facts. Four in ten are unsure. About half of chatbot news users say they sometimes encounter information they think is inaccurate. A smaller group say they see it often, while another group say it rarely or never happens.

Context in the Media Landscape

Search engines and social platforms remain far more influential for news. A Pew survey last year showed nearly a quarter of U.S. adults often get news through search engines, most commonly Google. TikTok has also expanded its role, with one in five adults now getting news there, up from 3% five years ago.

By comparison, studies of ChatGPT use show that people mainly turn to it for practical help. Many use it for learning, homework, or everyday advice rather than following the news cycle.




Notes: This post was edited/created using GenAI tools.

Read next: 

• TikTok Faces Scrutiny Over Exposure of Minors to Pornographic Content in UK

• AI Chatbots Use Emotional Pressure to Keep People From Logging Off
by Irfan Ahmad via Digital Information World

Gmail Users Face Security Cut-Off as Google Retires Old Email Tools in 2026

From January 2026, Gmail will stop delivering mail through two long-standing features that many people still rely on. The change targets Gmailify, which added Google’s own filters and search tools to outside accounts, and POP, a decades-old protocol that let emails be pulled into Gmail from other providers.

For everyday users, the announcement means some familiar options will simply vanish. Anyone who set up Gmail to fetch messages from Outlook, Yahoo, or other services using POP will lose that connection. Those who upgraded outside accounts with Gmailify will also find that Google’s spam protection and inbox sorting no longer apply. Nothing already stored in Gmail will be removed, but the flow of new mail will break unless settings are updated in advance.

The reason behind this shift lies in security and standards. POP, the older of the two features, dates back to a time when email systems were far simpler and less protected. It sends login details and content in ways that can expose information if not shielded properly, and it has never supported modern safeguards such as multifactor checks. IMAP, which most providers now offer, is more flexible and secure, and Google is steering everyone toward it. Gmailify, on the other hand, was more about convenience than safety, and Google appears ready to retire it in order to streamline Gmail around one consistent model.

For those affected, the fix is not complicated but it requires action. External accounts need to be reconnected using IMAP, which most major services already support. Mobile users can still attach Outlook, Yahoo, or other accounts inside the Gmail app, but the extra Gmail-only perks will no longer be available. People using work or education accounts may also need help from administrators to ensure continuity.
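As a sketch of what the IMAP-based replacement looks like at the protocol level, the snippet below connects to a mail server over TLS using Python’s standard library. The host, account, and password are placeholders, and real setups should prefer an app-specific password or OAuth where the provider supports it:

```python
import imaplib

# Placeholder connection details for illustration only.
HOST = "imap.example.com"          # the provider's IMAP endpoint
USER = "user@example.com"
APP_PASSWORD = "app-specific-password"

def count_inbox_messages(host: str, user: str, password: str) -> int:
    # IMAP over TLS on port 993 is the modern, encrypted replacement
    # for POP-based mail fetching that Gmail is retiring.
    with imaplib.IMAP4_SSL(host, 993) as conn:
        conn.login(user, password)
        # Open the mailbox read-only; SELECT returns the message count.
        status, data = conn.select("INBOX", readonly=True)
        if status != "OK":
            raise RuntimeError("could not open INBOX")
        return int(data[0])
```

Unlike POP, the same IMAP session can list folders, search server-side, and leave mail in place for other clients, which is part of why providers are standardizing on it.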

Google’s decision may frustrate those who valued the simplicity of POP or the enhancements of Gmailify, but the direction is clear: Gmail is consolidating around modern protocols and moving away from older systems that no longer meet its security standards. With the deadline set for January 2026, users have only a few months to prepare... and those who do not adjust risk finding that their Gmail inbox suddenly goes quiet.


Image: appshunter/unsplash

Read next: Apple’s Removal of ICEBlock Highlights Growing U.S. Government Influence Over Big Tech
by Asim BN via Digital Information World

Friday, October 3, 2025

Apple’s Removal of ICEBlock Highlights Growing U.S. Government Influence Over Big Tech

Apple’s decision to remove the immigration-tracking app ICEBlock from its App Store has placed the United States in a debate more commonly associated with other nations. The move followed a request from the Department of Justice, led by Attorney General Pam Bondi, who argued that the tool endangered federal agents.

ICEBlock, created by developer Joshua Aaron earlier this year, enabled users to crowdsource reports of Immigration and Customs Enforcement activity. Supporters framed it as a form of public accountability, while critics viewed it as a direct obstacle to law enforcement. The app had no Android version because, according to its developers, anonymity and push notifications could not be supported on that platform without maintaining user data. This meant iOS was the sole channel for the service, leaving Apple’s action effectively decisive.

The significance extends beyond the fate of one app. In recent months, the federal government has leaned on Apple and Google twice: once to remove TikTok amid disputes over its ownership, and now with ICEBlock. Each time, the platforms complied. This reliance on private companies to execute policy decisions shows how easily Washington can shape digital access when it chooses to act through the technology sector.

The comparison with Apple’s earlier removals abroad is difficult to avoid. In 2019, the company took down an application used during Hong Kong’s protests after Beijing expressed concern. Similar cases have occurred in Saudi Arabia and Russia, where governments pressured Apple to remove politically sensitive content. Critics now point to the parallels, warning that the United States is adopting tactics it once condemned in other countries.

What makes ICEBlock different from the Hong Kong case is that no alternative path remains open. When HKmap.live was pulled, the service could still be used as a website saved to an iPhone’s home screen. ICEBlock lacks that option, and without an Android version, the removal cuts off all new users. Existing installations continue to work, but updates are blocked and redownloads are impossible.

From Apple’s perspective, the decision follows a long pattern of risk calculation. The company has resisted governments in the past, most notably during its standoff with the FBI in 2016 over access to a locked iPhone, but it has also shown willingness to bend under pressure when the political or commercial costs of resistance appear too high. Observers argue that this case falls into the latter category, particularly given recent security incidents involving federal officers that have heightened sensitivities around public tracking.

The broader issue is structural. Apple’s control of distribution through its App Store gives the government an indirect but powerful lever. When officials apply pressure, Apple has limited room to maneuver without risking confrontation that could damage its business. For critics of concentrated corporate power, this episode reinforces the concern that a handful of firms hold the gateways through which civic information flows.

It also highlights the limits of expecting moral stands from corporations. Their primary obligation lies with shareholders, and their responses tend to align with reputational and financial considerations rather than abstract principles. In practice, that means decisions like the removal of ICEBlock are framed less by questions of rights or liberties, and more by calculations of risk, liability, and long-term business stability.

The outcome is that the government now knows it can lean on large platforms to implement controversial measures without passing new laws. Once such leverage has been demonstrated, there is little reason to assume it will not be used again. Whether that influence remains confined to security matters or extends further into civic and political disputes will determine the long-term consequences of Apple’s decision.


Notes: This post was edited/created using GenAI tools and reviewed by human editor(s) for accuracy. Image: DIW-Aigen.

Read next:

What Worries Parents The Most When It Comes To Teens And Technology

• Buffer Study Finds X Premium Users Gain Clear Reach Advantage
by Irfan Ahmad via Digital Information World

What Worries Parents The Most When It Comes To Teens And Technology

The mobility of wearable tech, smartphone enhancements, and advances in social media and AI have increased younger generations' technological literacy. Each generation has been more exposed to technology than the last, inside and outside the house. According to a Pew Research study, 95% of teens have access to a smartphone. Pair that with the fact that many high schools now have students learning on laptops, and you’ll be hard-pressed to find a time when teens aren’t interacting with some form of technology. Parents have to be more vigilant than ever in monitoring their teens’ use of various devices.

A 2025 All About Cookies survey found that reducing screen time remains an ongoing challenge for most families. The AAC team asked parents at what age they feel a child should be allowed to use certain technologies, such as AI, own a phone, and manage a public social media account.

Parents' Feelings About Their Teens Using Certain Tech

With the rise of AI chatbots, many people are turning to tools like ChatGPT and Grok for help with a variety of tasks. These tools have their benefits, but use at a young age could lead to overreliance on them. Numerous cases in recent months have involved students using AI to complete assignments, as well as people using chatbots to replace human interaction. More serious cases include people turning to AI for mental health advice, which has led to lawsuits from parents whose teens died after relying on ChatGPT in place of a licensed mental health professional.

The same report found that roughly one in five parents surveyed (22%) felt their teen should never use an AI chatbot like Grok or ChatGPT. Even parents more forgiving of AI use felt their child should be at least 16 years old before interacting with LLMs, the highest median age among all categories, tied with having one’s own social media account.

Parents were more opposed to their teens using AI than to them having their own social media accounts or smart devices, suggesting that the previous generation has not yet accepted AI as a normal part of the digital world the way it has social media.

The Most Dangerous Social Media Platforms According to Parents

While social media has been accepted as something teens will use at some point, that doesn’t mean parents are without concern about their teens putting themselves out there on social platforms.

By a wide margin, the app parents felt posed the highest risk to their teens was TikTok.

Thirty-eight percent of parents identified the video-based social media platform as the most dangerous, more than double the share of any other platform. Snapchat, a messaging and media-sharing app, was the second biggest concern, chosen by 14%. That concern could stem from the app's disappearing messages and its emphasis on sharing pictures.

For months, TikTok was a privacy concern for many, with various congressional hearings in America calling for it to be banned from U.S. app stores over how the app collected users' data. That scrutiny has faded from the current news cycle, but parents haven’t forgotten, as the data above show.

Another recent development surrounding TikTok is the executive order signed by the president readying the app to be sold to a group of American investors. The long-theorized TikTok deal could shape how parents view the app moving forward, but as of earlier this month, parents felt TikTok posed more danger to their teens than any other app by a large margin.

Apps like Discord and Twitch, also popular among younger generations, ranked quite low on the parent-worry scale: only 5% of parents saw Discord as a danger, and 2% had worries about Twitch.

Parents Weigh In On Whether Their Teen Is Addicted to Their Phone

TikTok is well known for its addictive algorithm, designed to keep people scrolling and engaged with its platform. That goal of keeping users on their devices could be one reason parents worry about these apps. According to the AACP, children spend about 7.5 hours a day watching or using screens, which prompted the All About Cookies team to ask parents how they felt about the amount of time their teens spend on their phones.

Research shows younger generations could use a digital detox, and parents are recognizing a pattern: an alarming 60% worry that their teen is addicted to their phone.

With most children averaging 8 hours in school and 7.5 hours on their phones, they’re spending nearly all of their free time on screens, time that could go toward activities or toward friends and family. So it comes as no surprise that many parents feel their children may be addicted to their phones.

How Parents Feel About Phone Bans

With many parents fearing their teen could be addicted to their phone, it's no surprise that many of them support the phone bans cropping up in high schools across America. Currently, 14 states have active laws or executive orders that prohibit or limit phone use in schools.

Research indicates two-thirds (68%) of parents support banning phones in high schools to some extent.


Children spend 9-10 months of the year in school, which means parents have next to no control over what they're doing for the majority of the day, most of the year. The lack of insight and control over their child during this time may lead parents to support phone bans in an attempt to keep their teens from becoming distracted in school, as well as limit their time using these devices.

Perhaps more striking, just 13% of parents at least somewhat oppose these phone bans. Many parents give their children a cell phone as a way to stay connected during times apart, yet 2025 figures reinforce that parents are increasingly concerned their teens’ phone use will hinder their learning. These findings could point to a generational shift, not only in current Gen Z and Gen Alpha teens' phone use, but in their millennial parents' opinions on phone access as well.

Final Thoughts

The technological shifts of the current generation have parents feeling more concerned than optimistic about their children’s phone usage. Statistics from the survey conducted by All About Cookies show that a majority of parents worry about how their teens interact with technology. That matters as larger conversations unfold about how to regulate tools like chatbots, alongside the aforementioned phone bans. This heightened worry could lead parents to take more precautions with their teens' phone usage, such as closer screen-time monitoring or parental control apps. Technological advancements have strengthened communication and digital literacy, but they are also something parents and legislators need to weigh when deciding how teens should navigate tech.

Read next: 

Study Finds X Premium Users Gain Clear Reach Advantage

• Survey Finds Platforms, Not Governments, Should Decide Online Rules

by Irfan Ahmad via Digital Information World