Tuesday, September 30, 2025

2.3 Billion Hungry, One Billion Tonnes Wasted: The Paradox Defining Global Food Security

How much is lost before reaching people

Roughly 30 percent of food produced around the world never gets eaten. About 13 percent disappears between harvest and supermarket shelves. That is where poor storage, wrong harvesting times, bad weather, and weak transport systems take their toll. Fruits and vegetables see the biggest losses, with more than a quarter gone before sale. Meat and animal products lose around 14 percent.

H/T: Statista

Sub-Saharan Africa faces the steepest challenge, with 23 percent lost early in the chain. Asia loses 14 percent, Latin America and the Caribbean 13 percent, North America 10 percent, and Europe only 6 percent. These gaps reflect differences in infrastructure and handling practices.

Waste after the food is sold

Once food reaches shops or homes, another 17 to 19 percent is discarded. UNEP data puts the total waste in 2022 at just over one billion tonnes. This includes 631 million tonnes from households, 290 million from food service, and 131 million from retailers.

Via: Statista

Households are by far the largest source. On average, each person throws out 79 kilograms of food every year. Restaurants and catering add 36 kilograms per capita, while retailers discard 17 kilograms.

Not only rich countries

Waste used to be seen as a problem of wealthier economies. That is no longer the case. Figures show little difference between high income, upper-middle income, and lower-middle income groups. The annual per capita range is narrow, from 81 kilograms in rich countries to 86–88 kilograms in middle income ones. Reliable data is still missing for low income countries, though some in Eastern Europe and the former Soviet Union report relatively low levels.

Country totals

The largest numbers come from the world’s most populous states. China discards 108.7 million tonnes each year. India wastes 78.2 million tonnes. The United States accounts for 24.7 million tonnes, Brazil for 20.3 million, and Indonesia for 14.7 million.

Source: Statista

Germany throws out 6.5 million tonnes, while Russia reports 4.8 million. Smaller nations contribute less in total, but their per-person figures can be high. Brazil stands at 94 kilograms per head, Ghana at 84. The Philippines is at the other end of the spectrum with 26 kilograms per person.

Food insecurity and emissions

While food is wasted on such a scale, 2.3 billion people were estimated to face moderate or severe food insecurity in 2024. At the same time, waste is linked to 8 to 10 percent of global greenhouse gas emissions and uses land equal to almost 30 percent of farmland worldwide. The economic loss is valued at more than one trillion dollars a year.

The world’s population is projected to grow from 8.2 billion now to 9.7 billion by 2050. Cutting waste is one of the most direct ways to improve supply without expanding farmland or increasing pressure on ecosystems.

Global efforts

In 2019, the UN declared September 29 as the International Day of Awareness of Food Loss and Waste. Since then, the FAO has tracked supply chain losses, but the figures have barely shifted. Waste data remains patchy and inconsistent, though some individual countries report progress.

Household behavior is harder to shift. Habits, urban lifestyles, and limited food planning skills remain the main drivers. That is why households continue to account for most of the waste, regardless of income level.

H/T: UNEP Food Waste Index Report 2024

Notes: This post was edited/created using GenAI tools.

Read next: AI Answers in Crisis: Reliable at the Extremes, Risky in the Middle


by Irfan Ahmad via Digital Information World

YouTube to pay $24.5 million in Trump settlement over suspended channel

YouTube has agreed to a $24.5 million settlement in the case brought by President Donald Trump after the platform blocked him from posting videos in the aftermath of the Capitol riot in January 2021. The deal, filed in a California federal court, ends years of back and forth between Trump’s lawyers and the Google-owned company, and it brings to a close the last of three lawsuits Trump launched against major social media firms over his account bans.

How the money is divided

Alphabet, YouTube’s parent, will transfer $24.5 million into the trust account of Trump’s lawyers. Of that sum, $22 million is set aside for Trump himself, though the filing shows he has directed the payment to the Trust for the National Mall. The trust is tied not just to preservation of monuments in Washington but also to the large ballroom being planned at the White House. That ballroom is projected to take up 90,000 square feet and is estimated to cost around $200 million, with the paperwork describing it as expected to be completed well before Trump’s current term ends in January 2029.

The balance of the settlement, $2.5 million, will be distributed to the other plaintiffs in the case. These include the American Conservative Union, which organizes the CPAC conference, and author Naomi Wolf, both of whom joined Trump’s legal action in 2021 when the platforms first cut off his accounts.

Settlement terms

The filing makes clear that YouTube and Alphabet are not admitting liability. The agreement specifies that the settlement and dismissal cannot be used as evidence against the company in any other legal or administrative action. The dismissal is “with prejudice,” which means the case cannot be filed again. It was entered under Rule 41 of the Federal Rules of Civil Procedure, a provision that allows cases to be closed voluntarily when both sides sign off.

How the case unfolded

Trump’s YouTube channel was suspended on January 12, 2021, just days after he spoke to supporters before the violence at the Capitol. At the time, YouTube said it was worried about the ongoing potential for violence. The channel wasn’t erased but the suspension stopped him from uploading new videos. That restriction stayed in place for more than two years before being lifted in March 2023.

Trump filed lawsuits against YouTube, Facebook, and Twitter in July 2021, arguing that the bans were unlawful and part of a wider attempt to curb conservative voices online. The YouTube case was slowed by court delays and was administratively closed in 2023. After Trump returned to the White House earlier this year, his legal team moved to reopen the matter, and it eventually led to this week’s agreement.

Other settlements already made

Meta, which owns Facebook, reached its own settlement in January, agreeing to pay $25 million. Most of that sum was directed toward a fund for Trump’s presidential library in Miami. In February, Twitter, now rebranded as X, settled with Trump for around $10 million. Together with YouTube’s deal, the total settlement payments across the three companies come to nearly $60 million.

Political reactions in Washington

The size and nature of the settlements have drawn scrutiny. In August, several Democratic senators sent a letter to Alphabet chief executive Sundar Pichai and YouTube chief executive Neal Mohan. They warned that such deals could create the appearance of political bargaining at a time when the administration is already facing questions over the influence of large technology firms. Their letter suggested that settlements of this kind might even raise concerns under competition and consumer protection law, and possibly federal bribery statutes, if they were seen as linked to policy outcomes.

Where it leaves both sides

For YouTube, the settlement avoids a prolonged court fight and does not require it to change its policies. For Trump, the payout adds to the stream of money flowing from the three companies he sued, while channeling a significant share of it into projects connected with his presidency.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen. 

Read next:

• OpenAI Expands Its Reach: Shopping, Social Video, and Safety Tools for Teens
by Asim BN via Digital Information World

OpenAI Expands Its Reach: Shopping, Social Video, and Safety Tools for Teens

Turning Chat Into a Checkout

OpenAI has begun weaving commerce directly into ChatGPT, a move that positions the chatbot not only as a source of information but also as a new point of sale. U.S. users across free and paid tiers can now buy products from Etsy sellers without leaving a conversation, with access to more than a million Shopify merchants expected in the near future.


The Instant Checkout system allows someone to ask for gift suggestions, browse relevant options surfaced in the chat, and then complete the purchase with a stored card, Apple Pay, or other payment services. The transaction itself runs through the merchant’s own systems, while ChatGPT acts as the go-between, passing on the necessary details securely. Shoppers see the same prices as they would on a merchant’s site, but the seller pays a small transaction fee.

At the core of this feature is the Agentic Commerce Protocol, a standard OpenAI developed alongside Stripe to help AI systems and businesses complete orders together. The company has open sourced the protocol to encourage adoption, offering developers and retailers a straightforward way to connect their systems to agentic shopping flows. While the first release supports single-item transactions, OpenAI says it is working on multi-item carts and regional expansion.
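To make this concrete, below is a minimal, hypothetical sketch (in Python) of what a merchant-side call in such a flow could look like. The endpoint path, field names, and helper are invented for illustration; they are not the published Agentic Commerce Protocol schema, and the sketch assumes the third-party requests package is installed.

# Hypothetical illustration only: "/agentic-orders", the payload fields, and
# the confirmation shape are placeholders, not the real ACP specification.
from dataclasses import dataclass, asdict
import requests

@dataclass
class LineItem:
    sku: str
    quantity: int
    unit_price_cents: int

def submit_agent_checkout(merchant_api: str, api_key: str,
                          item: LineItem, payment_token: str) -> dict:
    # The agent forwards a tokenized payment method; the merchant's own
    # systems charge it and fulfil the order, so the seller stays the
    # merchant of record and the shopper sees the normal listed price.
    payload = {
        "line_items": [asdict(item)],          # first release: single item
        "payment": {"token": payment_token},   # e.g. a Stripe-style token
    }
    resp = requests.post(f"{merchant_api}/agentic-orders",
                         json=payload,
                         headers={"Authorization": f"Bearer {api_key}"},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()  # hypothetical order confirmation from the merchant

Supporting multi-item carts, which OpenAI says is on its roadmap, would mainly mean allowing more than one entry in that line_items list.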

This move pushes the company into direct competition with Google and Amazon, which have long shaped how online retail operates. Search results and marketplace algorithms have historically dictated what products reach customers first. If more people begin to shop through AI conversations, the balance of influence could shift toward the developers of these agents, who then decide what results appear and how fees are structured.

Building a Social Video Platform Around Sora 2

Alongside its e-commerce efforts, OpenAI is preparing to enter the social media space with a new app powered by its Sora 2 video model. Reports suggest the software will resemble TikTok in format, offering a vertical feed of short videos navigated with swipes. What sets it apart is that every clip will be AI-generated rather than uploaded from a user’s camera roll.

The first version of the app is expected to limit clips to around ten seconds, shorter than the typical uploads supported by TikTok. An identity verification tool is also part of the design. If users opt in, the model can generate content featuring their likeness, allowing others to remix their digital persona into different videos. Whenever someone’s likeness is used, a notification will be sent, even if the clip never makes it to the public feed.

To address rights concerns, the system is built to block certain copyrighted materials, although early reports indicate that enforcement may rely on rights holders opting out. The model itself will refuse some prompts entirely, reflecting the growing attention on copyright in generative media.

By focusing on an AI-only feed, OpenAI is testing whether audiences will consume entertainment that has no direct human authorship. The experiment also expands the reach of Sora beyond experimental clips into a social context, creating a new setting for AI-generated culture.

Introducing Parental Controls for ChatGPT

While building out new products for commerce and media, OpenAI is also responding to pressure over safety. The company has rolled out parental controls for ChatGPT across the web, with mobile access coming soon. Parents can now link accounts with their teenagers to set restrictions and monitor use more closely.

The controls cover several areas. Parents can reduce or block access to sexual roleplay, violent scenarios, extreme beauty ideals, and other sensitive themes. They can also disable voice mode, image generation, or the system’s ability to remember past chats. Turning off memory reduces personalization but may strengthen guardrails by preventing conversations from gradually drifting into unsafe territory.

Another option allows parents to stop transcripts from being used to improve OpenAI’s models, giving families greater control over how their data is handled. Quiet hours can be scheduled so that teenagers cannot use ChatGPT during set times of the day. Notifications are also available if the system detects signs of serious safety risks, with alerts sent by email, text message, or app push.

Parents do not gain access to their children’s chat history. The link instead provides settings and alerts, with limited exceptions if urgent risks are identified. Teenagers retain some autonomy as well, since they can disconnect the link, though parents are notified when this happens.

The changes follow months of scrutiny. After a teenager in the U.S. died by suicide earlier this year, with allegations that the chatbot had played a role, lawmakers and grieving families called for stronger protections. OpenAI has since been working on an age-estimation system to better identify underage users and apply safeguards.

A Broader Push Into Consumer Life

These three developments — commerce integration, an AI-driven video app, and parental safety controls — point to OpenAI expanding beyond its roots as a research company into a platform that touches daily life in different ways. Each move carries implications not just for users but also for competitors, regulators, and industries built on existing digital habits.

The shift into shopping challenges the long-standing dominance of search engines and marketplaces in directing retail. The video app tests whether AI can generate an entertainment ecosystem compelling enough to rival human creators. And the safety measures show that the firm cannot avoid the responsibilities that come with shaping how young people interact with intelligent systems.

Together, these initiatives sketch the outline of a company positioning itself at the center of online discovery, culture, and trust. The path forward will depend on how widely these tools are adopted, how well they are managed, and how regulators respond to the risks they introduce. For now, they mark a significant step in OpenAI’s transformation from a lab building models into a consumer-facing force in technology.

Note: This post was edited/created using GenAI tools.

Read next:

• Generative AI Becomes Two-Way Force, Altering Company Marketing and Consumer Product Searches

• Americans Pull Back on Subscriptions as Costs Rise and Habits Shift


by Irfan Ahmad via Digital Information World

Monday, September 29, 2025

Americans Pull Back on Subscriptions as Costs Rise and Habits Shift

A survey from June 2025 shows that Americans are paying for fewer subscriptions than before. The research asked 1,138 adults about their monthly habits and spending. On average, households now have 2.8 subscriptions compared with 4.1 in 2024.

Monthly costs dropped as well. People spent about 59 dollars a month last year. This year, the average is 37 dollars. That means many households are saving roughly 264 dollars over a year by cutting services.

Wasted Spending Remains

Even with fewer accounts, people are still paying for things they don’t use. The study found that each person wastes about 127 dollars a year on unused subscriptions. That figure equals nearly three months of typical subscription spending.

Younger adults are more likely to forget about old accounts. Those between 18 and 24 had fewer subscriptions overall, but they wasted more money compared with older groups. People in the 35 to 54 age bracket held the most subscriptions, while older adults over 55 tended to be more careful.

Streaming Under Pressure

Streaming platforms continue to dominate, with 53 percent of households paying for at least one service. Even so, sign-ups are down from earlier years. Price increases and new rules on password sharing have caused many to cancel.

The shift also changes viewing habits. Around 46 percent of people said they would be more likely to turn to piracy if they cancelled their streaming accounts. That number shows how pricing decisions can influence behaviour outside the platforms themselves.

Food Delivery and Shopping Services

Food delivery subscriptions make up a smaller share. About 15 percent of people pay for passes such as Caviar or Grubhub, though many said they hardly use them.

Retail subscriptions also exist, covering items like cosmetics or household goods. About 10 percent of households keep one of these. The study suggests many of these memberships remain active out of habit rather than frequent use.

Fitness and Dating Apps

Fitness and dating services are less common but still part of the picture. Around 5 percent of people pay for fitness programmes, and 4 percent use dating app subscriptions. Both follow the same pattern seen elsewhere: people sign up, lose interest, and forget to cancel.

Why People Cancel Subscriptions

The survey also asked participants what pushed them to cancel. Rising living costs topped the list, with about a third citing affordability as the main issue. Others said they didn’t use the service enough, dropped accounts after free trials ended, or left because of price hikes. Some found better deals elsewhere, while a smaller group blamed poor customer service.

The survey shows that affordability concerns, price hikes, and low use are driving Americans to cancel subscriptions and be more selective about which services they keep.

Changing Habits Since the Pandemic

During the early 2020s, households signed up quickly for new services. Subscriptions were seen as affordable alternatives to outside entertainment and shopping. That trend is fading. With tighter budgets and higher prices in 2025, many are cancelling accounts and weighing each service more carefully.

The findings show a shift. Subscriptions are still common, but people are more selective. Companies relying on recurring payments face greater pressure to show value, since customers are more willing to walk away.

Note: This post was edited/created using GenAI tools.

Read next:

• Sensitive Data Is Slipping Into AI Prompts, And Few Workers Realize the Risk

• New Research Warns Multitasking Leaves Employees Exposed to Phishing
by Asim BN via Digital Information World

Generative AI Becomes Two-Way Force, Altering Company Marketing and Consumer Product Searches

Generative AI is beginning to change how companies attract attention and how consumers decide what to buy. A survey of more than five hundred business leaders, conducted by the Adobe Express team, showed that 34 percent had already received customer inquiries through AI-generated recommendations. Among those who gained business this way, the sales linked to these referrals made up 10.8 percent of their annual revenue on average. For nearly three in ten, the number of AI-driven leads passed fifty in a single year.

Some companies said these recommendations performed better than their older marketing methods. About 39 percent of leaders using generative AI for lead generation said the conversion rate was higher than traditional channels, while another 38 percent found it about the same. Only a minority saw weaker results. Chat-based platforms were the most widely adopted, with more than four out of five business leaders pointing to them as the primary tool for this type of marketing.

Investment in visibility within AI systems is also starting to take shape. Nearly a quarter of the executives surveyed said they were already spending more than ten percent of their marketing budgets on strategies that improve how their businesses appear in AI-generated suggestions. Technology companies were the most likely to commit at this level, with 38 percent of leaders in that sector saying they had made such allocations. The survey further showed that 48 percent of all business leaders expect to increase their spending on AI optimization in the year ahead, with almost six in ten tech leaders planning the same. Half of the total group believe AI systems will replace traditional search engines as their main source of leads within five years.

On the consumer side, the same survey reached just over five hundred people across different age groups. One in five respondents said they already use generative AI weekly to discover products, and younger consumers were leading the trend, with 28 percent of Gen Z doing so. Electronics were the most influenced sector, with 48 percent of shoppers saying AI affected their buying decisions in that category. Travel came next at 37 percent, followed by fashion at 25 percent.

The technology is not yet perfect. Almost six in ten consumers said the recommendations could be more accurate. Still, a smaller group of 13 percent said they would be willing to pay for premium versions of AI shopping tools that promise more reliable suggestions. Among Gen Z, that willingness rose to a quarter of respondents. Loyalty has also been tested, with 12 percent of consumers saying AI prompts had persuaded them to change brands, and nearly one in four Gen Z shoppers reporting the same.

Traditional search engines remain part of the picture, with 42 percent of respondents saying they still rely on them most often when searching for products. At the same time, 38 percent said AI-generated results felt more personalized, while a third said they were about the same as search. More than half, 52 percent, predicted that AI will overtake traditional search engines for product discovery within the next five years.

Consumer trust shows a mixed picture. A large majority, 63 percent, said they trust human reviews more than AI suggestions, while 30 percent said they trust both equally. For many, traditional search is losing ground because of problems like irrelevant advertising, overwhelming numbers of results, and concerns over authenticity. These frustrations are opening the door for AI to compete more strongly.

What emerges is a picture of businesses and consumers moving in parallel. Companies are adjusting their budgets to appear in AI-generated recommendations, and consumers are beginning to shift their product searches toward these tools. With both sides expecting greater use of generative AI in the years ahead, marketing and shopping habits may look very different by the end of the decade.

Read next: Sensitive Data Is Slipping Into AI Prompts, And Few Workers Realize the Risk
by Irfan Ahmad via Digital Information World

Sunday, September 28, 2025

Sensitive Data Is Slipping Into AI Prompts, And Few Workers Realize the Risk

An employee sits at their desk, rushing to finish a proposal. Instead of drafting from scratch, they paste sections of a contract with client names into ChatGPT. Another worker, struggling with a login issue, types their company credentials into Gemini to “see what happens.” In both cases, sensitive information has just been handed to a third-party AI system.

Unfortunately, this type of credential leak is increasingly common. A new survey from Smallpdf of 1,000 U.S. professionals reveals how often employees are funneling confidential data into generative AI tools. For many organizations, it’s a threat that is rapidly growing inside everyday workflows.

The report highlights critical blind spots. For example, over one in four professionals admit to entering sensitive company information into AI, and nearly one in five confess to submitting actual login credentials. As businesses rush to embrace generative AI, these findings show that security, training, and policy are lagging behind adoption.



The Hidden Risks of Everyday AI Use

The past two years have seen generative AI tools like ChatGPT, Gemini, and Claude move from experimental curiosities to daily staples in the workplace. They’re used to draft emails, summarize meetings, and brainstorm strategy documents. But alongside convenience comes exposure. Professionals are pasting sensitive contracts, client details, and even login credentials into systems they don’t fully understand and that aren’t entirely secure. Many professionals assume prompts are private. Yet, in reality, every entry can be stored, analyzed, or surfaced in ways beyond their control.

According to the research:

  • 26% of professionals have entered sensitive company information into a generative AI tool.
  • 19% have entered actual login credentials, from email accounts to cloud storage and financial systems.
  • 38% of AI users admit to sharing proprietary product details or internal company financials.
  • 17% say they don’t remove or anonymize sensitive details before entering prompts.
  • Nearly 1 in 10 confess to lying to their employer about how they use AI at work.

Leakage of sensitive information to AI is a widespread and growing concern. With over three-quarters of U.S. professionals using AI tools at least weekly, the line between efficiency and exposure has blurred. As adoption accelerates, organizations are learning that the true risks are unfolding inside everyday prompts.

When Your Prompts Become the Leak Surface

One of the most alarming aspects of this trend is that everyday employees are pasting sensitive material into AI chats. Contracts with real client names, internal financials, and passwords are routinely dropped into tools that may feel private but aren’t.

What looks like harmless productivity can turn into data exposure at scale. The survey underscores the pattern: 26% of professionals admit to entering sensitive company information into AI tools, 19% have entered actual login credentials, and 17% don’t bother to anonymize details before they prompt. Many also misunderstand how these systems work, as 24% believe prompts remain private, and 75% say they’d still use AI even if every prompt were permanently stored.

The trust employees place in familiar interfaces like chat boxes, browser extensions, and built-in copilots has become a new attack surface. Without clear policies and training, convenience is becoming the newest attack vector, and routine prompts are becoming the breach.

Prompt Hygiene: The Achilles’ Heel

Most workplaces embraced generative AI before they built guardrails for it. That gap is where sensitive data slips out.

The survey reveals:

  • 19% of professionals have entered actual login credentials into a generative AI tool.
  • Of those, 47% entered a personal email, 43% a work email, 25% a cloud-storage login, and 18% a bank or financial account.
  • 17% don’t remove or anonymize sensitive details before prompting.
  • 24% believe their AI prompts are private, and 75% say they’d still use AI even if every prompt were permanently stored.
  • 70% report no formal training on safe AI use, and 44% say their employer has no AI policy.

Traditional data-loss defenses weren’t built to monitor chat prompts in real time. Yet many organizations remain stuck, held back by policy gaps, training deficits, and trust in tools that feel safe but aren’t.

The Readiness Gap

Awareness is rising. Preparation isn’t. That’s the most troubling theme in the findings.

Just as AI use becomes routine, many basics are missing:

  • 70% of workers report no formal training on safe AI use.
  • 44% say their employer has no official AI policy; 12% aren’t sure, and 7% haven’t read the policy they do have.
  • About 1 in 10 professionals have little to no confidence they can use AI without breaking rules or risking data.
  • 5% have already faced a warning or disciplinary action for workplace AI use.
  • 8% admit to lying about their AI use, and 7% used ChatGPT after being told not to.

This readiness gap is procedural and cultural. Policies lag behind practice, training lags behind demand, and trust in “helpful” tools is outpacing understanding of their risks. This is leaving employees anxious, inconsistent, and exposed just as AI becomes embedded in everyday work.

A Better Path Forward: From Ad-Hoc to Accountable

What does adapting to the prompt-leak problem look like? It starts with reframing AI use as a governed, privacy-first workflow. Treat every prompt like data in motion and design controls around it.

That could include prompt-level guardrails that block credentials, client names, and financials by default, along with auto-redaction or anonymization before text reaches external models. Enterprise controls also ought to be prioritized over consumer chat apps: SSO, tenant isolation, retention switched off by default, and DLP set to scan for PII and IP in real time. Lastly, context-aware approvals can flag sensitive actions (e.g., summarizing contracts or uploading internal financials) and require additional validation or manager sign-off.
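As a minimal sketch of the auto-redaction idea, the Python snippet below strips likely credentials and email addresses from a prompt before it leaves the company boundary. The regex rules are deliberately simple and illustrative; real DLP tooling uses far richer detectors.

import re

# Illustrative patterns only; production DLP would cover many more data types.
REDACTION_RULES = {
    "EMAIL":    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PASSWORD": re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"),
    "CARD":     re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact_prompt(text: str):
    # Replace likely secrets with placeholders and report what was found,
    # so risky prompts can be logged or routed for approval.
    findings = []
    for label, pattern in REDACTION_RULES.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, findings

clean, hits = redact_prompt("Login is admin@example.com, password: hunter2")
print(clean)  # Login is [EMAIL REDACTED], [PASSWORD REDACTED]
print(hits)   # ['EMAIL', 'PASSWORD']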

Altogether, these controls point to a larger imperative: restructuring ownership so AI risk isn’t siloed. A cross-functional “AI governance guild” (e.g., security, legal, IT, and business leads) should co-own policies, training, and exception handling. Meanwhile, teams can pair AI with secure document workflows (redaction, watermarking, access controls). Distributing responsibility is quickly becoming essential for tools that evolve too quickly for linear, after-the-fact reviews.

A Problem of Technology and Trust

The damage isn’t limited to leaks or fines. It reaches into client confidence, data integrity, and long-term brand equity. The findings point to a different kind of churn: workers who assume prompts are private, leaders who haven’t set boundaries, and customers who recoil when their details show up in the wrong place. Routine AI use can feel like a privacy violation in slow motion when policies lag behind practice.

AI risk exploits people’s certainty as much as it exploits software. Often, people stop trusting systems and companies when a friendly chat box stores contract clauses or a “helpful” assistant accepts passwords without warning. That trust is far harder to rebuild than any stack you can refactor. Once it’s gone, every login, form, and document share starts from a deficit.

Why Most Organizations Will Stay Exposed

If the dangers are so obvious, why do so many teams remain unprepared?

The data points to three overlapping blockers:

  • Policy vacuum and training deficit. With 44% reporting no official AI policy and 70% receiving no formal training, employees default to improvisation in tools that feel safe but aren’t.
  • Misplaced trust and poor prompt hygiene. Beliefs that prompts are private (24%), combined with weak redaction habits (17% don’t anonymize) and stubborn convenience (75% would use AI even if prompts were permanently stored), keep risky behaviors entrenched.
  • Fragmented ownership and legacy workflows. AI use spreads across teams without clear governance, while document practices (contracts, financials, credentials) remain outside DLP and access controls, making copy-paste the path of least resistance.

These aren’t trivial obstacles, but they are solvable. As the costs of ungoverned AI mount, the price of inaction is climbing faster than most leaders expect.

Looking Ahead

The future of workplace AI will be defined by how quickly organizations shift from casual prompting to governed, privacy-first workflows. Leaders must move beyond ad-hoc guardrails and redesign how sensitive information is handled at the moment of prompt by treating every entry as data in motion, subject to redaction, routing, and audit.

At the root, leaders will be increasingly engaged in rethinking “productivity” in a world where contract snippets, client names, and credentials can be pasted into systems that store everything by default.

This also means resourcing the change. Give security, legal, and IT the mandate and budget to implement enterprise controls over consumer chat apps, deploy DLP that scans prompts, and roll out training that raises baseline literacy for every role. Asking teams to be safer with the same tools and no policy is how leaks become norms.

The story Smallpdf’s data tells is urgent: AI is already embedded in daily work, but the safeguards are not. The question now is whether organizations will modernize governance and prompt hygiene, or keep playing by pre-AI rules while sensitive details keep slipping through the chat box.

Methodology: This analysis draws on a September 2025 survey commissioned by Smallpdf of 1,000 full-time U.S. professionals across industries, job levels, and demographics, designed to understand how workers use generative AI and where sensitive information may be exposed in prompts and document workflows. Responses covered behaviors (e.g., anonymization habits, credential sharing), policy awareness, training, and tool usage frequency to illuminate risk patterns in everyday AI-assisted tasks. 

Read next:

• New Research Warns Multitasking Leaves Employees Exposed to Phishing

• People More Willing to Cheat When AI Handles Their Tasks


by Irfan Ahmad via Digital Information World

Saturday, September 27, 2025

New Research Warns Multitasking Leaves Employees Exposed to Phishing

Workers often switch between emails, meetings, and documents during the day. A study from the University at Albany shows that this constant juggling can reduce attention and make phishing attacks more effective. The research, published in the European Journal of Information Systems, connects heavy mental load with higher chances of missing signs of fraudulent messages.

Phishing emails remain one of the most common tools for cybercriminals. They aim to steal personal details, account credentials, or money. According to Valimail, around 3.4 billion phishing messages are sent every day. IBM estimates that an average incident costs businesses close to $5 million. The findings highlight how small drops in user awareness can translate into major financial risks.

Testing the Effect of Cognitive Load

The study involved close to 1,000 participants. Researchers asked them to complete email reviews while managing different levels of memory tasks. Results showed that when participants carried heavier mental loads, their ability to spot phishing attempts declined sharply. When the mental demand was lighter, accuracy improved.

The experiments suggest that memory and attention play a critical role in phishing detection. If workers are already focusing on difficult tasks, they may fail to notice details such as odd addresses or suspicious links. Divided attention reduces the level of scrutiny people apply to their inbox.

Role of Simple Reminders

The research also tested whether short prompts could help. A brief reminder before checking emails improved performance. Participants became more cautious when they were told that phishing attempts might be present. These reminders did not remove the effect of multitasking, but they reduced the impact.

Messages framed around rewards, such as offers or prizes, were the hardest to resist. People were more likely to believe them unless prompted to take care. In contrast, messages framed as threats, such as warnings about account lockouts, triggered more natural caution even without a prompt.

Training and Realistic Conditions

Many security training programs assume that workers are focused when phishing occurs. The study challenges that assumption. Real working conditions often include noise, interruptions, and simultaneous tasks. The findings suggest that training should reflect these distractions to prepare employees for realistic risks.

Simulated exercises with competing demands may help staff build habits that remain effective under pressure. Without this approach, lessons may not hold up when workers return to busy environments.

Practical Steps for Organizations

The authors highlight several measures that can reduce exposure to phishing:

  • Introduce short alerts in email systems to encourage caution before clicking
  • Design training that includes real-world distractions
  • Teach staff how scammers use both threats and rewards to influence decisions

These steps reflect the idea that people are more vulnerable when attention is stretched thin. A momentary lapse can create an opening for attackers.

Financial Stakes

The cost of a phishing-related breach continues to rise. IBM estimates the average expense at nearly $5 million. Even small improvements in awareness can save companies large sums. Technology filters out many threats, but attackers continue to rely on human error because it cannot be fully automated away.

Shifting the Focus in Cybersecurity

The study shows why understanding human limits is central to defense. Multitasking changes how people judge information. Recognizing this effect can guide organizations in building stronger safeguards. Attention is a finite resource, and in digital workplaces it often gets divided.

The research offers a practical message: protecting information requires more than filters or policies. It requires systems and training that reflect how people actually work. When staff are busy, reminders and context-aware support can help them avoid costly mistakes.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: YouTube’s Policy Shift Brings Relief for Creators Facing Strict Ad Rules
by Web Desk via Digital Information World

Meta Launches Paid Ad-Free Option for Facebook and Instagram in the UK

Meta is adding a subscription in the UK that lets people use Facebook and Instagram without ads. The rollout follows months of talks with the Information Commissioner’s Office, which pressed the company to give users a clear choice over data use and targeted advertising.

Prices and Accounts

On the web, the subscription is £2.99 a month. On iOS and Android it is £3.99 because of fees charged by Apple and Google. The fee covers all accounts linked through Meta’s Accounts Center. Extra accounts can be added for £2 a month on the web or £3 on mobile.

Oversight and Regulation

The ICO said the new model brings Meta closer to UK data protection rules. People can either keep using the platforms with personalised ads or pay to remove them. The regulator said it will keep an eye on how the change works in practice.

Meta is taking a different route in the UK compared with the European Union. In Europe, regulators fined the company €200m this year for failing to offer a lighter version of targeted ads. Subscription prices in the EU are also higher, starting at more than six euros a month.

Legal and Policy Background

The UK launch comes after Meta settled a case with campaigner Tanya O’Carroll, who argued her rights were breached when the company refused to stop using her data for advertising. Since then Meta has explored ways to give users an option to opt out, with the subscription being the result.

Legal analysts note that the ICO’s stance shows a split from the European Commission. In their view, the UK approach leans toward supporting business growth while still requiring some level of consumer protection.

Business Impact

Meta still defends targeted advertising as the foundation of its free services. The company says ads help people find products and give businesses an affordable way to reach customers. In 2024, its ad systems were linked to billions of pounds in economic activity and hundreds of thousands of jobs in Britain.

Now the choice is left with users, who can decide if an ad-free feed is worth the monthly cost.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: UN Report Puts Global Tech Firms Under Spotlight for Links to Unlawful Israeli Settlements
by Irfan Ahmad via Digital Information World

Friday, September 26, 2025

UN Report Puts Global Tech Firms Under Spotlight for Links to Unlawful Israeli Settlements

The United Nations has updated its public database of businesses linked to Israeli settlements in the occupied West Bank. The new list identifies 158 companies. Some are local, but others are international names in the technology and travel sectors.

Travel Platforms Named

Several global booking platforms appear in the update, including Airbnb, Booking.com, Expedia, and TripAdvisor. These firms host and process reservations for accommodation inside illegal settlements. By doing so, they provide visibility and revenue to properties that the UN and the International Court of Justice consider unlawful.

The UN’s rights office said companies are expected to ensure their services do not contribute to rights abuses. The focus on high-profile travel platforms signals that digital intermediaries are part of the settlement economy, not only local construction or banking firms.

Telecom and Security Links

The database also highlights technology used in infrastructure and surveillance. Telecom operators Bezeq, Partner Communications, Hot Mobile, and Cellcom were included for supplying digital services to settlement areas. Motorola Solutions, along with its Israeli subsidiary, was listed for providing equipment used in security and monitoring systems.

Such companies form the digital backbone of settlements. Their networks and devices support daily operations and surveillance in disputed areas, which places them directly inside the scope of the UN’s assessment.

International Spread

Most of the companies listed are Israeli, but several come from abroad. Heidelberg Materials of Germany, which supplies building products, was included. Firms registered in Canada, China, France, Luxembourg, the Netherlands, Spain, the United Kingdom, and the United States also appear. Seven companies named in previous updates were removed after evidence showed they were no longer active in settlement-linked activities.

The database will continue to grow. More than 300 other firms remain under review, and further updates are expected.

Legal Context

The update comes after a 2024 advisory opinion from the International Court of Justice. Judges found that Israel’s settlement policies amounted to annexation of occupied territory and violated the Palestinian right to self-determination. The court also stated that states and businesses must avoid supporting activities that maintain such settlements.

The UN Human Rights Office applied its standard methodology, based on international business and human rights principles. The database does not provide a legal judgment, but it identifies companies where evidence showed involvement in one or more of ten specified activities linked to settlements.

Reputational Risk

For global firms, the consequences are reputational as much as legal. Travel platforms operate in highly visible markets where consumer perception matters. Telecom and surveillance firms may be less exposed to direct customer choice, but their role in building settlement infrastructure makes them subject to international scrutiny.

Other Sectors

While technology names stand out, many of the listed companies belong to construction, real estate, banking, mining, and retail. Together, these businesses sustain daily life in settlements. The UN says firms in such contexts have a duty to carry out due diligence, prevent harm, and offer remedies when their activities are linked to abuses.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Privacy Nightmare: AI Turns Social Media Into Data-Hungry Giants, Study Warns

• Do Customers Prefer People or Machines? Study Shows the Answer Shifts with Context
by Irfan Ahmad via Digital Information World

Privacy Nightmare: AI Turns Social Media Into Data-Hungry Giants, Study Warns

Large language models and machine learning are still new inventions by historical standards, but in everyday culture they already feel familiar. Collectively known as “AI,” these technologies have opened a Pandora’s Box of privacy concerns.

If AI is old-hat, then social media is ancient history. Yet the privacy risks associated with social media usage have only grown since the first modern social media platforms hit the scene around 20 years ago. These risks saw a stepwise increase in recent years, as so-called AI models began to be integrated into and trained on social media platforms.

Researchers at Incogni have prepared a new social media platform privacy ranking for 2025, expanding their criteria to include LLM and so-called generative-AI training concerns. This year’s ranking also stands out for taking a more nuanced approach than previous studies, expanding its scope to the end user’s experience in gathering and analyzing privacy-related information.

The study examined the top 15 social media platforms by monthly user count and ranked them according to 14 criteria across 6 categories:

  • AI integration and training
  • Privacy-related regulatory transgressions
  • Data collection
  • User control and consent
  • Transparency
  • User-friendliness

The results were appropriately weighted and combined to generate an overall privacy ranking, ordering the platforms according to the extent to which they pose a privacy risk to their users.
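The weighting step can be pictured with a toy Python sketch like the one below. The category weights and scores are invented for illustration; Incogni’s actual criteria, scaling, and weights are set out in its published methodology.

# Toy example: combine per-category risk scores (0 = best, 1 = worst) into a
# single figure and sort platforms by it. All numbers here are made up.
CATEGORY_WEIGHTS = {
    "ai_training": 0.25,
    "regulatory_transgressions": 0.15,
    "data_collection": 0.20,
    "user_control_consent": 0.15,
    "transparency": 0.15,
    "user_friendliness": 0.10,
}

def overall_risk(scores: dict) -> float:
    return sum(CATEGORY_WEIGHTS[c] * scores[c] for c in CATEGORY_WEIGHTS)

platforms = {
    "Platform A": {"ai_training": 0.9, "regulatory_transgressions": 0.8,
                   "data_collection": 0.9, "user_control_consent": 0.7,
                   "transparency": 0.6, "user_friendliness": 0.5},
    "Platform B": {"ai_training": 0.2, "regulatory_transgressions": 0.1,
                   "data_collection": 0.6, "user_control_consent": 0.3,
                   "transparency": 0.4, "user_friendliness": 0.3},
}

for name, s in sorted(platforms.items(), key=lambda kv: overall_risk(kv[1])):
    print(f"{name}: {overall_risk(s):.2f}")  # lower score = lower privacy risk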

Somewhat predictably, Meta’s offerings (Facebook, WhatsApp, Instagram, and Facebook Messenger) and ByteDance’s TikTok round out the bottom of the ranking. Surprisingly, though, less popular platforms like Discord, Pinterest, and Quora fared relatively well.

How AI Training Transformed Social Media Privacy Risks: Inside Incogni’s 2025 Ranking

As nice as it is to have an overall ranking like this, it’s the details of analysis that prove most useful in making informed decisions about which platform to trust, if any. For example, Pinterest, positioned as highly as it is overall, might not be the best option for a user who’s particularly concerned about data collection and sharing, as this is an area in which Pinterest performed worst of all.

By contrast, the correlation between a platform’s overall position and its performance in the “AI and personal data” category is generally much stronger. This category covers whether a platform reserves for itself the right to train its own or other entities’ so-called AI on user data and whether it offers its users a mechanism for opting out of such data usage.

Besides introducing an AI-related category of assessment criteria, the researchers added a subjective dimension to their analysis. Subjective, but nonetheless quantified (via, for example, the Dale-Chall readability formula): the “user friendliness and accessibility” criteria capture how difficult a user is likely to find it to parse the relevant privacy policy documentation, as well as how many discrete steps a user would need to perform to delete their account.
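For readers curious how a readability formula turns policy text into a number, here is a small sketch assuming the third-party textstat Python package (pip install textstat); the policy excerpt is invented for illustration and is not taken from any platform’s actual documents.

import textstat

policy_excerpt = (
    "We may share your personal information with affiliates, service "
    "providers, and other third parties for the purposes described herein, "
    "including to develop, train, and improve our machine learning models."
)

score = textstat.dale_chall_readability_score(policy_excerpt)
print(f"Dale-Chall score: {score:.1f}")
# As a rough guide, scores around 8 correspond to high-school reading level,
# while higher scores indicate text that assumes college-level readers.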

This is an important part of analysis as it helps to ground the study in a typical user’s perspective. Concepts like “user friendliness” and “accessibility” depend heavily on the extent to which actual users can reasonably be expected to gather and process the information they need to make an informed decision. A privacy policy may well contain all the right information, but if it’s impenetrable to a reader with even a college education, then it fails to perform its core function, communicating salient details to an average user.


Darius Belejevas, Head of Incogni, had this to say:

There are no mainstream social media platforms that could, by any stretch of the imagination, be considered privacy-respecting. That said, social media platforms that respect users’ privacy do exist: Mastodon, PixelFed, and the ActivityPub protocol that allows them to federate are great examples, as are projects like Nostr and Matrix. But these platforms all share a common challenge: low uptake among everyday users. In other words: the network effect.

Continuing:

The reality is that people want the connection and distraction that mainstream social media platforms promise. Making privacy safeguards a desirable selling point is one thing we can all do to sway the market towards a more user-friendly future. The first step to doing that is understanding the privacy risks associated with those platforms as they are now.

This study brings to the fore a phenomenon that affects many aspects of this sometimes nebulous concept of “privacy”: its increasing and accelerating complication. As new as the personal-data exploitation boom of the early 2000s was, it now, in retrospect, looks like a decidedly simpler time. Going far beyond harvesting and analyzing personal data as it’s entered into and generated through interactions with these platforms, these companies are now following users around the web, surveilling their devices, and using anything they can find or infer to train various “AI” models.

All this represents a twofold widening of the privacy-risk landscape. On one hand, the streams of personal data flowing into these social media platforms have increased in throughput as they’ve multiplied in number. On the other, the streams of raw, processed, and inferred personal data leaving the same platforms have bifurcated time and again and spread far and wide.

Social media platforms are no longer limited to user interactions when it comes to satisfying their seemingly bottomless desire for personal information. Once they have a user’s data, they no longer limit themselves to exploiting it for marketing purposes. Data is sold to and bought from data brokers, disseminated through LLM outputs, and put to a far greater variety of uses... all often without the user’s informed consent and sometimes even contrary to their expressed wishes.

Studies like the one that resulted in Incogni’s social media privacy ranking are both a way to get the lay of this rapidly evolving landscape and a roadmap for choosing those routes that lead to a brighter future.

The full analysis (including public dataset) can be found here.


by Irfan Ahmad via Digital Information World

OpenAI Introduces ChatGPT Pulse, a Paid Feature That Automates Personalized Briefings

OpenAI has introduced ChatGPT Pulse, a new tool that produces daily personalized reports. The feature is only available to Pro subscribers, who pay $200 a month, and is part of the company’s effort to make ChatGPT work more like an assistant than a chatbot.

How it works

Pulse runs mostly overnight. It processes a user’s chat history, memory settings, and feedback, then compiles a set of five to ten cards the next morning. These cards can include news updates, reminders, or suggestions based on personal context. Each card links to a full report, and users can ask ChatGPT questions about the content.


The feature also works with connected apps such as Gmail and Google Calendar. When switched on, Pulse can highlight important emails or prepare a daily agenda. OpenAI says these integrations are off by default, and users can control how much data is shared.

From Tasks to Pulse

An earlier experiment called Tasks let users set reminders, such as getting news at a specific time. Pulse expands on that idea by running automatically, without waiting for a manual request. OpenAI executives describe it as the next stage in building assistants that can anticipate needs.

Why it is limited to Pro

Pulse requires heavy computing power, which is why it sits behind the Pro subscription. OpenAI has said it is short on server capacity and is working with Oracle and SoftBank to expand its data centers. The company wants to release the feature more widely, starting with Plus subscribers, once it becomes more efficient.

What it shows

Examples shown by OpenAI include sports roundups, travel itineraries, family activity ideas, and restaurant suggestions tailored to dietary preferences. The system can also prepare drafts such as meeting agendas or gift reminders.

Pulse is designed to stop after presenting a limited set of cards. The company says this choice is deliberate, to avoid the constant scrolling pattern of social media feeds.

Looking ahead

For now, Pulse is aimed at individual users, but the company sees it as a step toward more capable AI agents. Future versions could handle tasks such as making bookings or drafting emails for approval, though those features remain in early development.

Other startups are exploring similar tools, including Huxe, which comes from the team behind Google’s NotebookLM. Analysts say the market is still open, as most AI agents today rely on prompts rather than working proactively.

OpenAI stresses that Pulse remains experimental and optional. Its success will depend on whether users find enough value to justify its high subscription cost.

Notes: This post was edited/created using GenAI tools.

Read next: Trump Signs Off on TikTok Deal, But Key Details Remain Unsettled


by Irfan Ahmad via Digital Information World

Thursday, September 25, 2025

Microsoft Ends Israeli Military Unit’s Access to Cloud and AI Services Used in Palestinian Surveillance

Microsoft has withdrawn access to some of its cloud and artificial intelligence services from a unit of the Israeli military after evidence emerged that its technology had been central to a mass surveillance program targeting Palestinians in Gaza and the West Bank.

The decision follows months of scrutiny triggered by investigative reports that revealed how the military’s intelligence wing, Unit 8200, was storing and processing enormous volumes of civilian communications through Microsoft’s Azure platform.

Surveillance Program and Scale

The program relied on the interception of millions of Palestinian phone calls each day. Intelligence officers could capture, replay, and analyze conversations with the help of AI-driven tools hosted on Microsoft’s infrastructure. Sources described the system as capable of handling an immense flow of information, with internal slogans pointing to the goal of recording nearly a million calls per hour.

According to documents cited in investigations, the collected material reached several thousand terabytes in scale and was initially stored in a Microsoft data center located in the Netherlands. That arrangement gave Israeli intelligence officers near-limitless access to analyze the material, with applications ranging from general monitoring of daily life in the occupied territories to the identification of potential targets in Gaza.

Corporate Response and Internal Pressure

Microsoft’s decision came after an independent review ordered earlier this year to assess whether its services were being misused. The company concluded that a military client had violated its rules by using Azure infrastructure for the systematic surveillance of a civilian population. Employees and investors had also raised concerns about the firm’s role in providing technology for military operations, particularly as the humanitarian toll of the Genocide in Gaza has escalated.

The decision was relayed to Israel’s Ministry of Defense in recent days, with Microsoft informing officials that subscriptions linked to Unit 8200 would be terminated. The measures include revoking access to certain cloud storage capabilities and restricting the use of AI-powered services. The company stressed that its global policy forbids enabling mass civilian surveillance and that this principle applies across all regions where it operates.

Data Relocation and Alternative Providers

After the initial reporting earlier this summer, Unit 8200 began transferring large portions of stored communications out of Microsoft’s European servers. Intelligence sources indicated that the data, estimated at thousands of terabytes, was moved to alternative infrastructure, with Amazon Web Services named as a potential new host. Amazon has not publicly commented on whether it has agreed to manage the repository.

The relocation underscored the sensitive nature of hosting military surveillance data on foreign commercial platforms, raising questions within Israel about the risks of relying on overseas providers for operations tied to national security.

Historical Ties and Earlier Reviews

Collaboration between Microsoft and the Israeli military intensified in recent years. In 2021, company executives met with senior commanders of Unit 8200 to discuss technical cooperation, including the creation of a segregated environment within Azure to handle intelligence workloads. Those arrangements were later examined by Microsoft after internal leaks suggested their scale.

An earlier review carried out in mid-2024 had initially cleared the company, with investigators saying they found no proof that Azure tools were being used to harm civilians. However, subsequent evidence gathered by reporters and advocacy groups contradicted those findings, prompting a second inquiry that resulted in this week’s termination.

Reaction from Activists and Workforce

The revelations sparked widespread protests from Microsoft staff as well as campaign groups critical of the company’s ties to Israel’s military. Demonstrations were staged both at US headquarters and at European sites, with a worker-led initiative calling itself “No Azure for Apartheid” pushing for a full severance of contracts with the Israeli defense sector.

Some employees also faced disciplinary action after staging direct protests inside company offices. Organizers described Microsoft’s latest move as a step forward but argued that it addressed only a fraction of the firm’s relationship with Israel’s defense establishment, since other contracts remain in place.

Critics argue that Microsoft’s actions reveal a deep moral failure. They note that the company has never condemned Israel’s genocide in Gaza, even while its technology was used to support surveillance tied to military operations there. Nor has it apologized for enabling that system or acknowledged that the employees who protested were standing on the right side of history. Instead, it protected contracts and avoided accountability. Activists say this silence shows a corporation unwilling to choose between right and wrong, exposing a culture where profit outweighs morality. For many, the only meaningful response is to boycott Microsoft and other firms that empower such actions until greed and complicity give way to a new ethic that values human life over commercial gain.

Broader Context and Implications

The decision marks the first known case of a major US technology company suspending services previously provided to the Israeli military since the beginning of the genocide in Gaza. It comes against the backdrop of international criticism over the humanitarian crisis in the territory, where tens of thousands of Palestinian civilians have been killed during nearly two years of bombardment and siege.

Legal experts and human rights monitors have noted that the surveillance project illustrates the degree to which advanced cloud infrastructure from American companies has been integrated into military campaigns. For Microsoft, the move represents both a corporate governance decision and a response to reputational risks, as it seeks to demonstrate consistency in applying its own standards.

Ongoing Reviews

Microsoft has said that its inquiry is ongoing and that additional measures may follow depending on new findings. The company emphasized that the investigation did not involve examining customer data directly but was based on internal records, correspondence, and contractual details. Senior executives also acknowledged that earlier assessments may have been incomplete, partly because of limited transparency from staff working on the Israeli contracts.

While Microsoft’s wider commercial agreements with Israel remain intact, the suspension of specific services linked to Unit 8200 highlights a shift in how global technology firms are forced to balance commercial interests, ethical guidelines, and mounting pressure from employees and civil society. The long-term outcome may depend on whether other cloud providers face similar scrutiny over their role in hosting sensitive military operations.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next

• AI’s Sources of Truth: What Chatbot Citations Reveal About the Future of Health Information

• Why Parental Control Apps Like AirDroid Are Essential in Today’s Digital Landscape

• 5G Networks Show Stability but Still Struggle to Beat 4G
by Irfan Ahmad via Digital Information World

AI’s Sources of Truth: What Chatbot Citations Reveal About the Future of Health Information

Large language models (LLMs) have rapidly shifted from experimental tools to everyday advisors. For millions of people, asking AI chatbots such as ChatGPT about a migraine or autoimmune disorder feels as natural as typing a query into Google. But instead of returning a list of links, these systems summarize and cite information, raising a pressing question: Where exactly do these chatbots get their medical knowledge?

A new study, AI’s Sources of Truth: How Chatbots Cite Health Information, analyzed 5,472 citations generated by the four leading web-enabled models: ChatGPT, Claude, Gemini, and Perplexity. The findings show both encouraging signs of reliability and some concerning blind spots. More importantly, they suggest how our relationship with healthcare information is being rewritten by AI systems.

The Concentrated Core of AI’s Health Sources

When chatbots answer health questions, their citations are surprisingly concentrated. The most frequently cited domain across all models was PubMed Central, a free archive of biomedical research, which appeared 385 times in the sample. AI systems currently lean heavily on peer-reviewed research that’s openly available.

Rank Website Total mentions
1 pmc.ncbi.nlm.nih.gov 385
2 my.clevelandclinic.org 174
3 www.mayoclinic.org 163
4 www.ncbi.nlm.nih.gov 150
5 www.sciencedirect.com 93

Close behind were some of the internet’s most trusted health websites. The Cleveland Clinic’s patient information portal was cited 174 times, and the Mayo Clinic’s site 163 times. Another top source was the NIH’s National Center for Biotechnology Information (NCBI) site, with 150 mentions. Together, these top-ranked domains show that chatbots gravitate toward established, credible medical knowledge.


Overall, nearly one in three citations (30.7%) in the study came from health media sites. About 23% of references were traced to commercial or affiliate sites (like corporate blogs, product pages, or other pages with a marketing slant). Another roughly 23% were from academic research sources. The chatbots as a group seem to favor accessible, consumer-friendly explanations of health topics. Traditional news articles made up only about 3.7% of citations, and social media or user-generated content only 1.6%. Mainstream journalism and personal anecdotes thus barely register in the bots’ answers.
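
To make the mechanics of such a breakdown concrete, the short Python sketch below tallies cited domains and category shares from a list of citation records. The sample records, field names, and category labels here are hypothetical placeholders for illustration, not the study’s actual dataset or pipeline.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical citation records; the real study analyzed 5,472 citations
# produced by ChatGPT, Claude, Gemini, and Perplexity.
citations = [
    {"url": "https://pmc.ncbi.nlm.nih.gov/articles/PMC123/", "category": "academic"},
    {"url": "https://www.mayoclinic.org/diseases-conditions/migraine", "category": "health media"},
    {"url": "https://my.clevelandclinic.org/health/diseases/epilepsy", "category": "health media"},
    {"url": "https://example-supplements.com/blog/vitamin-d", "category": "commercial"},
]

# Tally the most frequently cited domains, as in the domain ranking above.
domain_counts = Counter(urlparse(c["url"]).netloc for c in citations)
for domain, count in domain_counts.most_common(5):
    print(f"{domain}: {count}")

# Compute each category's share of all citations (health media, academic, etc.).
category_counts = Counter(c["category"] for c in citations)
total = sum(category_counts.values())
for category, count in category_counts.most_common():
    print(f"{category}: {count / total:.1%}")
```

Run over the full 5,472-citation sample, a tally along these lines would yield the domain ranking and percentage shares reported above.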

Fresh, Up-to-Date Information in Answers

When it comes to how current the information is, the chatbots show a strong bias toward recent material. Nearly two-thirds of all cited sources were published in either 2024 or 2025. In fact, the single most common publication year among the citations was 2025, accounting for about 40% of all references. Going further back in time, the number of citations drops off dramatically.

This recency bias likely reflects both the design of the bots (some have browsing enabled to find current info) and a built-in preference for newer, more relevant data. If you ask about a medical treatment or emerging health issue, the chatbots are inclined to cite something from the last year or two, rather than a decades-old paper. It is a reassuring habit given how quickly medical consensus can change.

Different Chatbots, Different Source Preferences

The most interesting insight from the study is how each AI model has its own style in sourcing information. While all four chatbots broadly favored authoritative, recent, open-access material, the mix of sources varied by platform.


For example, ChatGPT and Claude showed the strongest preference for highly authoritative domains. Around 68% of all citations from ChatGPT came from domains with the highest domain authority rankings (like DR 81–100 on Ahrefs), and Claude was similar at 67.4%. In comparison, Google’s Gemini and Perplexity were a bit less top-heavy: about 56–58% of their citations were from these elite top-rated sites. Gemini and Perplexity dipped more into mid-tier sources (for instance, websites that are reputable but not the absolute top of the internet’s authority food chain), and Perplexity in particular ventured the furthest down the credibility ladder. The study notes that Perplexity cited the largest share of low-authority websites (3.3% of its sources were from domains in the lowest credibility tier).


Looking at content categories: ChatGPT tended to cite health media outlets the most, with 35.8% of its references coming from sites like Mayo Clinic, WebMD, Cleveland Clinic, etc. About 23% of ChatGPT’s citations were academic papers or journals, meaning it still included a fair amount of hard science but leaned more toward those consumer health explainers. Claude, by contrast, was more evenly split, roughly 29.7% health media and 28.9% academic sources, essentially balancing between easy-to-read guides and original research.

Gemini stood out by citing government and NGO sources far more than the others. Nearly a quarter (24.9%) of Gemini’s citations were from official public health sites or nonprofit health organizations. Meanwhile, Perplexity was the real outlier. It’s the only model where commercial content was the number-one source category, making up 30.5% of its citations. Perplexity also cited social or user-generated content more than any other bot. This chatbot is a bit more likely to throw in a Reddit thread, a Quora answer, or a YouTube video as part of an answer.

The Future of Health Search

The shift from Google-style search to AI-powered health assistants is as much behavioral as technological. Instead of wading through a swamp of links, users now get tailored explanations, neatly cited, with a bias toward accessibility and recency.
  1. Trust is being redefined. People may start trusting AI models as much as, if not more than, traditional search engines. Yet each model’s sourcing bias means users could receive subtly different “truths.”
  2. Paywalled research is at risk of invisibility. If LLMs overwhelmingly favor open-access content, cutting-edge but gated science could be sidelined from public discourse.
  3. Media narratives may shape science. With 59% of citations coming from summaries and health media, the interpreters of science could become more influential than the researchers themselves.
  4. Transparency matters. That LLMs cite live, working links is a step toward accountability, but users must still validate the credibility and intent of those sources.
Read next: 5G Networks Show Stability but Still Struggle to Beat 4G
by Irfan Ahmad via Digital Information World

Instagram Crosses 3 Billion Users as Growth Reshapes Meta’s Social Platforms

Instagram has surpassed 3 billion monthly active users, reaching one of the biggest milestones in its history and placing the service alongside Facebook and WhatsApp at the top of Meta’s global portfolio.

A Decade of Expansion

The platform’s rise has been steady and unusually consistent. In 2013 Instagram counted around 130 million monthly users. Within a year it had nearly doubled to 250 million, then rose to 400 million in 2015 and 545 million in 2016. By 2017 the app had attracted 800 million people, and in 2018 it passed 1.06 billion. That figure kept climbing: 1.25 billion in 2019, 1.49 billion in 2020, 1.76 billion in 2021, and just over 2 billion in 2022. Growth slowed slightly to 2.14 billion in 2023 and 2.27 billion in 2024, before accelerating sharply to 3 billion in 2025.

From 2013 to 2025, Instagram grew from about 130 million to 3 billion monthly users, an average of roughly 20 million new users added each month.
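
As a quick sanity check on that average, the arithmetic can be reproduced from the two endpoints in the quarterly table further down. The MAU figures are the ones reported in this article; the snippet is only an illustrative sketch, not part of the original analysis.

```python
# Rough check of the average monthly growth figure, using the article's
# reported Instagram MAU for Q3 2013 and Q3 2025 (in millions of users).
start_mau = 130      # Q3 2013
end_mau = 3000       # Q3 2025
months = 12 * 12     # Q3 2013 to Q3 2025 spans 12 years, i.e. 144 months

avg_monthly_gain = (end_mau - start_mau) / months
print(f"{avg_monthly_gain:.1f} million new users per month")  # ~19.9, i.e. roughly 20
```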

Business and Product Drivers

Meta, then known as Facebook, acquired Instagram in 2012 for $1 billion, a deal that initially raised questions because the app had little revenue and limited reach. Since then, Instagram has become central to Meta’s business. Analysts estimate it will generate more than half of Meta’s advertising revenue in the United States this year.

The strongest growth has come from short-form video, direct messaging, and recommendation-based feeds. Reels, launched in 2020, positioned Instagram against TikTok and YouTube Shorts. Algorithmic recommendations have also boosted activity, though they have sparked frustration among users who prefer content from friends over suggested clips.

Upcoming Adjustments for Users

Meta is now testing new controls that will allow people to fine-tune recommendations. Early prototypes show users being able to add or remove topic categories, changing which Reels or suggested posts they see. The navigation bar will also be updated to place direct messaging at the center of the experience, with the upload button moved elsewhere. These adjustments reflect Instagram’s shift toward private interaction and discovery, rather than its origins as a photo feed.

Policy and Regulation Pressures

Growth has not come without scrutiny. In April 2024 Meta stopped reporting quarterly active user numbers for each app and began focusing instead on overall engagement across its platforms. The company said in July that 3.48 billion people use its family of services daily. At the same time, regulators have continued to examine Meta’s acquisitions of WhatsApp and Instagram. A U.S. antitrust trial has revealed internal discussions showing concern inside Meta that Instagram’s popularity was eroding Facebook’s position.

The company has also faced pressure on child safety. In 2024 Instagram introduced new privacy defaults, making all accounts for under-18 users private unless changed manually. The update was aimed at building safer digital spaces for younger people while meeting regulatory expectations.

Meta’s Balancing Act

Instagram now joins Facebook and WhatsApp in exceeding 3 billion monthly users, but its cultural weight is different. Instagram has become the most influential of Meta’s apps among younger people, while Facebook continues to lose ground with that audience. The uneven momentum has forced Meta to maintain balance: supporting Instagram’s expansion while trying to revive interest in its original network.

With steady user growth over more than a decade and new tools shaping how people interact with content, Instagram has become one of the pillars of Meta’s global reach, as well as a key driver of its future strategy.

Quarter Year MAU (Instagram)
Q3 2013 130 Million
Q3 2014 250 Million
Q3 2015 400 Million
Q3 2016 545 Million
Q3 2017 800 Million
Q3 2018 1060 Million
Q3 2019 1255 Million
Q3 2020 1490 Million
Q3 2021 1765 Million
Q3 2022 2010 Million
Q3 2023 2145 Million
Q3 2024 2270 Million
Q3 2025 3000 Million

Notes: This post was edited/created using GenAI tools.

Read next:

• Making Instagram Content Work: A Closer Look at What Each Post Type Really Does

• YouTube Adds Options to Hide End Screens
by Irfan Ahmad via Digital Information World