Tuesday, September 30, 2025

2.3 Billion Hungry, One Billion Tonnes Wasted: The Paradox Defining Global Food Security

How much food is lost before it reaches people

Roughly 30 percent of food produced around the world never gets eaten. About 13 percent disappears between harvest and supermarket shelves. That is where poor storage, wrong harvesting times, bad weather, and weak transport systems take their toll. Fruits and vegetables see the biggest losses, with more than a quarter gone before sale. Meat and animal products lose around 14 percent.

H/T: Statista

Sub-Saharan Africa faces the steepest challenge, with 23 percent lost early in the chain. Asia loses 14 percent, Latin America and the Caribbean 13 percent, North America 10 percent, and Europe only 6 percent. These gaps reflect differences in infrastructure and handling practices.

Waste after the food is sold

Once food reaches shops or homes, another 17 to 19 percent is discarded. UNEP data puts the total waste in 2022 at just over one billion tonnes. This includes 631 million tonnes from households, 290 million from food service, and 131 million from retailers.

Via: Statista

Households are by far the largest source. On average, each person throws out 79 kilograms of food every year. Restaurants and catering add 36 kilograms per capita, while retailers discard 17 kilograms.

Not only rich countries

Waste used to be seen as a problem of wealthier economies. That is no longer the case. Figures show little difference between high-income, upper-middle-income, and lower-middle-income groups. The annual per capita range is narrow, from 81 kilograms in rich countries to 86–88 kilograms in middle-income ones. Reliable data is still missing for low-income countries, though some in Eastern Europe and the former Soviet Union report relatively low levels.

Country totals

The largest numbers come from the world’s most populous states. China discards 108.7 million tonnes each year. India wastes 78.2 million tonnes. The United States accounts for 24.7 million tonnes, Brazil for 20.3 million, and Indonesia for 14.7 million.

Source: Statista

Germany throws out 6.5 million tonnes, while Russia reports 4.8 million. Smaller nations contribute less in total, but their per-person figures can be high. Brazil stands at 94 kilograms per head, Ghana at 84. The Philippines is at the other end of the spectrum with 26 kilograms per person.

Food insecurity and emissions

While food is wasted on such a scale, 2.3 billion people were estimated to face moderate or severe food insecurity in 2024. At the same time, waste is linked to 8 to 10 percent of global greenhouse gas emissions and uses land equal to almost 30 percent of farmland worldwide. The economic loss is valued at more than one trillion dollars a year.

The world’s population is projected to grow from 8.2 billion now to 9.7 billion by 2050. Cutting waste is one of the most direct ways to improve supply without expanding farmland or increasing pressure on ecosystems.

Global efforts

In 2019, the UN declared September 29 as the International Day of Awareness of Food Loss and Waste. Since then, the FAO has tracked supply chain losses, but the figures have barely shifted. Waste data remains patchy and inconsistent, though some individual countries report progress.

Household behavior is harder to shift. Habits, urban lifestyles, and limited food planning skills remain the main drivers. That is why households continue to account for most of the waste, regardless of income level.

H/T: UNEP Food Waste Index Report 2024

Notes: This post was edited/created using GenAI tools.

Read next: AI Answers in Crisis: Reliable at the Extremes, Risky in the Middle


by Irfan Ahmad via Digital Information World

YouTube to pay $24.5 million in Trump settlement over suspended channel

YouTube has agreed to a $24.5 million settlement in the case brought by President Donald Trump after the platform blocked him from posting videos in the aftermath of the Capitol riot in January 2021. The deal, filed in a California federal court, ends years of back and forth between Trump’s lawyers and the Google-owned company, and it brings to a close the last of three lawsuits Trump launched against major social media firms over his account bans.

How the money is divided

Alphabet, YouTube’s parent, will transfer $24.5 million into the trust account of Trump’s lawyers. Of that sum, $22 million is set aside for Trump himself, though the filing shows he has directed the payment to the Trust for the National Mall. The trust is tied not just to preservation of monuments in Washington but also to the large ballroom being planned at the White House. That ballroom is projected to take up 90,000 square feet and is estimated to cost around $200 million, with the paperwork describing it as expected to be completed well before Trump’s current term ends in January 2029.

The balance of the settlement, $2.5 million, will be distributed to the other plaintiffs in the case. These include the American Conservative Union, which organizes the CPAC conference, and author Naomi Wolf, both of whom joined Trump’s legal action in 2021 when the platforms first cut off his accounts.

Settlement terms

The filing makes clear that YouTube and Alphabet are not admitting liability. The agreement specifies that the settlement and dismissal cannot be used as evidence against the company in any other legal or administrative action. The dismissal is “with prejudice,” which means the case cannot be filed again. It was entered under Rule 41 of the Federal Rules of Civil Procedure, a provision that allows cases to be closed voluntarily when both sides sign off.

How the case unfolded

Trump’s YouTube channel was suspended on January 12, 2021, just days after he spoke to supporters before the violence at the Capitol. At the time, YouTube said it was worried about the ongoing potential for violence. The channel wasn’t erased but the suspension stopped him from uploading new videos. That restriction stayed in place for more than two years before being lifted in March 2023.

Trump filed lawsuits against YouTube, Facebook, and Twitter in July 2021, arguing that the bans were unlawful and part of a wider attempt to curb conservative voices online. The YouTube case was slowed by court delays and was administratively closed in 2023. After Trump returned to the White House earlier this year, his legal team moved to reopen the matter, and it eventually led to this week’s agreement.

Other settlements already made

Meta, which owns Facebook, reached its own settlement in January, agreeing to pay $25 million. Most of that sum was directed toward a fund for Trump’s presidential library in Miami. In February, Twitter, now rebranded as X, settled with Trump for around $10 million. Together with YouTube’s deal, the total settlement payments across the three companies come to nearly $60 million.

Political reactions in Washington

The size and nature of the settlements have drawn scrutiny. In August, several Democratic senators sent a letter to Alphabet chief executive Sundar Pichai and YouTube chief executive Neal Mohan. They warned that such deals could create the appearance of political bargaining at a time when the administration is already facing questions over the influence of large technology firms. Their letter suggested that settlements of this kind might even raise concerns under competition and consumer protection law, and possibly federal bribery statutes, if they were seen as linked to policy outcomes.

Where it leaves both sides

For YouTube, the settlement avoids a prolonged court fight and does not require it to change its policies. For Trump, the payout adds to the stream of money flowing from the three companies he sued, while channeling a significant share of it into projects connected with his presidency.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen. 

Read next:

• OpenAI Expands Its Reach: Shopping, Social Video, and Safety Tools for Teens
by Asim BN via Digital Information World

OpenAI Expands Its Reach: Shopping, Social Video, and Safety Tools for Teens

Turning Chat Into a Checkout

OpenAI has begun weaving commerce directly into ChatGPT, a move that positions the chatbot not only as a source of information but also as a new point of sale. U.S. users across free and paid tiers can now buy products from Etsy sellers without leaving a conversation, with access to more than a million Shopify merchants expected in the near future.


The Instant Checkout system allows someone to ask for gift suggestions, browse relevant options surfaced in the chat, and then complete the purchase with a stored card, Apple Pay, or other payment services. The transaction itself runs through the merchant’s own systems, while ChatGPT acts as the go-between, passing on the necessary details securely. Shoppers see the same prices as they would on a merchant’s site, but the seller pays a small transaction fee.

At the core of this feature is the Agentic Commerce Protocol, a standard OpenAI developed alongside Stripe to help AI systems and businesses complete orders together. The company has open-sourced the protocol to encourage adoption, offering developers and retailers a straightforward way to connect their systems to agentic shopping flows. While the first release supports single-item transactions, OpenAI says it is working on multi-item carts and regional expansion.
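To make the flow concrete, here is a minimal sketch of what a merchant-side order payload in an agentic checkout might look like. The field names are hypothetical illustrations, not the published protocol schema.

```python
import json

# Hypothetical payload shape for illustration only; the real Agentic Commerce
# Protocol schema lives in the spec OpenAI and Stripe have open-sourced.
order_request = {
    "line_items": [
        {"product_id": "etsy-12345", "quantity": 1}  # first release: single-item orders
    ],
    "buyer": {
        "name": "A. Shopper",
        "shipping_address": "123 Example St, Springfield",
    },
    # The agent relays a tokenized payment reference (stored card, Apple Pay, etc.);
    # the merchant's own systems run the actual charge.
    "payment": {"token": "tok_hypothetical_abc", "method": "card"},
}

# In practice the agent would POST this to the merchant's order endpoint and
# relay the confirmation back into the conversation.
print(json.dumps(order_request, indent=2))
```

The division of labor matters: the merchant remains the seller of record and handles the transaction, while the chat agent only passes along the details securely.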

This move pushes the company into direct competition with Google and Amazon, which have long shaped how online retail operates. Search results and marketplace algorithms have historically dictated what products reach customers first. If more people begin to shop through AI conversations, the balance of influence could shift toward the developers of these agents, who then decide what results appear and how fees are structured.

Building a Social Video Platform Around Sora 2

Alongside its e-commerce efforts, OpenAI is preparing to enter the social media space with a new app powered by its Sora 2 video model. Reports suggest the software will resemble TikTok in format, offering a vertical feed of short videos navigated with swipes. What sets it apart is that every clip will be AI-generated rather than uploaded from a user’s camera roll.

The first version of the app is expected to limit clips to around ten seconds, shorter than the typical uploads supported by TikTok. An identity verification tool is also part of the design. If users opt in, the model can generate content featuring their likeness, allowing others to remix their digital persona into different videos. Whenever someone’s likeness is used, a notification will be sent, even if the clip never makes it to the public feed.

To address rights concerns, the system is built to block certain copyrighted materials, although early reports indicate that enforcement may rely on rights holders opting out. The model itself will refuse some prompts entirely, reflecting the growing attention on copyright in generative media.

By focusing on an AI-only feed, OpenAI is testing whether audiences will consume entertainment that has no direct human authorship. The experiment also expands the reach of Sora beyond experimental clips into a social context, creating a new setting for AI-generated culture.

Introducing Parental Controls for ChatGPT

While building out new products for commerce and media, OpenAI is also responding to pressure over safety. The company has rolled out parental controls for ChatGPT across the web, with mobile access coming soon. Parents can now link accounts with their teenagers to set restrictions and monitor use more closely.

The controls cover several areas. Parents can reduce or block access to sexual roleplay, violent scenarios, extreme beauty ideals, and other sensitive themes. They can also disable voice mode, image generation, or the system’s ability to remember past chats. Turning off memory reduces personalization but may strengthen guardrails by preventing conversations from gradually drifting into unsafe territory.

Another option allows parents to stop transcripts from being used to improve OpenAI’s models, giving families greater control over how their data is handled. Quiet hours can be scheduled so that teenagers cannot use ChatGPT during set times of the day. Notifications are also available if the system detects signs of serious safety risks, with alerts sent by email, text message, or app push.

Parents do not gain access to their children’s chat history. The link instead provides settings and alerts, with limited exceptions if urgent risks are identified. Teenagers retain some autonomy as well, since they can disconnect the link, though parents are notified when this happens.

The changes follow months of scrutiny. After a teenager in the U.S. died by suicide earlier this year, with allegations that the chatbot had played a role, lawmakers and grieving families called for stronger protections. OpenAI has since been working on an age-estimation system to better identify underage users and apply safeguards.

A Broader Push Into Consumer Life

These three developments — commerce integration, an AI-driven video app, and parental safety controls — point to OpenAI expanding beyond its roots as a research company into a platform that touches daily life in different ways. Each move carries implications not just for users but also for competitors, regulators, and industries built on existing digital habits.

The shift into shopping challenges the long-standing dominance of search engines and marketplaces in directing retail. The video app tests whether AI can generate an entertainment ecosystem compelling enough to rival human creators. And the safety measures show that the firm cannot avoid the responsibilities that come with shaping how young people interact with intelligent systems.

Together, these initiatives sketch the outline of a company positioning itself at the center of online discovery, culture, and trust. The path forward will depend on how widely these tools are adopted, how well they are managed, and how regulators respond to the risks they introduce. For now, they mark a significant step in OpenAI’s transformation from a lab building models into a consumer-facing force in technology.

Note: This post was edited/created using GenAI tools.

Read next:

• Generative AI Becomes Two-Way Force, Altering Company Marketing and Consumer Product Searches

• Americans Pull Back on Subscriptions as Costs Rise and Habits Shift


by Irfan Ahmad via Digital Information World

Monday, September 29, 2025

Americans Pull Back on Subscriptions as Costs Rise and Habits Shift

A survey from June 2025 shows that Americans are paying for fewer subscriptions than before. The research asked 1,138 adults about their monthly habits and spending. On average, households now have 2.8 subscriptions compared with 4.1 in 2024.

Monthly costs dropped as well. People spent about 59 dollars a month last year. This year, the average is 37 dollars. That means many households are saving roughly 264 dollars over a year by cutting services.
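A quick back-of-the-envelope check shows where that savings figure comes from:

```python
# Back-of-the-envelope check on the survey's savings figure.
monthly_2024 = 59  # average monthly subscription spend last year, in dollars
monthly_2025 = 37  # average this year
annual_savings = (monthly_2024 - monthly_2025) * 12
print(annual_savings)  # 264 dollars a year
```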

Wasted Spending Remains

Even with fewer accounts, people are still paying for things they don’t use. The study found that each person wastes about 127 dollars a year on unused subscriptions. At this year’s 37-dollar monthly average, that is more than three months of typical subscription spending.
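The same arithmetic puts the waste in perspective:

```python
# The wasted amount expressed in months of this year's average spend.
wasted_per_year = 127  # dollars paid for unused subscriptions
monthly_2025 = 37      # average monthly spend this year
print(round(wasted_per_year / monthly_2025, 1))  # 3.4 months
```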

Younger adults are more likely to forget about old accounts. Those between 18 and 24 had fewer subscriptions overall, but they wasted more money compared with older groups. People in the 35 to 54 age bracket held the most subscriptions, while older adults over 55 tended to be more careful.

Streaming Under Pressure

Streaming platforms continue to dominate, with 53 percent of households paying for at least one service. Even so, sign-ups are down from earlier years. Price increases and new rules on password sharing have caused many to cancel.

The shift also changes viewing habits. Around 46 percent of people said they would be more likely to turn to piracy if they cancelled their streaming accounts. That number shows how pricing decisions can influence behaviour outside the platforms themselves.

Food Delivery and Shopping Services

Food delivery subscriptions make up a smaller share. About 15 percent of people pay for passes such as Caviar or Grubhub, though many said they hardly use them.

Retail subscriptions also exist, covering items like cosmetics or household goods. About 10 percent of households keep one of these. The study suggests many of these memberships remain active out of habit rather than frequent use.

Fitness and Dating Apps

Fitness and dating services are less common but still part of the picture. Around 5 percent of people pay for fitness programmes, and 4 percent use dating app subscriptions. Both follow the same pattern seen elsewhere: people sign up, lose interest, and forget to cancel.

Why People Cancel Subscriptions

The survey also asked participants what pushed them to cancel. Rising living costs topped the list, with about a third citing affordability as the main issue. Others said they didn’t use the service enough, dropped accounts after free trials ended, or left because of price hikes. Some found better deals elsewhere, while a smaller group blamed poor customer service.


Changing Habits Since the Pandemic

During the early 2020s, households signed up quickly for new services. Subscriptions were seen as affordable alternatives to outside entertainment and shopping. That trend is fading. With tighter budgets and higher prices in 2025, many are cancelling accounts and weighing each service more carefully.

The findings show a shift. Subscriptions are still common, but people are more selective. Companies relying on recurring payments face greater pressure to show value, since customers are more willing to walk away.

Note: This post was edited/created using GenAI tools.

Read next:

• Sensitive Data Is Slipping Into AI Prompts, And Few Workers Realize the Risk

• New Research Warns Multitasking Leaves Employees Exposed to Phishing
by Asim BN via Digital Information World

Generative AI Becomes Two-Way Force, Altering Company Marketing and Consumer Product Searches

Generative AI is beginning to change how companies attract attention and how consumers decide what to buy. A survey of more than five hundred business leaders, conducted by the Adobe Express team, showed that 34 percent had already received customer inquiries through AI-generated recommendations. Among those who gained business this way, sales linked to these referrals made up 10.8 percent of their annual revenue on average. For nearly three in ten, the number of AI-driven leads passed fifty in a single year.

Some companies said these recommendations performed better than their older marketing methods. About 39 percent of leaders using generative AI for lead generation said the conversion rate was higher than traditional channels, while another 38 percent found it about the same. Only a minority saw weaker results. Chat-based platforms were the most widely adopted, with more than four out of five business leaders pointing to them as the primary tool for this type of marketing.

Investment in visibility within AI systems is also starting to take shape. Nearly a quarter of the executives surveyed said they were already spending more than ten percent of their marketing budgets on strategies that improve how their businesses appear in AI-generated suggestions. Technology companies were the most likely to commit at this level, with 38 percent of leaders in that sector saying they had made such allocations. The survey further showed that 48 percent of all business leaders expect to increase their spending on AI optimization in the year ahead, with almost six in ten tech leaders planning the same. Half of the total group believe AI systems will replace traditional search engines as their main source of leads within five years.

On the consumer side, the same survey reached just over five hundred people across different age groups. One in five respondents said they already use generative AI weekly to discover products, and younger consumers were leading the trend, with 28 percent of Gen Z doing so. Electronics were the most influenced sector, with 48 percent of shoppers saying AI affected their buying decisions in that category. Travel came next at 37 percent, followed by fashion at 25 percent.

The technology is not yet perfect. Almost six in ten consumers said the recommendations could be more accurate. Still, a smaller group of 13 percent said they would be willing to pay for premium versions of AI shopping tools that promise more reliable suggestions. Among Gen Z, that willingness rose to a quarter of respondents. Loyalty has also been tested, with 12 percent of consumers saying AI prompts had persuaded them to change brands, and nearly one in four Gen Z shoppers reporting the same.

Traditional search engines remain part of the picture, with 42 percent of respondents saying they still rely on them most often when searching for products. At the same time, 38 percent said AI-generated results felt more personalized, while a third said they were about the same as search. More than half, 52 percent, predicted that AI will overtake traditional search engines for product discovery within the next five years.

Consumer trust shows a mixed picture. A large majority, 63 percent, said they trust human reviews more than AI suggestions, while 30 percent said they trust both equally. For many, traditional search is losing ground because of problems like irrelevant advertising, overwhelming numbers of results, and concerns over authenticity. These frustrations are opening the door for AI to compete more strongly.

What emerges is a picture of businesses and consumers moving in parallel. Companies are adjusting their budgets to appear in AI-generated recommendations, and consumers are beginning to shift their product searches toward these tools. With both sides expecting greater use of generative AI in the years ahead, marketing and shopping habits may look very different by the end of the decade.

Read next: Sensitive Data Is Slipping Into AI Prompts, And Few Workers Realize the Risk
by Irfan Ahmad via Digital Information World

Sunday, September 28, 2025

Sensitive Data Is Slipping Into AI Prompts, And Few Workers Realize the Risk

An employee sits at their desk, rushing to finish a proposal. Instead of drafting from scratch, they paste sections of a contract with client names into ChatGPT. Another worker, struggling with a login issue, types their company credentials into Gemini to “see what happens.” In both cases, sensitive information has just been handed to a third-party AI system.

Unfortunately, this type of credential leak is increasingly common. A new survey from Smallpdf of 1,000 U.S. professionals reveals how often employees are funneling confidential data into generative AI tools. For many organizations, it’s a threat that is rapidly growing inside everyday workflows.

The report highlights critical blind spots. For example, over one in four professionals admit to entering sensitive company information into AI, and nearly one in five confess to submitting actual login credentials. As businesses rush to embrace generative AI, these findings show that security, training, and policy are lagging behind adoption.

The Hidden Risks of Everyday AI Use

The past two years have seen generative AI tools like ChatGPT, Gemini, and Claude move from experimental curiosities to daily staples in the workplace. They’re used to draft emails, summarize meetings, and brainstorm strategy documents. But alongside convenience comes exposure. Professionals are pasting sensitive contracts, client details, and even login credentials into systems they don’t fully understand and that aren’t entirely secure. Many professionals assume prompts are private; in reality, every entry can be stored, analyzed, or surfaced in ways beyond their control.

According to the research:

  • 26% of professionals have entered sensitive company information into a generative AI tool.
  • 19% have entered actual login credentials, from email accounts to cloud storage and financial systems.
  • 38% of AI users admit to sharing proprietary product details or internal company financials.
  • 17% say they don’t remove or anonymize sensitive details before entering prompts.
  • Nearly 1 in 10 confess to lying to their employer about how they use AI at work.

Leakage of sensitive information to AI tools is a widespread and growing concern. With over three-quarters of U.S. professionals using AI tools at least weekly, the line between efficiency and exposure has blurred. As adoption accelerates, organizations are learning that the true risks are unfolding inside everyday prompts.

When Your Prompts Become the Leak Surface

One of the most alarming aspects of this trend is that everyday employees are pasting sensitive material into AI chats. Contracts with real client names, internal financials, and passwords are routinely dropped into tools that may feel private but aren’t.

What looks like harmless productivity can turn into data exposure at scale. The survey underscores the pattern: 26% of professionals admit to entering sensitive company information into AI tools, 19% have entered actual login credentials, and 17% don’t bother to anonymize details before they prompt. Many also misunderstand how these systems work, as 24% believe prompts remain private, and 75% say they’d still use AI even if every prompt were permanently stored.

The trust employees place in familiar interfaces like chat boxes, browser extensions, and built-in copilots has created a new attack surface. Without clear policies and training, convenience becomes the vector and routine prompts become the breach.

Prompt Hygiene: The Achilles’ Heel

Most workplaces embraced generative AI before they built guardrails for it. That gap is where sensitive data slips out.

The survey reveals:

  • 19% of professionals have entered actual login credentials into a generative AI tool.
  • Of those, 47% entered a personal email, 43% a work email, 25% a cloud-storage login, and 18% a bank or financial account.
  • 17% don’t remove or anonymize sensitive details before prompting.
  • 24% believe their AI prompts are private, and 75% say they’d still use AI even if every prompt were permanently stored.
  • 70% report no formal training on safe AI use, and 44% say their employer has no AI policy.

Traditional data-loss defenses weren’t built to monitor chat prompts in real time. Yet many organizations remain stuck, held back by policy gaps, training deficits, and trust in tools that feel safe but aren’t.

The Readiness Gap

Awareness is rising. Preparation isn’t. That’s the most troubling theme in the findings.

Just as AI use becomes routine, many basics are missing:

  • 70% of workers report no formal training on safe AI use.
  • 44% say their employer has no official AI policy; 12% aren’t sure, and 7% haven’t read the policy they do have.
  • About 1 in 10 professionals have little to no confidence they can use AI without breaking rules or risking data.
  • 5% have already faced a warning or disciplinary action for workplace AI use.
  • 8% admit to lying about their AI use, and 7% used ChatGPT after being told not to.

This readiness gap is procedural and cultural. Policies lag behind practice, training lags behind demand, and trust in “helpful” tools is outpacing understanding of their risks. This is leaving employees anxious, inconsistent, and exposed just as AI becomes embedded in everyday work.

A Better Path Forward: From Ad-Hoc to Accountable

What does adapting to the prompt-leak problem look like? It starts with reframing AI use as a governed, privacy-first workflow. Treat every prompt like data in motion and design controls around it.

That could include leak-resistant guardrails for prompts, with default blocks on credentials, client names, and financials, plus auto-redaction or anonymization before text reaches external models. Enterprise controls ought to be prioritized over consumer chat apps: SSO, tenant isolation, retention switched off by default, and DLP that scans for PII and IP in real time. Finally, context-aware approvals can flag sensitive actions (e.g., summarizing contracts or uploading internal financials) and require additional validation or manager sign-off.
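As an illustration of the auto-redaction idea, the sketch below masks a few common patterns before a prompt is sent out. The patterns are toy examples, not the vetted detectors a real DLP product ships with.

```python
import re

# Toy patterns standing in for the vetted detectors a real DLP product would use.
REDACTION_RULES = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive spans with placeholders before the text
    leaves the organization for an external model."""
    for label, pattern in REDACTION_RULES.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# The email address and key are masked; the request still reads naturally.
print(redact_prompt(
    "Summarize the contract for jane.doe@client.com, key sk-abc123DEF456ghi789"
))
```

The design point is that redaction happens before the text crosses the network boundary, so the external model never sees the raw values.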

Altogether, these controls point to a larger imperative: restructuring ownership so AI risk isn’t siloed. A cross-functional “AI governance guild” (e.g., security, legal, IT, and business leads) should co-own policies, training, and exception handling. Meanwhile, teams can pair AI with secure document workflows (redaction, watermarking, access controls). Distributing responsibility is quickly becoming essential for tools that evolve too quickly for linear, after-the-fact reviews.

A Problem of Technology and Trust

The damage isn’t limited to leaks or fines. It reaches into client confidence, data integrity, and long-term brand equity. The findings point to a different kind of churn: workers who assume prompts are private, leaders who haven’t set boundaries, and customers who recoil when their details show up in the wrong place. Routine AI use can feel like a privacy violation in slow motion when policies lag behind practice.

AI risk exploits both software and misplaced certainty. People stop trusting systems and companies when a friendly chat box stores contract clauses or a “helpful” assistant accepts passwords without warning. That trust is far harder to rebuild than any stack you can refactor. Once it’s gone, every login, form, and document share starts from a deficit.

Why Most Organizations Will Stay Exposed

If the dangers are so obvious, why do so many teams remain unprepared?

The data points to three overlapping blockers:

  • Policy vacuum and training deficit. With 44% reporting no official AI policy and 70% receiving no formal training, employees default to improvisation in tools that feel safe but aren’t.
  • Misplaced trust and poor prompt hygiene. Beliefs that prompts are private (24%), combined with weak redaction habits (17% don’t anonymize) and stubborn convenience (75% would use AI even if prompts were permanently stored), keep risky behaviors entrenched.
  • Fragmented ownership and legacy workflows. AI use spreads across teams without clear governance, while document practices (contracts, financials, credentials) remain outside DLP and access controls, making copy-paste the path of least resistance.

These aren’t trivial obstacles, but they are solvable. As the costs of ungoverned AI mount, the price of inaction is climbing faster than most leaders expect.

Looking Ahead

The future of workplace AI will be defined by how quickly organizations shift from casual prompting to governed, privacy-first workflows. Leaders must move beyond ad-hoc guardrails and redesign how sensitive information is handled at the moment of prompting, treating every entry as data in motion, subject to redaction, routing, and audit.

At the root, leaders will be increasingly engaged in rethinking “productivity” in a world where contract snippets, client names, and credentials can be pasted into systems that store everything by default.

This also means resourcing the change. Give security, legal, and IT the mandate and budget to implement enterprise controls over consumer chat apps, deploy DLP that scans prompts, and roll out training that raises baseline literacy for every role. Asking teams to be safer with the same tools and no policy is how leaks become norms.

The story Smallpdf’s data tells is urgent: AI is already embedded in daily work, but the safeguards are not. The question now is whether organizations will modernize governance and prompt hygiene, or keep playing by pre-AI rules while sensitive details keep slipping through the chat box.

Methodology: This analysis draws on a September 2025 survey commissioned by Smallpdf of 1,000 full-time U.S. professionals across industries, job levels, and demographics, designed to understand how workers use generative AI and where sensitive information may be exposed in prompts and document workflows. Responses covered behaviors (e.g., anonymization habits, credential sharing), policy awareness, training, and tool usage frequency to illuminate risk patterns in everyday AI-assisted tasks. 

Read next:

• New Research Warns Multitasking Leaves Employees Exposed to Phishing

• People More Willing to Cheat When AI Handles Their Tasks


by Irfan Ahmad via Digital Information World

Saturday, September 27, 2025

New Research Warns Multitasking Leaves Employees Exposed to Phishing

Workers often switch between emails, meetings, and documents during the day. A study from the University at Albany shows that this constant juggling can reduce attention and make phishing attacks more effective. The research, published in the European Journal of Information Systems, connects heavy mental load with higher chances of missing signs of fraudulent messages.

Phishing emails remain one of the most common tools for cybercriminals. They aim to steal personal details, account credentials, or money. According to Valimail, around 3.4 billion phishing messages are sent every day. IBM estimates that an average incident costs businesses close to $5 million. The findings highlight how small drops in user awareness can translate into major financial risks.

Testing the Effect of Cognitive Load

The study involved close to 1,000 participants. Researchers asked them to complete email reviews while managing different levels of memory tasks. Results showed that when participants carried heavier mental loads, their ability to spot phishing attempts declined sharply. When the mental demand was lighter, accuracy improved.

The experiments suggest that memory and attention play a critical role in phishing detection. If workers are already focusing on difficult tasks, they may fail to notice details such as odd addresses or suspicious links. Divided attention reduces the level of scrutiny people apply to their inbox.

Role of Simple Reminders

The research also tested whether short prompts could help. A brief reminder before checking emails improved performance. Participants became more cautious when they were told that phishing attempts might be present. These reminders did not remove the effect of multitasking, but they reduced the impact.

Messages framed around rewards, such as offers or prizes, were the hardest to resist. People were more likely to believe them unless prompted to take care. In contrast, messages framed as threats, such as warnings about account lockouts, triggered more natural caution even without a prompt.

Training and Realistic Conditions

Many security training programs assume that workers are focused when phishing occurs. The study challenges that assumption. Real working conditions often include noise, interruptions, and simultaneous tasks. The findings suggest that training should reflect these distractions to prepare employees for realistic risks.

Simulated exercises with competing demands may help staff build habits that remain effective under pressure. Without this approach, lessons may not hold up when workers return to busy environments.

Practical Steps for Organizations

The authors highlight several measures that can reduce exposure to phishing:

  • Introduce short alerts in email systems to encourage caution before clicking (see the sketch after this list)
  • Design training that includes real-world distractions
  • Teach staff how scammers use both threats and rewards to influence decisions
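As a rough illustration of the first measure, a mail pipeline could prepend a caution banner when a message arrives from outside the organization or uses common lure phrasing. This is a toy sketch with made-up patterns and domain, not a production filter.

```python
import re

# Toy lure phrases; production filters rely on maintained threat intelligence.
LURE_PATTERNS = [
    r"verify your account",
    r"urgent action required",
    r"you have won",
]

def add_caution_banner(sender: str, body: str, internal_domain: str = "example.com") -> str:
    """Prepend a short warning when a message looks risky, nudging a busy
    reader to slow down before clicking anything."""
    cues = []
    if not sender.lower().endswith("@" + internal_domain):
        cues.append("external sender")
    if any(re.search(p, body, re.IGNORECASE) for p in LURE_PATTERNS):
        cues.append("wording often used in phishing")
    if not cues:
        return body
    return f"[CAUTION: {', '.join(cues)} - verify before clicking]\n\n{body}"

print(add_caution_banner("prizes@lottery-example.net",
                         "Urgent action required: claim your reward now."))
```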

These steps reflect the idea that people are more vulnerable when attention is stretched thin. A momentary lapse can create an opening for attackers.

Financial Stakes

The cost of a phishing-related breach continues to rise. IBM estimates the average expense at nearly $5 million. Even small improvements in awareness can save companies large sums. Technology filters out many threats, but attackers continue to rely on human error because it cannot be fully automated away.

Shifting the Focus in Cybersecurity

The study shows why understanding human limits is central to defense. Multitasking changes how people judge information. Recognizing this effect can guide organizations in building stronger safeguards. Attention is a finite resource, and in digital workplaces it often gets divided.

The research offers a practical message: protecting information requires more than filters or policies. It requires systems and training that reflect how people actually work. When staff are busy, reminders and context-aware support can help them avoid costly mistakes.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: YouTube’s Policy Shift Brings Relief for Creators Facing Strict Ad Rules
by Web Desk via Digital Information World