Saturday, November 8, 2025

Website Loading Animations Work Best At Mid-Range Speeds, Research Finds

Loading animations work best at mid-range speeds. Stanford researchers tested how fast animations should spin during website waits, and the answer surprised them.

Yu Ding from Stanford's business school got annoyed watching a CNN logo linger on his TV screen. That irritation sparked research into what keeps users engaged when they wait for content to load.

Wait Times Still Plague Digital Experiences

Survey data from 195 people showed 45% left a site or app after hitting unexpected waits. Mobile pages averaged 9-second load times in 2023 despite 90% US broadband coverage.

Geography makes it worse. Websites load five to eight times slower when accessed from China than from their home markets. African download speeds trail global averages substantially. Fewer than half of Latin American households have broadband.

Google found that bumping load time from 1 to 3 seconds increases bounce probability by 32%. Stretch that to 10 seconds and bounce probability jumps 123%. Sites exceeding 3 seconds shed 53% of mobile traffic.

Testing Across Different Speeds

The research team ran experiments with 1,457 participants across three initial studies. Animation speeds ranged from 10,000 milliseconds per rotation (slow) down to 400 milliseconds (fast). Moderate speeds hit 2,000 milliseconds per rotation.

Wait times varied from 7 to 30 seconds depending on the experiment. Devices included both computers and mobile phones.

Results stayed consistent. Moderate speed animations produced shorter perceived wait times than static images, blank screens, slow animations, or fast animations. The pattern held across different animation types and wait durations.

One experiment used a colored wheel animation with a 17-second wait on computers. Another tested square-shaped animations during two 7-second waits on mobile devices. A third tried ring-shaped animations with two 9-second waits on phones.

All three showed the same outcome.
Facebook Campaign Tests Real Clicks

A field test using Facebook ads reached 3,874 users who clicked through to sunscreen information. Each person faced a 20-second wait before the page loaded, with animation speed randomly assigned.

The moderate speed group had 44.5% click-to-landing rates. Static images got 37.5%. Fast animations reached 38.9%.

That means 18.7% more people waited through moderate animations versus static images. Compared to fast animations, moderate speeds beat them by 14.4%.
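The "18.7% more" figure is a relative uplift over the static-image baseline, not a difference in percentage points. A quick sketch confirms the arithmetic (the helper name is ours, not from the study):

```python
def relative_uplift(treatment: float, baseline: float) -> float:
    """Relative increase of `treatment` over `baseline`, as a percentage."""
    return (treatment / baseline - 1) * 100

# Click-to-landing rates reported in the Facebook field test
moderate, static, fast = 44.5, 37.5, 38.9

print(round(relative_uplift(moderate, static), 1))  # 18.7
print(round(relative_uplift(moderate, fast), 1))    # 14.4
```

Note that the absolute gap over static images is only 7 percentage points (44.5 vs 37.5); the 18.7% figure is the relative increase in people who waited through the load.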

Inattention patterns backed this up. People viewing moderate animations clicked away from the browser window at some point during the wait 74% of the time, compared with 86.9% for static images and 79.1% for fast animations.

A separate conversion test invited people to complete a voluntary second survey after finishing an initial study. The second survey required a 30-second wait with no extra payment.

Completion rates: 67.2% for moderate animations, 49.6% for fast, 32.2% for static images.

Why Moderate Speed Works

Fast-moving objects blur when they exceed certain speeds. The human visual system stops processing individual movements, turning rapid motion into streaks, a finding visual perception research established years ago.

Moderate speeds stay distinct. Each rotation remains visible and trackable. This grabs attention without overwhelming the eye.

An experiment with 147 undergraduates confirmed the attention explanation. Students solved 10 math problems while animations played. Those watching moderate-speed animations answered fewer problems correctly (4.69 on average) than people seeing static images (5.62 correct) or fast animations (5.67 correct).

The moderate speed group also reported paying more attention to animations. On a 7-point scale, they scored 5.22 for attention versus 3.24 for static and 4.45 for fast.

Stress levels rose with any animation compared to static images, but didn't differ between moderate and fast speeds. Boredom showed no correlation with animation speed. Motivation stayed flat across all conditions.

Attention drove the whole effect.

Product Ratings Shift Too

A mobile shopping test with 361 university students mimicked Amazon's interface. Students browsed 10 backpacks, viewed details for products they wanted, then picked a favorite. Six randomly selected students would win their chosen backpack.

Each product detail page showed a 7-second animation before loading. Animation speed varied by user.

Students rated products they viewed higher after moderate animations (63.02 on a 100-point scale) compared to static images (58.06) or fast animations (59.88). Products they didn't click to view showed no rating differences across animation speeds, all hovering near 29 points.

The effect only touched products people actively engaged with.

What Sites Actually Use

Research assistants catalogued 100 popular websites. Thirty-two showed nothing during waits. Four used progress bars. Five displayed static text or images. The remaining 59 used repeated animations with average wait times of 5.71 seconds.

Mobile apps leaned more heavily on animations. Out of 59 apps examined, 57 used repeated animations. Average wait times stretched to 11.16 seconds on mobile.

Animation speeds across these sites ranged from 333 milliseconds to 6,161 milliseconds per rotation. Average speed hit 1,219 milliseconds. Most companies picked speeds without testing.
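Expressed as rotations per second, the catalogued speeds can be compared against the anchors tested in the experiments above (400 ms fast, 2,000 ms moderate, 10,000 ms slow per rotation). This is an illustrative conversion; the helper below is ours, not from the paper:

```python
def rotations_per_second(ms_per_rotation: float) -> float:
    """Convert an animation period in milliseconds to rotations per second."""
    return 1000 / ms_per_rotation

# Anchors tested in the study (ms per rotation)
FAST_MS, MODERATE_MS, SLOW_MS = 400, 2_000, 10_000

# Average speed catalogued across 100 popular websites
site_average_ms = 1_219

print(round(rotations_per_second(site_average_ms), 2))  # 0.82
print(round(rotations_per_second(MODERATE_MS), 2))      # 0.5
```

By this measure the average site spinner turns about 0.82 times per second, well faster than the 0.5 rotations per second that performed best in the experiments.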

Data from a chat service company illustrated wait consequences at scale. The firm handles over 1.3 million monthly chats for 4,000+ businesses. Analysis covered 4.53 million chat sessions over eight months.

Wait time before agent connection averaged 13.17 seconds. Every additional 5 seconds of waiting reduced customer engagement. Message sending dropped 1.75%. Activity engagement fell 1.53%. Customers became 8.64% less likely to receive agent messages.

When Speed Stops Mattering

Two scenarios eliminated the animation speed effect entirely.

First, telling users the exact wait duration upfront. When 1,159 participants saw "Please wait for around 9 seconds," animation speed no longer influenced perceived wait time. The effect requires uncertainty about how long the wait will last.

Second, animations combining multiple speeds. Testing with 1,148 users showed that when a fast-rotating shape was paired with a slower color change, the speed effect disappeared. The competing attention cues cancelled out the speed advantage.

Atypical animations did the same thing. While standard circular loading wheels showed strong speed effects across 1,135 users, unusual animations like mixing bowls with stirring motions made speed irrelevant. Novelty captured attention regardless of pace.

Post-tests confirmed these animations scored as significantly more uncommon than typical circular designs.

Network Congestion Won't Disappear

AI expansion stresses networks further. High-performance computing systems with heavy message passing already see execution times rise 40% when networks congest. Technology advances, but so does demand on infrastructure.

The research found no interaction effects between animation speed and either age or gender across experiments. Effects held consistently across demographics.

Ding and his co-researcher Ellie Kyung from Babson College published findings in the Journal of Consumer Research. They recommend companies test within their own contexts rather than applying universal millisecond targets.

Optimal speeds vary by use case. News sites might need different approaches than shopping platforms. But the core principle applies broadly: animation speed affects click-through rates, conversion rates, and product evaluations in measurable ways.

Most firms ignore this completely or pick speeds arbitrarily. That leaves easy optimization opportunities untapped when implementation costs nothing extra.

Notes: This post was edited/created using GenAI tools.

Read next: ChatGPT and Copilot Lead the Corporate AI Race as Claude, Perplexity, and DeepSeek Lag Behind


by Asim BN via Digital Information World

ChatGPT and Copilot Lead the Corporate AI Race as Claude, Perplexity, and DeepSeek Lag Behind

Generative AI has officially gone mainstream in the corporate world, but the race for enterprise dominance has turned out to be uneven. The latest Wharton–GBK Collective 2025 study shows that while companies are using AI tools more widely than ever, only a few platforms have captured serious ground. ChatGPT and Microsoft’s Copilot now lead business adoption by a wide margin, while Claude, Perplexity, and DeepSeek remain far behind despite their technical promise.

Across three years of tracking, the Wharton Human-AI Research program found that 82% of business leaders now use generative AI at least once a week, up ten points from last year. Nearly half use it every day, a seventeen-point jump in just twelve months. That scale of usage shows how fast AI has shifted from a pilot phase into routine office work. Data analysis, document summarization, and report creation have become the most common tasks. Together they account for over 70% of all reported use cases, a clear sign that generative tools are now embedded into daily workflows rather than isolated experiments.

The tools companies choose tell an even clearer story. ChatGPT leads with 67% of enterprises using it, while Microsoft Copilot follows at 58%, thanks mainly to its tight integration with Office, Teams, and Windows. Google’s Gemini, though improving, stands at 49%. Far lower down the list, Anthropic’s Claude hovers near 18%, roughly the same level as Perplexity and DeepSeek, both struggling to find relevance in large corporate settings.


What makes the difference is not novelty but proximity. Copilot’s integration within Microsoft’s existing software ecosystem gives it an edge that newer entrants cannot yet match. ChatGPT benefits from its early start and brand familiarity, which still carry weight in procurement decisions. By contrast, Claude’s appeal among developers and researchers has not translated into corporate usage. DeepSeek, a relative newcomer with strong open-source credentials, ranks lowest in overall visibility, while Perplexity remains more popular among individual users than formal enterprises.

Beyond usage, spending patterns confirm that AI has become a core investment area. The report shows nearly three-quarters of companies now track structured ROI metrics, measuring profitability, throughput, and productivity. About 74% already report positive returns, and four in five expect measurable gains within two to three years. Budgets reflect that optimism: 88% of executives expect to raise AI spending in the next twelve months, with 62% planning increases of ten percent or more. Tier-one firms with revenues above two billion dollars dominate overall spending, but smaller and mid-sized businesses report faster ROI due to simpler integration.

Industry differences remain sharp. Technology, telecom, and banking continue to lead adoption, each with more than 90% of leaders using AI weekly. Professional services are close behind. Manufacturing and retail trail, at 64% and 72%, despite their wide operational use cases. Retail’s lag is especially notable given its dependence on marketing, logistics, and pricing, areas where AI could easily enhance efficiency.

The shift toward measurable value has changed how firms allocate budgets. On average, 30% of enterprise AI technology spending now goes to internal R&D, signaling that companies are moving beyond off-the-shelf models to build customized tools. Meanwhile, roughly 70% of AI subscriptions are paid directly by employers, often through existing cloud agreements with Microsoft Azure, Google Cloud, or AWS. Seamless integration has become the top factor for IT leaders selecting vendors.

Still, the human side of the equation poses the biggest constraint. While 89% of leaders say AI enhances employee skills, 43% warn that over-reliance could weaken proficiency. Formal training budgets have slipped eight points year over year, and confidence in training as a path to fluency dropped fourteen points. Many organizations have responded by appointing Chief AI Officers (now present in 60% of enterprises) to manage strategy, governance, and workforce adaptation.

Wharton’s data also reveal a cultural divide. Senior executives tend to be more optimistic, with 56% of vice presidents and above believing their organizations are moving faster than peers, compared with 28% of mid-managers who see adoption as slower and more cautious. That perception gap matters because mid-level managers often decide where AI actually gets applied.

After three years of tracking, the report describes the current phase as one of “accountable acceleration.” The experiment era is over. Enterprises have learned what works, budgets are tied to measurable results, and AI usage now spans every major business function. ChatGPT and Copilot sit firmly at the center of this shift, benefiting from scale and integration, while Claude, Perplexity, and DeepSeek face the hard truth that innovation alone doesn’t guarantee adoption.

The pattern echoes earlier waves of enterprise technology: early access and ecosystem fit usually beat raw capability. If 2025 belongs to ChatGPT and Copilot, the next test will be whether corporate builders can turn these tools into lasting productivity systems rather than just convenient assistants.

Notes: This post was edited/created using GenAI tools.

Read next:

• Elon Musk Says AI Is Already Writing the Obituary for Routine Work

• Google Warns of Rising AI-Driven Scams Targeting Users Across Gmail, Play, and Messages
by Irfan Ahmad via Digital Information World

Friday, November 7, 2025

Elon Musk Says AI Is Already Writing the Obituary for Routine Work

Elon Musk has painted a clear picture of how artificial intelligence is transforming the modern office. The changes are not gradual; according to Musk, digital desk jobs are disappearing faster than many realize. While the average worker might still be tied to spreadsheets and emails, AI systems are quietly taking over tasks once considered secure. Analysts note that roles involving repetitive digital work are especially exposed, while positions requiring physical labor or human interaction remain largely intact.

The trend Musk describes is not theoretical. Economists and tech researchers have flagged similar patterns. Entry-level white-collar positions, particularly those centered on data entry, scheduling, or standard reporting, face the greatest pressure. Some projections suggest that up to half of these jobs could vanish within five years if AI adoption accelerates as expected. Physical jobs, from cooking to farming, continue largely untouched because they rely on tasks that machines cannot easily replicate.

Musk likens the pace of change to a “supersonic tsunami,” a metaphor that underscores both the speed and inevitability of AI adoption in office environments. The comparison draws attention to the shock many industries may feel as automation penetrates functions that have relied on human judgment for decades. IT teams, customer service departments, and administrative roles are already seeing AI tools replace hours of routine work in minutes.

Even so, Musk emphasizes that not all work disappears. The shift creates demand for new types of roles, though they differ from traditional positions. Digital skills remain important, but the focus moves from repetition to oversight, problem-solving, and creative input. AI handles the routine calculations, data processing, and report generation, leaving humans to manage exceptions, interpret results, and make strategic decisions. The transition is rapid, but it does not spell the end of employment entirely.

Long-term, Musk envisions a more radical transformation of the economy. In his outlook, AI combined with automation could lead to a scenario where working becomes optional. Resources, wealth, and access to services could reach unprecedented levels, approaching what he describes as a universal high income. This concept goes beyond universal basic income, aiming instead for widespread economic abundance where individuals have freedom to pursue non-work interests while machines manage much of the operational labor.

The implications for businesses are immediate. Companies that adopt AI aggressively may cut costs while maintaining output, but they must also retrain staff for oversight and creative functions. For workers, the warning is clear: digital routine tasks are increasingly replaceable, and adaptation is critical. In sectors like finance, insurance, marketing, and administration, AI-powered software now handles data analysis, report generation, and customer interaction patterns that previously required full-time human employees.

Musk’s perspective aligns with broader industry observations. Tech leaders note that while AI threatens routine work, it also presents opportunities to shift human effort toward more meaningful or complex projects. In practical terms, this means fewer jobs in repetitive desk roles and more positions emphasizing strategy, oversight, and interdisciplinary collaboration. Early adopters of AI in professional services report efficiency gains, sometimes doubling the output of human teams with minimal added personnel.

The debate over AI’s impact on jobs continues, but Musk frames it as both disruptive and transformative. Routine office work, he argues, will not survive in its current form. Those who rely solely on repetitive digital tasks risk obsolescence, while those who integrate AI into decision-making, oversight, and creativity stand to benefit. The message is stark but precise: the digital desk era is fading, and AI is writing its final chapter.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Google Warns of Rising AI-Driven Scams Targeting Users Across Gmail, Play, and Messages
by Asim BN via Digital Information World

Google Warns of Rising AI-Driven Scams Targeting Users Across Gmail, Play, and Messages

Google has released a new global advisory revealing a sharp rise in complex online scams that increasingly rely on artificial intelligence to scale their operations. The company’s Trust and Safety teams have mapped six fast-spreading schemes now exploiting users across Gmail, Google Play, and Messages, marking a shift in how digital fraud evolves with new technology.

Recent data from the Global Anti-Scam Alliance underscores the urgency: more than half of adults worldwide, about 57%, encountered at least one scam in the past year, and nearly a quarter reported losing money. Google analysts note that criminal groups have begun to automate deception using generative models and deepfake-style content to impersonate brands, recruit victims, and steal credentials with remarkable precision.

One of the most active categories involves fake job offers. Scammers create detailed replicas of corporate hiring portals, publish false recruiter profiles, and push phishing links through email ads or social channels. Victims are asked to pay small registration or processing fees or download so-called interview software that hides remote-access malware. Beyond money loss, these scams can lead to identity theft or infiltration of employer networks. Google’s anti-fraud systems now block such impersonation campaigns in Gmail and Messages while reinforcing protections around sign-in verification and scam detection.

Another fast-growing tactic is negative-review extortion. Fraudsters organize mass “review bombing” attacks on business listings, then contact the owners through external messaging platforms, demanding payment to stop the harassment. Google Maps’ enforcement tools now allow merchants to report such activity directly so the company can act faster to remove fake reviews and identify the extortion networks behind them.

AI product impersonation has also become a profitable scam format. Cybercriminals are building fake mobile and browser apps disguised as AI tools that promise free access to premium features or early releases. Once installed, they steal passwords, drain digital wallets, or inject malicious code into the user’s system. Many of these operations rely on malvertising and hijacked social media accounts to spread. Google Play and Chrome have stepped up AI-based scanning to remove these clones and warn users before downloads occur.

Other schemes target users searching for privacy solutions. Fake VPN apps and browser extensions circulate on unverified websites and third-party app stores, often mimicking known security brands. Instead of encryption, they deliver spyware or banking trojans that quietly siphon data from devices. Google Play Protect and Chrome’s Safe Browsing tools have added extra checks to detect harmful permissions and block suspicious installations before they launch.

A newer wave of deception is hitting those who have already been scammed. Fraud recovery operations promise to retrieve lost funds but charge upfront fees, pretending to be legal firms, government agencies, or blockchain experts. Many use AI to generate convincing documents and fake identities. Victims, already in distress, lose more money and risk further data theft. Google has expanded scam warnings inside its phone and messaging systems to prevent users from re-engaging with fraudulent contacts.

The approach intensifies during shopping peaks. Seasonal holiday scams surge near Black Friday and Cyber Monday, using counterfeit storefronts, false discount ads, and fake delivery notifications to capture payment information. Google has added fresh filters against counterfeit listings and phishing campaigns that hijack well-known brand names. Enhanced browser protection on newer Pixel devices now provides local AI-based screening against these seasonal traps.

Across all categories, Google urges users to double-check URLs, avoid sideloading unknown software, and treat “too-good-to-be-true” deals or unsolicited recovery offers with suspicion. With the rising use of AI by criminals, staying alert and informed is becoming as essential as the technology that powers online life itself.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Agencies Shrink, Bots Rise: The Real Story Behind AI’s Creative Coup

• Inside the New AI Slang Era: The Words Defining Tech and Culture in 2025
by Irfan Ahmad via Digital Information World

Agencies Shrink, Bots Rise: The Real Story Behind AI’s Creative Coup

Artificial intelligence has moved from a headline idea to the heartbeat of daily operations in marketing agencies, and the shift is starting to show in the payroll data. A study from Sunup, based on responses from 225 senior marketing and advertising leaders across the United States, paints a clear picture of what’s next: smaller teams, more automation, and a new definition of what creativity looks like inside an agency.

Every agency surveyed confirmed that it already uses AI, and almost six in ten said the technology is now deeply embedded in their workflows. The pattern isn’t just about experimentation anymore; AI is operating at the level of mid-career professionals across copywriting, design, research, and project management. What used to take several people now takes one person and a machine that never clocks out.

That’s creating a sharp slowdown in junior hiring. Nearly half of agencies have already reduced or paused entry-level recruitment, while over half of large and mid-sized firms expect significant headcount cuts within three years. The trend is less severe among small, boutique shops that depend on relationship-driven or creative-first services, but the overall direction is unmistakable: the traditional “ladder” that brought interns into creative roles is disappearing fast.

The effect goes deeper than job numbers. For decades, agency size has been shorthand for credibility; bigger teams meant bigger clients. That measure is losing weight. The Sunup data shows that leaders now prize orchestration over manpower. Agencies are learning that a lean structure combining experienced strategists and advanced AI systems can outperform larger teams built on manual labor. Success is becoming less about how many people work on a campaign and more about how seamlessly human expertise and machine precision interact.

The restructuring is also creating a quiet but visible shift in focus. Agencies that once billed themselves as production engines are repositioning around consultation, strategy, and governance. Roughly one in five firms have already built internal AI task forces to manage adoption and policy, usually led by executives, tech specialists, and heads of strategy or HR. Agencies with these cross-functional teams are notably ahead in embedding AI across daily operations, showing how structure often dictates speed of adaptation.

The hiring landscape is changing, too. Around 75 percent of agencies are now recruiting for AI- or automation-related roles, and half are seeking data scientists or machine learning engineers, jobs that used to sit outside the creative industry. Traditional job titles are being replaced by hybrid ones: creative technologists, AI innovation leads, and brand technologists who bridge art and analytics. The once-hyped "prompt engineer" role, meanwhile, is already fading as firms realize that knowing how to "talk" to AI matters less than knowing how to use it intelligently.

Upskilling has become a survival skill. Larger agencies lean on self-paced learning platforms to keep their staff current, while smaller ones favor collaborative training sessions and live demos. Either way, the lesson is the same: standing still means falling behind. The marketers who thrive will be those who understand both data and design, both storytelling and system logic.

The report closes with a simple truth often lost in hype cycles: this isn’t just an automation story. AI is pushing agencies to redefine their core identity, from labor-heavy organizations into senior-led partners where human judgment still shapes the output. Machines may handle the repetition, but humans still carry the responsibility for direction, tone, and ethics.

As this balance settles, the agency of the future won’t be measured by headcount but by how fluently people and algorithms collaborate. And in that equation, the smartest teams might not be the biggest, just the ones still led by humans who know when to trust the machine and when to question it.

Notes: This post was edited/created using GenAI tools. 

Read next: Inside the New AI Slang Era: The Words Defining Tech and Culture in 2025
by Asim BN via Digital Information World

AI Agents Struggle in Simulated Markets, Easily Fooled by Fake Sellers, Microsoft Study Finds

AI assistants are being trained to handle purchases and digital errands for people, but Microsoft’s latest research shows that these systems remain far from reliable. The company built an experimental platform called Magentic Marketplace to test how modern AI agents behave in a simulated economy. Instead of becoming efficient digital shoppers, many of them made poor choices, got distracted by fake promotions, and sometimes fell for manipulation.

The simulation brought together 100 virtual customers and 300 virtual businesses. On paper, it sounds like a practical way to study real-world digital transactions, where one agent buys food, books, or services from another. Microsoft’s team loaded the environment with models from OpenAI, Google, and several open-source projects, including GPT-4o, GPT-5, Gemini-2.5-Flash, OSS-20b, and Qwen3. Each model acted either as a buyer or seller, negotiating through a controlled online market. The results were revealing.

When agents were asked to order something as simple as a meal or home repair, their decision-making showed deep weaknesses. As the range of available choices grew, performance fell sharply. In one test, GPT-5’s average consumer welfare score dropped from near 2,000 to around 1,100 when exposed to too many options. Gemini-2.5-Flash saw its score decline from about 1,700 to 1,300. Agents that had to navigate long lists or compare hundreds of sellers lost their focus and often settled for “good enough” matches rather than ideal ones.


The study described this as a kind of “paradox of choice.” More options did not mean better results. In many runs, agents reviewed only a small fraction of available businesses, even when hundreds were open for selection. Some models like GPT-4o and GPT-4.1 maintained slightly steadier performance, staying near 1,500 to 1,700 points, but they too struggled when markets became crowded. Claude Sonnet 4’s score collapsed from 1,800 to just 600 under heavier loads.

Another problem emerged around speed. In this artificial economy, selling agents that responded first dominated the market. Microsoft measured a 10 to 30 times advantage for early replies compared to slower ones, regardless of product quality. This behavior hints at a potential flaw in future automated markets, where quick manipulation could outweigh fair competition. Businesses might end up competing on who responds fastest instead of who offers the best value.

Manipulation also proved alarmingly effective. Microsoft’s researchers tested six different persuasion and hacking strategies, ranging from false awards and fabricated reviews to prompt injection attacks that tried to rewrite an agent’s instructions. The results varied by model. Gemini-2.5-Flash resisted most soft manipulations but gave in to strong prompt injections. GPT-4o and some open-source models like Qwen3-4b were far more gullible, sending payments to fake businesses after reading false claims about certifications or customer numbers.

Even simple psychological tricks worked. When presented with phrases that invoked authority or fear, such as fake safety warnings or references to “award-winning” restaurants, several agents switched their choices. These behaviors highlight major security concerns for future AI marketplaces, where automated systems may end up trading with malicious agents that pretend to be trustworthy.

The researchers also noticed bias in how agents selected from search results. Some open-source models tended to pick businesses listed at the top or bottom of a page, showing positional bias unrelated to quality. Across all models, there was a pattern known as “first-offer acceptance.” Most agents picked the first reasonable offer they received instead of comparing multiple ones. GPT-4o and GPT-5 displayed this same bias, even though they performed better overall.

When taken together, the findings show that these AI agents are not yet dependable for financial decisions. The technology still requires close human supervision. Without it, users could end up with wrong orders, biased selections, or even security breaches. Microsoft’s team acknowledged that their simulation represented static conditions, while real markets constantly change. Agents and users learn over time, but such adaptation adds another layer of complexity that has not yet been solved.

The Magentic Marketplace experiment gives a glimpse of what might come next in the evolution of digital economies. It shows that even advanced models can collapse under too much data, misjudge credibility, or act impulsively when overloaded. For now, these systems are better suited as assistants than autonomous decision-makers.

Microsoft’s open-source release of the Magentic Marketplace offers an important testing ground for developers and researchers. Before AI agents are allowed to manage money, they will need stronger reasoning, improved security filters, and mechanisms to handle complex human-like markets. The results make one thing clear: automation alone cannot guarantee intelligence. Real trust will depend on oversight, transparency, and the ability of these systems to resist persuasion as well as they handle logic.

Notes: This post was edited/created using GenAI tools.

Read next: Your Favorite AI Might Be Cheating Its Exams, Researchers Warn
by Asim BN via Digital Information World

15 Billion Scam Ads Every Day: How Meta’s Platform Turns Fraud Into Billions

Meta’s apps are showing users a staggering number of scam ads every day. Internal documents reveal that Facebook, Instagram, and WhatsApp combined display around 15 billion high-risk scam advertisements daily. These include fake products, illegal gambling, and banned goods. On top of that, users encounter an additional 22 billion “organic” scams, like bogus marketplace listings and false job offers. The scale is enormous, and the people behind it are exploiting the trust users place in brands and public figures.

Revenue Over Regulation

According to internal projections, scam ads could account for roughly 10 percent of Meta’s yearly revenue, amounting to around $16 billion. Yet the company has long taken a cautious approach to enforcement. Advertisers suspected of running scams are only removed if the system is 95 percent sure of fraud. Otherwise, they may continue running ads, sometimes racking up hundreds of strikes without suspension. For larger advertisers suspected of misconduct, Meta even charges higher ad rates. The system is designed to deter some activity while still preserving revenue.
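The enforcement rule the documents describe can be sketched in a few lines. This is purely illustrative, assuming a fraud-confidence score in [0, 1] and a running strike count; the function name, the strike mechanics, and everything except the reported 95 percent threshold are assumptions, not Meta’s actual system.

```python
# Hypothetical sketch of the reported enforcement rule: an advertiser is
# removed only when fraud-detection confidence reaches 95 percent; below
# that, strikes accumulate while the ads keep running.
# All names and mechanics here are illustrative, not Meta's actual code.

REMOVAL_CONFIDENCE = 0.95  # the removal threshold cited in the reporting


def enforcement_action(fraud_confidence: float, strikes: int) -> str:
    """Return the action taken for one flagged advertiser."""
    if fraud_confidence >= REMOVAL_CONFIDENCE:
        return "remove"
    # Below the threshold the advertiser keeps running ads and simply
    # collects another strike, however many it already has.
    return f"warn (strike {strikes + 1})"


print(enforcement_action(0.97, 12))   # confident enough: removed
print(enforcement_action(0.80, 499))  # hundreds of strikes, still running
```

The point of the sketch is how binary the rule is: a 94 percent confidence score produces the same outcome as a first offense, which is how an advertiser can accumulate hundreds of strikes without suspension.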

Meta’s ad-personalization tools, meant to serve content based on user interests, end up pushing more scam ads toward users who interact with them. Those clicks feed into more exposure, creating a cycle that benefits the platform financially. In late 2024, the company anticipated earning roughly $7 billion from high-risk ads alone, part of that 10 percent estimate.

Balancing Act

Meta’s internal documents show a delicate balance between enforcement and revenue. The company has aimed to gradually cut the share of revenue from scams and banned goods, targeting a drop from 10.1 percent in 2024 to 7.3 percent by the end of 2025. Internal memos stress restraint, ensuring enforcement does not hurt overall revenue projections or investments, especially in artificial intelligence, where the company is spending billions.

The documents also make clear that Meta prioritizes removing fraudulent ads when regulators are watching closely. Other areas receive lighter enforcement, allowing some advertisers to continue until stricter oversight forces action. Even as new systems reduce user complaints, the documents suggest that enforcement remains calibrated to protect revenue while appearing to address risk.

Real-World Consequences

The impact of scam ads is tangible. Meta’s platforms were reportedly involved in a third of all successful U.S. scams in 2025. Users lose money, trust, and sometimes access to accounts. In one instance, a hacked account was used to promote cryptocurrency scams that defrauded multiple people. Internal reviews show that historically, the majority of user reports of scams went unaddressed or were incorrectly dismissed. Fraudsters take advantage of gaps in the enforcement system, exploiting users with fake financial offers and phony promotions from public figures.

Steps Toward Change

Meta has expanded teams to monitor scam activity and improved automated detection. In 2025, the company removed over 134 million scam ads, cutting global user complaints by about 58 percent. Penalty-based bidding systems were introduced, charging likely fraudsters more to participate in ad auctions. Early results show a decline in scam reports and a modest drop in ad revenue. While these measures are a step forward, documents indicate the company remains cautious, mindful of revenue losses.
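Penalty-based bidding, as described above, amounts to scaling an advertiser’s auction cost by its fraud likelihood. The sketch below is an assumption-laden illustration of that idea; the linear penalty curve, the 3x cap, and the function names are invented for clarity and are not Meta’s actual formula.

```python
# Illustrative sketch of "penalty-based bidding": advertisers judged likely
# to be fraudsters pay a higher effective price to enter the ad auction.
# The multiplier curve and names are assumptions, not Meta's actual system.


def auction_cost(base_bid: float, fraud_score: float) -> float:
    """Scale an advertiser's cost by a penalty tied to fraud likelihood.

    fraud_score is assumed to lie in [0, 1]: a score of 0 pays the base
    bid, while the highest scores pay up to 3x as a deterrent.
    """
    penalty_multiplier = 1.0 + 2.0 * fraud_score  # assumed linear penalty
    return base_bid * penalty_multiplier


print(auction_cost(1.00, 0.0))  # clean advertiser pays the base bid
print(auction_cost(1.00, 0.9))  # suspected fraudster pays a steep premium
```

The design logic is that raising a likely fraudster’s cost per auction shrinks the economics of scamming without requiring the 95 percent certainty needed for outright removal, which matches the early results of fewer scam reports alongside a modest revenue dip.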

Regulators Loom Large

Authorities in the U.S., U.K., and other regions are scrutinizing Meta’s handling of fraudulent advertising. Fines could reach up to $1 billion, but internal figures show revenue from high-risk ads exceeds anticipated penalties. The discrepancy highlights the tension between profit and user protection. Meta continues to weigh enforcement costs against business priorities, even as its platforms play a major role in the global scam ecosystem.

Meta faces a difficult choice: cut scam ad revenue and potentially slow its ambitious AI projects, or let high-risk ads continue, preserving billions in income but leaving users exposed. The internal records suggest the company is trying to thread that needle, making cautious moves that protect financial gains while slowly tightening controls. The next few years will test whether Meta can reduce the flood of scams while keeping investors satisfied.


Notes: This post was edited/created using GenAI tools. Image: Julio Lopez / Unsplash

Read next:

• Healthy Habits of a Billion-Dollar Founder: What Canva's Melanie Perkins Knows About Focus

• How AI, Influencers, and Video Are Rewriting Marketing Playbooks for 2026
by Irfan Ahmad via Digital Information World