Saturday, September 6, 2025

Anthropic Settles Author Lawsuit With $1.5 Billion Deal

Anthropic has agreed to pay at least $1.5 billion to authors in a settlement over the use of pirated books in training its artificial intelligence systems. If approved by a federal judge in San Francisco next week, it would be the largest payout on record in a US copyright case. The agreement closes a year-long dispute that tested how far AI developers can go in using creative material without permission.

The case centered on claims that Anthropic downloaded millions of books from online piracy sites to feed its chatbot Claude. The company must now pay authors around $3,000 for each book included in the settlement, and about half a million works are expected to qualify; at that rate, 500,000 works at $3,000 apiece accounts for the $1.5 billion floor. The final amount could increase if more claims are submitted. Anthropic has also agreed to delete the files it copied.

Background of the dispute

The lawsuit began in 2024 when three writers, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, accused the company of using their books without consent. The case was expanded to represent all US authors whose works appeared in the datasets. In June, the court ruled that Anthropic could train its models on legally purchased books but said the company would still face trial over its reliance on pirated sources.

Judge William Alsup stated that Anthropic had obtained more than seven million pirated titles. These included nearly two hundred thousand books from the Books3 dataset, along with millions more from Library Genesis and Pirate Library Mirror. The ruling created a path for a December trial, but the settlement avoids that step and brings an early conclusion.

Industry significance

This agreement arrives at a time when AI developers face growing pressure over copyright. Music labels, news outlets, and publishing houses have all raised similar complaints. At the same time, some companies have begun signing licensing deals with AI firms, offering access to data in return for payment. The Anthropic case stands out because it sets a financial benchmark and forces one of the leading AI players to admit past practices carried legal risk.

Other disputes involving Anthropic

Anthropic has been the target of multiple lawsuits. Earlier this year, Reddit said the company’s systems accessed its platform more than 100,000 times after restrictions were in place. Universal Music also filed a suit in 2023, claiming that Anthropic had used copyrighted lyrics without permission. These cases highlight the wider legal challenges facing AI firms as they compete to expand training material.

What happens next

A court hearing scheduled for September 8 will decide whether the settlement is approved. If it goes forward, authors will be able to check whether their works are listed through a dedicated website and submit claims for payment. Approval would signal to the industry that creative material cannot be taken freely for AI training without financial consequences.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

by Asim BN via Digital Information World

EU Regulators Punish Google’s Ad Monopoly With Multi-Billion Euro Fine

The European Commission has fined Google €2.95 billion, equal to about $3.46 billion, after ruling that the company gave its own advertising exchange and tools unfair advantages. Officials said the conduct restricted competition and raised costs for advertisers and publishers across Europe.

How the Case Developed

The decision followed years of investigation into the company’s display advertising services. These systems sit behind much of the banner advertising seen on websites and apps. The Commission said Google’s publisher server passed inside information to its own exchange, helping it beat rival bids. At the same time, the company’s ad buying platforms steered business mainly through its exchange, reducing opportunities for competitors.

According to regulators, this setup locked businesses into Google’s network and reinforced its position in the market. It also allowed the company to collect higher fees across the supply chain. Google now has sixty days to outline changes. If its proposals fall short, regulators may consider stronger remedies including the possible sale of part of its adtech operations.

A Record of Repeat Offenses

The fine was based on the scale and length of the abuse, as well as Google’s past record. In 2017, the company was fined €2.42 billion over its shopping search service. The following year, it was fined €4.34 billion for practices linked to Android devices, and in 2019 it was fined €1.49 billion for blocking rival ad services.

The case adds to wider pressure in Europe. Earlier this week, France’s data authority fined Google €325 million for showing ads in Gmail without consent and breaking cookie rules.

Political Response in Washington

The decision quickly drew reaction in the United States. Hours after the penalty was announced, President Donald Trump said he would consider opening a trade investigation to counter what he described as unfair treatment of American firms. His warning came a day after hosting technology leaders at the White House. At that meeting, he signaled backing for U.S. companies facing disputes with European regulators.

Trump also referred to earlier cases involving Apple, which has faced large tax and competition claims in the bloc. He said the penalties risked draining investment and jobs from the United States.

What Comes Next

Google said it will appeal. The company maintains that the findings are wrong and that the required changes could hurt European businesses that rely on its tools. The outcome of the appeal remains uncertain, but the ruling represents one of the most serious challenges yet to its advertising model in Europe.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

by Irfan Ahmad via Digital Information World

Friday, September 5, 2025

How AI Is Quietly Rewriting the Rules of Ecommerce

AI isn't new to ecommerce anymore. It now powers product descriptions, customer support, SEO, and more. A new report from Liquid Web shows just how far the change reaches: 623 online store owners shared how tools like chatbots and automated content are altering the way digital stores function.

The research shows a clear tipping point. For small and medium-sized businesses, AI is no longer just a shortcut. It's driving real results in traffic, conversions, and customer experience.

AI-Generated Product Descriptions Are the New Norm

Nearly half (47%) of ecommerce brands are using AI-generated product copy, and it's paying off:

  • 48% saw more clicks and impressions
  • 29% got more positive customer feedback
  • 28% saw direct revenue increase
  • 24% had fewer complaints, and 17% saw fewer returns

WooCommerce store owners are at the forefront of this trend: 58% already use AI content tools. Of those, 63% experienced increased listing engagement, and 41% directly attributed revenue increases to AI-generated descriptions.

Why? It's not just speed; it's consistency. AI helps brands achieve a single voice across thousands of SKUs. Instead of depending on multiple copywriters, AI delivers consistent messaging that builds trust.

AI-optimized content also plays nicely with search engines. Tools now generate metadata and link directly with SEO platforms, which can enhance rankings and keep listings neat and to the point.

Multilingual support is a huge win for stores selling globally. AI tools can translate or localize listings in real time, which helps businesses expand reach without expanding the content team.

And when speed is of the essence, like new product releases or seasonal promotions, AI allows stores to act fast without sacrificing quality.

Chatbots Are Turning Conversations Into Conversions

AI chatbots are proving to be one of ecommerce's best bets. Already, 27% of stores use them for sales or support. Of these:

  • 75% saw at least a 20% lift in leads or sales
  • 46% reported better customer satisfaction
  • 35% got more product inquiries
  • 30% saw higher conversion rates

WooCommerce users once again lead the way in adoption: 56% noticed boosted leads or sales, and 62% reported greater customer satisfaction after implementation. A quarter of stores reduced customer support costs by 25% with chatbots.

And they're not just closing support tickets. Chatbots nowadays recommend sizes, upsell related items, bring promotions to the forefront, and guide users through complex choices, all in real-time.

They're also helping businesses collect and act on customer data. That feedback loop enables brands to tune messaging, learn about buyer behavior, and optimize the sales funnel.

Some bots now integrate with backend systems, delivering order status, real-time inventory details, and escalation to human reps when needed. And with the rise of voice shopping, these bots are starting to process voice queries as well, especially on mobile.
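
As a rough sketch of that pattern (every name here is hypothetical, not any vendor's actual API), a storefront bot typically routes an intent to a backend lookup and falls back to a human rep when the lookup fails:

    # Hypothetical Python sketch of a chatbot-to-backend handoff.
    ORDERS = {"1001": "shipped", "1002": "processing"}  # stand-in for a real order system

    def handle_message(intent: str, order_id: str = "") -> str:
        if intent == "order_status":
            status = ORDERS.get(order_id)
            if status is None:
                # Escalate when the backend has no answer.
                return "I couldn't find that order. Connecting you to a human rep."
            return f"Order {order_id} is currently {status}."
        return "Let me pass you to support."

    print(handle_message("order_status", "1001"))  # -> Order 1001 is currently shipped.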

AI Scraping Is Now a Real Threat

Not everything is positive. AI is also generating anxiety about content scraping. One in three ecommerce stores has blocked AI bots from accessing site content, citing data harvesting and model-training concerns.

However, the majority of stores have done nothing:

  • 76% have done nothing
  • 13% are considering blocking AI bots
  • 11% are considering unblocking
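
For stores that do choose to block, the usual first step is a robots.txt rule naming the AI crawlers. A minimal example (GPTBot, CCBot, and Google-Extended are the publicly documented tokens for OpenAI's crawler, Common Crawl, and Google's AI training opt-out):

    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

This is advisory only: compliant bots honor it, while scrapers that ignore robots.txt require server-side blocking instead.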

Scraping is changing traffic flows:

  • 17% saw more direct visits from AI-powered search tools
  • 12% experienced more visits, but lower conversion quality
  • 11% experienced more engagement through AI-driven discovery

For some, that visibility trade-off is acceptable: being featured in AI-generated product suggestions or answers may pay off down the line. For others, especially those selling proprietary or niche products, the downside outweighs the gain.

Responses vary by platform. Magento and WooCommerce merchants are taking action: some have added firewalls or paywalls, while others are testing limited-access APIs for bots. These responses indicate mounting concern about how public ecommerce content is scraped, repackaged, and monetized by third-party systems.

There's also a growing ethics argument. Should AI tools profit from content that ecommerce brands are investing time and money into creating? Without permission or compensation?

As platforms and regulators get up to speed, expect more friction, and more policy shifts around who owns what.

AI Adoption Keeps Growing

Ecommerce AI adoption is up 270% since 2019. With a compound growth rate of 38%, it's taking hold fast, especially among smaller businesses.

Most survey respondents were micro or small brands, which suggests that AI isn't behind enterprise paywalls anymore. Tools are getting cheaper, easier to implement, and designed for non-technical users.

Marketing, technology, and retail are leading the way. WooCommerce and Magento are flexible, making it easier to insert AI into everything from content to analytics and inventory management.

Pioneers aren't implementing just one tool. They're building full AI stacks, combining automated content with chatbots, recommendation engines, and predictive inventory planning. Each tool feeds the others, making shops smarter and more responsive.

Even older retail brands expanding into ecommerce are trying out AI-driven tools, whether upsell reminders, personalized landing pages, or behavior-triggered email sequences.

And platforms like Shopify and BigCommerce now incorporate AI capabilities into core offerings, which will further fuel adoption in the future.

Balancing Growth With Risk

As AI tools become more common, ecommerce brands are starting to think about the long game. Some are locking down content or adding CAPTCHAs to restrict scraping. Others are investing in custom content and gated product details.

  • 13% added new security measures
  • 18% now gate or restrict content
  • 12% monetize traffic from AI-based tools despite concerns

Some brands are experimenting with licensing models, either charging AI platforms for access or requesting attribution. Others are using watermarking or audit software to track where their data ends up.

Brand protection is also becoming a real issue. If an AI system mischaracterizes a product, or uses old data that was scraped, the store could be blamed, despite having had no involvement.

That's prompting some teams to treat content as a protected asset, not just a marketing channel. It's also pushing more ecommerce leaders to engage early with regulation around privacy, content ownership, and data scraping.

Final Thoughts: The Next Chapter of AI in Ecommerce

Ecommerce isn't playing around with AI anymore, it's building with it. From faster content creation to more responsive support, the benefits are piling up.

AI, however, is not plug-and-play. It introduces new questions of control, ownership, and transparency. Brands need to weigh the advantages against added complexity.

The next chapter of ecommerce innovation won't belong to the flashiest storefront or the biggest ad budget. It will belong to smart, integrated systems that improve every phase of the buyer journey.

As the Liquid Web study shows, small and midsized companies aren't waiting in the wings. They're getting in early, experimenting fast, and pushing boundaries.

For the trailblazers, the question isn't if AI belongs in ecommerce. It's how to implement it ethically, and how to win trust along the way.



by Irfan Ahmad via Digital Information World

French Regulator Targets Google, Shein Over Consent Failures

France’s data regulator has announced record fines against Google and Shein for breaching rules on online cookies. The Commission Nationale de l’Informatique et des Libertés (CNIL) said both companies failed to meet requirements on user consent and transparency.

Heavy Sanctions for Two Major Platforms

Google received a fine of 325 million euros, the largest ever imposed by the regulator. Shein was ordered to pay 150 million euros. Both companies have millions of users in France, which the CNIL said increased the scale of the violations. The penalties rank among the most severe imposed in Europe under current data protection rules.

What the Regulator Found

Cookies are small data files stored by websites on users’ browsers. While they can support routine functions such as remembering settings, they are also central to advertising systems that profile users. The CNIL found that both firms set cookies before obtaining valid consent. In Shein’s case, investigators said the company collected extensive browsing data from about 12 million users each month without offering clear explanations or simple tools to opt out.
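
To illustrate the rule at issue: under European law, a tracking cookie may only be written after the user actively consents. A minimal Python sketch of that consent gate (the cookie name and values are hypothetical; the CNIL's finding was that cookies were set before any such check):

    from http import cookies

    def tracking_cookie_header(consent_given: bool):
        # No advertising cookie may be set before valid consent.
        if not consent_given:
            return None
        c = cookies.SimpleCookie()
        c["ad_id"] = "example-identifier"           # hypothetical tracking cookie
        c["ad_id"]["max-age"] = 60 * 60 * 24 * 180  # 180-day lifetime
        c["ad_id"]["samesite"] = "Lax"
        return c.output(header="Set-Cookie:")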

Shein has since updated its consent framework to comply with French and European law. The company still plans to appeal, calling the fine disproportionate.

Google faced broader criticism. The regulator noted that users creating a Google account encountered a cookie wall, which effectively required acceptance of tracking in order to proceed. While such designs are not always unlawful, the CNIL said the process lacked sufficient detail to allow informed choice.

Wider Concerns Around Google

The decision also highlighted Google’s practice of inserting ads between emails in Gmail. Authorities said this counted as direct commercial solicitation that should have required prior agreement from users. An estimated 53 million people in France were affected.

Google has already faced earlier penalties for similar issues, paying 100 million euros in 2020 and 150 million in 2021. In the latest case, the CNIL’s rapporteur had originally proposed a 520 million euro penalty, but the final amount was set lower while still ranking as the regulator’s largest fine to date.

Compliance Deadlines

Both companies must now bring their systems into line with European data rules. Google has six months to make changes, and Shein faces the same requirement. Failure to comply would trigger additional daily fines of 100,000 euros, which would apply to both Google and its Irish unit.

France’s Ongoing Approach

The regulator described the sanctions as part of a broader effort that has been running for five years. Its strategy has focused on high-traffic sites and services where data practices affect millions of people. By targeting global platforms such as Google and Shein, the CNIL is continuing to signal that European rules on privacy and consent will be enforced with financial weight.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

by Irfan Ahmad via Digital Information World

Google Tied to $45 Million Israeli Propaganda Push Amid Gaza Genocide

New disclosures have shed light on a $45 million agreement between Google and the Israeli prime minister’s office, adding to scrutiny of how major US technology companies are involved in the Gaza conflict. The contract, first reported by DropSiteNews, began in June 2025 and runs through the end of the year; it commits Google’s advertising systems to promote official Israeli messaging at a time when international monitors are reporting famine and widespread civilian suffering.

The arrangement is led by Lapam, Israel’s Government Advertising Agency, which reports directly to Prime Minister Benjamin Netanyahu’s office. Internal records describe the deal as a hasbara campaign, a Hebrew term for government-run propaganda. Documents show Google’s YouTube and its Display & Video 360 platform as the main outlets, though funds were also routed to other networks. These included $3 million spent on X, about $2.1 million through Outbrain and Teads, and additional amounts directed to Meta services.

One of the most visible outputs was a YouTube video released by Israel’s foreign ministry late in the summer. The clip told viewers there was no shortage of food in Gaza and dismissed claims of hunger as false. It drew more than 7 million views, a reach bolstered by paid promotions under the government’s deal with Google. The timing drew attention because only days earlier the UN had confirmed that northern Gaza had entered famine, while aid groups warned that conditions were worsening elsewhere. Gaza’s health ministry reported that more than 360 people, including over 130 children, had already died of hunger or related causes since the blockade began in March.


The intent behind the strategy was acknowledged openly. At a Knesset hearing on March 2, hours after restrictions on food, fuel, and medicine were enforced, lawmakers questioned military officials not about humanitarian risks but about plans for digital campaigns. Records show senior figures assuring them that counter-messaging was already underway. By June, the $45 million contract was signed and promotions had begun at scale.

The campaign went beyond denying famine. Ads also targeted international organisations. Some accused the UN Relief and Works Agency of blocking aid deliveries, while others circulated claims that the Hind Rajab Foundation, a Palestinian advocacy group, was linked to extremist ideology. These accusations lacked evidence but still spread widely across Google’s networks. Misbar, an Arab fact-checking group, later characterised the operation as a propaganda surge built on disinformation to justify military action.

The reach extended outside Gaza. Documents confirm that part of the contract funded ads framing Israel’s twelve-day bombing of Iran, known as Operation Rising Lion, as defensive. Independent monitors say the strikes killed more than 430 civilians. Under the agreement, ads described the assault as essential for security in Israel and the West.

This campaign fits a broader pattern. Members of Netanyahu’s coalition have publicly advocated the use of deprivation as a weapon, arguing that Gaza’s population should be cut off from food, water, and electricity until they gave in or left. While humanitarian groups condemned such statements, the ad campaigns on American platforms promoted a different narrative that played down or denied the consequences.

Google’s role has added to ongoing controversy over its connections to Israel’s military infrastructure. The company is already under criticism for Project Nimbus, a $1 billion cloud contract it shares with Amazon that serves government agencies including the defence ministry. Human rights groups argue that the new advertising deal shifts Google’s role from infrastructure provider to active participant in shaping public perception of the war.

The reaction has reached inside the company. Leaked reports from employee forums show co-founder Sergey Brin dismissing a UN inquiry that accused Google of profiting from genocide, calling it antisemitic. His remarks deepened unease among staff about leadership’s stance. For workers concerned about ethical lines, the revelation that Google platforms are carrying paid campaigns denying famine while aid agencies issue urgent warnings has become a central issue.

The disclosures raise wider questions about the role of technology companies when their platforms are used to broadcast official narratives that clash with verified humanitarian evidence. With the contract due to run until December, pressure on Silicon Valley firms over their involvement in state-led campaigns is likely to intensify in the months ahead.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

by Asim BN via Digital Information World

Thursday, September 4, 2025

Digital Legacy AI Founder Glenn Devitt Revolutionizes Storage for Posthumous Data Access

Over three million of the 19 million Bitcoin in existence have vanished forever—not stolen by hackers, but lost when owners died without sharing access codes. Family photos disappear into locked cloud accounts. Business assets become inaccessible digital ghosts. The $19 trillion silver tsunami of generational wealth transfer now includes digital legacies that simply evaporate when people pass away.


Glenn Devitt, a former U.S. Army Special Operations Intelligence veteran and founder of Digital Legacy AI, has developed patented technology that revolutionizes the storage and transfer of digital assets after death. His breakthrough system ensures that digital memories, cryptocurrency wallets, and online accounts can pass securely to verified heirs without the catastrophic data loss plaguing modern inheritance.

"Your legacy is not what you did. It's what you learned," Devitt explained, describing his philosophy behind creating technology that preserves knowledge and memories for future generations rather than letting them disappear into digital oblivion.

The Posthumous Data Crisis

Traditional storage systems operate on a simple premise: owners control access during their lifetime, and access dies with them. This binary approach—100% control or complete lockout—creates a digital inheritance disaster as more wealth and memories move online.

Current solutions fail spectacularly. Password managers become useless when the master password dies with the user. Cloud storage companies freeze accounts pending lengthy legal processes. Cryptocurrency exchanges require documentation that grieving families often lack. Business accounts become inaccessible, destroying value overnight.

The problem extends beyond financial assets. Family photos stored on personal devices, voice messages from loved ones, and decades of digital correspondence are permanently lost. A generation of digital natives will leave behind locked smartphones and encrypted hard drives that contain irreplaceable memories their children can never access.

Professional estate planners report an increasing number of client requests for digital inheritance assistance, but they lack standardized tools for managing online assets. Traditional legal frameworks weren't designed for blockchain wallets, social media accounts, or cloud-based businesses that exist entirely in digital space.

How Glenn Devitt's Intelligence Background Drives Storage Innovation

Devitt's approach to posthumous data storage stems directly from his experience in military intelligence, where secure information management could determine the success or failure of a mission. His 11 years in U.S. Army Special Operations Intelligence, including deployments where he earned two Bronze Star Medals, taught him that critical information must remain both absolutely secure and reliably accessible when needed.

His specialization in counterintelligence and digital forensics provided the security framework that now protects digital legacies with institutional-grade protocols. Military operations require information systems that function under extreme conditions—exactly the reliability needed for inheritance systems that may not activate for decades.

"I was really good at working open source intelligence back then or creative ways of getting data," Devitt noted, describing capabilities that now inform his approach to secure data management and automated inheritance processes.

Following military service, Devitt joined the Department of Homeland Security's H.E.R.O. program, developing computer forensics expertise that revealed how digital evidence could be extracted and preserved under challenging conditions. His subsequent creation of the Black Box Project at Stop Soldier Suicide demonstrated his ability to analyze complex digital patterns and create systems that predict critical events before they occur.

That experience analyzing digital footprints and building predictive algorithms now drives Digital Legacy AI's approach to understanding when and how digital assets should transfer to beneficiaries.

Revolutionary Patent Technology Breakthrough

Devitt's patented system addresses the fundamental challenge that has plagued digital storage: maintaining absolute security during an owner's lifetime while enabling verified access after death. His breakthrough creates automated processes that can authenticate death certificates and transfer assets directly to verified heirs without exposing sensitive information to unauthorized access.

The technology transforms digital storage from static repositories into dynamic inheritance systems. Rather than simply holding data until someone dies, the platform actively manages the transition process through secure verification protocols that confirm identity, validate legal authority, and execute predetermined distribution instructions.

Unlike existing solutions that depend on third-party custodians or vulnerable password systems, Devitt's framework creates decentralized control where users maintain complete authority over assets while ensuring seamless transfer to authenticated beneficiaries. The system can automatically close financial accounts, transfer business assets, and distribute personal content through intelligent agents that execute pre-programmed instructions.

Air-gapped storage keeps critical data isolated from internet access until verification triggers a controlled release, while multi-factor authentication ensures that only verified heirs have access to secured information after proper documentation. The platform integrates with Social Security Administration systems to provide real-time death verification, eliminating delays that currently plague inheritance processes.
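
As a purely illustrative sketch of that control flow (not Digital Legacy AI's patented system, and with all names invented for the example), the release logic amounts to a gate that stays closed until every verification succeeds:

    from dataclasses import dataclass

    @dataclass
    class VaultEntry:
        heir_id: str
        sealed_payload: bytes  # kept encrypted and offline until release

    def release_assets(entry: VaultEntry,
                       death_verified: bool,       # e.g. certificate or SSA check
                       heir_authenticated: bool,   # e.g. multi-factor authentication
                       legal_authority_confirmed: bool) -> bytes | None:
        # All conditions must hold before anything is unsealed.
        if death_verified and heir_authenticated and legal_authority_confirmed:
            return entry.sealed_payload  # a real system would decrypt here
        return None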

Silver Tsunami Impact and Market Transformation

The timing of Devitt's storage revolution coincides with unprecedented generational wealth transfer. As baby boomers pass away over the next decade, an estimated $19 trillion in assets will change hands—the largest wealth transfer in American history. Significant portions of this wealth now exist in digital formats that traditional inheritance systems cannot handle effectively.

Digital asset markets approaching $2.5 trillion encompass cryptocurrencies, NFTs, online business accounts, and intellectual property stored in cloud systems. Baby boomers increasingly hold these assets but often lack the technical knowledge to manage complex inheritance protocols, creating a perfect storm of potential loss.

Current inheritance frameworks typically address single asset types—cryptocurrency services that cannot handle business accounts, or cloud storage inheritance that cannot manage blockchain assets. Devitt's unified approach handles diverse digital asset categories while maintaining distinct security standards for each type, preventing the fragmented solutions that currently frustrate estate planners and families.

The platform's household model creates network effects that could accelerate adoption as the silver tsunami intensifies. When one family member joins, parents, siblings, and children typically follow to share photos, access financial information, and maintain connected digital legacies. This organic growth pattern addresses the scale needed to handle millions of inheritance transfers over the coming decade.

Glenn Devitt's Vision for Industry Transformation

Devitt's transition from Delitor Inc., his government contracting firm, to Digital Legacy AI represents more than a business pivot—it's a fundamental shift in how society approaches digital permanence. "I can only grow my services so far, but the bigger markets are the consumer bases where people are consuming a product," he explained, describing the move from specialized consulting to mass-market technology.

His military background provides unique credibility for developing inheritance systems that must function reliably across decades. Military operations taught him that effective systems require redundancy, security, and automated processes that work without human intervention—exactly the characteristics needed for posthumous data management.

The platform launches with accessible storage tiers while maintaining enterprise-grade security protocols. Users pay for storage and management services during their lifetime, but beneficiaries receive permanent access to inherited content without ongoing fees, ensuring long-term preservation regardless of economic changes.

As regulatory frameworks develop around digital inheritance and blockchain technology matures, Devitt's early patent protection positions Digital Legacy AI to become the standard for secure posthumous data access. The technology transforms digital storage from a liability that families struggle to access into an asset that enhances rather than complicates inheritance processes.

For the millions of families preparing to navigate digital inheritance over the coming decade, Devitt's storage revolution offers a bridge between traditional estate planning and the digital-first world that increasingly defines modern wealth. Rather than accepting that digital legacies will disappear, families can now ensure that memories, assets, and knowledge transfer successfully across generations through systems designed by a veteran who understood that protecting what matters most requires both technical precision and human understanding.

[Partner Content]



by Irfan Ahmad via Digital Information World

AI Is Disrupting Hiring, And Trust Is the First Casualty

Generative AI is shaking up white-collar work, and recruiting is already feeling the pain. What was once a field devoted to optimizing efficiency and job-candidate fit has taken a sharp turn. The bigger worry now? Application fraud driven by AI.

A recently released Software Finder survey paints the picture starkly: recruiters are being hit by a barrage of fabricated resumes, AI-generated portfolios, and deepfake interviews. As these fakes grow more realistic, the entire hiring process, built on honesty and identity, is under threat.

Recruiters Are Already Seeing Fakes

The survey gathered opinions from 874 recruitment professionals. What they had to say confirms what most suspected: AI-powered falsification is already widespread.

  • 72% have received AI-generated resumes
  • 51% have received fabricated work samples or portfolios
  • 15% have encountered deepfake or face-swapped video interviews
  • 17% have encountered altered voices or audio filters

Even with those statistics, 75% of recruiters feel confident they can spot AI-assisted candidates on their own. That may be wishful thinking. Almost half already flag or eliminate candidates over suspected AI use, and 40% have rejected applicants over identity concerns.

Some applicants use AI just to tighten up spelling or formatting. Others are gaming the entire system: faking identities, synthesizing voices, or submitting portfolios they never created. The list of tactics is long and expanding fast.

Where It Hurts Most: Tech, Marketing, and Creative Jobs

Some industries are more at risk than others. According to the data, recruiters in several leading sectors are seeing more AI abuse:

  • Tech: 65% say it is the hardest hit
  • Marketing: 49% report heavy exposure
  • Creative/design: 47% say forgery attempts are frequent

These roles rely on digital deliverables: portfolios, campaigns, sample code. All are easy to forge with AI. A designer can build an AI-made portfolio in minutes. A coder can submit code generated by GitHub Copilot. Marketers can hand over AI-created ad copy or brand decks.

And it's not only the materials. The presentation looks professional. Add remote interviews and aggressive hiring timelines, and it's no wonder AI-assisted candidates are slipping through.

These technologies are no longer niche. Browser applications can mimic speech, imitate facial expressions, and create entire fake profiles. What used to demand advanced skill is now achieved with a decent Wi-Fi network.

Detection Tools Are Still Playing Catch-Up

Even though the threat is growing, most companies are not equipped to spot it effectively. This is how things stand today:

  • Only 31% use software to spot AI or deepfake material
  • 66% still rely on manual screening
  • 53% use third-party background checks
  • Only a third have applicant tracking systems (ATS) that can detect AI-based deceit

And training? Thin on the ground. Close to half of HR professionals haven't received any training on spotting AI-generated fakes. Only 15% say their company will offer such training in the near term.

Things may improve: 40% of companies say they intend to invest in detection software within the next year. For now, though, the gap is clear: AI is evolving faster than the tools meant to stop it.

So why the lag? Budgets, uncertainty, and risk. HR leaders worry about false positives, or that tools won’t keep pace with AI’s evolution. Others aren’t even sure what counts as unethical AI use. Should a resume rewritten by ChatGPT be disqualified? Should candidates be required to disclose that? Most companies don’t have a policy, and that leaves too much open to interpretation.

Should Job Platforms and Lawmakers Step In?

Hiring managers can't shoulder this responsibility alone. Most think platforms and regulators must help tighten standards and strengthen verification. Here is where agreement is building:

  • 65% would support mandating live-only interviews
  • 54% would want stricter background checks
  • 39% would support third-party video identity verification
  • 37% would prefer biometric or facial verification as protection

And it's not all on employers. A majority think platforms such as LinkedIn, Indeed, and others should be doing more:

  • 65% say platforms should help identify AI-generated candidates
  • 62% favor mandatory disclosure of AI use on applications
  • 56% would pay extra for recruitment software with in-app fraud detection

The conversation is evolving. Recruiters no longer consider this just an HR issue. It's becoming a systemic problem that platforms, vendors, and governments need to address together.

And the law may already be lagging. With AI software becoming cheaper and more realistic, authenticating a candidate's identity may eventually require legislation, since individual employers cannot keep up on their own.

Resume Faking Heads the List of Risks

Of all AI-facilitated dishonesty, resume forgery ranks as the greatest threat:

  • 63% of recruiters identify AI-supercharged resumes as the greatest risk
  • 37% consider deepfaked video interviews to be more perilous

That's probably because recruitment still relies on documents: resumes, cover letters, writing samples, all easily doctored or faked with AI.

But video manipulation is catching up fast. More and more companies are embracing remote interviews and asynchronous video platforms. As that trend continues, AI-enhanced voice and face manipulation will become more common, and harder to detect.

Even seasoned recruiters admit it's becoming harder to catch deepfakes. That raises the odds of bad hires, legal trouble, and reputational damage. An imposter hired under false pretenses can drain time and resources and erode company culture.

And the barrier to entry keeps dropping. High-quality fakery no longer requires special software. Most of it runs in your browser, or through apps on your phone. What was once rare is now becoming the new norm.

Trust Is the Real Casualty

A whopping 88% of recruiters believe AI fraud will reshape hiring practices within five years. In truth, it's already happening.

Recruiters claim to rely on intuition, but the reality is murkier. Few have had formal training. Tools are missing or insufficient. Internal procedures are hazy at best. And AI-generated content keeps getting harder to tell from the real thing.

As AI improves at simulating people, it's easier to deceive. And that strikes at the core of hiring: trust.

Here's what businesses can begin to do today:

  • Deploy detection software that identifies red flags early (a minimal sketch follows this list)
  • Train hiring managers and recruiters to spot suspicious activity
  • Develop internal policies on what types of AI applications are permissible
  • Consult with platforms and attorneys to establish wiser policies
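
One cheap early-warning signal, offered purely as an assumed sketch (the phrase list and scoring are invented for illustration, not a production detector), is to score resumes for boilerplate phrasing that AI-generated text tends to overuse:

    # Hypothetical heuristic: flag resumes heavy on AI-typical boilerplate.
    AI_BOILERPLATE = [
        "results-driven professional",
        "proven track record of",
        "leveraging cutting-edge",
        "dynamic and fast-paced environment",
    ]

    def boilerplate_score(resume_text: str) -> float:
        text = resume_text.lower()
        hits = sum(phrase in text for phrase in AI_BOILERPLATE)
        return hits / len(AI_BOILERPLATE)  # 0.0 = none found, 1.0 = all found

    sample = "Results-driven professional with a proven track record of growth."
    print(f"{boilerplate_score(sample):.2f}")  # 0.50 -- worth a closer human look

A score like this should only prioritize human review, never auto-reject on its own.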

Not all AI use is nefarious. Some applicants use it to correct grammar or tighten a summary. Others, however, use it to fabricate entire careers. Clear policies help recruiters draw the line and apply standards evenly across the board.

The more profound change? Redefining what "authentic" means to us.

Zoom interviews and hunches won't do the job anymore. If AI can forge resumes, faces, voices, and even work histories, authentication needs to be part of the hiring process, not an afterthought.

The old way of hiring was supposed to be about finding the best candidate. Today, it's also about ensuring the best candidate is actually real.



by Irfan Ahmad via Digital Information World