Friday, September 5, 2025

French Regulator Targets Google, Shein Over Consent Failures

France’s data regulator has announced heavy fines against Google and Shein for breaching rules on online cookies. The Commission Nationale de l’Informatique et des Libertés (CNIL) said both companies failed to meet requirements on user consent and transparency.

Heavy Sanctions for Two Major Platforms

Google received a fine of 325 million euros, the largest ever imposed by the regulator. Shein was ordered to pay 150 million euros. Both companies have millions of users in France, which the CNIL said increased the scale of the violations. The penalties rank among the most severe imposed in Europe under current data protection laws.

What the Regulator Found

Cookies are small data files stored by websites on users’ browsers. While they can support routine functions such as remembering settings, they are also central to advertising systems that profile users. The CNIL found that both firms set cookies before obtaining valid consent. In Shein’s case, investigators said the company collected extensive browsing data from about 12 million users each month without offering clear explanations or simple tools to opt out.
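
In engineering terms, the rule reduces to a simple gate: no non-essential cookie is written until the user has made an affirmative choice. The Python sketch below illustrates that gating logic; the cookie names and categories are invented for illustration and are not drawn from either company's systems.

    # Sketch of consent-gated cookie setting. Cookie names and categories are
    # invented for illustration; this shows the principle, not a compliance recipe.
    ESSENTIAL = {"session_id", "csrf_token"}        # exempt: required to run the service
    NON_ESSENTIAL = {"ad_profile", "analytics_id"}  # require prior opt-in consent

    def cookies_to_set(consent):
        """Return the cookies a response may set for this user."""
        allowed = set(ESSENTIAL)
        for name in NON_ESSENTIAL:
            # Silence, pre-ticked boxes, or "not asked yet" never count as consent.
            if consent.get(name) is True:
                allowed.add(name)
        return allowed

    print(cookies_to_set({}))                      # before any choice: essentials only
    print(cookies_to_set({"analytics_id": True}))  # after opting in to analytics alone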

Shein has since updated its consent framework to comply with French and European law. The company still plans to appeal, calling the fine disproportionate.

Google faced broader criticism. The regulator noted that users creating a Google account encountered a cookie wall, which effectively required acceptance of tracking in order to proceed. While such designs are not always unlawful, the CNIL said the process lacked sufficient detail to allow informed choice.

Wider Concerns Around Google

The decision also highlighted Google’s practice of inserting ads between emails in Gmail. Authorities said this counted as direct commercial solicitation that should have required prior agreement from users. An estimated 53 million people in France were affected.

Google has already faced earlier penalties for similar issues, paying 100 million euros in 2020 and 150 million in 2021. In the latest case, the CNIL’s rapporteur had proposed a 520 million euro penalty; the final amount was set lower while still ranking as the regulator’s largest fine to date.

Compliance Deadlines

Both companies must now bring their systems into line with European data rules, and each has six months to make the changes. Failure to comply would trigger additional fines of 100,000 euros per day of delay, which in Google’s case apply to both the parent company and its Irish unit.

France’s Ongoing Approach

The regulator described the sanctions as part of a broader effort that has been running for five years. Its strategy has focused on high-traffic sites and services where data practices affect millions of people. By targeting global platforms such as Google and Shein, the CNIL is continuing to signal that European rules on privacy and consent will be enforced with financial weight.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Google Tied to $45 Million Israeli Propaganda Push Amid Gaza Genocide
by Irfan Ahmad via Digital Information World

Google Tied to $45 Million Israeli Propaganda Push Amid Gaza Genocide

New disclosures have shed light on a $45 million agreement between Google and the Israeli prime minister’s office, adding to scrutiny of how major US technology companies are involved in the Gaza conflict. The contract, reported by DropSiteNews, began in June 2025 and runs through the end of the year; it commits Google’s advertising systems to promoting official Israeli messaging at a time when international monitors are reporting famine and widespread civilian suffering.

The arrangement is led by Lapam, Israel’s Government Advertising Agency, which reports directly to Prime Minister Benjamin Netanyahu’s office. Internal records describe the deal as a hasbara campaign, a Hebrew term for government-run propaganda. Documents show Google’s YouTube and its Display & Video 360 platform as the main outlets, though funds were also routed to other networks. These included $3 million spent on X, about $2.1 million through Outbrain and Teads, and additional amounts directed to Meta services.

One of the most visible outputs was a YouTube video released by Israel’s foreign ministry late in the summer. The clip told viewers there was no shortage of food in Gaza and dismissed claims of hunger as false. It drew more than 7 million views, a reach bolstered by paid promotions under the government’s deal with Google. The timing drew attention because only days earlier the UN had confirmed that northern Gaza had entered famine, while aid groups warned that conditions were worsening elsewhere. Gaza’s health ministry reported that more than 360 people, including over 130 children, had already died of hunger or related causes since the blockade began in March.


The intent behind the strategy was acknowledged openly. At a Knesset hearing on March 2, hours after restrictions on food, fuel, and medicine were enforced, lawmakers questioned military officials not about humanitarian risks but about plans for digital campaigns. Records show senior figures assuring them that counter-messaging was already underway. By June, the $45 million contract was signed and promotions had begun at scale.

The campaign went beyond denying famine. Ads also targeted international organisations. Some accused the UN Relief and Works Agency of blocking aid deliveries, while others circulated claims that the Hind Rajab Foundation, a Palestinian advocacy group, was linked to extremist ideology. These accusations lacked evidence but still spread widely across Google’s networks. Misbar, an Arab fact-checking group, later characterised the operation as a propaganda surge built on disinformation to justify military action.

The reach extended outside Gaza. Documents confirm that part of the contract funded ads framing Israel’s twelve-day bombing of Iran, known as Operation Rising Lion, as defensive. Independent monitors say the strikes killed more than 430 civilians. Under the agreement, ads described the assault as essential for security in Israel and the West.

This campaign fits a broader pattern. Members of Netanyahu’s coalition have publicly advocated the use of deprivation as a weapon, arguing that Gaza’s population should be cut off from food, water, and electricity until they gave in or left. While humanitarian groups condemned such statements, the ad campaigns on American platforms promoted a different narrative that played down or denied the consequences.

Google’s role has added to ongoing controversy over its connections to Israel’s military infrastructure. The company is already under criticism for Project Nimbus, a $1 billion cloud contract it shares with Amazon that serves government agencies including the defence ministry. Human rights groups argue that the new advertising deal shifts Google’s role from infrastructure provider to active participant in shaping public perception of the war.

The reaction has reached inside the company. Leaked reports from employee forums show co-founder Sergey Brin dismissing as antisemitic a UN inquiry that accused Google of profiting from genocide. His remarks deepened unease among staff about leadership’s stance. For workers concerned about ethical lines, the revelation that Google platforms are carrying paid campaigns denying famine while aid agencies issue urgent warnings has become a central issue.

The disclosures raise wider questions about the role of technology companies when their platforms are used to broadcast official narratives that clash with verified humanitarian evidence. With the contract due to run until December, pressure on Silicon Valley firms over their involvement in state-led campaigns is likely to intensify in the months ahead.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Tree Planting Overhyped: Study Warns Forests Cannot Replace Fossil Fuel Cuts

• Digital Legacy AI Founder Glenn Devitt Revolutionizes Storage for Posthumous Data Access
by Asim BN via Digital Information World

Thursday, September 4, 2025

Digital Legacy AI Founder Glenn Devitt Revolutionizes Storage for Posthumous Data Access

Over three million of the 19 million Bitcoin in existence have vanished forever—not stolen by hackers, but lost when owners died without sharing access codes. Family photos disappear into locked cloud accounts. Business assets become inaccessible digital ghosts. The $19 trillion silver tsunami of generational wealth transfer now includes digital legacies that simply evaporate when people pass away.


Glenn Devitt, a U.S. Army Special Operations Intelligence veteran and founder of Digital Legacy AI, has developed patented technology that revolutionizes the storage and transfer of digital assets after death. His breakthrough system ensures that digital memories, cryptocurrency wallets, and online accounts can pass securely to verified heirs without the catastrophic data loss plaguing modern inheritance.

"Your legacy is not what you did. It's what you learned," Devitt explained, describing his philosophy behind creating technology that preserves knowledge and memories for future generations rather than letting them disappear into digital oblivion.

The Posthumous Data Crisis

Traditional storage systems operate on a simple premise: owners control access during their lifetime, and access dies with them. This binary approach—100% control or complete lockout—creates a digital inheritance disaster as more wealth and memories move online.

Current solutions fail spectacularly. Password managers become useless when the master password dies with the user. Cloud storage companies freeze accounts pending lengthy legal processes. Cryptocurrency exchanges require documentation that grieving families often lack. Business accounts become inaccessible, destroying value overnight.

The problem extends beyond financial assets. Family photos stored on personal devices, voice messages from loved ones, and decades of digital correspondence are permanently lost. A generation of digital natives will leave behind locked smartphones and encrypted hard drives that contain irreplaceable memories their children can never access.

Professional estate planners report an increasing number of client requests for digital inheritance assistance, but they lack standardized tools for managing online assets. Traditional legal frameworks weren't designed for blockchain wallets, social media accounts, or cloud-based businesses that exist entirely in digital space.

How Glenn Devitt's Intelligence Background Drives Storage Innovation

Devitt's approach to posthumous data storage stems directly from his experience in military intelligence, where secure information management could determine the success or failure of a mission. His 11 years in U.S. Army Special Operations Intelligence, including deployments where he earned two Bronze Star Medals, taught him that critical information must remain both absolutely secure and reliably accessible when needed.

His specialization in counterintelligence and digital forensics provided the security framework that now protects digital legacies with institutional-grade protocols. Military operations require information systems that function under extreme conditions—exactly the reliability needed for inheritance systems that may not activate for decades.

"I was really good at working open source intelligence back then or creative ways of getting data," Devitt noted, describing capabilities that now inform his approach to secure data management and automated inheritance processes.

Following military service, Devitt joined the Department of Homeland Security's H.E.R.O. program, developing computer forensics expertise that revealed how digital evidence could be extracted and preserved under challenging conditions. His subsequent creation of the Black Box Project at Stop Soldier Suicide demonstrated his ability to analyze complex digital patterns and create systems that predict critical events before they occur.

That experience analyzing digital footprints and building predictive algorithms now drives Digital Legacy AI’s approach to understanding when and how digital assets should transfer to beneficiaries.

Revolutionary Patent Technology Breakthrough

Devitt's patented system addresses the fundamental challenge that has plagued digital storage: maintaining absolute security during an owner's lifetime while enabling verified access after death. His breakthrough creates automated processes that can authenticate death certificates and transfer assets directly to verified heirs without exposing sensitive information to unauthorized access.

The technology transforms digital storage from static repositories into dynamic inheritance systems. Rather than simply holding data until someone dies, the platform actively manages the transition process through secure verification protocols that confirm identity, validate legal authority, and execute predetermined distribution instructions.

Unlike existing solutions that depend on third-party custodians or vulnerable password systems, Devitt's framework creates decentralized control where users maintain complete authority over assets while ensuring seamless transfer to authenticated beneficiaries. The system can automatically close financial accounts, transfer business assets, and distribute personal content through intelligent agents that execute pre-programmed instructions.

Air-gapped storage keeps critical data isolated from internet access until verification triggers a controlled release, while multi-factor authentication ensures that only verified heirs have access to secured information after proper documentation. The platform integrates with Social Security Administration systems to provide real-time death verification, eliminating delays that currently plague inheritance processes.
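
Conceptually, the flow described above is a small state machine: sealed while the owner is alive, verifying once a death is reported, and released only after documentation and heir authentication both check out. The Python sketch below illustrates that pattern under those assumptions; every name and check in it is hypothetical and is not drawn from Digital Legacy AI's actual implementation.

    # Hypothetical sketch of a verification-gated inheritance vault. The names,
    # states, and checks are invented illustrations, not Digital Legacy AI's code.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class VaultState(Enum):
        SEALED = auto()      # owner alive: assets isolated, no heir access
        VERIFYING = auto()   # death reported: awaiting document verification
        RELEASED = auto()    # verified: heirs may retrieve assets

    @dataclass
    class LegacyVault:
        owner: str
        heirs: set                    # heir identities verified at enrollment
        state: VaultState = VaultState.SEALED
        _assets: dict = field(default_factory=dict)

        def deposit(self, name, secret):
            if self.state is not VaultState.SEALED:
                raise PermissionError("deposits allowed only while the owner controls the vault")
            self._assets[name] = secret

        def report_death(self, certificate_id):
            # A real system would query an authoritative registry; here any
            # non-empty certificate ID simply moves the vault to VERIFYING.
            if certificate_id:
                self.state = VaultState.VERIFYING

        def release_to(self, heir, second_factor_ok):
            # Release requires prior death verification AND heir authentication.
            if self.state is not VaultState.VERIFYING:
                raise PermissionError("death not verified")
            if heir not in self.heirs or not second_factor_ok:
                raise PermissionError("heir identity check failed")
            self.state = VaultState.RELEASED
            return dict(self._assets)

    vault = LegacyVault(owner="alice", heirs={"bob"})
    vault.deposit("wallet-seed", "correct horse battery staple")
    vault.report_death(certificate_id="CERT-2025-0042")
    print(vault.release_to("bob", second_factor_ok=True))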

Silver Tsunami Impact and Market Transformation

The timing of Devitt's storage revolution coincides with unprecedented generational wealth transfer. As baby boomers pass away over the next decade, an estimated $19 trillion in assets will change hands—the largest wealth transfer in American history . Significant portions of this wealth now exist in digital formats that traditional inheritance systems cannot handle effectively.

Digital asset markets approaching $2.5 trillion encompass cryptocurrencies, NFTs, online business accounts, and intellectual property stored in cloud systems. Baby boomers increasingly hold these assets but often lack the technical knowledge to manage complex inheritance protocols, creating a perfect storm of potential loss.

Current inheritance frameworks typically address single asset types—cryptocurrency services that cannot handle business accounts, or cloud storage inheritance that cannot manage blockchain assets. Devitt's unified approach handles diverse digital asset categories while maintaining distinct security standards for each type, preventing the fragmented solutions that currently frustrate estate planners and families.

The platform's household model creates network effects that could accelerate adoption as the silver tsunami intensifies. When one family member joins, parents, siblings, and children typically follow to share photos, access financial information, and maintain connected digital legacies. This organic growth pattern addresses the scale needed to handle millions of inheritance transfers over the coming decade.

Glenn Devitt's Vision for Industry Transformation

Devitt's transition from Delitor Inc., his government contracting firm, to Digital Legacy AI represents more than a business pivot—it's a fundamental shift in how society approaches digital permanence. "I can only grow my services so far, but the bigger markets are the consumer bases where people are consuming a product," he explained, describing the move from specialized consulting to mass-market technology.

His military background provides unique credibility for developing inheritance systems that must function reliably across decades. Military operations taught him that effective systems require redundancy, security, and automated processes that work without human intervention—exactly the characteristics needed for posthumous data management.

The platform launches with accessible storage tiers while maintaining enterprise-grade security protocols. Users pay for storage and management services during their lifetime, but beneficiaries receive permanent access to inherited content without ongoing fees, ensuring long-term preservation regardless of economic changes.

As regulatory frameworks develop around digital inheritance and blockchain technology matures, Devitt's early patent protection positions Digital Legacy AI to become the standard for secure posthumous data access. The technology transforms digital storage from a liability that families struggle to access into an asset that enhances rather than complicates inheritance processes.

For the millions of families preparing to navigate digital inheritance over the coming decade, Devitt's storage revolution offers a bridge between traditional estate planning and the digital-first world that increasingly defines modern wealth. Rather than accepting that digital legacies will disappear, families can now ensure that memories, assets, and knowledge transfer successfully across generations through systems designed by a veteran who understood that protecting what matters most requires both technical precision and human understanding.

[Partner Content]

Read next: AI Is Disrupting Hiring, And Trust Is the First Casualty


by Irfan Ahmad via Digital Information World

AI Is Disrupting Hiring, And Trust Is the First Casualty

Generative AI is shaking up white-collar work, and recruiting is already feeling the pain. What was once a field devoted to optimizing efficiency and job-candidate fit has taken a sharp turn. The bigger worry now? Application fraud powered by AI.

A recently released Software Finder survey paints the picture starkly: recruiters are being hit by a barrage of fabricated resumes, AI-generated portfolios, and deepfake interviews. As these fakes grow more realistic, the entire hiring process, built on honesty and identity, is under threat.




Recruiters Are Already Seeing Fakes

The survey gathered opinions from 874 recruitment professionals, and their answers confirm what most suspected: AI-powered falsification is already widespread.

  • 72% have received AI-generated resumes
  • 51% have received fabricated work samples or portfolios
  • 15% have encountered deepfake or face-swapped video interviews
  • 17% have encountered altered voices or audio filters

Even with those statistics, 75% of recruiters feel confident they can spot AI-aided candidates on their own. That may be wishful thinking: almost half already flag or eliminate candidates over suspected AI use, and 40% have rejected applicants over identity concerns.

Some applicants are using AI just to tighten up spelling or formatting. Others are gaming the entire system: faking identities, cloning voices, or submitting portfolios they never created. The list of tactics is long and expanding fast.

Where It Hurts Most: Tech, Marketing, and Creative Jobs

Some industries are more at risk than others. According to the survey, recruiters in several leading fields report the most AI abuse:

  • Tech: 65% say it is the most targeted
  • Marketing: 49% say it is heavily exposed
  • Creative/design: 47% report frequent manipulation

These roles rely on digital deliverables: portfolios, campaigns, sample code. All of them are easy to forge with AI. A designer can build an AI-made portfolio in minutes, a coder can submit code lifted from GitHub Copilot, and a marketer can hand over AI-generated ad copy or brand decks.

And it's not only the materials. The presentation looks professional. Add remote interviews and aggressive hiring schedules, and it's no wonder AI-assisted candidates are slipping through.

These technologies are no longer niche. Browser applications can mimic speech, imitate facial expressions, and create entire fake profiles. What used to demand advanced skill is now achieved with a decent Wi-Fi network.

Detection Tools Are Still Playing Catch-Up

Even though the threat is growing, most companies are not set to spot it effectively. This is how things stand today:

  • Only 31% use software to spot AI or deepfake material
  • 66% still use manual screening
  • 53% have third-party background checks
  • Only a third have applicant tracking systems (ATS) that can flag AI-based deception

And training? That's very thin. Close to half of HR professionals haven't received any training on how to spot AI-generated fakery. Only 15% say their company will offer such training in the near term.

Things might improve, as 40% of companies report that they intend to spend on detection software within the next year. But for the moment, the gap is clear: AI is developing more rapidly than the software intended to halt it.

So why the lag? Budgets, uncertainty, and risk. HR leaders worry about false positives, or that tools won’t keep pace with AI’s evolution. Others aren’t even sure what counts as unethical AI use. Should a resume rewritten by ChatGPT be disqualified? Should candidates be required to disclose that? Most companies don’t have a policy, and that leaves too much open to interpretation.

Should Job Platforms and Lawmakers Step In?

Hiring managers won't be able to shoulder this responsibility alone. Most think platforms and regulators must help tighten standards and expand verification. This is where agreement is building:

  • 65% would support mandating live-only interviews
  • 54% would want stricter background checks
  • 39% would support third-party video identity verification
  • 37% would prefer biometric or facial verification as protection

And it's not all on employers. A majority think platforms such as LinkedIn, Indeed, and others should be doing more:

  • 65% say platforms should help identify AI-generated candidates
  • 62% favor mandatory disclosure of AI use on applications
  • 56% would pay extra for recruitment software with in-app fraud detection

The conversation is evolving. Recruiters no longer consider this just an HR issue. It's becoming a systemic problem that platforms, vendors, and governments need to address in unison.

And the law may already be lagging. With AI software becoming cheaper and more realistic, authenticating a candidate's identity may eventually require legislative action, since individual employers cannot keep pace on their own.

Resume Faking Heads the List of Risks

Of all forms of AI-facilitated dishonesty, resume forgery ranks as the greatest threat:

  • 63% of recruiters identify AI-supercharged resumes as the greatest risk
  • 37% consider deepfaked video interviews to be more perilous

That's probably because recruitment still revolves around documents: resumes, cover letters, and writing samples, all easily doctored or faked with AI.

But video manipulation is coming up fast. More and more companies are embracing remote interviews and asynchronous video platforms. As that trend continues, AI-enhanced voice and face manipulation will become more common, and harder to detect.

Even seasoned recruiters admit it's becoming harder to catch deepfakes. That raises the odds of bad hires, legal trouble, and reputational damage. An impostor brought on board under false claims can drain time and resources and corrode company culture.

And the barrier to entry keeps dropping. High-quality fakery no longer requires special software. Most of it runs in your browser, or through apps on your phone. What was once rare is now becoming the new norm.

Trust Is the Real Casualty

A whopping 88% of recruiters believe AI fraud will reshape hiring practices within five years. In truth, it's already happening.

Recruiters claim to rely on intuition, but the reality is murkier. Few have had formal training. Tools are missing or insufficient. Internal procedures are hazy at best. And AI-generated content is becoming ever harder to tell from the real thing.

As AI improves at simulating people, it's easier to deceive. And that strikes at the core of hiring: trust.

Here's what businesses can begin to do today:

  • Deploy detection software that identifies red flags early
  • Train hiring managers and recruiters to spot suspicious activity
  • Develop internal policies on what types of AI applications are permissible
  • Work with platforms and legal counsel to establish sound policies

Not all AI usage is nefarious. Some applicants use it to correct grammar or rephrase a summary. Others use it to fabricate entire professional histories. Clear policies help recruiters define the line and apply standards evenly across the board.
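
As a sketch of what defining that line could look like in practice, the snippet below encodes a hypothetical allow/disclose/prohibit policy and applies it uniformly to applicants. The categories and rulings are invented examples, not any vendor's or survey respondent's actual rules.

    # Illustrative sketch of an explicit AI-use policy. Categories and rulings
    # are hypothetical examples, not any specific company's rules.
    POLICY = {
        "grammar_and_formatting": "allowed",
        "resume_rewriting": "disclose",         # fine if the candidate says so
        "portfolio_generation": "prohibited",
        "identity_manipulation": "prohibited",  # deepfakes, voice cloning
    }

    def screen(candidate, ai_uses, disclosed):
        """Return a screening decision for one candidate."""
        for use in ai_uses:
            ruling = POLICY.get(use, "disclose")  # unknown uses default to disclosure
            if ruling == "prohibited":
                return f"{candidate}: reject ({use} is prohibited)"
            if ruling == "disclose" and use not in disclosed:
                return f"{candidate}: flag for review (undisclosed {use})"
        return f"{candidate}: proceed"

    print(screen("Candidate A", ["grammar_and_formatting"], disclosed=set()))
    print(screen("Candidate B", ["resume_rewriting"], disclosed={"resume_rewriting"}))
    print(screen("Candidate C", ["portfolio_generation"], disclosed=set()))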

The more profound change? Redefining what "authentic" means to us.

Zoom interviews and hunches won't do the job anymore. If AI can forge resumes, faces, voices, and even work histories, authentication needs to be part of the hiring process, not an afterthought.

The old way of hiring was supposed to be about finding the best candidate. Today, it's also about ensuring the best candidate is actually real.

Read next: 

• Tree Planting Overhyped: Study Warns Forests Cannot Replace Fossil Fuel Cuts

• Google’s Danny Sullivan Reminds Site Owners That SEO Basics Still Count in the Age of AI Search


by Irfan Ahmad via Digital Information World

Apple Prepares AI Search Tool as Part of Siri Relaunch

Apple is working on a new artificial intelligence search system that is expected to play a central role in its planned overhaul of Siri. The platform, which is being developed under the internal name World Knowledge Answers, is scheduled for release in 2026 alongside broader upgrades to Apple’s voice assistant.

Built for Siri, Safari, and Spotlight

The new tool is designed to expand Siri from a basic question-answering feature into a system that can generate answers in richer formats. Instead of offering short responses, it will combine text with images, video, and local information, drawing on large language models to produce results that resemble the AI-generated summaries already seen from Google, Microsoft, and newer platforms such as Perplexity. Apple is also preparing to integrate the technology into Safari and Spotlight, which would give it wider reach across iPhones and other devices.

Testing External AI Models

Although Apple is creating its own tool, reports indicate that the company has trialed Google’s Gemini model to support elements of the experience. The final design is still being refined, and it remains unclear how much of the system will rely on Apple’s own models compared with outside partnerships. Apple has also been open to acquisitions in this area, with speculation linking it to potential interest in smaller AI search companies.

Siri’s New Role

The relaunch will mark one of the biggest changes to Siri since its introduction. Apple’s plan is to shift the assistant into what it calls an answer engine, capable of pulling data from across the web while presenting results in a more conversational way. Additional features, such as planning tools and summarization options, are expected to make interactions more useful and precise. A chatbot-style app was reportedly considered but has not been prioritized, with Apple instead embedding the functions within its existing apps and services.

Competing in a Crowded Market

Historically, Apple has not positioned itself as a direct player in web search, instead relying on partnerships to handle that function. The rise of AI-driven tools has shifted that approach. With Safari integration, Apple could bring search more directly under its control while offering users an alternative to rival platforms. The move reflects broader changes in how people are accessing information online, where AI systems increasingly guide search and browsing.

The success of World Knowledge Answers could determine how competitive Apple becomes against established search providers and new AI entrants. For businesses, the change could also alter how visibility is achieved, with placement inside Apple’s system becoming as significant as traditional search rankings.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: AI Adoption in America: Which States Lead, Which Lag, and Why It Matters
by Irfan Ahmad via Digital Information World

Wednesday, September 3, 2025

AI Adoption in America: Which States Lead, Which Lag, and Why It Matters

Artificial intelligence is becoming an integral part of our everyday lives in 2025. It can boost productivity, save man-hours, and teach ordinary people how to do things far outside their job descriptions. From coding to image creation, if prompted properly, AI can do it.

Even some blue-collar workers, who spend most of their workday using their hands, are seeing the benefits of AI integration through automated scheduling and digital assistants that help with paperwork and payroll. Needless to say, almost everyone is using AI in some way, but some areas of the country are utilizing it more than others.

D.C. Uses AI Most, But Rhode Island Is Most Efficient

A new study by Phrasly.ai has revealed that some states are utilizing AI more extensively than others. It found that the nation’s capital, Washington D.C., is the number one user of AI per capita, spending the most time interfacing with artificial intelligence. Interestingly, the study also found that the capital is not utilizing AI as efficiently as some other states, despite being its leading user.

At the other end of the scale, South Dakota is lagging behind in AI usage. The study found that South Dakota residents use AI the least per capita, with sessions averaging just over 10 minutes. Delaware, the state that spends the most time using AI, averages 17 minutes per session.

But as seen in D.C., usage doesn’t always translate to efficiency. For example, Rhode Island residents who used AI were found to use it most efficiently, saving an average of 32 hours and 5 minutes per user per month. That’s almost an entire work week saved by AI users in Rhode Island.

Unfortunately, Washington D.C. AI users were found to only save 7 hours and 29 minutes per month on average, just slightly less than an average 8-hour workday. While one workday doesn’t seem like much in comparison to Rhode Island's workweek savings, it’s still significantly more than the states that are lagging behind.

States like South Dakota, Wyoming, and Kansas were found to be the least efficient, with AI users saving between 49 minutes and 3 hours per month per user. Perhaps these are states that could benefit from AI teaching programs, especially for those using AI in business.
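
For a quick sanity check on those comparisons, the reported monthly savings convert into workdays as shown below. The 8-hour day and 5-day week are our assumption for the conversion, not the study's stated method.

    # Convert the study's reported monthly time savings into 8-hour workdays.
    WORKDAY_HOURS = 8  # assumption: standard 8-hour day, 5-day work week

    reported = {                          # hours saved per user per month
        "Rhode Island": 32 + 5 / 60,
        "Washington D.C.": 7 + 29 / 60,
        "South Dakota (low end)": 49 / 60,
    }

    for state, hours in reported.items():
        days = hours / WORKDAY_HOURS
        print(f"{state}: {hours:.2f} h = {days:.2f} workdays per month")

    # Rhode Island: 32.08 h = 4.01 workdays (four days of a five-day week)
    # Washington D.C.: 7.48 h = 0.94 workdays (just under one workday)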

Speaking of efficiency and the integration of AI into business, a separate study by the Infosys Knowledge Institute found that one in two business initiatives using AI was successful. The study also noted that AI success is closely tied to a business's ability to adapt its operations and data infrastructure effectively. To put it another way, AI's impact comes not from layering AI onto existing processes, but from rethinking how those processes work in the first place.

Monday Night Is The New AI Rush Hour

When it comes to AI usage, there are some hours of the day and days of the week that are more popular than others, according to Phrasly’s study. Overall, the U.S. sees the most AI usage on Monday at 8 p.m., which the study speculated is because “people are squeezing in productivity after the traditional 9-to-5 — students finishing those essays due tomorrow, professionals tackling leftover tasks, or maybe just those looking for last minute dinner inspiration.” We might also attribute this to the fact that AI users are seeking to engage in creative work during quieter times of the day, or perhaps they are using AI as a late-night companion.

Despite AI’s limited emotional intelligence, people are increasingly turning to AI for advice and mental health support, according to a study in JMIR Mental Health. The appeal? Well, AI is available 24/7, non-judgmental, and confidential. One might argue that it’s also relatively inexpensive compared to rising healthcare costs in the U.S.

On the state level, Oregon, Vermont, and Delaware are using AI earlier in the day than the rest of the country. All three states mostly interact with AI between 9 a.m. and 11 a.m. local time, suggesting, based on those peak hours, that they most commonly use AI for work. Late-night states like Idaho and Pennsylvania see their peak usage at 9 p.m.

Interestingly, no state in the study was found to use AI primarily during the weekend. Other studies have explored this pattern and suggested that AI is most beneficial to the average user on work and school days. Others have noted that AI usage drops significantly during the summer months, suggesting that students may be among the biggest users of AI.

This raises the question: will states with low AI usage today continue to struggle in the future? If the internet boom of the late 1990s and early 2000s taught us anything, it's that states lagging behind in AI adoption now will likely continue to lag behind.

Content Generation, Humanization, and Detection

The study also analyzed the three different application types for AI usage: generation, humanization, and detection. Content generation and humanization were most popular among AI users in Virginia, while AI detection was most popular in Wyoming.

This might lead one to believe that the small share of Wyoming residents using AI are afraid of being caught. Still, state officials are encouraging residents to set those fears aside through education. In fact, the University of Wyoming has launched an AI Initiative to develop an AI-capable workforce and apply the technology to key state industries, including energy and agriculture. Wyoming residents may be hesitant to use AI, but state officials are not.

Midwestern States Are Lagging Behind

Overall, the study paints a clear picture. AI is no longer an exclusive experiment that’s only accessible to tech professionals. It has become a mainstream tool that is reshaping how Americans work, study, and communicate; yet, AI adoption is anything but uniform. Washington, D.C. leads in overall usage, Rhode Island leads in efficiency, Delaware leads in session length, and much of the Midwest continues to trail behind.

While some people have doubts about AI and others think it's the best thing since sliced bread, we can all agree that the AI adoption gap is real, and it may shape the future of work in the U.S. for many years to come.

Read next: Rising AI Pressure Pushes Professionals Back Toward Human Networks


by Irfan Ahmad via Digital Information World

Court Orders Google to Open Search Data and End Exclusive Deals

Google’s legal battle with U.S. regulators has reached a turning point. A federal court has ruled that the tech giant must stop making exclusive agreements that kept its search service as the default choice on many devices. It will also be required to share parts of its search data with competitors, though it will keep control of Chrome and Android.

A Case Years in the Making

The dispute began in 2020 when the Department of Justice claimed Google had built an illegal monopoly around internet search. After a long trial, the court found last year that the company had used restrictive contracts and huge payments to partners such as Apple to secure its position. Those deals gave Google unmatched visibility and left rival search engines struggling to compete.

Remedies Announced

The remedies unveiled this week aim to loosen that hold without breaking up the company. Google is now barred from exclusive contracts across Search, Chrome, Assistant and Gemini. At the same time, it will have to make certain search index data and user interaction information available to qualified competitors. Advertising data is not included, but the hope is that opening access to core search material will give rivals a chance to improve their services.

The court also ordered Google to provide syndication options that allow other companies to deliver search results and ads while building their own technology. These arrangements must be offered on standard commercial terms.

No Forced Breakup

The government had originally pushed for more drastic measures, including the sale of Chrome and possible limits on Android. The court rejected those proposals, warning that removing those products from Google would cause disruption for consumers and partners. Instead, the focus is on cutting off the practices that reinforced its dominance rather than dismantling the tools themselves.

Google’s Position

Google responded by highlighting changes in the industry, pointing to the rise of artificial intelligence as an alternative way for people to find information. The company has raised concerns about privacy risks in sharing search data and signaled plans to appeal the ruling. Any appeal could delay enforcement for years, leaving the impact uncertain in the near term.

What It Means

The decision marks a new phase in how regulators deal with large technology companies. By keeping Chrome and Android intact but opening search data and ending exclusivity, the court has tried to balance competition with stability. Whether this approach succeeds will depend on how rivals use the access they are given and how long the appeals process drags on.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Parental Controls Are Coming to ChatGPT as Safety Questions Grow
by Asim BN via Digital Information World