Friday, October 24, 2025

Google Earth AI Taps Gemini to Predict Disasters Before They Unfold

Google is expanding its Earth AI platform with new Gemini-based intelligence that links environmental data with human impact in ways that used to take entire research teams to uncover. The system is being upgraded to process complex geospatial relationships so it can help scientists, city planners, and aid organizations prepare for natural disasters before they strike.

Smarter maps, faster insight

The foundation of this upgrade lies in what Google calls geospatial reasoning, a way for AI to analyze satellite images, population maps, and weather forecasts in the same query. When the system processes those layers together, it can predict not only where a cyclone might land but which areas are at greatest risk and how infrastructure might respond. This form of mapping turns what once required weeks of modeling into a process that unfolds in minutes.
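
To make that layering concrete, here is a minimal sketch of a weighted raster overlay, the classic way to combine hazard, population, and infrastructure grids into a single risk map. The arrays, weights, and cell counts below are invented for illustration; this is not Google's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example layers on a shared 50x50 grid (not real data). In a real
# geospatial stack these would come from satellite imagery, census data, and
# weather-model outputs reprojected to a common grid.
hazard = rng.random((50, 50))                                # forecast hazard probability, 0..1
population = rng.integers(0, 5000, (50, 50)).astype(float)   # people per cell
powerlines = (rng.random((50, 50)) > 0.9).astype(float)      # 1 where critical assets sit

# Normalize population so all layers share a 0..1 scale.
pop_norm = population / population.max()

# A simple weighted overlay: risk is high where hazard coincides with people
# and infrastructure. The 0.6/0.4 weights are arbitrary placeholders.
risk = hazard * (0.6 * pop_norm + 0.4 * powerlines)

# Pull the ten highest-risk cells as candidate priority areas for responders.
flat = np.argsort(risk, axis=None)[::-1][:10]
rows, cols = np.unravel_index(flat, risk.shape)
for r, c in zip(rows, cols):
    print(f"cell ({r},{c}): risk={risk[r, c]:.2f}, people={int(population[r, c])}")
```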

The model draws on years of satellite and sensor data, combining it with Gemini’s reasoning to interpret physical conditions on the ground. That means the AI can identify early signs of risk—rivers drying up, vegetation creeping near power lines, or algae spreading through reservoirs—and highlight where those patterns may threaten people or utilities. It allows agencies to act before an issue escalates.

Real-world tests already underway

Organizations like the World Health Organization’s Africa office are already experimenting with Earth AI’s population and environment models to anticipate cholera outbreaks in parts of the Democratic Republic of the Congo. Energy companies are testing how the same framework can prevent blackouts by mapping tree growth near high-voltage networks, while insurers are using it to refine damage prediction models.

These tools are also being folded into Google Earth itself. Users can now type natural-language requests to find information inside satellite imagery, such as where rivers have recently dried or where algal blooms are forming. That shift makes complex geospatial analysis accessible to non-specialists who previously needed custom code or dedicated GIS software to see such patterns.

A step from reaction to prevention

Earth AI’s predictive focus reflects a wider change within Google’s environmental research, which now covers floods, wildfires, air quality, and storms. Its earlier flood forecasts reached more than two billion people, and its wildfire alerts in California helped over 15 million residents locate shelters. The latest version of Earth AI builds on that experience, seeking not to react to disaster but to forecast which communities may face the most danger and when intervention is needed.

Google has begun offering these models through its Cloud platform, letting public agencies and businesses merge their own datasets with Earth AI’s imagery and environmental layers. Thousands of groups are participating in early trials that aim to make climate forecasting, disaster response, and environmental monitoring more immediate.

If successful, Earth AI could reshape how institutions use global data. Instead of studying disasters after the fact, they might learn to see them forming in real time and move sooner to protect the people in their path.

Notes: This post was edited/created using GenAI tools.

Read next: Why AI Chatbots Aren’t Bullying Kids, But Still Pose Serious Risks


by Asim BN via Digital Information World

Apple Feels the Heat as Regulators Tighten Grip in the UK and Europe

Apple is coming under heavier scrutiny from both British and European regulators. The company’s control over how apps run, sell, and track users is now facing direct challenges on two fronts. One is the United Kingdom’s new digital market regime. The other comes from privacy regulators in Europe questioning Apple’s data-tracking rules.

UK Watchdog Moves First

The UK’s Competition and Markets Authority has gained new powers over Apple and Google. Both firms now carry the label of “strategic market status,” a legal tag that lets the CMA monitor how their app stores, browsers, and operating systems behave. The designation became possible under the country’s digital markets law, which took effect earlier this year.

With this new authority, the regulator can step in if it sees unfair treatment of smaller developers or if users have limited choices. It can demand changes to payment systems, ranking methods, and access to alternative stores. For years, app makers have said that Apple and Google set the rules to protect their own profits while restricting rivals.

The Coalition for App Fairness, a group representing developers and tech firms such as Spotify and Epic Games, called the decision overdue. It says the mobile economy can only grow if the rules are fair and transparent.

Trade groups on the other side argue that users already enjoy wide choice and that stricter regulation might slow investment. Google pointed to research showing high satisfaction among Android users in the UK, while Apple warned that new restrictions could reduce privacy and delay software updates.

Privacy Rules Stir Tension in Europe

Apple’s challenges don’t end in Britain. Across Europe, it’s also facing criticism for its App Tracking Transparency feature, which lets users block apps from following their online activity. The company says this tool protects privacy. Regulators in Germany, Italy, and France see it differently.

Germany’s competition authority said Apple’s system might be anticompetitive because the company allegedly holds its own apps to a different standard. France has already fined Apple for the same issue. Apple claims the criticism stems from pressure by advertising groups and large digital firms that profit from tracking users.

The company also hinted it might disable the feature in some European countries if regulators force changes that undermine its design. That warning signals how far Apple is willing to go to defend its model of privacy control.

Two Sides of the Same Fight

The disputes in London and Brussels share a theme: control. Regulators want to loosen Apple’s grip on how apps reach users and how data is collected. Apple argues that tighter rules risk breaking the smooth and secure experience its devices are known for.

Both Apple and Google are now preparing for years of oversight as governments push for a more open mobile ecosystem. The CMA’s new framework will test whether British law can keep tech giants in check without driving them away. In Europe, privacy cases could reshape how digital ads and user data are handled across borders.

A Shifting Landscape for Mobile Power

Together, these moves show a region growing less tolerant of big tech’s dominance. Regulators no longer rely on voluntary promises. They’re setting boundaries on what Apple and Google can do with their platforms.

For Apple, the coming months will be critical. Its next steps in the UK and the EU will show whether it can protect its business model while meeting new legal expectations. Whatever happens, both companies are entering a new phase where privacy, competition, and power are being redrawn by the people who write the rules.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Gemini Struggles Most in Accuracy Test; BBC–EBU Study Exposes Deep Flaws in AI News Replies

• Study Finds Health Apps Still Struggle With Data Transparency

• Fewer Clicks, Fewer Readers: Social Media Sends Less Traffic to News Sites as Platforms Shift Away from Links
by Irfan Ahmad via Digital Information World

Thursday, October 23, 2025

Fewer Clicks, Fewer Readers: Social Media Sends Less Traffic to News Sites as Platforms Shift Away from Links

Traffic from social media platforms to news outlets has fallen by about 30 percent in three years, as short-form video and in-app engagement replace link sharing.

Over the past three years, social networks have sent far fewer readers to news sites. Data from Similarweb shows that referrals to the top 100 global media domains peaked at roughly 1.73 billion in late 2022 and now hover near 1.22 billion. The decline, close to thirty percent, highlights how the relationship between news organizations and social networks has weakened as the latter move toward formats and tactics that keep users inside their own apps.

Through 2023 and 2024, the drop remained steady. Referral volumes slipped in almost every quarter, with only brief recoveries when new features surfaced or algorithms shifted. By the second half of 2025, the figure settled around 1.24 billion, showing no sign of returning to earlier levels.

A major factor lies in how platforms now treat external links. The rise of short videos on Facebook, Instagram, and YouTube has reshaped what people see in their feeds. Posts that open external sites compete poorly with clips that keep audiences watching within the same app. LinkedIn and X have also reduced the visibility of outbound links, encouraging interactions that stay on the platform rather than sending users elsewhere.

Some networks are testing small adjustments to reverse the slide. X has begun experimenting with a new link format on iOS that allows users to react to posts while browsing external pages, hoping to make link engagement feel less detached. Instagram is also trying clickable link options inside posts, which could make it easier for creators to direct followers to their own sites.

For news publishers, the loss is significant. Many built their distribution strategies on the traffic once supplied by Facebook or Twitter. As those pathways shrink, even the biggest outlets face lower referral volumes and weaker advertising returns tied to that audience flow. Smaller publishers, which relied heavily on social referrals, feel the impact more sharply.

In response, a new niche has grown around link-in-bio tools like Linktree and Beacons. These services help creators and brands guide followers to other destinations, but their benefits to newsrooms remain limited. While they provide an alternative route to external content, they cannot restore the consistent stream of readers that traditional social referrals once delivered.

The trend suggests a lasting shift in how people encounter news online. With platforms prioritizing video and in-app activity, links to independent outlets are becoming a secondary pathway rather than the main entry point to information. As feeds fill with video and engagement loops, the open web feels more distant from where people spend their time online.


Month/Year Social Referral Traffic (Billions of visits)
Sep 2022 1.732B
Oct 2022 1.730B
Nov 2022 1.655B
Dec 2022 1.582B
Jan 2023 1.515B
Feb 2023 1.322B
Mar 2023 1.438B
Apr 2023 1.360B
May 2023 1.401B
Jun 2023 1.382B
Jul 2023 1.379B
Aug 2023 1.347B
Sep 2023 1.266B
Oct 2023 1.342B
Nov 2023 1.209B
Dec 2023 1.291B
Jan 2024 1.312B
Feb 2024 1.214B
Mar 2024 1.312B
Apr 2024 1.279B
May 2024 1.324B
Jul 2024 1.402B
Aug 2024 1.378B
Sep 2024 1.277B
Oct 2024 1.341B
Nov 2024 1.358B
Dec 2024 1.336B
Jan 2025 1.378B
Feb 2025 1.266B
Mar 2025 1.359B
Apr 2025 1.279B
May 2025 1.278B
Jun 2025 1.273B
Jul 2025 1.287B
Aug 2025 1.242B
Sep 2025 1.218B
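
The headline decline can be checked directly from the endpoints of the table above; a short calculation reproduces the "close to thirty percent" figure.

```python
# Endpoints from the table above, in billions of referral visits.
peak = 1.732    # Sep 2022
latest = 1.218  # Sep 2025

decline = (peak - latest) / peak
print(f"Decline from peak: {decline:.1%}")  # prints: Decline from peak: 29.7%
```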

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: 60% of Fortune 500 Companies Rely on AWS. What an Hour of Downtime Really Costs
by Irfan Ahmad via Digital Information World

AI’s Shortcut to Speed Is Leaving the Door Open to Hackers

AI is writing more of the world’s code, and with it, more of the world’s mistakes. A new study says nearly one in five companies have suffered a serious security breach that started with code written by AI tools. The same report shows that almost seven in ten companies have already found vulnerabilities traced to machine-written software.

The research, published in Aikido Security’s State of AI in Security & Development 2026, paints a picture of an industry trying to move faster than its safety net. The survey covered 450 developers, security leaders, and application engineers across the U.S. and Europe, capturing how the use of AI in programming has outpaced the rules meant to keep it safe.

Speed Comes with Loose Ends

The study found that 24% of production code worldwide now comes from AI systems. In the U.S., the figure climbs to 29%. In Europe, it’s closer to 21%. This shift has lifted productivity, but it has also created new security headaches.

About one in five companies reported a serious security breach linked to AI code. Another 49% said they’d seen smaller issues or didn’t realize where the problem came from until much later. Most agreed that the lack of clear oversight made it difficult to assign responsibility when AI-generated work introduced bugs or security gaps.

When asked who would take the blame, 53% said the security team, 45% pointed to the developer who used the AI, and 42% blamed whoever merged the code. The report says that uncertainty slows down fixes, stretches investigations, and leaves gaps open for longer than anyone is comfortable admitting.

Two Worlds, Two Attitudes

Companies in Europe are more cautious than those in the U.S., which helps explain their lower incident rates. Only one in five European firms reported a major breach caused by AI-generated code, compared with 43% in the U.S.

Aikido’s analysts say the gap reflects how each region approaches compliance. European firms are bound by tighter data and software regulations, while American developers lean harder on automation and are more likely to bypass safety checks when deadlines tighten.

The report also shows that U.S. teams are more proactive in tracking AI-generated content. Nearly six in ten said they log and review every line of AI code, compared with just over a third in Europe. The difference gives U.S. firms more visibility, even if it comes with more risk.

Too Many Tools, Too Little Focus

Another finding centers on tool sprawl. Teams juggling multiple security products are facing more incidents, not fewer. Companies using one or two security tools fixed critical flaws in about three days. Those using five or more took nearly eight.

False alerts made matters worse. Almost every engineer surveyed said they lost hours each week sorting through warnings that turned out to be harmless. The report estimated that wasted time costs big firms millions of dollars in lost productivity each year. Some engineers admitted to turning off scanners or bypassing checks just to get code shipped, a move that often adds hidden risks later.

One respondent described the situation as “too many alarms and not enough clarity,” a sentiment echoed across both continents.

Humans Still Hold the Line

Even as AI takes on more work, nearly everyone agrees that human review still matters. Ninety-six percent of respondents believe AI will eventually write secure code, but most expect it will take at least five more years. Only one in five think it will happen without people checking the results.

Companies also depend heavily on experienced security engineers. A quarter of CISOs said losing one skilled team member could lead directly to a breach. Many are now trying to make security tools easier to use and less noisy, giving developers room to focus on the real problems instead of chasing false positives.

Despite the growing pains, optimism remains strong. Nine in ten firms expect AI will soon handle most penetration testing, and nearly eight in ten already use AI to help repair vulnerabilities. The difference between optimism and reality, researchers said, lies in how companies combine automation with human oversight.

Balancing Speed and Safety

The report ends with a familiar warning. The faster AI writes code, the faster mistakes can spread. Security still depends on developers who understand what the AI is doing and who take ownership of the results.

In plain terms, Aikido’s findings suggest that the tools are racing ahead, but the guardrails have yet to catch up. For now, the smartest move might be slowing down long enough to double-check what the machines have built.

Notes: This post was edited/created using GenAI tools.

Read next: YouTube Pilots Reforms That Reopen Doors For Creators And Close Loops For Endless Scrolling


by Asim BN via Digital Information World

AI Researchers and Global Figures Call for Ban on Superintelligence Development

A growing coalition of scientists and public figures is urging world leaders to halt the creation of artificial superintelligence until it can be proven safe.

The call, released by the US-based Future of Life Institute, reflects growing concern over machines that could surpass human intelligence and operate beyond human control.

The statement, published on Wednesday, warns that unchecked progress in advanced AI could push society toward systems capable of outperforming people across nearly every cognitive task. Supporters argue that until there is broad scientific agreement on how to manage such systems, and public understanding of their impact, development should be stopped altogether.

Among those endorsing the pledge are early computing pioneer Steve Wozniak, entrepreneur Richard Branson, and former Irish president Mary Robinson. They join leading AI researchers Geoffrey Hinton and Yoshua Bengio, both widely credited with shaping modern artificial intelligence.

The list extends far beyond academic circles, including political, business, and cultural figures who view unrestrained superintelligence as a threat to social stability and global security.

Yet some of the most visible voices in AI have stayed silent. Elon Musk, once a founding supporter of the Institute, has not added his name, nor have Meta’s Mark Zuckerberg or OpenAI’s chief executive Sam Altman. Despite their absence, the document cites earlier public remarks from prominent industry leaders acknowledging potential risks if advanced AI develops without clear safety limits.

The Future of Life Institute has spent more than a decade raising alarms about the societal consequences of autonomous systems. It argues that superintelligence represents a different class of risk... not biased algorithms or job disruption, but the creation of entities capable of reshaping the world through independent decision-making.

Supporters of the pledge believe halting research now is the only realistic safeguard until oversight mechanisms catch up.

Survey data released with the statement shows most Americans share these concerns. Nearly two-thirds favor strong regulation of advanced AI, and more than half oppose any further progress toward superhuman systems unless they are proven safe and controllable. Only a small minority supports the current trajectory of unregulated development.

Researchers say the danger lies not in malicious intent but in a possible mismatch between human goals and machine reasoning. A superintelligent system could pursue its programmed objectives with precision yet disregard human well-being, much as past technologies have produced unintended harm when deployed at scale. Examples from financial crises to environmental damage show how complex systems can escape prediction and control once set in motion.

The Institute’s call aims to redirect global conversation away from the race for smarter machines and toward deliberate, transparent governance. Advocates argue that AI can continue to advance in ways that serve medicine, science, and education without crossing into forms of intelligence that humanity might one day struggle to contain.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: People Talk About AI All the Time. Almost Nobody Uses It Much
by Irfan Ahmad via Digital Information World

Wednesday, October 22, 2025

People Talk About AI All the Time. Almost Nobody Uses It Much

Artificial intelligence is everywhere... at least in conversation.

But in practice? Not so much.

A new study from researchers at the University of California, Davis, and Michigan State University took a hard look at what people actually do online instead of what they claim to do. They combed through real browser histories covering roughly fourteen million website visits. The result? AI tools barely show up.

For most users, visits to AI sites made up less than one percent of their online life. Many people didn’t touch them at all.

That finding feels oddly quiet compared to the buzz around ChatGPT or Copilot or whatever tool makes headlines next. The researchers weren’t interested in hype; they wanted numbers. How often do people really open these systems, who does it most, and what happens before and after those moments?

What the Data Really Showed

Students used AI more than the general public, though not by much. Their AI activity made up about one out of every hundred page views. The broader population landed closer to half that rate. And while a few “heavy users” appeared... those who let AI make up more than four percent of their total browsing... they were rare.

ChatGPT dominated the category. Around 85 percent of all AI visits went to OpenAI’s chatbot. It wasn’t even close.

When researchers mapped where people went before and after those visits, the pattern stood out. Just before AI, most users were at search engines or login portals. Immediately after, they drifted to education pages or professional tools. That chain suggests people slot AI into work or study tasks rather than casual browsing. It’s not a place to hang out. It’s a pit stop.
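
The before-and-after mapping is conceptually simple: treat each user's history as an ordered list of visits and count the domains that immediately precede and follow an AI site. A toy version, using an invented log rather than the study's data, might look like this:

```python
from collections import Counter

# Invented, ordered browsing log for one user; the study worked with
# millions of such records across many participants.
visits = ["google.com", "chat.openai.com", "canvas.university.edu",
          "mail.com", "google.com", "chat.openai.com", "docs.google.com"]

AI_DOMAINS = {"chat.openai.com"}

before, after = Counter(), Counter()
for i, domain in enumerate(visits):
    if domain in AI_DOMAINS:
        if i > 0:
            before[visits[i - 1]] += 1   # page visited just before the AI tool
        if i + 1 < len(visits):
            after[visits[i + 1]] += 1    # page visited just after
print("before:", before.most_common())
print("after:", after.most_common())
```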

Personality, Not Just Curiosity

Then came the psychology layer. Each participant had completed surveys measuring personality and attitudes toward technology. Patterns emerged, though not dramatic ones.

Students who leaned heavier on AI tended to score higher on what psychologists call the “Dark Triad”: Machiavellianism, narcissism, psychopathy. Those traits, simplified, describe people who are strategic, self-assured, or indifferent to social rules. Among the general public, the pattern softened, leaving only a faint link with Machiavellianism.

No cause-and-effect story here, just an observation. Still, the connection is interesting. People high in those traits often like efficiency and control. They might see AI less as a novelty and more as a leverage point, a tool that amplifies output without requiring approval or help.

The Illusion of Self-Reporting

Another piece of the puzzle: what people think they’re doing versus what they actually do.

Participants had estimated their AI use through surveys before their data was analyzed. The numbers didn’t line up. Correlation existed, yes, but it was weak: proof that self-reports paint a blurry picture. Humans tend to overstate, understate, or just forget.

That matters because many studies and public polls still rely on asking people about their habits. This research shows how unreliable that can be. If we want to know how AI fits into daily life, the evidence will likely come from behavior logs, not memories.
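
Checking that mismatch amounts to correlating two lists: the hours of use people claim versus the hours their logs show. The sketch below uses invented numbers purely to show the shape of the comparison, not the study's data.

```python
from statistics import correlation  # available in Python 3.10+

# Invented figures, not the study's data: weekly AI use as self-reported
# in a survey vs. as measured from the same people's browsing logs.
self_reported = [5, 1, 10, 3, 8, 0, 6, 2, 4, 7]
logged        = [2, 0, 3, 1, 2, 0, 4, 0, 1, 2]

# Pearson's r: 1.0 means reports track behavior perfectly, 0 means no link.
r = correlation(self_reported, logged)
print(f"Pearson r = {r:.2f}")
```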

The Quiet Reality Beneath the Noise

Even with its scope, the project had limits. Only web-based activity counted. Mobile app use, which might be higher for some, was left out. Chrome users dominated the sample, since that browser allows easy data export. Despite those gaps, the message stays the same: AI plays a small role in everyday browsing for most people.

It might not stay that way forever. As AI slides deeper into search engines, word processors, and chat platforms, usage will probably rise without anyone noticing. At some point, people won’t “go to” an AI, they’ll simply use the internet, and the AI will be there, humming quietly in the background.

For now, though, the contrast is striking. The world debates how AI will rewrite everything, yet for most people, it hasn’t rewritten much at all. They still scroll news sites, sign in to email, check grades, watch videos. The future everyone’s talking about? It’s loading slower than the headlines suggest.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: 60% of Fortune 500 Companies Rely on AWS. What an Hour of Downtime Really Costs

by Asim BN via Digital Information World

Creators Can Now Flag AI-Generated Clones with YouTube’s New Tool

YouTube has begun rolling out a new system that helps creators identify and report videos using their face or voice without consent. The feature, which appears under a new Likeness tab in YouTube Studio, gives verified users the ability to see when their appearance has been replicated through artificial intelligence and decide how to respond.

It may not sound like a big deal, but for YouTube, it’s a long time coming. The platform has been swamped with deepfakes and look-alike clips for years... some harmless parodies, others crossing the line. What started as internet fun has turned into a guessing game of what’s real and what’s not. For people whose faces are tied to their work, that’s no small headache. The new tool doesn’t solve everything, but it finally gives them a bit of ground to stand on.

Eligible creators receive an invitation to enroll. Once they agree, they’re guided through a short verification process. A QR code opens a recording screen where the creator captures a short selfie video and uploads photo identification. The video is analyzed to map facial features and build a template for comparison. From then on, YouTube’s system automatically scans uploads across the platform, looking for videos that might reuse or alter that likeness.

When the system spots a possible match, it lands in the creator’s review panel. The dashboard lays out the basics — where the clip came from, who uploaded it, and how much traction it’s getting. From there, the creator decides what to do next: flag it for privacy, file a copyright complaint, or just keep it on record. Nothing disappears on its own. The tool doesn’t pull the trigger; it leaves the call to the person whose face is on the line.

The likeness scanner functions a bit like Content ID, but instead of tracking reused footage or music, it looks for patterns that resemble a person’s face. The system isn’t perfect. Sometimes it flags legitimate clips, producing false positives, and parody videos may stay online if they fall under fair use. Even so, it offers an early warning signal in a space where cloned faces can surface overnight.
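
YouTube hasn't published how its matcher works internally. Conceptually, though, likeness systems tend to reduce a face to a fixed-length embedding and compare new uploads against a stored template with a similarity score. The sketch below illustrates that idea with random vectors standing in for a real embedding model; the 512-dimension size and the threshold are arbitrary assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)

# Stand-in template: a real system would derive this from a face-embedding
# model applied to the creator's verification selfie video.
creator_template = rng.normal(size=512)

def needs_review(candidate: np.ndarray, threshold: float = 0.8) -> bool:
    """Surface a video in the creator's review panel when its face embedding
    sits close to the stored template. The threshold is an invented value; a
    production system would tune it to balance misses and false positives."""
    return cosine_similarity(creator_template, candidate) >= threshold

# A near-duplicate embedding (template plus small noise) should match...
lookalike = creator_template + rng.normal(scale=0.1, size=512)
print(needs_review(lookalike))             # True
# ...while an unrelated face should not.
print(needs_review(rng.normal(size=512)))  # False
```

A real pipeline would add far more (multiple frames, pose handling, appeal flows), but this match-then-review shape mirrors what the dashboard described above exposes to creators.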

Right now, the feature is limited to a small group of creators in select countries. YouTube plans to expand access gradually while testing accuracy. Voice detection isn’t part of this release, though it may come later. The company says participation is voluntary and that scanning stops within a day if someone opts out.

Privacy rules are built in. YouTube stores identity data and the facial template for up to three years after the last login, then removes it unless the creator reactivates the feature. The company also states that verification data won’t be used to train other AI systems. It’s a cautious move that acknowledges growing concern about how platforms handle biometric information.

The push for likeness protection connects to broader efforts across Google to address the social fallout of synthetic media. Earlier this year, YouTube began working with agencies representing public figures to help detect and report deepfake videos. The company also voiced support for proposed legislation in the United States that would make unauthorized digital replicas of people illegal when used to mislead.

Timing plays a role here. New generative models, such as Google’s own Veo 3.1, can now produce realistic portrait and landscape footage with remarkable precision. That progress brings excitement and anxiety in equal measure. For platforms like YouTube, it also brings responsibility... to balance innovation with safeguards that keep personal likeness from becoming just another remixable layer of content.

For creators, this feature is less about catching every imitation and more about visibility. Knowing when your face appears in unexpected places can prevent confusion before it spreads. It may also discourage casual misuse, since creators now have a formal path to challenge impostor videos without chasing them one by one.

There’s still plenty to refine. Some creators might see mismatched alerts or find the system too slow to react. Others could hesitate to hand over ID documents or video scans. But the principle behind it... that people deserve control over their own image... feels timely. With AI-generated media increasing daily, a little friction against misuse may be better than none at all.

Ultimately, YouTube’s new tool marks a recognition that identity itself has become digital property. Faces travel as fast as clips, and reputations can shift with a single viral fake. Giving creators a way to monitor that flow won’t solve everything, yet it restores a small measure of agency. In an age where anyone’s likeness can be recreated in seconds, that small measure of control may be worth more than the algorithms that created the problem in the first place.

Notes: This post was edited/created using GenAI tools.

Read next: Meta Explains How Older Users Can Protect Themselves from Online Fraud
by Irfan Ahmad via Digital Information World