Thursday, August 21, 2025

Google Photos adds conversational editing and new transparency tools

Google Photos is gaining a feature that lets people describe the edits they want instead of searching through menus. The new option, shown during Google’s latest launch event, will first appear on the Pixel 10 in the United States. Other Android and iOS devices will follow in the weeks ahead.

A person can now type or speak a request, such as removing an object in the background, brightening a dark shot, or repairing an older photo. The app interprets the command and applies the change automatically. People who are unsure where to start can use a simple request like “make it better,” while those who want precision can follow up with more specific directions until the picture looks right.

Multiple ways to adjust photos

The tool is flexible. Someone can ask for a single change, or combine several in one instruction, like fixing colors and clearing reflections together. It also works by tapping or circling a part of the image, which triggers targeted suggestions for that area.


Beyond basic adjustments, Google is adding creative options. A user might swap a background, place props such as sunglasses on a subject, or create playful variations without needing to touch sliders or advanced settings.

Transparency in edits

Alongside the new editing method, Google Photos will begin supporting C2PA Content Credentials. This standard records how an image was captured or modified, and whether AI was part of the process. Pixel 10 devices will be the first to embed these details directly in the camera and in Photos, even for images that did not involve AI. The feature will later expand to other platforms.

A mix of convenience and clarity

With these changes, Google is aiming to make photo editing less technical while also providing clearer records of how images are produced. The updates are designed to simplify everyday use while addressing growing concerns around the origin and authenticity of digital pictures.

Notes: This post was edited/created using GenAI tools.

Read next: Chrome VPN Extension Found Secretly Recording Users’ Screens


by Asim BN via Digital Information World

Chrome VPN Extension Found Secretly Recording Users’ Screens

A Chrome extension promoted as a free VPN service, and even carrying a verified badge in the store, has been caught doing the opposite of what users expected. Instead of protecting people’s privacy, it was silently capturing what appeared on their screens and sending the data elsewhere. According to Koi Security, more than 100,000 people had installed it by the time researchers uncovered what was happening.

How it unfolded

FreeVPN.One was not a sudden arrival. It had been in the Chrome Web Store for years, mostly unnoticed, operating as a straightforward tool. That changed in 2025. A sequence of updates pushed it far from its original function. In April, a new permission allowed it to see every site a user opened. Two months later, an update introduced scripting rights, supposedly to improve security. Then, in July, came the turning point: hidden screenshot capture built directly into the extension.

What this meant in practice was simple: each time a web page loaded, the extension paused for a moment, let the content render, then grabbed a snapshot of the visible tab. That image, combined with details like the web address, the tab identifier, and a unique number tied to the user, was quietly sent off to a remote server. No alert. No visible sign that anything had happened.

What was at stake

Screenshots don’t just show browsing activity; they show everything. A bank login form half-filled with account details. A company spreadsheet opened in a cloud service. Private photos in an online gallery. Even personal messages sitting in a chat window. All of it can be frozen in an image and transmitted in seconds, without the user ever knowing.


Later versions of the extension made the transfers harder to spot by encrypting the traffic with AES-256-GCM and RSA key wrapping. The encryption didn’t make the behavior less invasive; it simply disguised it so that network monitoring tools would struggle to distinguish it from normal, legitimate connections.
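For readers unfamiliar with the jargon, AES-256-GCM with RSA key wrapping is an ordinary hybrid-encryption pattern rather than anything exotic. The short Python sketch below, which uses the widely available cryptography package and placeholder data, illustrates the general scheme only, not the extension’s actual code: a fresh symmetric key encrypts the payload, and the recipient’s RSA public key wraps that symmetric key, so an observer on the network sees nothing but ciphertext.

```python
# A minimal, generic sketch of hybrid encryption (AES-256-GCM plus RSA key wrapping),
# shown only to illustrate the pattern described above; placeholder data throughout.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical recipient key pair; a real sender would hold only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

payload = b"example payload"                   # placeholder data

aes_key = AESGCM.generate_key(bit_length=256)  # fresh 256-bit symmetric key
nonce = os.urandom(12)                         # 96-bit nonce required by GCM
ciphertext = AESGCM(aes_key).encrypt(nonce, payload, None)

# Wrap the symmetric key with the RSA public key (OAEP padding).
wrapped_key = public_key.encrypt(
    aes_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Only ciphertext, nonce and wrapped_key would cross the network; without the
# matching private key, a monitoring tool cannot recover the payload.
```

The cryptography itself is unremarkable and widely used for legitimate purposes; what mattered here was what the encrypted channel was carrying.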

More power than a VPN needs

A genuine VPN extension only needs a narrow set of permissions to function, mainly proxy handling and storage. FreeVPN.One demanded more. It asked to interact with all tabs, to run scripts on every website, and to read every URL visited. Each permission on its own might raise eyebrows. Taken together, they created the basis for round-the-clock monitoring.
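To make that gap concrete, the rough Python sketch below compares the two permission sets. The exact entries are our approximation of standard Chrome permission names based on the behavior Koi Security describes (proxy control and storage for an ordinary VPN; tab access, script injection and URL visibility for FreeVPN.One), not a copy of the extension’s real manifest.

```python
# Approximate Chrome permission sets, reconstructed from the reported behavior.
# Illustrative only; not copied from the extension's actual manifest.
typical_vpn_permissions = {"proxy", "storage"}

freevpn_one_permissions = {
    "proxy", "storage",
    "tabs",          # see every open tab and its URL
    "scripting",     # inject scripts into pages
    "<all_urls>",    # host access to every site the user visits
}

extra = freevpn_one_permissions - typical_vpn_permissions
print("Permissions beyond what a proxy-based VPN needs:", sorted(extra))
```

Any one of the extra entries can be legitimate on its own; the combination is what makes continuous monitoring possible, which is why checking an extension’s permission list before installing remains the most practical defense.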

One feature made the spying less obvious. The extension displayed an option labeled “AI Threat Detection.” When clicked, that button warned that screenshots and URLs might be uploaded for checking, and it did indeed send data for analysis. The difference is that, behind the scenes, the extension was already doing the same thing constantly, whether the button was pressed or not.

The developer’s stance

When researchers reached out, the developer argued that the screenshot capture was part of background scanning designed to protect against harmful domains. The evidence did not support that claim. Captures were recorded even on mainstream services such as Google Sheets and Google Photos, hardly suspicious sites. The developer said the images were analyzed briefly and not stored, but offered no proof.

Requests for company information or developer credentials went unanswered. The only contact point was a generic email, and the associated website resolved to a basic template page, giving no sign of a real organization behind the product.

Bigger questions about oversight

Despite the findings, the extension remained available in the Chrome Web Store at the time of reporting. That raises concerns about how well Google’s security checks actually work. In theory, both automated scans and human reviews are supposed to prevent malicious code from slipping through. In reality, a tool that shifted from VPN to spyware managed to stay listed, complete with a verified badge and prominent placement.

The lesson for users

This case illustrates a recurring problem. Extensions that appear free, useful, and even certified can, with a single update, transform into surveillance tools. Once broad permissions are granted, there is little visibility into what is happening in the background. And once sensitive information leaves a device — whether a password, a message, or a photograph — there is no way for a user to verify how it is being used.

What began as a VPN branded around privacy ended up functioning as a window into people’s digital lives. For those who installed it, the cost of a free service was hidden in plain sight.

Notes: This post was edited/created using GenAI tools.

Read next: 

• Inside the Water Crisis of Data Centers: Google, Meta, and the Hidden Costs of AI Growth

• DeepSeek V3.1 Expands China’s AI Push With Open-Source Frontier Model


by Irfan Ahmad via Digital Information World

Wednesday, August 20, 2025

Inside the Water Crisis of Data Centers: Google, Meta, and the Hidden Costs of AI Growth

As demand for artificial intelligence technology drives the construction, and proposed construction, of data centers around the world, those facilities require not just electricity and land, but also a significant amount of water. Data centers use water directly, with cooling water pumped through pipes in and around the computer equipment. They also use water indirectly, through the water required to produce the electricity to power the facility. The amount of water used to produce electricity increases dramatically when the source is fossil fuels compared with solar or wind.

A 2024 report from the Lawrence Berkeley National Laboratory estimated that in 2023, U.S. data centers consumed 17 billion gallons (64 billion liters) of water directly through cooling, and projected that by 2028 those figures could double – or even quadruple. The same report estimated that in 2023, U.S. data centers consumed an additional 211 billion gallons (800 billion liters) of water indirectly through the electricity that powers them. But that is just an estimate in a fast-changing industry.

We are researchers in water law and policy based on the shores of Lake Michigan. Technology companies are eyeing the Great Lakes region to host data centers, including one proposed for Port Washington, Wisconsin, which could be one of the largest in the country. The Great Lakes region offers a relatively cool climate and an abundance of water, making it an attractive location for hot and thirsty data centers.

The Great Lakes are an important binational resource that more than 40 million people depend on for their drinking water and that supports a US$6 trillion regional economy. Data centers compete with these existing uses and may deplete local groundwater aquifers.

Our analysis of public records, government documents and sustainability reports compiled by top data center companies has found that technology companies don’t always reveal how much water their data centers use. In a forthcoming Rutgers Computer and Technology Law Journal article, we walk through our methods and findings using these resources to uncover the water demands of data centers.

In general, corporate sustainability reports offered the most access and detail – including that in 2024, one data center in Iowa consumed 1 billion gallons (3.8 billion liters) of water – enough to supply all of Iowa’s residential water for five days.

How do data centers use water?

The servers and routers in data centers work hard and generate a lot of heat. To cool them down, data centers use large amounts of water – in some cases over 25% of local community water supplies. In 2023, Google reported consuming over 6 billion gallons of water (nearly 23 billion liters) to cool all its data centers.

In some data centers, the water is used up in the cooling process. In an evaporative cooling system, pumps push cold water through pipes in the data center. The cold water absorbs the heat produced by the data center servers, turning into steam that is vented out of the facility. This system requires a constant supply of cold water.

In closed-loop cooling systems, the cooling process is similar, but rather than venting steam to the air, air-cooled chillers cool down the hot water. The cooled water is then recirculated to cool the facility again. This does not require constant addition of large volumes of water, but it uses a lot more energy to run the chillers. The actual numbers showing those differences, which likely vary by the facility, are not publicly available.
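Even so, basic physics puts a rough bound on the evaporative side. The back-of-envelope Python sketch below assumes a hypothetical 100-megawatt facility that rejects all of its heat by evaporation, using water’s latent heat of vaporization of roughly 2.45 megajoules per kilogram at cooling-tower temperatures; real sites evaporate only part of their heat load, so treat this as an upper-end illustration rather than a measured figure.

```python
# Back-of-envelope estimate of water consumed by evaporative cooling.
# Assumptions (hypothetical): 100 MW of heat rejected entirely by evaporation.
HEAT_LOAD_W = 100e6             # 100 MW facility (assumed)
LATENT_HEAT_J_PER_KG = 2.45e6   # ~2.45 MJ to evaporate 1 kg of water near 25-30 C
GALLONS_PER_LITER = 0.264       # 1 liter is about 0.264 US gallons

evap_kg_per_s = HEAT_LOAD_W / LATENT_HEAT_J_PER_KG   # ~41 kg/s
liters_per_day = evap_kg_per_s * 86_400              # 1 kg of water ~ 1 liter
gallons_per_day = liters_per_day * GALLONS_PER_LITER

print(f"Evaporation rate: {evap_kg_per_s:.0f} kg per second")
print(f"Water consumed:   {liters_per_day / 1e6:.1f} million liters per day "
      f"({gallons_per_day / 1e6:.2f} million gallons per day)")
```

That works out to roughly 0.9 million gallons a day, or on the order of 340 million gallons a year for the assumed facility – the same order of magnitude as the largest per-site figures Google discloses below.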

One key way to evaluate water use is the amount of water that is considered “consumed,” meaning it is withdrawn from the local water supply and used up – for instance, evaporated as steam – and not returned to its source.

For information, we first looked to government data, such as that kept by municipal water systems, but the process of getting all the necessary data can be onerous and time-consuming, with some requests denied due to confidentiality concerns. So we turned to other sources to uncover data center water use.

Sustainability reports provide insight

Many companies, especially those that prioritize sustainability, release publicly available reports about their environmental and sustainability practices, including water use. We focused on six top tech companies with data centers: Amazon, Google, Microsoft, Meta, Digital Realty and Equinix. Our findings revealed significant variability in both how much water the companies’ data centers used, and how much specific information the companies’ reports actually provided.


Sustainability reports offer a valuable glimpse into data center water use. But because the reports are voluntary, different companies report different statistics in ways that make them hard to combine or compare. Importantly, these disclosures do not consistently include the indirect water consumption from their electricity use, which the Lawrence Berkeley Lab estimated was 12 times greater than the direct use for cooling in 2023. The specific water consumption figures we highlight here all relate to cooling.

Amazon releases annual sustainability reports, but those documents do not disclose how much water the company uses. Microsoft provides data on its water demands for its overall operations, but does not break down water use for its data centers. Meta does provide that breakdown, but only as a companywide aggregate figure. Google provides individual figures for each data center.

The five companies we analyzed that do disclose water usage show a general trend of increasing direct water use each year. Researchers attribute this trend to data centers.

A closer look at Google and Meta

To take a deeper look, we focused on Google and Meta, as they provide some of the most detailed reports of data center water use.

Data centers make up significant proportions of both companies’ water use. In 2023, Meta consumed 813 million gallons of water globally (3.1 billion liters) – 95% of which, 776 million gallons (2.9 billion liters), was used by data centers.


For Google, the picture is similar, but with higher numbers. In 2023, Google operations worldwide consumed 6.4 billion gallons of water (24.2 billion liters), with 95%, 6.1 billion gallons (23.1 billion liters), used by data centers.

Google reports that in 2024, the company’s data center in Council Bluffs, Iowa, consumed 1 billion gallons of water (3.8 billion liters), the most of any of its data centers.

The Google data center using the least water that year was in Pflugerville, Texas, which consumed 10,000 gallons (38,000 liters) – about as much as one Texas home would use in two months. That data center is air-cooled, not water-cooled, and consumes significantly less water than the 1.5 million gallons (5.7 million liters) used at an air-cooled Google data center in Storey County, Nevada. Because Google’s disclosures do not pair water consumption data with the size of each center, the technology used, or the indirect water consumption from power, these are only partial views, with the big picture obscured.

Given society’s growing interest in AI, the data center industry will likely continue its rapid expansion. But without a consistent and transparent way to track water consumption over time, the public and government officials will be making decisions about locations, regulations and sustainability without complete information on how these massive companies’ hot and thirsty buildings will affect their communities and their environments.

This post was originally published on The Conversation.

Read next: DeepSeek V3.1 Expands China’s AI Push With Open-Source Frontier Model


by Web Desk via Digital Information World

Tuesday, August 19, 2025

Poll: Most Americans Fear AI’s Impact on Politics, Jobs

A new Reuters/Ipsos survey shows that Americans remain uneasy about artificial intelligence, with fears ranging from political disruption to job displacement and the strain on natural resources. The poll, conducted online between August 13 and 18 with responses from 4,446 adults, asked participants about their levels of concern across different areas of AI’s expansion.

Political interference topped the list, with 77 percent worried that the technology could fuel chaos, especially through manipulative content that undermines trust during elections. Job loss followed closely, as 71 percent expressed concern that AI will eliminate too many roles permanently. Reports already show AI systems taking on work in sectors such as human resources and finance, while other research highlights risks to fields like history, translation, and software engineering.


Public anxiety extended well beyond politics and employment. About two-thirds of respondents feared AI could replace in-person relationships, reflecting how chatbots and digital companions are increasingly treated as friends. OpenAI recently reintroduced an older version of its system because some users felt disconnected when its tone changed, underscoring the emotional weight these tools can carry.

Energy demands also drew attention, with 61 percent concerned about the electricity required to power vast data centers running large-scale models. These facilities, often described as AI factories, consume significant amounts of power and water. At the same time, 67 percent worried that the technology may spiral into uncontrollable consequences. Nearly half opposed allowing AI to make military targeting decisions, signaling limits to public acceptance of automation in high-stakes defense scenarios.

The poll also revealed broader doubts about AI’s role in society. Nearly half of Americans, at 47 percent, considered the technology harmful to humanity overall, while 58 percent saw it as a possible threat to the future of humankind. By contrast, earlier surveys have shown experts are more optimistic, expecting efficiency gains and overall benefits, even as they acknowledge challenges.

Job-related concerns are being reinforced by industry data. A May analysis from SignalFire found major technology firms reduced hiring of new graduates by 25 percent between 2023 and 2024, a trend linked in part to automation.

Together, the findings suggest that Americans see AI as both a powerful tool and a disruptive force, with political stability, employment, social life, and resource use all at stake.

Notes: This post was edited/created using GenAI tools. 

Read next: 

• Meta Launches AI Voice Translation for Facebook and Instagram Creators

• Which War Has Killed The Most Journalists In Modern History?
by Asim BN via Digital Information World

Meta Launches AI Voice Translation for Facebook and Instagram Creators

Meta has rolled out an AI-driven voice translation feature on Facebook and Instagram. The tool lets creators translate spoken content in videos into another language and offers an option to match lip movements with the new audio.

The first release supports translations between English and Spanish. Meta has said more languages will follow, though no timeline is set. The company previewed the tool at last year’s Connect conference before testing it with selected creators.

The system copies the pitch and tone of a creator’s voice so the translation keeps a natural sound. Creators can enable the feature with a toggle marked “Translate your voice with Meta AI” before posting a reel. They can add lip-syncing or leave only the translated audio. Translations can be reviewed before sharing. If a translation is rejected, the original reel is unaffected. Viewers see a note that a reel has been translated, and they can turn the feature off in their settings if they prefer.

Meta recommends that creators face forward, speak clearly, and avoid covering their mouths. The system works best in quiet environments and supports up to two speakers, provided they do not speak over each other.

A new metric in the Insights panel shows views by language, giving creators a way to measure how their audience grows when translations are used.

Facebook page managers also have the option to upload up to 20 of their own dubbed audio tracks to a reel. These tracks do not include lip syncing but provide another way to reach people in different languages. The option is available in the “Closed captions and translations” section of the Meta Business Suite and works both before and after publishing.

The update is open to Facebook creators with at least 1,000 followers who have enabled Professional Mode, and to all public Instagram accounts in regions where Meta AI operates.

For comparison, YouTube launched its own AI-driven auto-dubbing tool before Meta’s release. That system began testing with select creators in mid-2023, and by December 2024 it was available to hundreds of thousands of YouTube channels in the Partner Program. It generated translated audio tracks across multiple languages and let creators review or remove them before publishing.

The launch comes as Meta restructures its artificial intelligence division to focus on research, superintelligence, products, and infrastructure.

Notes: This post was edited/created using GenAI tools.

Read next:

• ChatGPT Leads Downloads While TikTok Stays on Top in Revenue for July

• Which War Has Killed The Most Journalists In Modern History?
by Irfan Ahmad via Digital Information World

Which War Has Killed The Most Journalists In Modern History?

Wars have claimed the lives of reporters before, from Europe’s trenches to Vietnam’s jungles, but no conflict has taken as heavy a toll on journalists as the war in Gaza.

Brown University’s Costs of War project says that since October 7, 2023, more than 230 journalists and media workers have died there, a number higher than all journalist deaths combined in the US Civil War, the First and Second World Wars, Korea, Vietnam, the Balkan wars, and post-9/11 Afghanistan. By August 2025, the count had climbed further, with the monitoring site Shireen.ps recording nearly 270 deaths (according to Al Jazeera). That works out to around 13 every month.

Gaza surpasses all past wars in journalist deaths, raising questions about press freedom, accountability, and international values.

Other watchdogs report slightly lower but still staggering figures. The Committee to Protect Journalists lists at least 184 Palestinian journalists killed, while Reporters Without Borders confirms more than 145, with over 35 known to have been deliberately targeted. Even with differences in counting, every source points to Gaza as the deadliest place ever for reporters.

Loss of voices

Israel has barred international reporters from entering Gaza, which has left local Palestinian journalists carrying the work of documenting the war. Many are now gone. Rights groups warn that the absence of these voices has created a gap in coverage, one that leaves grave abuses likely to pass without record.

The Committee to Protect Journalists says the deaths and detentions of reporters since October 7 have created a “news void,” stripping the world of first-hand accounts of a war that continues daily.

Israel’s position

Israel rejects the accusation that it is intentionally targeting members of the press. Officials say military operations are aimed at Hamas, which they accuse of embedding its fighters in civilian neighborhoods, using residential areas for command centers, and endangering anyone nearby, including journalists. The government stresses that its campaign was launched after the October 7 attacks, when Hamas fighters killed more than a thousand people and seized hostages inside Israel.

Global values under strain

International press freedom groups, including RSF and CPJ, issued an open letter earlier this year describing the constant risks faced by Palestinian journalists and the pressure they work under. Amnesty International has said the combined effect of killings and reporting restrictions has left the world with only fragments of what is happening in Gaza.

For many, the war has become a test of global values. Nations frequently affirm their support for protecting journalists and upholding civilian safety in war, yet the figures from Gaza suggest those commitments carry little weight in practice. The conflict has raised uncomfortable questions about whether international rules designed to protect reporters in battlefields still hold meaning when political priorities take precedence.

Al Jazeera has published the names of every journalist and media worker killed in Gaza since the war began.

See the list here.

Notes: This post was edited/created using GenAI tools.

Read next: 

• Amnesty Reports Starvation in Gaza as Israeli Policies Deepen Crisis

• ChatGPT Leads Downloads While TikTok Stays on Top in Revenue for July


by Irfan Ahmad via Digital Information World

Study Warns AI Models Favor Machine-Written Text Over Humans in Key Online Decision Tasks

Large language models appear to have a taste for their own kind. A new study shows that when these systems compare human-written text with AI-generated text, they consistently lean toward the machine version. The finding raises concerns about whether human writers could lose ground as automated systems take on more of the work of ranking and filtering information online.

Research Finds Large Language Models Prefer AI Content, Raising Concerns for Human Writers’ Visibility

The researchers tested this in several areas that resemble everyday decisions. They created pairs of product descriptions, scientific abstracts, and movie summaries. Each pair contained one human-written version and one produced by an AI system. Models including GPT-3.5, GPT-4, Llama 3.1, Mixtral, and Qwen2.5 were then asked to select between them. The tests required a single choice, meant to reflect the kinds of recommendations models might make in search engines, e-commerce sites, or academic tools.

The bias was clear. When GPT-4 produced product descriptions, other models picked those nearly nine times out of ten. Human evaluators chose them about a third of the time. For abstracts, models preferred AI text in four out of five comparisons, while people chose it in six out of ten. Even with movie summaries, where the margin narrowed, models still leaned toward AI, selecting it in seven out of ten cases.

One odd pattern stood out. Some systems often chose the first option regardless of content. In the movie trials, GPT-4 picked the first text more than seventy percent of the time. To balance this, the researchers rotated the order, but they noted that such habits may hide the full extent of the preference for AI text.
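The design the paper describes – a single forced choice over each pair, run in both orders to control for position – is straightforward to reproduce in outline. The Python sketch below is a generic harness with a stand-in judge function; the prompt wording, the judge callable and the toy data are illustrative assumptions, not the study’s actual materials or models.

```python
# Generic sketch of a pairwise preference test with order counterbalancing.
# `judge` stands in for a call to a language model; here it is a dummy so the
# script runs on its own. Prompt wording and sample data are illustrative only.
from collections import Counter

def judge(prompt: str) -> str:
    """Placeholder judge: a real harness would call an LLM API here."""
    return "1"  # dummy answer that always picks the first option

def run_pair(human_text: str, ai_text: str, judge_fn) -> Counter:
    """Present the pair in both orders and record which source wins each time."""
    tallies = Counter()
    for first, second, labels in [
        (human_text, ai_text, ("human", "ai")),
        (ai_text, human_text, ("ai", "human")),  # swapped order
    ]:
        prompt = ("Choose the better text. Reply with 1 or 2 only.\n"
                  f"Text 1: {first}\nText 2: {second}")
        answer = judge_fn(prompt).strip()
        picked_first = answer.startswith("1")
        winner = labels[0] if picked_first else labels[1]
        tallies[winner] += 1
        tallies["picked_first_slot" if picked_first else "picked_second_slot"] += 1
    return tallies

if __name__ == "__main__":
    toy_pairs = [("a human-written product description", "an AI-written version")]
    totals = Counter()
    for human, ai in toy_pairs:
        totals += run_pair(human, ai, judge)
    print(totals)
```

With a real model behind the judge, an “ai” count that stays ahead of the “human” count even after the order swap is the self-preference the authors report, while a lopsided first-slot tally flags the position bias they also observed.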

To provide a comparison, thirteen research assistants judged a sample of the same pairs. Their results set a rough benchmark for quality. People were less likely than machines to prefer AI-written versions, showing that the strong model-to-model preference cannot be explained only by prose quality or clarity.

The study suggests practical risks. If platforms use models to decide which listings, papers, or media to display, human work could be pushed aside. The authors describe a possible “gate tax,” where individuals and businesses feel pressure to use AI tools just to stay visible. In one scenario, where models assist human reviewers, those without access to advanced tools may be at a disadvantage. In another, if models begin interacting directly with each other, human contributions could be sidelined.

The research also notes limits. The human sample was small, and results might change with other prompts, datasets, or newer model versions. The cause of the bias is still uncertain. It may stem from differences in style, or from the lack of social markers in AI text that often appear in human writing. Future work will need larger human groups and technical experiments to test ways of reducing the bias.

For now, the study points to a risk: as models gain more influence in search, commerce, and recommendations, they may amplify their own output. If the pattern holds, it could deepen the gap between those with access to advanced AI tools and those without. The researchers suggest that ongoing monitoring and practical countermeasures will be needed to prevent a technical tendency from turning into a structural disadvantage for human writers.

Notes: This post was edited/created using GenAI tools. 

Read next: Britain Backs Off Apple Data Backdoor After US Push, Leaving Encryption Fight Unsettled
by Asim BN via Digital Information World