Sunday, October 26, 2025

OpenAI Pushes Into Music Creation and Real-Time Speech Translation

OpenAI has been moving deeper into audio technology, and the company’s latest projects show how quickly things are shifting from text-based AI into sound.

People familiar with the plans describe work on a system that turns written instructions or sample audio into new music.

The idea sits close to the workflows musicians already use when they score scenes or layer accompaniment behind a recorded voice, though here the machine would handle the creative lift. The release timeline remains unclear, and it is not yet known whether the company will package the tool as a separate product or fold it into existing apps such as ChatGPT or its video generator, Sora.

Searching for musical intelligence

Teams involved in the effort reportedly want training data that reflects real musicianship. That drove outreach to students from the Juilliard School who can interpret and annotate professional sheet music. Their annotations would teach the system how structures and motifs relate to creative intent, so the model produces deliberate composition rather than generic background sound.

OpenAI has experimented with music before, through earlier research systems such as MuseNet and Jukebox, although those projects came before the wave of conversational AI that arrived with ChatGPT. Current internal research has leaned toward voices, speech recognition, and expressive audio responses. Competitors such as Google and Suno already offer ways to produce complex songs from text prompts, meaning the race for mindshare in generative music started well ahead of this push.

A second front: translating speech while someone talks

Another project shown publicly this week focuses on cross-language communication. A demonstration at a London event featured a model tuned for spoken translation that waits for verbs and other key elements before rendering sentences in a new language. That choice gives listeners something that sounds more natural than apps that deliver one translated word at a time. A rollout in the coming weeks has been suggested, though product placement and naming remain undisclosed.
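
To make the difference concrete, here is a toy sketch of that buffering idea: hold incoming words until the clause contains a verb and reaches a natural boundary, then translate the whole chunk. It is a minimal illustration only, with a hard-coded verb list and a placeholder translate() function, not a description of OpenAI's actual model.

```python
# Toy illustration of "wait for key elements, then translate" speech systems.
# NOT OpenAI's model: the verb list is hard-coded for the demo and
# translate() is a stand-in for a real machine-translation call.

KNOWN_VERBS = {"is", "are", "was", "were", "arrives", "left", "wants"}  # demo-only list

def translate(chunk: str) -> str:
    """Placeholder for a real translation call."""
    return f"[translated: {chunk}]"

def stream_translate(words):
    """Buffer incoming words; emit a translation once a clause has a verb
    and hits a natural boundary, instead of translating word by word."""
    buffer = []
    for word in words:
        buffer.append(word)
        has_verb = any(w.strip(".,!?").lower() in KNOWN_VERBS for w in buffer)
        at_boundary = word.endswith((",", ".", "!", "?"))
        if has_verb and at_boundary:
            yield translate(" ".join(buffer))
            buffer = []
    if buffer:  # flush whatever remains at the end of the utterance
        yield translate(" ".join(buffer))

if __name__ == "__main__":
    transcript = "The train to Berlin arrives at nine, so we should leave soon.".split()
    for segment in stream_translate(transcript):
        print(segment)
```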

The competitive landscape here looks crowded too. Major tech companies in mobile and social already ship multilingual voice tools inside phones, messaging platforms, and smart assistants. OpenAI enters a field where distribution and real-world embedding often matter more than surprise features.

Positioning counts as much as invention

Both projects show a company with broad ambitions, from composing unique music to breaking language barriers in conversations. Although neither effort appears first in its category, their eventual success likely depends on how easily users can access the features inside tools they already trust.

OpenAI has built a reputation around general purpose AI that blends into creative, professional, and personal tasks. This next stretch in audio could widen that role if the execution aligns with expectations from artists, students, and global users who rely on speech. The next few months will show whether these technologies become everyday utilities or remain demonstrations of what future sound creation and translation might look like.


Image: Gavin Phillips / Unsplash

Notes: This post was edited/created using GenAI tools.

Read next: Study Finds People Still Prefer Human Voices Over AI, Despite Realistic Sounding Speech
by Asim BN via Digital Information World

Saturday, October 25, 2025

Study Finds People Still Prefer Human Voices Over AI, Despite Realistic Sounding Speech

People hear synthetic voices everywhere now. They narrate TikTok stories and YouTube tutorials, guide us through customer-support menus, and live inside our smart speakers. With that kind of exposure, researchers wanted to know whether we still notice the difference between a real voice and one that came from a machine, and more importantly, how we feel about each one after listening.

Scientists from the Max Planck Institute for Empirical Aesthetics in Germany and the University of Applied Arts Vienna explored the social side of artificial speech. They asked 75 adults in the United States to listen to eight voices repeating the same line. Four voices belonged to real human speakers. Four were generated by modern AI text-to-speech systems pulled from commercial platforms. Each voice tried on several different emotions, including happy, sad, and angry. Participants rated how attractive the voice sounded and whether they would want to interact with the person behind it. They also had to guess whether each voice was human or synthetic.

Machines can fool our ears, though our brains remain suspicious

Listeners spotted real voices correctly around 86 percent of the time. They were much worse at recognizing AI: only about 55 percent of synthetic voices were correctly labeled, meaning almost half slipped into the “human” category in listeners’ minds. Angry AI voices were the biggest tricksters. People seemed to expect machines to sound flat and emotionless, so anything intense came off as surprisingly human. Older participants especially struggled to tell the difference, a pattern that shows up in other studies as well.

Many participants reported afterward that they had suspected computer-generated voices were in the mix. That suspicion didn’t help them classify the recordings any better, though.

Happiness helps everyone, but humans still win the popularity contest


Across every emotion, listeners favored the real speakers. Human voices came across as warmer and more appealing, with higher ratings for attractiveness and the desire to interact. Synthetic voices, even when delivered smoothly, still lagged behind. The emotional tone mattered a lot. Happy voices got the best scores, while sad and angry ones fell to the bottom. So whether a voice comes from biological vocal cords or a neural network, positivity still pays.

Personal taste dominates

The study noticed something interesting behind the averages. Participants were very consistent with themselves when rating voices they heard twice. Yet they disagreed with each other wildly. What one person loved, another might find awkward or unappealing. That lack of agreement suggests that voice “attractiveness” is personal and complicated. It depends on emotional meaning, social expectations, and who’s listening just as much as on who’s speaking.

A soundscape shaped by algorithms

Modern voice models have come a long way, especially since the researchers created their test voices back in 2022. The more expressive they become, the easier it is to forget there’s a computer behind the signal. Still, current systems may gravitate toward “average” sounding speech because they learn from huge amounts of generalized data. That might make future digital voices more uniform, even if they improve their technical quality. Scientists behind the study think future evaluations need to focus less on a simple like-or-dislike rating and more on the nuance of emotional reactions, context, and listener background.

Where this leaves us

People sense humanity in something as brief as one spoken sentence. Today’s AI can copy the shape of that expression, enough to trick a listener’s ears. Yet it falls short in delivering the richness that makes a voice feel alive, trustworthy, or simply nice to hear. Human voices still carry an advantage in charm.

Even so, the technology keeps improving. With synthetic voices already blending into everyday life, the next big question isn’t whether they sound real. It’s how we’ll decide which ones we actually want to listen to.

Read next:

• Apple’s Latest iOS 26 Update Wipes Clues Investigators Use to Spot Pegasus Spyware

• Many News Articles Are Now Written by AI, According to a New Study Few Readers Know About


by Irfan Ahmad via Digital Information World

Friday, October 24, 2025

Apple’s Latest iOS 26 Update Wipes Clues Investigators Use to Spot Pegasus Spyware

Apple is rolling out its new iOS 26 update to millions of iPhones. Researchers say a quiet change buried deep in the operating system makes it harder to detect whether a device was ever infected with high-end spyware such as Pegasus or Predator. The change affects a little-known system log called shutdown.log, long treated as a kind of forensic footprint that might survive even the most sophisticated attacks.

Investigators at iVerify noticed the shift while studying the update. They explained that shutdown.log once kept a historical record every time the phone was turned off and on again. That history could contain tiny fragments of activity that hinted at a past compromise. iVerify called the update “a serious challenge” for anyone trying to understand if a phone was secretly targeted.

A Log That Helped Uncover Attacks

Shutdown.log sits inside the Sysdiagnose tool that comes with iOS. It has been around for years without much attention. The file does not store messages or photos. Instead, it documents what takes place during a phone’s shutdown sequence. That made it useful for spotting the kinds of low-level processes associated with advanced malware.

In 2021, researchers found that Pegasus infections left recognizable traces inside this log. Those traces were key evidence in public investigations that helped confirm infections on devices belonging to journalists, advocates, and public figures. Pegasus is built by the Israeli company NSO Group. It can infect a phone without the user tapping anything and then unlock almost complete access to private data including calls, messages, location, camera, and microphone.

Developers behind Pegasus quickly adapted once shutdown.log became a focus area. Starting in 2022, the spyware tried to wipe the file entirely. The wipe itself became useful evidence because malware activity tended to overwrite data more aggressively than normal system behavior. Investigators learned to read the absences as a clue. The report explains that “a seemingly clean shutdown.log” could serve as its own indicator when paired with other anomalies.
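
For readers curious what that kind of analysis looks like in practice, here is a minimal sketch of scanning an extracted shutdown.log for the processes it records at power-off. The "remaining client pid" line format and the example indicator directory are assumptions drawn from published Pegasus research, not from Apple documentation or the iVerify report.

```python
# Minimal sketch of how a forensic script might read an extracted shutdown.log.
# The line format and the example indicator directory are assumptions taken
# from public spyware research, not a documented Apple format.
import re
import sys

CLIENT_LINE = re.compile(r"remaining client pid:\s*(\d+)\s*\((.+)\)")
# Directory cited in published Pegasus analyses; treat as illustrative only.
SUSPICIOUS_DIRS = ("/private/var/db/com.apple.xpc.roleaccountd.staging/",)

def scan_shutdown_log(path: str) -> None:
    hits = 0
    with open(path, errors="replace") as fh:
        for line in fh:
            match = CLIENT_LINE.search(line)
            if not match:
                continue
            pid, binary = match.groups()
            if binary.startswith(SUSPICIOUS_DIRS):
                hits += 1
                print(f"suspicious client at shutdown: pid={pid} path={binary}")
    if hits == 0:
        # Under iOS 26 a near-empty log proves little, since the file is now
        # rewritten on every reboot.
        print("no flagged entries; note this is not proof of a clean device")

if __name__ == "__main__":
    scan_shutdown_log(sys.argv[1] if len(sys.argv) > 1 else "shutdown.log")
```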

What Changes in iOS 26

The concern now is that iOS 26 overwrites shutdown.log automatically on each reboot. Previous versions appended every new shutdown entry to the bottom of the file, preserving older entries and creating a timeline that forensics experts could study. With the new approach, that history is erased every time the phone restarts.

iVerify notes that this clean-slate approach could be intended to improve performance or remove clutter. No one outside Apple knows whether the change was designed or simply overlooked. Timing is the problem. Spyware attacks are on the rise. Security researchers and human rights groups warn that the targets are no longer limited to activists. Executives and celebrities are also being watched more closely. The report states that the change “could hardly come at a worse time.”

Losing a Layer of Spyware Detection

Predator, a separate spyware family attributed to Cytrox, has shown similar behavior within shutdown.log since at least 2023, according to forensics reports. Analysts believe Predator borrowed Pegasus tactics, including monitoring shutdown activity to hide traces. Both strains are associated with state-linked surveillance operations.

The update means that anyone who installs iOS 26 and then restarts their phone will lose all historical shutdown logs. If evidence ever existed on that device, it will be gone after the first reboot. This affects Pegasus and Predator cases that may have occurred months or years earlier, making it difficult to confirm whether a phone used by a high-risk individual was previously compromised.

What High-Risk Users Can Do Right Now

Researchers recommend saving a sysdiagnose report before installing iOS 26. That preserves the current shutdown.log in case further analysis becomes necessary. iVerify suggests waiting on the update if possible until Apple clarifies the change or adjusts the behavior in a future patch.
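
For those preserving evidence on a computer, a small script along these lines could copy shutdown.log out of a saved sysdiagnose archive before the phone is updated. Sysdiagnose bundles are gzip-compressed tarballs, but their internal layout varies by iOS version, so the filename matching here is a best-effort assumption rather than a documented path.

```python
# Sketch: preserve shutdown.log from a saved sysdiagnose archive ahead of an
# iOS 26 upgrade. The archive's internal layout varies by iOS version, so the
# name-based matching is an assumption, not a documented structure.
import sys
import tarfile
from pathlib import Path

def extract_shutdown_log(archive: str, out_dir: str = "preserved_logs") -> None:
    Path(out_dir).mkdir(exist_ok=True)
    with tarfile.open(archive, "r:*") as tar:
        members = [m for m in tar.getmembers() if m.name.endswith("shutdown.log")]
        if not members:
            print("no shutdown.log found in this archive")
            return
        for member in members:
            fileobj = tar.extractfile(member)
            if fileobj is None:  # skip directories or special entries
                continue
            target = Path(out_dir) / Path(member.name).name
            target.write_bytes(fileobj.read())
            print(f"saved {member.name} -> {target}")

if __name__ == "__main__":
    extract_shutdown_log(sys.argv[1])
```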

Apple has not commented publicly on the shutdown.log shift. It remains uncertain if this is a deliberate security design or something that will be reversed once the implications become better understood.

Why This Matters

The shutdown.log file was never a perfect detection solution, although it helped investigators uncover infections that would have otherwise remained hidden. Losing access to that historical record makes life easier for spyware developers who already push the limits of stealth and persistence. It also places more trust in active scanning and network-based detection, both of which have their own blind spots.

Mobile spyware exists largely to avoid being noticed. A seemingly minor operating system change now risks removing one of the few reliable ways to discover what happened after the fact.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Many News Articles Are Now Written by AI, According to a New Study Few Readers Know About

• AI Assistants Send Shoppers to Retailers, but Sales Still Belong to Google

by Irfan Ahmad via Digital Information World

Many News Articles Are Now Written by AI, According to a New Study Few Readers Know About

A new academic analysis of thousands of recent newspaper stories in the United States shows how artificial intelligence has begun shaping everyday journalism. The research team studied 186,000 articles across print organizations large and small. They applied an automated detection system to determine when parts of a story were created by machine. The study states that “approximately 9% of newly-published articles are either partially or fully AI-generated.”

AI-generated text is most common where the business side of news keeps shrinking. Smaller local newsrooms face lean operations and fewer hands on deck. Communities depend on these outlets to keep them informed about basic civic life. Yet the paper notes that “AI use in American newspapers is widespread, uneven, and rarely disclosed.” Readers get little indication when a story that looks like a journalist wrote it was instead shaped by software.

Local News Sees More AI in the Byline

Regional gaps appear throughout the data. The study reports that automation rises where circulation drops. In larger city papers, AI does not play the same role. The authors observe that “AI use is significantly higher on pages of newspapers without high circulation than at nationally-circulated papers.” In these newsrooms, the volume of daily reporting can overwhelm the available staff. Publishing tools help fill space with quick briefs and standardized information.

Certain topics suit this approach. Articles about the weather depend on formal forecasts. Science updates sometimes follow press announcements and public databases. Health stories often summarize new research. These are areas with clear numerical inputs that feed directly into templated writing. For that reason, the paper points out that “topic distributions show elevated AI-detected content for weather, science and tech, [and] health.”

Spanish-language coverage published by U.S. newspapers also sees more automated text than English versions. The study suggests that translation systems and generative rewriting may be working behind the scenes to support bilingual news production.

Opinion Pages Shift Without Warning

The researchers also analyzed opinion articles from three nationally recognized papers with strong reputations. They include The New York Times, The Washington Post, and The Wall Street Journal. These pages shape how the country thinks about large issues. The data reveals a different pattern from general news reporting. The paper states that “opinion content is 6.4 times more likely to contain AI-generated content than news articles from the same publications.”

The shift accelerates after late 2022, when generative writing tools became widely available. Guest contributors, who do not work permanently in those newsrooms, rely on writing aids more than regular columnists do. In total, hundreds of commentary pieces contain at least some detectable machine-written text. The responsibility of shaping public dialogue makes this category important. These pages do not just report. They argue.

Mixed and Hidden Text Blurs Trust

The study’s detection method shows that full articles written entirely by AI remain a small fraction. The more common case looks like a mix. There might be a human interview plus automated rewriting. There might be a reporter’s outline turned into paragraphs by a system tool. The authors note that many flagged articles still include quoted statements from real people.

Yet readers do not know when this blending happens. The paper gives attention to disclosure and finds that clear labeling is rare. Even where newsroom rules promise transparency, execution falls short. Trust in journalism depends on knowing who or what is talking. Hidden authorship makes that judgment difficult.

Pressures Behind the Quiet Shift

Newsroom budgets continue to shrink. Many local publishers operate with few reporters, and some communities have no traditional paper at all. The study points to industry conditions where “news deserts” grow and automated filling appears as a survival mechanism. Tools ease workloads but they change the nature of the work. When software handles simple local updates, it builds habits that can spread into more complicated stories.

The researchers do not claim that AI harms accuracy every time it appears. They do emphasize that people deserve to understand the source of information they rely on. Without that clarity, an audience cannot evaluate context or credibility.

A Developing Landscape

Automation has already become part of the reporting process in this country. Most readers have likely seen stories produced with help from a machine, even if they never noticed it. The findings suggest a future where more articles contain hidden layers of automated writing. Whether those layers support journalists or replace them depends on decisions that news organizations must face soon.

For now, the rise of AI text remains quiet. It is hardest to see in the places where communities can least afford missing facts. Transparency offers a way to protect trust before the trend becomes too familiar to question.





Note: This post was edited/created using GenAI tools. 

Read next:

• AI now generates over half of all online content, reshaping digital publishing and blurring human authorship boundaries

• Erase.com Explains the Hidden ROI of Online Reputation Management

• AI Bots Spark Conversation but Don’t Change Posting Habits
by Irfan Ahmad via Digital Information World

Google Earth AI Taps Gemini to Predict Disasters Before They Unfold

Google is expanding its Earth AI platform with new Gemini-based intelligence that links environmental data with human impact in ways that used to take entire research teams to uncover. The system is being upgraded to process complex geospatial relationships so it can help scientists, city planners, and aid organizations prepare for natural disasters before they strike.

Smarter maps, faster insight

The foundation of this upgrade lies in what Google calls geospatial reasoning, a way for AI to analyze satellite images, population maps, and weather forecasts in the same query. When the system processes those layers together, it can predict not only where a cyclone might land but which areas are at greatest risk and how infrastructure might respond. This form of mapping turns what once required weeks of modeling into a process that unfolds in minutes.
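
A rough way to picture "processing those layers together": align a hazard forecast, a population grid, and an infrastructure map on the same cells, then score each cell by combined exposure. The sketch below uses invented grids and weights purely for illustration; it is not Google's Earth AI pipeline or data.

```python
# Generic illustration of combining geospatial layers into a single risk score.
# Grids and weights are made up for the example; Earth AI's actual models and
# data are not public in this form.
import numpy as np

# 3x3 toy grids covering the same area (values normalized to 0..1).
flood_probability = np.array([[0.1, 0.6, 0.9],
                              [0.0, 0.4, 0.7],
                              [0.0, 0.1, 0.3]])
population_density = np.array([[0.2, 0.8, 0.5],
                               [0.1, 0.9, 0.6],
                               [0.0, 0.3, 0.2]])
critical_infrastructure = np.array([[0, 1, 1],
                                    [0, 1, 0],
                                    [0, 0, 0]])

# Weighted combination: hazard matters most where people and assets are.
risk = flood_probability * (0.6 * population_density + 0.4 * critical_infrastructure)

# Rank cells so responders see the most exposed areas first.
rows, cols = np.unravel_index(np.argsort(risk, axis=None)[::-1], risk.shape)
for r, c in list(zip(rows, cols))[:3]:
    print(f"cell ({r},{c}) risk={risk[r, c]:.2f}")
```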



The model draws on years of satellite and sensor data, combining it with Gemini’s reasoning to interpret physical conditions on the ground. That means the AI can identify early signs of risk—rivers drying up, vegetation creeping near power lines, or algae spreading through reservoirs—and highlight where those patterns may threaten people or utilities. It allows agencies to act before an issue escalates.

Real-world tests already underway

Organizations like the World Health Organization’s Africa office are already experimenting with Earth AI’s population and environment models to anticipate cholera outbreaks in parts of the Democratic Republic of Congo. Energy companies are testing how the same framework can prevent blackouts by mapping tree growth near high-voltage networks, while insurers are using it to refine damage prediction models.

These tools are also being folded into Google Earth itself. Users can now type natural-language requests to find information inside satellite imagery, such as where rivers have recently dried or where algal blooms are forming. That shift makes complex geospatial analysis accessible to non-specialists who previously needed custom code or dedicated GIS software to see such patterns.

A step from reaction to prevention

Earth AI’s predictive focus reflects a wider change within Google’s environmental research, which now covers floods, wildfires, air quality, and storms. Its earlier flood forecasts reached more than two billion people, and its wildfire alerts in California helped over 15 million residents locate shelters. The latest version of Earth AI builds on that experience, seeking not to react to disaster but to forecast which communities may face the most danger and when intervention is needed.

Google has begun offering these models through its Cloud platform, letting public agencies and businesses merge their own datasets with Earth AI’s imagery and environmental layers. Thousands of groups are participating in early trials that aim to make climate forecasting, disaster response, and environmental monitoring more immediate.

If successful, Earth AI could reshape how institutions use global data. Instead of studying disasters after the fact, they might learn to see them forming in real time and move sooner to protect the people in their path.

Notes: This post was edited/created using GenAI tools.

Read next: Why AI Chatbots Aren’t Bullying Kids, But Still Pose Serious Risks


by Asim BN via Digital Information World

Apple Feels the Heat as Regulators Tighten Grip in the UK and Europe

Apple is coming under heavier scrutiny from both British and European regulators. The company’s control over how apps run, sell, and track users is now facing direct challenges on two fronts. One is the United Kingdom’s new digital market regime. The other comes from privacy regulators in Europe questioning Apple’s data-tracking rules.

UK Watchdog Moves First

The UK’s Competition and Markets Authority has given itself new powers over Apple and Google. Both firms now carry the label of “strategic market status,” a legal tag that lets the CMA monitor how their app stores, browsers, and operating systems behave. The move became possible under the country’s digital markets law, which took effect earlier this year.

With this new authority, the regulator can step in if it sees unfair treatment of smaller developers or if users have limited choices. It can demand changes to payment systems, ranking methods, and access to alternative stores. For years, app makers have said that Apple and Google set the rules to protect their own profits while restricting rivals.

The Coalition for App Fairness, a group representing developers and tech firms such as Spotify and Epic Games, called the decision overdue. It says the mobile economy can only grow if the rules are fair and transparent.

Trade groups on the other side argue that users already enjoy wide choice and that stricter regulation might slow investment. Google pointed to research showing high satisfaction among Android users in the UK, while Apple warned that new restrictions could reduce privacy and delay software updates.

Privacy Rules Stir Tension in Europe

Apple’s challenges don’t end in Britain. Across Europe, it’s also facing criticism for its App Tracking Transparency feature, which lets users block apps from following their online activity. The company says this tool protects privacy. Regulators in Germany, Italy, and France see it differently.

Germany’s competition authority said Apple’s system might be anticompetitive because the company allegedly holds its own apps to a different standard. France has already fined Apple for the same issue. Apple claims the criticism stems from pressure by advertising groups and large digital firms that profit from tracking users.

The company also hinted it might disable the feature in some European countries if regulators force changes that undermine its design. That warning signals how far Apple is willing to go to defend its model of privacy control.

Two Sides of the Same Fight

The disputes in London and Brussels share a theme: control. Regulators want to loosen Apple’s grip on how apps reach users and how data is collected. Apple argues that tighter rules risk breaking the smooth and secure experience its devices are known for.

Both Apple and Google are now preparing for years of oversight as governments push for a more open mobile ecosystem. The CMA’s new framework will test whether British law can keep tech giants in check without driving them away. In Europe, privacy cases could reshape how digital ads and user data are handled across borders.

A Shifting Landscape for Mobile Power

Together, these moves show a region growing less tolerant of big tech’s dominance. Regulators no longer rely on voluntary promises. They’re setting boundaries on what Apple and Google can do with their platforms.

For Apple, the coming months will be critical. Its next steps in the UK and the EU will show whether it can protect its business model while meeting new legal expectations. Whatever happens, both companies are entering a new phase where privacy, competition, and power are being redrawn by the people who write the rules.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Gemini Struggles Most in Accuracy Test; BBC–EBU Study Exposes Deep Flaws in AI News Replies

• Study Finds Health Apps Still Struggle With Data Transparency

• Fewer Clicks, Fewer Readers: Social Media Sends Less Traffic to News Sites as Platforms Shift Away from Links
by Irfan Ahmad via Digital Information World

Thursday, October 23, 2025

Fewer Clicks, Fewer Readers: Social Media Sends Less Traffic to News Sites as Platforms Shift Away from Links

Traffic from social media platforms to news outlets has fallen by about a third in three years, as short-form video and in-app engagement replace link sharing.

Over the past three years, social networks have sent far fewer readers to news sites. Data from Similarweb shows that referrals to the top 100 global media domains peaked at roughly 1.73 billion in late 2022 and now hover near 1.22 billion. The decline, close to thirty percent, highlights how the relationship between news organizations and social networks has weakened as the latter move toward formats and tactics that keep users inside their own apps.
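
For reference, that roughly-thirty-percent figure follows directly from the Similarweb numbers tabulated at the end of this article, taking the September 2022 peak and the September 2025 value as endpoints:

```python
# Peak-to-latest change implied by the Similarweb monthly figures below
# (billions of referrals), using Sep 2022 and Sep 2025 as the endpoints.
peak_sep_2022 = 1.732
latest_sep_2025 = 1.218

decline = (peak_sep_2022 - latest_sep_2025) / peak_sep_2022
print(f"decline since peak: {decline:.1%}")  # prints: decline since peak: 29.7%
```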

Through 2023 and 2024, the drop remained steady. Monthly traffic slipped almost every quarter, with only brief recoveries when new features surfaced or algorithms shifted. By late summer 2025, the figure had settled around 1.24 billion, showing no sign of returning to earlier levels.

A major factor lies in how platforms now treat external links. The rise of short videos on Facebook, Instagram, and YouTube has reshaped what people see in their feeds. Posts that open external sites compete poorly with clips that keep audiences watching within the same app. LinkedIn and X have also reduced the visibility of outbound links, encouraging interactions that stay on the platform rather than sending users elsewhere.

Some networks are testing small adjustments to reverse the slide. X has begun experimenting with a new link format on iOS that allows users to react to posts while browsing external pages, hoping to make link engagement feel less detached. Instagram is also trying clickable link options inside posts, which could make it easier for creators to direct followers to their own sites.

For news publishers, the loss is significant. Many built their distribution strategies on the traffic once supplied by Facebook or Twitter. As those pathways shrink, even the biggest outlets face lower referral volumes and weaker advertising returns tied to that audience flow. Smaller publishers, which relied heavily on social referrals, feel the impact more sharply.

In response, a new niche has grown around link-in-bio tools like Linktree and Beacons. These services help creators and brands guide followers to other destinations, but their benefits to newsrooms remain limited. While they provide an alternative route to external content, they cannot restore the consistent stream of readers that traditional social referrals once delivered.

The trend suggests a lasting shift in how people encounter news online. With platforms prioritizing video and in-app engagement loops, links to independent outlets are becoming a secondary pathway rather than the main entry point to information, and the open web feels more distant from where people spend their time online.


Month/Year    Social Media Referrals to Top 100 News Sites (Billions)
Sep 2022 1.732B
Oct 2022 1.730B
Nov 2022 1.655B
Dec 2022 1.582B
Jan 2023 1.515B
Feb 2023 1.322B
Mar 2023 1.438B
Apr 2023 1.360B
May 2023 1.401B
Jun 2023 1.382B
Jul 2023 1.379B
Aug 2023 1.347B
Sep 2023 1.266B
Oct 2023 1.342B
Nov 2023 1.209B
Dec 2023 1.291B
Jan 2024 1.312B
Feb 2024 1.214B
Mar 2024 1.312B
Apr 2024 1.279B
May 2024 1.324B
Jul 2024 1.402B
Aug 2024 1.378B
Sep 2024 1.277B
Oct 2024 1.341B
Nov 2024 1.358B
Dec 2024 1.336B
Jan 2025 1.378B
Feb 2025 1.266B
Mar 2025 1.359B
Apr 2025 1.279B
May 2025 1.278B
Jun 2025 1.273B
Jul 2025 1.287B
Aug 2025 1.242B
Sep 2025 1.218B

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: 60% of Fortune 500 Companies Rely on AWS. What an Hour of Downtime Really Costs
by Irfan Ahmad via Digital Information World