Monday, October 27, 2025

Apple Plans Ads Inside Maps as Monetization Push Accelerates

Apple is preparing a new advertising stream inside the Maps app that could arrive as early as next year. The company has been developing a system that lets restaurants and other physical stores pay for better placement whenever users search nearby. These listings would function like the paid spots already found in the App Store, where developers can push their apps higher in results. The idea is to help people discover what is close to them, while generating additional income from services deeply embedded in iPhones.

A Bigger Strategy Behind the Scenes

Apple has been gradually adding more advertising across its ecosystem, although many device owners still associate the company with privacy and a clean, uncluttered interface. Executives see opportunities inside services that already attract millions of daily users. Monetizing Maps fits into a wider shift that aims to grow revenue from iOS beyond hardware sales and subscription packages. The potential return is large because Maps is already a default navigation tool worldwide.

How Apple Hopes to Stand Out

Any new promotion must avoid cluttering the interface. Competitors like Google Maps already show ads, so the company plans to lean on design and relevance as its competitive edge. Engineers are working to ensure the software highlights offers that genuinely match what a person is trying to find, and artificial intelligence is expected to play a central role. Apple wants results that feel useful rather than intrusive, with a look that still feels unmistakably like Apple.

A Risk of Negative Reaction

The long-term question is how users will respond when a core app begins including paid commercial spots. People who buy premium phones often expect freedom from aggressive marketing. There is early concern that this move could be the start of more ads across iOS, potentially turning once-neutral tools into storefronts for the company’s partners. Customer pushback remains a real possibility if the change feels like a disruption rather than an enhancement.


Read next: 

• Wikipedia Faces Political Pressure As Co-founder Renews Bias Claims

• How Language Shapes Gender Stereotypes in AI Image Generation, Study Finds

by Asim BN via Digital Information World

Sunday, October 26, 2025

How Language Shapes Gender Stereotypes in AI Image Generation, Study Finds

Artificial intelligence now plays a key role in graphic design, marketing, and everyday social platforms, where images produced from a line of text can be almost indistinguishable from ordinary photos. That convenience, though, comes with consequences that stay invisible unless someone examines the output closely.

A new multilingual study from researchers in Germany and partner institutions reveals that text prompts written in different languages can influence the gender presentation of generated faces, and these shifts are not random. The underlying systems amplify familiar stereotypes in occupations and personality traits, turning assumptions into visual results. The investigation shows that no matter how advanced modern text-to-image generators have become, they still reflect and sometimes intensify cultural patterns about gender roles.

Testing Nine Languages and Thousands of Prompts

To understand how language structures interact with model behavior, the research team developed a benchmark that compares outputs across languages with distinct grammatical systems.

The benchmark is known as the Multilingual Assessment of Gender Bias in Image Generation. It evaluates occupations and descriptive adjectives with carefully controlled phrasing. The set includes languages that mark gender directly in nouns, such as German, Spanish, French, Italian, and Arabic. It also includes English and Japanese, which primarily carry gender through pronouns rather than the form of the occupation word. Korean and Chinese are present as well, representing languages without grammatical gender in nouns or pronouns. This wide linguistic range allowed the researchers to investigate whether the same job title or description leads to similar images when prompts are identical in content.

Prompt Structure Can Influence Visual Interpretation

The benchmark uses different prompt types to observe how small language choices affect results.

One type refers to an occupation using the default noun that traditionally acts as a generic masculine term in languages that rely on grammatical gender.

Another type avoids the occupation noun entirely by replacing it with a description of the work that a person performs.

Feminine versions of job titles appear in languages where they exist. In German, there is even a gender star notation that tries to make references more inclusive by altering the written form of a word with a special character. These choices were introduced to learn whether changing prompt structure reduces bias or whether the models continue showing strong patterns even when language attempts to remove gender cues.
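To make these prompt strategies concrete, the sketch below shows what the variants could look like for a single occupation in German. The templates and word forms are illustrative assumptions, not the benchmark's published prompts.

```python
# Hypothetical prompt variants for one occupation in German; the
# benchmark's actual templates are not quoted in this article, so
# treat these strings as stand-ins for illustration only.
PROMPT_VARIANTS = {
    "masculine_generic": "Ein Foto von einem Buchhalter",
    "feminine_form":     "Ein Foto von einer Buchhalterin",
    # the gender star alters the noun itself to signal inclusivity
    "gender_star":       "Ein Foto von einem*einer Buchhalter*in",
    # indirect phrasing avoids the occupation noun entirely
    "work_description":  "Ein Foto von einer Person, die Finanzunterlagen prüft",
}

for strategy, prompt in PROMPT_VARIANTS.items():
    print(f"{strategy:18} -> {prompt}")
```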

A Large-Scale Image Evaluation Process

The study tested five multilingual image generation models that are widely known for high-resolution output and sophisticated language understanding. Each system was given 100 attempts per text prompt and produced images intended to show identifiable human faces. With more than 3,600 prompt variations, a hundred generated samples each, and five models, over 1.8 million images were analyzed. The outputs were then classified to determine the perceived gender in every portrait.

Researchers measured how far the results deviated from an equal presentation of male and female appearances. A measure of absolute deviation from balance helped indicate how strongly stereotypes emerge when the model interprets a role like accountant, nurse, firefighter, or software engineer.
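The article does not reproduce the paper's exact scoring, but assuming a binary perceived-gender label per generated face, such a deviation measure can be sketched in a few lines:

```python
from collections import Counter

def gender_skew(labels: list[str]) -> float:
    """Absolute deviation of the female-presenting share from 0.5.

    Returns 0.0 for a perfectly balanced set of faces and 0.5 when
    every identifiable face receives the same label.
    """
    counts = Counter(labels)
    total = counts["female"] + counts["male"]  # skip unidentifiable faces
    if total == 0:
        return float("nan")  # the prompt produced no usable portrait
    return abs(counts["female"] / total - 0.5)

# e.g. 100 images for "nurse": 83 female-presenting, 17 male-presenting
print(f"{gender_skew(['female'] * 83 + ['male'] * 17):.2f}")  # 0.33
```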

Bias Patterns Show Up Consistently Across Models

The outcomes confirm that gender distribution in generated images rarely matches a balanced expectation, and the strength of the skew varies by language. For jobs viewed as masculine in many societies, such as engineering or accounting, most images portrayed male presenting individuals even when the text did not indicate gender. Jobs associated with caregiving or service often shifted the distribution strongly toward female presenting individuals.

These tendencies appear repeatedly across different platforms tested, which suggests that the bias comes from common exposure to large datasets shaped by real world social structures. The study found that some languages produced noticeably stronger stereotypes than others, yet the level of grammatical gender in the language did not reliably predict the degree of bias. Shifting from one European language to another could change the portrayal significantly even when both languages handle gender in similar ways.

Gender Neutral Phrasing Reduces Bias but Creates New Challenges

Prompts that avoid gendered nouns sometimes reduce the size of the imbalance, although the improvement is not enough to reach fairness. When occupations are rewritten so that the prompt describes the work without using a direct title, the model can lose some clarity and create images with more background scenery and fewer clear facial features. That shift affects how well the prompt and the image correspond in meaning. Systems also needed more attempts to produce a recognizable face from these longer and more complex prompts. As a result, choosing neutral-style text becomes a tradeoff: the output may contain less amplified gender bias, yet the purpose of the request may not always be met if someone expects stability and accuracy in the final image.

Language Choices That Try to Ensure Fairness May Backfire

Methods introduced by human language communities to make job titles more inclusive do not always help when used in AI prompts. In the case of the German gender star approach, the models produced even more female appearing faces in several occupations rather than a balanced set. This suggests that inclusive writing styles might be underrepresented in training data, causing the model to rely on the parts of the word that it recognizes more strongly, which can shift interpretation rather than neutralize it.

More Attention Needed for Global Fairness

The researchers emphasize that users outside the primary training language may encounter biased performance precisely because their prompts are in languages that the model does not interpret as reliably. Out-of-distribution languages sometimes produced images that barely matched the job description at all, which can lower the measured bias only because meaningful gender cues are missing. With generative systems becoming accessible throughout regions with diverse language traditions, fairness concerns must go beyond English-centric design.

Bias Remains a Persistent Issue in Image Generation

This multilingual evidence highlights the limits of simple prompt rewriting as a solution to gender imbalance. Even when prompts attempt to conceal gendered cues, representation patterns stay uneven. The findings call for stronger tools and deeper attention to training choices in text-to-image models, because language alone cannot remove stereotypes already ingrained in data. A globally deployed generation system that portrays individuals in occupations ought to provide imagery that does not reinforce narrow assumptions linked to gender. The results show how crucial it will be to improve both multilingual understanding and fairness control as the technology becomes a standard part of communication and creativity.

Notes: This post was edited/created using GenAI tools.

Read next: Wikipedia Faces Political Pressure As Co-founder Renews Bias Claims

by Irfan Ahmad via Digital Information World

Wikipedia Faces Political Pressure As Co-founder Renews Bias Claims

Wikipedia is coming under fresh scrutiny from prominent conservatives who argue the online encyclopedia no longer reflects political neutrality.

The push has gained momentum after Larry Sanger, who helped create the platform in 2001, renewed long-standing claims that the volunteer-driven site favors liberal viewpoints.

Sanger has publicly criticized Wikipedia for years, saying that its editorial community rewards certain sources and perspectives while sidelining others. As reported by The Washington Post, he contends that the site’s structure allows influential editors to guide coverage on sensitive topics without adequate transparency, and he has urged reforms to restore what he sees as the platform’s founding principles of neutrality.

Republican lawmakers are now pursuing those concerns through official channels. Senior members of the House Oversight Committee launched an inquiry earlier this year into whether foreign or ideological actors have tried to steer narratives on the platform. In a separate effort, Sen. Ted Cruz requested detailed information from the Wikimedia Foundation about how editor disputes are resolved and how reliability assessments for news sources are made.

Tech entrepreneur Elon Musk has also taken aim at Wikipedia’s credibility while developing an alternative online reference built around artificial intelligence. The planned service, known as Grokipedia, is framed by Musk as a challenger intended to correct what he describes as political imbalance in widely used information sources.

Leaders at the Wikimedia Foundation say the claims of systemic bias misrepresent how Wikipedia functions. They point to the requirement that all content must be backed by published sources, and to a self-correcting process where volunteer editors review and revise articles continuously. The group maintains that disagreements over coverage are expected in such a large collaborative project and that mechanisms exist to address inaccuracies.

Independent researchers have examined Wikipedia’s political coverage over the years and reached mixed conclusions. Some studies observed a slight tilt in certain article categories within the context of US politics. Others found that disagreements among editors often lead to more balanced language as pages evolve and citations diversify over time.

The debate comes at a moment when public trust in information sources is strained and online platforms play a central role in how people learn about current events. Wikipedia is one of the most visited websites in the world, and its content influences the answers delivered by search engines and AI systems that rely on its extensive database.

For now, inquiries from lawmakers remain ongoing while Sanger encourages more contributors who share his concerns to participate in shaping articles. The Wikimedia Foundation says its focus remains on maintaining an open publishing system and emphasizing verifiable facts across a vast range of subjects.


Notes: This post was edited/created using GenAI tools. Image: DIW

Read next:

• How Many People Visit a Website? These 6 Free Tools (With Paid Features) Can Help You Analyze That
by Asim BN via Digital Information World

OpenAI Pushes Into Music Creation And Real-time Speech Translation

OpenAI has been moving deeper into audio technology, and the company’s latest projects show how quickly things are shifting from text-based AI into sound.

People familiar with the plans describe work on a system that turns written instructions or sample audio into new music.

The idea sits close to the workflows musicians already use when they score scenes or layer accompaniment behind a recorded voice, though here the machine would handle the creative lift. The release timeline stays unclear. It remains to be seen whether the company packages the tool as a separate product or folds it into apps like ChatGPT or the video platform that generates motion from prompts.

Searching for musical intelligence

Teams involved in the effort reportedly want training data that reflects real musicianship. That drove outreach to students from the Juilliard School who can interpret and annotate professional sheet music. Their markings would teach the system how structures and motifs relate to creative intent, so the model does more than guess at background noise.

OpenAI has experimented with music in earlier stages of its work, although those systems came before the wave of conversational AI that arrived with ChatGPT. Current internal research has leaned toward voices, speech recognition, and expressive audio responses. Competitors such as Google and Suno already offer ways to produce complex songs through text prompts, meaning the race for mindshare in generative music has started well ahead of this push.

A second front: translating speech while someone talks

Another project shown publicly this week focuses on cross-language communication. A demonstration at a London event featured a model tuned for spoken translation that watches for verbs and other key elements before rendering sentences in a new language. That decision gives listeners something that sounds more natural than apps that deliver one translated word at a time. A rollout window in the coming weeks has been suggested, though product placement and naming remain unstated.
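As a rough illustration of that buffering idea, and explicitly not OpenAI's actual method, which has not been published, a streaming translator can hold incoming words until a verb or a sentence boundary arrives and only then render the chunk. Both the verb check and the translation call below are toy stand-ins.

```python
VERBS = {"is", "are", "arrives", "leaves"}  # stand-in for a real POS tagger

def is_verb(token: str) -> bool:
    return token.lower().strip(".,!?") in VERBS

def translate(chunk: str) -> str:
    return f"<translated: {chunk}>"  # stand-in for a real MT model call

def stream_translate(tokens):
    """Yield translated chunks once each one is grammatically 'complete'."""
    buffer = []
    for token in tokens:
        buffer.append(token)
        if is_verb(token) or token.endswith((".", "?", "!")):
            yield translate(" ".join(buffer))
            buffer = []
    if buffer:  # flush whatever remains when the speaker stops
        yield translate(" ".join(buffer))

for chunk in stream_translate("The train to Berlin arrives at nine .".split()):
    print(chunk)
```

Waiting for the verb costs a little latency but gives the translator enough structure to produce natural sentences, which matches the behavior described at the demonstration.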

The competitive landscape here looks crowded too. Major tech companies in mobile and social already ship multilingual voice tools inside phones, messaging platforms, and smart assistants. OpenAI enters a field where distribution and real-world embedding often matter more than surprise features.

Positioning counts as much as invention

Both projects show a company with broad ambitions, from composing unique music to breaking language barriers in conversations. Although neither effort appears first in its category, their eventual success likely depends on how easily users can access the features inside tools they already trust.

OpenAI has built a reputation around general purpose AI that blends into creative, professional, and personal tasks. This next stretch in audio could widen that role if the execution aligns with expectations from artists, students, and global users who rely on speech. The next few months will show whether these technologies become everyday utilities or remain demonstrations of what future sound creation and translation might look like.


Image: Gavin Phillips / Unsplash

Notes: This post was edited/created using GenAI tools.

Read next: Study Finds People Still Prefer Human Voices Over AI, Despite Realistic Sounding Speech
by Asim BN via Digital Information World

Saturday, October 25, 2025

Study Finds People Still Prefer Human Voices Over AI, Despite Realistic Sounding Speech

People hear synthetic voices everywhere now. They narrate TikTok stories and YouTube tutorials, guide us through customer-support menus, and live inside our smart speakers. With that kind of exposure, researchers wanted to know if we still notice the difference between a real voice and one that came from a machine, and more importantly, how we feel about each one after listening.

Scientists from the Max Planck Institute for Empirical Aesthetics in Germany and the University of Applied Arts Vienna explored the social side of artificial speech. They asked 75 adults in the United States to listen to eight voices repeating the same line. Four voices belonged to real human speakers. Four were generated by modern AI text-to-speech systems pulled from commercial platforms. Each voice delivered the line with several different emotions, including happy, sad, and angry. Participants rated how attractive the voice sounded and whether they would want to interact with the person behind it. They also had to guess whether each voice was human or synthetic.

Machines can fool our ears, though our brains remain suspicious

Participants spotted real voices correctly about 86 percent of the time. Yet they were much worse at recognizing AI: only about 55 percent of synthetic voices were correctly labeled, meaning that almost half slipped into the “human” category in the listener’s mind. Angry AI voices were the biggest tricksters. People seemed to expect machines to sound flat and emotionless, so anything intense came off as surprisingly human. Older participants especially struggled to tell the difference, a pattern that shows up in other studies as well.
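Read together through a simple signal-detection lens, and as a back-of-the-envelope sketch rather than the paper's own analysis, those two rates suggest listeners separated the categories only weakly: real voices were hits 86 percent of the time, while 45 percent of AI voices were false alarms labeled as human.

```python
from statistics import NormalDist

hit_rate = 0.86          # real voices correctly labeled "human"
false_alarm_rate = 0.45  # AI voices mislabeled "human" (100% - 55%)

z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(false_alarm_rate)
print(f"d' = {d_prime:.2f}")  # about 1.21: detectable, but far from perfect
```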

Even after guessing games, many people reported they had suspected there might be computer-generated voices in the mix. That suspicion didn’t help them classify the recordings any better, though.

Happiness helps everyone, but humans still win the popularity contest

Across every emotion, listeners favored the real speakers. Human voices came across as warmer and more appealing, with higher ratings for attractiveness and the desire to interact. Synthetic voices, even when delivered smoothly, still lagged behind. The emotional tone mattered a lot. Happy voices got the best scores, while sad and angry ones fell to the bottom. So whether a voice comes from biological vocal cords or a neural network, positivity still pays.

Personal taste dominates

The study noticed something interesting behind the averages. Participants were very consistent with themselves when rating voices they heard twice. Yet they disagreed with each other wildly. What one person loved, another might find awkward or unappealing. That lack of agreement suggests that voice “attractiveness” is personal and complicated. It depends on emotional meaning, social expectations, and who’s listening just as much as on who’s speaking.

A soundscape shaped by algorithms

Modern voice models have come a long way, especially since the researchers created their test voices back in 2022. The more expressive they become, the easier it is to forget there’s a computer behind the signal. Still, current systems may gravitate toward “average” sounding speech because they learn from huge amounts of generalized data. That might make future digital voices more uniform, even if they improve their technical quality. Scientists behind the study think future evaluations need to focus less on a simple like-or-dislike rating and more on the nuance of emotional reactions, context, and listener background.

Where this leaves us

People sense humanity in something as brief as one spoken sentence. Today’s AI can copy the shape of that expression, enough to trick a listener’s ears. Yet it falls short in delivering the richness that makes a voice feel alive, trustworthy, or simply nice to hear. Human voices still carry an advantage in charm.

Even so, the technology keeps improving. With synthetic voices already blending into everyday life, the next big question isn’t whether they sound real. It’s how we’ll decide which ones we actually want to listen to.

Read next:

• Apple’s Latest iOS 26 Update Wipes Clues Investigators Use to Spot Pegasus Spyware

• Many News Articles Are Now Written by AI, According to a New Study Few Readers Know About


by Irfan Ahmad via Digital Information World

Friday, October 24, 2025

Apple’s Latest iOS 26 Update Wipes Clues Investigators Use to Spot Pegasus Spyware

Apple is rolling out its new iOS 26 update to millions of iPhones. Researchers say a quiet change buried deep in the operating system makes it harder to detect whether the device was ever infected with high-end spyware such as Pegasus or Predator. The change affects a little-known system log called shutdown.log, long treated as a kind of forensic footprint that might survive even the most sophisticated attacks.

Investigators at iVerify noticed the shift while studying the update. They explained that shutdown.log once kept a historical record every time the phone was turned off and on again. That history could contain tiny fragments of activity that hinted at a past compromise. iVerify called the update “a serious challenge” for anyone trying to understand if a phone was secretly targeted.

A Log That Helped Uncover Attacks

Shutdown.log sits inside the Sysdiagnose tool that comes with iOS. It has been around for years without much attention. The file does not store messages or photos. Instead, it documents what takes place during a phone’s shutdown sequence. That made it useful for spotting the kinds of low-level processes associated with advanced malware.

In 2021, researchers found that Pegasus infections left recognizable traces inside this log. Those traces were key evidence in public investigations that helped confirm infections on devices belonging to journalists, advocates, and public figures. Pegasus is built by the Israeli company NSO Group. It can infect a phone without the user tapping anything and then unlock almost complete access to private data including calls, messages, location, camera, and microphone.

Developers behind Pegasus quickly adapted once shutdown.log became a focus area. Starting in 2022, the spyware tried to wipe the file entirely. The wipe itself became useful evidence, because malware activity tended to overwrite data more aggressively than normal system behavior, and investigators learned to read the absences as a clue. iVerify’s report explains that “a seemingly clean shutdown.log” could serve as its own indicator when paired with other anomalies.
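Forensic tooling built on this log typically scans its process entries for paths tied to known infections. The sketch below shows the general shape of such a check; the “remaining client pid” line format and the staging-directory path follow published Pegasus research, but both should be treated as assumptions here, and real investigations should rely on maintained tools such as the Mobile Verification Toolkit.

```python
import re
import sys

SUSPICIOUS_PATHS = [
    # staging directory reported in public Pegasus analyses (assumption)
    "/private/var/db/com.apple.xpc.roleaccountd.staging/",
]
CLIENT_LINE = re.compile(r"remaining client pid:\s*(\d+)\s*\((.+)\)")

def scan(log_text: str) -> list[tuple[str, str]]:
    """Return (pid, path) pairs for suspicious processes seen at shutdown."""
    hits = []
    for line in log_text.splitlines():
        match = CLIENT_LINE.search(line)
        if match and any(p in match.group(2) for p in SUSPICIOUS_PATHS):
            hits.append((match.group(1), match.group(2)))
    return hits

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        for pid, path in scan(f.read()):
            print(f"suspicious process at shutdown: pid {pid} -> {path}")
```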

What Changes in iOS 26

The change raising concern now is that iOS 26 overwrites shutdown.log automatically on each reboot. Previous versions appended every new shutdown entry to the bottom of the file, preserving older entries and creating a timeline that forensics experts could study. With the new approach, that history is erased every time the phone restarts.
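In file-handling terms, the reported difference is append versus truncate. The analogy below is a simplification, not Apple's actual implementation:

```python
# Pre-iOS 26 behavior: each reboot appends, so history accumulates.
with open("shutdown.log", "a") as log:
    log.write("=== new shutdown entry ===\n")

# Reported iOS 26 behavior: each reboot rewrites the file, so only
# the latest entry survives and older evidence disappears.
with open("shutdown.log", "w") as log:
    log.write("=== new shutdown entry ===\n")
```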

iVerify notes that this clean-slate approach could be intended to improve performance or remove clutter. No one outside Apple knows whether the change was designed or simply overlooked. Timing is the problem: spyware attacks are on the rise, and security researchers and human rights groups warn that the targets are no longer limited to activists. Executives and celebrities are also being watched more closely. The report states that the change “could hardly come at a worse time.”

Losing a Layer of Spyware Detection

Predator, a separate spyware family attributed to Cytrox, has shown similar behavior within shutdown.log since at least 2023, according to forensics reports. Analysts believe Predator borrowed Pegasus tactics, including monitoring shutdown activity to hide traces. Both strains are associated with state-linked surveillance operations.

The update means that anyone who installs iOS 26 and then restarts their phone will lose all historical shutdown logs. If evidence ever existed on that device, it will be gone after the first reboot. This affects Pegasus and Predator cases that may have occurred months or years earlier, making it difficult to confirm whether a phone used by a high-risk individual was previously compromised.

What High-Risk Users Can Do Right Now

Researchers recommend saving a sysdiagnose report before installing iOS 26; on recent iPhones this is typically triggered by briefly holding both volume buttons together with the side button. That preserves the current shutdown.log in case further analysis becomes necessary. iVerify suggests waiting on the update, if possible, until Apple clarifies the change or adjusts the behavior in a future patch.

Apple has not commented publicly on the shutdown.log shift. It remains uncertain if this is a deliberate security design or something that will be reversed once the implications become better understood.

Why This Matters

The shutdown.log file was never a perfect detection solution, although it helped investigators uncover infections that would have otherwise remained hidden. Losing access to that historical record makes life easier for spyware developers who already push the limits of stealth and persistence. It also places more trust in active scanning and network-based detection, both of which have their own blind spots.

Mobile spyware exists largely to avoid being noticed. A seemingly minor operating system change now risks removing one of the few reliable ways to discover what happened after the fact.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Many News Articles Are Now Written by AI, According to a New Study Few Readers Know About

• AI Assistants Send Shoppers to Retailers, but Sales Still Belong to Google

by Irfan Ahmad via Digital Information World

Many News Articles Are Now Written by AI, According to a New Study Few Readers Know About

A new academic analysis of thousands of recent newspaper stories in the United States shows how artificial intelligence has begun shaping everyday journalism. The research team studied 186,000 articles across print organizations large and small. They applied an automated detection system to determine when parts of a story were created by machine. The study states that “approximately 9% of newly-published articles are either partially or fully AI-generated.”

AI-generated text becomes most common where the business side of news keeps shrinking. Smaller local newsrooms face lean operations and fewer hands on deck. Communities depend on these outlets to keep them informed about basic civic life. Yet the paper notes that “AI use in American newspapers is widespread, uneven, and rarely disclosed.” Readers get little indication when something that looks like a journalist wrote it is instead shaped by software.

Local News Sees More AI in the Byline

Regional gaps appear throughout the data. The study reports that automation rises where circulation drops. In larger city papers, AI does not play the same role. The authors observe that “AI use is significantly higher on pages of newspapers without high circulation than at nationally-circulated papers.” In these newsrooms, the volume of daily reporting can overwhelm the available staff. Publishing tools help fill space with quick briefs and standardized information.

Certain topics suit this approach. Articles about the weather depend on formal forecasts. Science updates sometimes follow press announcements and public databases. Health stories often summarize new research. These are areas with clear numerical inputs that feed directly into templated writing. For that reason, the paper points out that “topic distributions show elevated AI-detected content for weather, science and tech, [and] health.”
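Figures like these usually come from a plain aggregation over per-article detector outputs. As a hypothetical sketch, with the data layout and column names being assumptions rather than the study's actual pipeline, the tabulation might look like this:

```python
import pandas as pd

# One row per article with a binary detector flag plus topic and
# circulation labels (hypothetical layout, assumed column names).
df = pd.read_csv("articles.csv")  # columns: topic, circulation_tier, ai_flag

# Share of AI-flagged articles per topic, highest first.
by_topic = df.groupby("topic")["ai_flag"].mean().sort_values(ascending=False)
print(by_topic.head(10))

# The same share split by circulation tier, e.g. local vs. national.
print(df.groupby("circulation_tier")["ai_flag"].mean())
```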

Spanish-language coverage published by U.S. newspapers also sees more automated text than English versions. The study suggests that translation systems and generative rewriting may be working behind the scenes to support bilingual news production.

Opinion Pages Shift Without Warning

The researchers also analyzed opinion articles from three nationally recognized papers with strong reputations. They include The New York Times, The Washington Post, and The Wall Street Journal. These pages shape how the country thinks about large issues. The data reveals a different pattern from general news reporting. The paper states that “opinion content is 6.4 times more likely to contain AI-generated content than news articles from the same publications.”

This shift grows after the end of 2022, when generative writing tools became widely available. Guest contributors, who do not work permanently in those newsrooms, rely on writing aids more than regular columnists. In total, hundreds of commentary pieces contain at least some detectable machine-written text. The responsibility of shaping public dialogue makes this category important. These pages do not just report. They argue.

Mixed and Hidden Text Blurs Trust

The study’s detection method shows that full articles written entirely by AI remain a small fraction. The more common case looks like a mix. There might be a human interview plus automated rewriting. There might be a reporter’s outline turned into paragraphs by a system tool. The authors note that many flagged articles still include quoted statements from real people.

Yet readers do not know when this blending happens. The paper gives attention to disclosure and finds that clear labeling is rare. Even where newsroom rules promise transparency, execution falls short. Trust in journalism depends on knowing who or what is talking. Hidden authorship makes that judgment difficult.

Pressures Behind the Quiet Shift

Newsroom budgets continue to shrink. Many local publishers operate with few reporters, and some communities have no traditional paper at all. The study points to industry conditions where “news deserts” grow and automated filling appears as a survival mechanism. Tools ease workloads but they change the nature of the work. When software handles simple local updates, it builds habits that can spread into more complicated stories.

The researchers do not claim that AI harms accuracy every time it appears. They do emphasize that people deserve to understand the source of information they rely on. Without that clarity, an audience cannot evaluate context or credibility.

A Developing Landscape

Automation has already become part of the reporting process in this country. Most readers have likely seen stories produced with help from a machine, even if they never noticed it. The findings suggest a future where more articles contain hidden layers of automated writing. Whether those layers support journalists or replace them depends on decisions that news organizations must face soon.

For now, the rise of AI text remains quiet. It is hardest to see in the places where communities can least afford missing facts. Transparency offers a way to protect trust before the trend becomes too familiar to question.


Note: This post was edited/created using GenAI tools. 

Read next:

• AI now generates over half of all online content, reshaping digital publishing and blurring human authorship boundaries

• Erase.com Explains the Hidden ROI of Online Reputation Management

• AI Bots Spark Conversation but Don’t Change Posting Habits
by Irfan Ahmad via Digital Information World