"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Wednesday, December 31, 2025
Can AI Chatbots Produce Gossip-Like Content With Potential Reputational Impact?
The paper focuses on widely used consumer facing systems such as OpenAI’s ChatGPT and Google’s Gemini, which are powered by large language models. According to the authors, these systems are trained on extensive collections of text and generate responses by predicting likely word sequences. As a result, they can produce statements that appear authoritative without regard for whether those statements are true. "For example, unsuspecting users might develop false beliefs that lead to dangerous behaviour (e.g., eating rocks for health), or, they might develop biases based upon bullsh*t stereotypes or discriminatory information propagated by these chatbots", explains the paper.
The study builds on prior arguments that such outputs are better understood as “bullsh*t,” in the philosophical sense defined by Harry Frankfurt, rather than as hallucinations or lies. In this framing, the systems are not presented as conscious or intentional agents, but as tools designed to generate truth-like language without concern for accuracy.
Krueger and Osler argue that some chatbot outputs can also be understood as gossip. They adopt a "thin" definition of gossip as communication involving a speaker, a listener, and an absent third party, where the information goes beyond common knowledge and includes an evaluative judgment, often negative. While chatbots lack awareness, motives, or emotional investment, the authors maintain that their outputs can still meet these structural criteria.
To illustrate this claim, the paper examines a documented case involving Kevin Roose, a technology reporter for The New York Times. After Roose published accounts of an unsettling interaction with a Microsoft Bing chatbot in early 2023, users subsequently discovered that other chatbots were generating negative character evaluations about him when asked about his work. According to the study, these responses typically combined basic biographical information with unsubstantiated evaluative claims, such as suggestions of sensationalism or questionable journalistic practices.
The authors distinguish between two forms of AI gossip. In bot to user gossip, a chatbot delivers evaluative statements about an absent person to a human user. In bot to bot gossip, similar information is drawn from online content and incorporated into training data, then propagated between systems without direct human involvement. The paper argues that the second form may pose greater risks because it can spread silently, persist over time, escape human oversight, and lacks the social constraints that normally moderate human gossip.
The study situates these effects within what the authors call “technosocial harms,” meaning harms that arise in interconnected online and offline environments. Examples discussed in the paper include reputational damage, defamation, informal blacklisting, and emotional distress. The authors reference documented legal disputes in which individuals alleged that AI systems produced false claims about criminal or professional misconduct, illustrating how such outputs can affect employment prospects, public trust, and social standing.
Krueger and Osler emphasize that these risks do not arise from malicious intent on the part of AI systems. Instead, they argue that responsibility rests with the human designers and institutions that build, deploy, and market these technologies. The paper concludes that recognizing certain forms of AI misinformation as gossip, rather than as isolated factual errors, helps clarify how these systems can produce broader social effects and why greater ethical scrutiny is warranted as AI tools become more embedded in everyday life.
Notes: This post was drafted with the assistance of AI tools and reviewed, fact-checked, and published by humans. Image: DIW-Aigen
Read next:
• AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026
• Five myths about learning a new language – busted
by Ayaz Khan via Digital Information World
Tuesday, December 30, 2025
Five myths about learning a new language – busted
Language learning is often a daunting prospect. Many of us wish we had learned a language to a higher level at school. But even though adults of all ages can do well in acquiring a new language, fear – or the memory of struggling to memorise grammar at school – can hold us back.
We both work in languages education and recognise the real benefits that learning another language can bring. As well as myriad cognitive benefits, it brings with it cultural insights and empathetic awareness.
With that in mind, we’re here to dispel five myths about language learning that might be putting you off.
Myth one: it’s all about grammar and vocabulary
In fact, learning about people, history and culture is arguably the best part of learning a language. While grammar and vocabulary are undeniably important aspects of language learning, they don’t exist in isolation from how people communicate in everyday life.
Language learning can help us to have “intercultural agility”: the ability to engage empathically with people who have very different experiences from our own. To be able to do this means learning about people, history and culture.
Immersing yourself in a particular country or location, for example through studying or working, is a fantastic way to do this. But when this isn’t feasible, there are so many other options available. We can learn so much through music, books, films, musical theatre and gaming.
Myth two: we should focus on avoiding mistakes – they’re embarrassing
One problem with formal language learning is that it encourages us to focus on accuracy at all costs. To pass exams, you need to get things “right”. And many of us feel nervous about getting things wrong.
But in real-life communication, even in our expert languages, we often make mistakes and get away with it. Think of the number of times you have misspelled something, or said the wrong word, and still been understood.
Less formal language learning can encourage us to think more about communication than accuracy.
One advocate of this approach is author Benny Lewis, who popularised a communicative learning approach he calls “language hacking” which focuses on the language skills needed for conversation. Language apps also encourage this, as does real-life travel and communication.
Myth three: it’s too much effort to start over with a new language
You can use languages in lots of ways, and the language you learn at school doesn’t have to be the only one you learn.
In England, most people learn one or more of French, Spanish or German at school. These languages can often serve as great apprenticeship languages, teaching us how to learn a language and about grammatical structures.
But they are not always the languages that we are most likely to use as adults, when family and work could take us anywhere. Our cultural interests might also lead us to want to know more about a new language.
Learning a language that you have a personal interest in can be very motivating and help you to keep going when things get a bit rocky.
Myth four: learning a language is an individual endeavour
You don’t have to learn alone. Learning with others, or having the support of others, can help motivate us to learn.
This might be through a multilingual marriage, joining a conversation group or chatting in a language learning forum online. Don’t feel that you have to have reached a certain proficiency before you start reaching out to others.
Language apps can also make language learning a collective endeavour. You can learn along with friends and family, and congratulate them on their language learning streaks.
This is something both of us do with multiple generations of our families, helping us engage with language learning in a lighthearted way.
Myth five: it’s a lot of hard graft
Learning a language in a systematic way can be challenging, whether in a classroom or from a self-study course. But some things make this easier. We have found that people are more motivated to engage when they have a personal reason to learn. This could be, for example, wanting to communicate with family or to travel to a particular country or region.
The growth in popularity and accessibility of language learning apps has made language learning possible from any location and at any time, often for free.
You can easily catch up on your Chinese from the comfort of your own armchair, at whatever time is most convenient for you. Apps can be fun and playful, and can help us maintain motivation, develop vocabulary and embed grammatical structures.
There are lots of reasons for learning a language, and lots of benefits. We encourage everyone to focus on these benefits, and give it a go.![]()
Abigail Parrish, Lecturer in Languages Education, University of Sheffield and Jessica Mary Bradley, Senior Lecturer in Literacies and Language, University of Sheffield
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Read next: AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026
by External Contributor via Digital Information World
AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026
In artificial intelligence, 2025 marked a decisive shift. Systems once confined to research labs and prototypes began to appear as everyday tools. At the center of this transition was the rise of AI agents – AI systems that can use other software tools and act on their own.
While researchers have studied AI for more than 60 years, and the term “agent” has long been part of the field’s vocabulary, 2025 was the year the concept became concrete for developers and consumers alike.
AI agents moved from theory to infrastructure, reshaping how people interact with large language models, the systems that power chatbots like ChatGPT.
In 2025, the definition of AI agent shifted from the academic framing of systems that perceive, reason and act to AI company Anthropic’s description of large language models that are capable of using software tools and taking autonomous action. While large language models have long excelled at text-based responses, the recent change is their expanding capacity to act, using tools, calling APIs, coordinating with other systems and completing tasks independently.
This shift did not happen overnight. A key inflection point came in late 2024, when Anthropic released the Model Context Protocol. The protocol allowed developers to connect large language models to external tools in a standardized way, effectively giving models the ability to act beyond generating text. With that, the stage was set for 2025 to become the year of AI agents.
The milestones that defined 2025
The momentum accelerated quickly. In January, the release of Chinese model DeepSeek-R1 as an open-weight model disrupted assumptions about who could build high-performing large language models, briefly rattling markets and intensifying global competition. An open-weight model is an AI model whose training, reflected in values called weights, is publicly available. Throughout 2025, major U.S. labs such as OpenAI, Anthropic, Google and xAI released larger, high-performance models, while Chinese tech companies including Alibaba, Tencent, and DeepSeek expanded the open-model ecosystem to the point where the Chinese models have been downloaded more than American models.
Another turning point came in April, when Google introduced its Agent2Agent protocol. While Anthropic’s Model Context Protocol focused on how agents use tools, Agent2Agent addressed how agents communicate with each other. Crucially, the two protocols were designed to work together. Later in the year, both Anthropic and Google donated their protocols to the open-source software nonprofit Linux Foundation, cementing them as open standards rather than proprietary experiments.
These developments quickly found their way into consumer products. By mid-2025, “agentic browsers” began to appear. Tools such as Perplexity’s Comet, Browser Company’s Dia, OpenAI’s GPT Atlas, Copilot in Microsoft’s Edge, ASI X Inc.’s Fellou, MainFunc.ai’s Genspark, Opera’s Opera Neon and others reframed the browser as an active participant rather than a passive interface. For example, rather than helping you search for vacation details, it plays a part in booking the vacation.
At the same time, workflow builders like n8n and Google’s Antigravity lowered the technical barrier for creating custom agent systems beyond what has already happened with coding agents like Cursor and GitHub Copilot.
New power, new risks
As agents became more capable, their risks became harder to ignore. In November, Anthropic disclosed how its Claude Code agent had been misused to automate parts of a cyberattack. The incident illustrated a broader concern: By automating repetitive, technical work, AI agents can also lower the barrier for malicious activity.
This tension defined much of 2025. AI agents expanded what individuals and organizations could do, but they also amplified existing vulnerabilities. Systems that were once isolated text generators became interconnected, tool-using actors operating with little human oversight.
What to watch for in 2026
Looking ahead, several open questions are likely to shape the next phase of AI agents.
One is benchmarks. Traditional benchmarks, which are like a structured exam with a series of questions and standardized scoring, work well for single models, but agents are composite systems made up of models, tools, memory and decision logic. Researchers increasingly want to evaluate not just outcomes, but processes. This would be like asking students to show their work, not just provide an answer.
Progress here will be critical for improving reliability and trust, and ensuring that an AI agent will perform the task at hand. One method is establishing clear definitions around AI agents and AI workflows. Organizations will need to map out exactly where AI will integrate into workflows or introduce new ones.
Another development to watch is governance. In late 2025, the Linux Foundation announced the creation of the Agentic AI Foundation, signaling an effort to establish shared standards and best practices. If successful, it could play a role like the World Wide Web Consortium in shaping an open, interoperable agent ecosystem.
There is also a growing debate over model size. While large, general-purpose models dominate headlines, smaller and more specialized models are often better suited to specific tasks. As agents become configurable consumer and business tools, whether through browsers or workflow management software, the power to choose the right model increasingly shifts to users rather than labs or corporations.
The challenges ahead
Despite the optimism, significant socio-technical challenges remain. Expanding data center infrastructure strains energy grids and affects local communities. In workplaces, agents raise concerns about automation, job displacement and surveillance.
From a security perspective, connecting models to tools and stacking agents together multiplies risks that are already unresolved in standalone large language models. Specifically, AI practitioners are addressing the dangers of indirect prompt injections, where prompts are hidden in open web spaces that are readable by AI agents and result in harmful or unintended actions.
Regulation is another unresolved issue. Compared with Europe and China, the United States has relatively limited oversight of algorithmic systems. As AI agents become embedded across digital life, questions about access, accountability and limits remain largely unanswered.
Meeting these challenges will require more than technical breakthroughs. It demands rigorous engineering practices, careful design and clear documentation of how systems work and fail. Only by treating AI agents as socio-technical systems rather than mere software components, I believe, can we build an AI ecosystem that is both innovative and safe.![]()
Thomas Şerban von Davier, Affiliated Faculty Member, Carnegie Mellon Institute for Strategy and Technology, Carnegie Mellon University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Editor's Note: This post might have been created or polished by AI tools.
by External Contributor via Digital Information World
Monday, December 29, 2025
Nobel Laureate Discusses Artificial Intelligence's Role in Critical Thinking Education
The physicist noted that AI can give students the impression they have actually learned the basics before they really have, potentially leading them to rely on it too soon before they know how to do the work themselves. He identified a particular concern with the current generation of AI being very good at being overly confident about what it's saying, which users may accept without scrutiny because it's typed on the screen.
Perlmutter teaches a critical thinking course covering 24 concepts and has asked students to think a bit hard about how to use AI to make it easier to operationalize each concept in their day-to-day lives, and also how to use these concepts to tell whether AI is fooling them or sending them in the right or wrong direction.
The physicist noted that when users know these different tools and approaches to thinking about problems, AI can often help them find the bit of information they need to use these techniques.
Notes: This post was drafted with the assistance of AI tools and reviewed, fact-checked, edited, and published by humans. Image: DIW-Aigen
Read next: AI Video Translation Offers Efficiency Potential but Human Nuance Remains Key
by Ayaz Khan via Digital Information World
AI Video Translation Offers Efficiency Potential but Human Nuance Remains Key
AI translations were consistently rated as less natural and less accent-neutral than human translations. Language comprehension varied by direction: AI performed worse translating into Indonesian but better into English, reflecting differences in AI training data. Despite these perceptual differences, viewers were equally willing to like, share, or comment on both types of videos.
"These insights suggest that AI video translation is not yet a perfect substitute for human translation...", explains UEA in a newsroom post. Adding further, "But it already offers practical value".
The authors note several limitations: findings reflect a single AI tool, specific language pairs, one video per condition, and a single point in time, which restricts generalizability. They suggest future research should explore additional AI tools, languages, and translation contexts to further understand consumer evaluation of AI video translation.
Source: Journal of International Marketing; research led by the University of Jyväskylä with co-authorship from University of East Anglia (UEA).
Notes: This post was drafted with the assistance of AI tools and reviewed, fact-checked, edited, and published by humans.
Read next: Global Survey: 66% Say 2025 Bad Year for Country, 71% Optimistic 2026 Will Be Better
by Asim BN via Digital Information World
Friday, December 26, 2025
Global Survey: 66% Say 2025 Bad Year for Country, 71% Optimistic 2026 Will Be Better
Looking ahead, 71% of respondents expressed optimism that 2026 will be better than 2025. Countries with the highest optimism included Indonesia (90%), Colombia (89%), and Chile (86%), while France (41%), Japan (44%), and Belgium (49%) reported the lowest optimism.
| Country | % agree | % disagree |
|---|---|---|
| 30-country avg. | 71 | 29 |
| Indonesia | 90 | 10 |
| Colombia | 89 | 11 |
| Chile | 86 | 14 |
| Thailand | 86 | 14 |
| Peru | 86 | 14 |
| India | 85 | 15 |
| Argentina | 83 | 17 |
| South Africa | 82 | 18 |
| Mexico | 82 | 18 |
| Malaysia | 82 | 18 |
| Brazil | 80 | 20 |
| Hungary | 77 | 23 |
| Poland | 74 | 26 |
| Romania | 70 | 30 |
| Canada | 70 | 30 |
| Spain | 69 | 31 |
| Sweden | 68 | 32 |
| Singapore | 67 | 33 |
| Netherlands | 67 | 33 |
| United States | 66 | 34 |
| Australia | 66 | 34 |
| South Korea | 65 | 35 |
| Türkiye | 63 | 37 |
| Ireland | 63 | 37 |
| Great Britain | 58 | 42 |
| Germany | 57 | 43 |
| Italy | 57 | 43 |
| Belgium | 49 | 51 |
| Japan | 44 | 56 |
| France | 41 | 59 |
On economic expectations, 49% of respondents predicted a stronger global economy in 2026, while 51% expected it to be worse.
The report also notes that in 2020, 90% of average respondents globally said their country had a bad year, reflecting the height of the COVID-19 pandemic. Current optimism levels remain below pre-2022 figures.
Source: Ipsos Predictions 2026 Report
Read next:
• How Schema Markup Is Redefining Brand Visibility in the Age of AI Search, According to Experts at Status Labs
• How ChatGPT could change the face of advertising, without you even knowing about it
by Ayaz Khan via Digital Information World
Wednesday, December 24, 2025
How Schema Markup Is Redefining Brand Visibility in the Age of AI Search, According to Experts at Status Labs

The way brands are discovered, evaluated, and recommended has fundamentally changed. As AI platforms like ChatGPT, Google's Gemini, and Perplexity increasingly mediate the relationship between businesses and their audiences, the technical infrastructure behind digital reputation has become just as important as the content itself. At the center of this shift is schema markup, a structured data framework that serves as a translation layer between your digital presence and the AI systems now shaping public perception.
The Growing Importance of Machine-Readable Branding
When a potential customer, investor, or partner asks an AI assistant about your company, the response depends on whether that AI system can accurately identify, understand, and trust your brand. Unlike traditional search engines that present links for users to evaluate, AI platforms synthesize information and deliver direct answers. This creates a fundamental challenge: if your brand's information isn't structured in ways that AI systems can reliably interpret, you risk being misrepresented, conflated with competitors, or excluded from responses entirely.
According to research from Schema App, Microsoft's Fabrice Canel, Principal Product Manager at Bing, confirmed at SMX Munich in March 2025 that schema markup directly helps Microsoft's large language models understand web content. This represents one of the first official confirmations from a major AI platform that structured data influences how LLMs process and present information.
The implications extend beyond simple visibility. Studies indicate that pages with comprehensive schema implementation are significantly more likely to appear in AI-generated summaries. A benchmark study from Data World found* that LLMs grounded in knowledge graphs achieve 300% higher accuracy compared to those relying solely on unstructured data. For brands, this accuracy translates directly into reputation protection and opportunity capture.
Understanding Schema Markup as Digital Identity Infrastructure
Schema markup uses standardized vocabulary from Schema.org to explicitly label elements on web pages that AI systems prioritize: organizational information, reviews, author credentials, products, and services. Rather than forcing AI models to infer meaning from unstructured text, this structured data provides explicit signals about what your content represents and how different elements relate to each other.
Google's own documentation states that structured data helps search systems understand page content by providing explicit clues about meaning. This guidance has taken on new significance as Google's AI Overviews and Gemini increasingly rely on the Knowledge Graph, which is enriched by schema markup crawled from the web.
The digital reputation management firm Status Labs has emerged as a leading voice on this topic, developing comprehensive frameworks for how businesses should approach structured data in an AI-dominated landscape. Their research indicates that company websites optimized with Organization schema and connected entity markup represent the most controllable authoritative source for AI training data. As Status Labs explains in their detailed analysis of schema markup's role in AI reputation, implementing structured data that signals contextual relationships to AI platforms is essential for preventing entity confusion that damages digital reputation.
The Entity Recognition Challenge
One of the most significant reputation risks in the AI era involves entity recognition, the process by which AI platforms distinguish between concepts sharing identical names. When someone asks an AI assistant about your company, the system must determine whether you're the technology firm based in Austin or the manufacturing company with the same name in Ohio.
Without Organization schema establishing your company as a distinct legal entity with specific founding dates, locations, and verifiable credentials, AI systems may merge information about different organizations into a single, confused representation. This creates scenarios where achievements are attributed to competitors or negative information about unrelated entities appears in responses about your business.
Status Labs has documented cases where proper schema implementation resolved significant entity confusion issues. Their GEO (Generative Engine Optimization) practice focuses specifically on these challenges, helping clients establish clear digital identities that AI systems can accurately recognize and represent.
The "sameAs" property in Organization schema proves particularly valuable here, linking your official website to verified profiles on LinkedIn, Crunchbase, and other authoritative platforms. This creates a network of corroborating signals that AI systems use to validate your identity and distinguish you from similarly named entities.
Performance Data: Schema's Measurable Impact
Research from BrightEdge demonstrates that schema markup improves brand presence and perception in Google's AI Overviews, with higher citation rates observed on pages with robust structured data. A recent analysis** also found that 72% of sites appearing on Google's first page search results use schema markup, indicating a strong correlation between structured data and visibility.
The stakes have increased substantially as AI Overviews reduce traditional organic clicks*** by approximately 34.5% year-over-year. Businesses not appearing in AI-generated summaries face accelerating invisibility as users increasingly accept AI responses without clicking through to websites.
An AccuraCast study**** analyzing over 2,000 prompts across ChatGPT, Google AI Overviews, and Perplexity found that 81% of web pages receiving citations included schema markup. While correlation doesn't prove causation, the data suggests that structured data plays a meaningful role in determining which sources AI platforms reference. Notably, ChatGPT showed particular preference for Person schema, with 70.4% of cited sources including this markup type, reflecting the platform's emphasis on source authority and reliability.
Critical Schema Types for Reputation Management
Different schema types serve distinct reputation management functions. Understanding which to prioritize depends on your specific visibility and protection goals.
Organization Schema consolidates business information into formats that AI platforms trust. This includes legal name, logo, founding date, official addresses, contact information, and social media profiles. Status Labs' detailed analysis outlines how implementing a comprehensive Organization schema across all digital properties creates the foundation for accurate AI representation.
Person Schema prevents the misattribution that damages executive and professional reputation. When multiple individuals share identical names, this markup defines biographical information, professional credentials, affiliations, and accomplishments, distinguishing separate careers and ensuring accurate attribution.
Review and AggregateRating Schema directly impact AI trustworthiness assessments. AI systems weigh verified customer feedback heavily when generating recommendations. Properly structured review markup must match visible page content exactly, as AI platforms detect and penalize mismatched data.
Article and BlogPosting Schema establish content authority and topical expertise. These schemas identify authors, publication dates, and subject matter, helping AI systems attribute information correctly and recognize your organization as an authoritative voice on specific topics.
Building Connected Knowledge Graphs
Basic schema provides value, but connected schema creates compounding advantages. As Search Engine Journal reports, enterprises are increasingly viewing structured data not merely as rich result eligibility criteria but as the foundation for content knowledge graphs.
This approach establishes relationships between entities on your website and links them to external authoritative knowledge bases, including Wikidata, Wikipedia, and Google's Knowledge Graph. When AI systems encounter your content, the connected schema provides comprehensive context about relationships between your products, services, team members, and broader industry concepts.
Status Labs' five-pillar approach to AI reputation management places schema implementation within this comprehensive framework. The methodology optimizes corporate websites as primary authoritative sources while establishing authoritative third-party references and managing review ecosystems with properly structured data.
Platform-Specific Considerations
Different AI platforms process schema markup according to their unique architectures and data sources. Understanding these variations enables targeted optimization.
Google's AI Overviews and Gemini prioritize websites with a comprehensive schema that contributes to Google's Knowledge Graph. Recent data shows that 80% of AI Overview citations come from top-3 organic results, but among those results, pages with well-implemented schema receive preferential selection.
ChatGPT with SearchGPT combines real-time web search with language model capabilities. While ChatGPT doesn't require schema to understand content, research suggests it retrieves information more thoroughly and accurately from pages with structured data. Schema reduces hallucinations by providing factual anchors that ground AI responses.
Perplexity AI explicitly values structured data's role in identifying reliable sources. Pages with robust schema markup appear more frequently in Perplexity's cited sources because the platform prioritizes well-defined, machine-readable information.
Common Implementation Errors
Several schema implementation mistakes can undermine or damage AI reputation rather than enhance it.
Mismatched Data represents the most damaging error. Discrepancies between visible page content and schema markup cause AI systems to question credibility. If your website displays a 4.8-star rating but schema markup shows a different figure, AI platforms may penalize or exclude your pages.
Incomplete Entity Definitions miss opportunities for AI recognition. Implementing Organization schema without comprehensive properties like founding date, leadership, and external profile links reduces AI confidence in your entity definition.
Static Schema on Dynamic Content creates accuracy problems over time. Businesses with changing inventory or pricing need systems that automatically update schema when underlying data changes.
Schema Manipulation backfires as AI detection improves. Adding irrelevant keywords or inaccurate information to structured data triggers penalties that compound over time.
The Strategic Imperative
Schema markup's value compounds as AI systems incorporate structured data into their understanding of the digital landscape. Organizations implementing comprehensive schema today establish authoritative representations that become increasingly difficult for competitors to displace.
This dynamic mirrors earlier digital transformations. Early adopters of mobile optimization gained advantages that persisted for years. With AI platforms already controlling significant information discovery, the window for establishing schema-based authority continues to narrow.
Status Labs' analysis shows that businesses with comprehensive schema markup maintain visibility across current and emerging AI search technologies, while competitors without structured data face accelerating invisibility. As the firm notes, schema markup has evolved from an optional technical enhancement to a foundational requirement for any organization serious about managing how AI systems understand, evaluate, and represent their brand.
Beyond Visibility: Schema as Reputation Protection
Schema markup functions as insurance against reputation damage that occurs when AI systems misunderstand, misidentify, or misrepresent your organization. By explicitly defining your entity with verifiable attributes and establishing connections to authoritative external sources, you reduce the probability of harmful misattribution.
This protective function becomes critical as AI systems increasingly mediate first impressions. When stakeholders query AI platforms about your company, the generated response shapes perceptions before any human visits your website. Accurate, comprehensive schema markup ensures these AI-generated first impressions align with reality.
The businesses and individuals investing in sophisticated schema strategies position themselves for an information environment where reputation depends on machine readability. For those seeking to understand how to implement these strategies effectively, Status Labs' comprehensive guide on schema markup's role in AI reputation provides detailed implementation frameworks and case studies demonstrating measurable impact.
As AI continues reshaping how information is discovered and presented, the organizations that control their structured data narrative will maintain the ability to shape their own story in an increasingly AI-mediated world.
by Sponsored Content via Digital Information World
Tuesday, December 23, 2025
How ChatGPT could change the face of advertising, without you even knowing about it
Online adverts are sometimes so personal that they feel eerie. Even as a researcher in this area, I’m slightly startled when I get a message asking if my son still needs school shirts a few hours after browsing for clothes for my children.
Personal messaging is part of a strategy used by advertisers to build a more intense relationship with consumers. It often consists of pop-up adverts or follow-up emails reminding us of all the products we have looked at but not yet purchased.
This is a result of AI’s rapidly developing ability to automate the advertising content we are presented with. And that technology is only going to get more sophisticated.
OpenAI, for example, has hinted that advertising may soon be part of the company’s ChatGPT service (which now has 800 million weekly users). And this could really turbocharge the personal relationship with customers that big brands are desperate for.
ChatGPT already uses some advanced personalisation, making search recommendations based on a user’s search history, chats and other connected apps such as a calendar. So if you have a trip to Barcelona marked in your diary, it will provide you – unprompted – with recommendations of where to eat and what to do when you get there.
In October 2025, the company introduced ChatGPT Atlas, a search browser which can automate purchases. For instance, while you search for beach kit for your trip to Barcelona, it may ask: “Would you like me to create a pre-trip beach essentials list?” and then provide links to products for you to buy.
“Agent mode” takes this a step further. If a browser is open on the page of a swimsuit, a chat box will appear where you can ask specific questions. With the browser history saved, you can log back in and ask: “Can you find that swimsuit I was looking at last week and add it to the basket in a size 14?”
Another new feature (only in the US at the moment), “instant checkout”, is a partnership with Shopify and Etsy which allows users to browse and immediately purchase products without leaving the platform. Retailers pay a small fee on sales, which is how OpenAI monetises this service.
However, only around 2% of all ChatGPT searches are shopping-related, so other means of making money are necessary – which is where full-on incorporated advertising may come in.
One app, lots of ads?
OpenAI’s rapid growth requires heavy investment, and its chief financial officer, Sarah Friar, has said the company is “weighing up an ads model”, as well as recruiting advertising specialists from rivals Meta and Google.
But this will take some time to get right. Some ChatGPT users have already been critical of a shopping feature which they said made them feel like they were being sold to. Clearly a re-design is being considered, as the feature was temporarily removed in December 2025.
So there will continue to be experimentation into how AI can be part of what marketers call the “consumer journey” – the process customers go through before they end up buying something.
Some consumers prefer to use customer reviews and their own research or experience. Others appreciate AI recommendations, but studies suggest that overall, some sense of autonomy is essential for people to truly consider themselves happy customers. It has also been shown that audiences dislike aggressive “retargeting”, where they are continuously bombarded with the same adverts.
So the option of ChatGPT automatically providing product recommendations, summaries and even purchasing items on our behalf might seem very tempting to big brands. But most consumers will still prefer a sense of agency when it comes to spending their money.
This may be why advertisers will work on new ways to blur the lines – where internet search results are blended with undeclared brand messaging and product recommendations. This has long been the case on Chinese platforms such as WeChat, which includes e-commerce, gaming, messaging, calling and social networking – but with advertising at its core.
In fact, platforms in the west seem far behind their East Asian counterparts, where users can do most of their day-to-day tasks using just one app. In the future, a similarly centralised approach may be inevitable elsewhere – as will subliminal advertising, with the huge potential for data collection that a single multi-functional app can provide.
Ultimately, transparency will be minimal and advertising will be more difficult to recognise, which could be hard on vulnerable users – and not the kind of ethically responsible AI that many are hoping for.![]()
Nessa Keddo, Senior Lecturer in Media, Diversity and Technology, King's College London
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Read next: Shrinking AI memory boosts accuracy
by External Contributor via Digital Information World
Shrinking AI memory boosts accuracy
Image: Luke Jones / Unsplash
Experts from University of Edinburgh and NVIDIA found that large language models (LLMs) using memory eight times smaller than an uncompressed LLM scored better on maths, science and coding tests while spending the same amount of time reasoning.
The method can be used in an alternative way to help LLMs respond to more user queries simultaneously, reducing the amount of power needed per task.
As well as energy savings, experts say the improvements could benefit AI systems that are used to solve complicated tasks or in devices that have slow or limited memory, such as smart home devices and wearable technology.
Problem solving
By “thinking” about more complex hypotheses or exploring more hypotheses concurrently, AI models improve their problem-solving abilities. In practice, this is achieved by generating more reasoning threads – a step-by-step logical process used to solve problems – in text form.
The model’s memory – called the KV cache – which stores the portions of the threads generated, can act as a bottleneck, as its size slows down the generation of reasoning thread outputs during inference – the process by which AI models respond to an input prompt, such as answering a user query.
The more threads there are, and the longer they are, the more memory is required. The larger the memory size used, the longer the LLM takes to retrieve the KV cache from the part of the AI device where it is stored.
Memory compression
To overcome this, the team developed a method to compress the models’ memory – called Dynamic Memory Sparsification (DMS). Instead of keeping every token – the units of data that an AI model processes – DMS decides which ones are important enough to keep and which ones can be deleted.
There is a slight delay between the time when the decisions to delete tokens using sparsification are made and when they are removed. This gives the model a chance to pass on any valuable information from the evicted tokens to preserved ones.
In managing which tokens to keep and which to discard, DMS lets the AI model "think” in more depth or explore more possible solutions without needing extra computer power.
Models tested
The researchers tested DMS on different versions of the AI models Llama and Qwen and compared their performance to models without compression.
The models’ performance was assessed using standardised tests. It was found even with memories compressed to one eighth their original size, LLMs fully retain their original accuracy in difficult tasks while accelerating reasoning compared with non-compressed models.
In the standardised maths test AIME 24, which served as the qualifier for the United States Mathematical Olympiad, the compressed models performed twelve points better on average using the same number of KV cache reads to produce an answer.
For GPQA Diamond – a series of complex questions in biology, chemistry and physics authored by PhD-level experts – the models performed over eight points better.
The models were also tested with LiveCode Bench, which measures how well AI models can write code. The compressed models scored on average ten points better than non-compressed models.
In a nutshell, our models can reason faster but with the same quality. Hence, for an equivalent time budget for reasoning, they can explore more and longer reasoning threads. This improves their ability to solve complex problems in maths, science, and coding.
Dr Edoardo Ponti - GAIL Fellow and Lecturer in Natural Language Processing at the University’s School of Informatics
The findings from this work were peer reviewed and were presented at the prestigious AI conference NeurIPS.
Dr Ponti and his team will continue to investigate ways how large AI systems represent and remember information, making them far more efficient and sustainable as part of a 1.5 million euros European Research Council-funded project called AToM-FM.
This article has been republished on DIW with permission from The University of Edinburgh.
Read next:
• Subnational income inequality revealed: Regional successes may hold key to addressing widening gap globally
• Why many Americans avoid negotiating, even when it costs them
by External Contributor via Digital Information World
Monday, December 22, 2025
Subnational income inequality revealed: Regional successes may hold key to addressing widening gap globally
Income inequality is one of the most important measures of economic health, social justice and quality of life. More reliably trackable than wealth inequality, which was recently given a gloomy report card by the G20, income inequality is particularly relevant to immediate economic relief, mobility and people’s everyday standard of living.
The new study, from an international team led by Aalto University and Cambridge University, is the first to comprehensively map three decades of income inequality data within 151 nations around the world. Despite finding that income inequality is worsening for half the world’s people, the study also indicates that effective policy may be helping to bridge the gap in regions such as Latin America — ‘bright spots’ in administrative areas that account for around a third of the global population.
‘This research gives us much more detail than the existing datasets, allowing us to zoom in on specific regions within countries,’ says one of the study’s lead authors, Professor Matti Kummu, from Aalto University.‘This is significant because in many countries national data would tell us that inequality has not changed much over the past decades, while subnational data tells a very different story.’
‘The new data is particularly relevant in light of recent failings around wealth inequality, given that it could help shed light on what policy levers might be pulled to address inequality in the short-term,’ says co-lead author Daniel Chrisendo, now an Assistant Professor at Cambridge University.
‘We have vastly more complete data on income than we do on wealth, which tends to be much harder to uncover and track,’ explains Chrisendo. ‘Especially given that income inequality leads to wealth inequality, it’s critical to tackle both forms — but income inequality is perhaps the easiest to address from an immediate policy perspective.’
The study was published in Nature Sustainability on 5th December, and the new global subnational Gini coefficient (SubNGini) dataset, spanning 1990-2023, is publicly accessible online. Global annual data and trends can be explored visually using the Online Tool, which enables users to explore how income inequality has played out in regions around the globe and also download the data for further analyses.
Pinpointing the role of policy
There are many examples where regional efforts have shone more brightly than is revealed by national statistics, say the researchers. However India, China and Brazil all present interesting case studies that affect large swathes of the global population.
‘With regards to India, relative success in the south is linked to sustained investments in public health, education, infrastructure and economic development that have benefited the local population more broadly,’ says Chrisendo.
Meanwhile, in China, market-oriented reforms and open-door policy have driven economic growth and dramatically reduced poverty since the 1990s. ‘But we can also see how this growth has been uneven, likely due to the Chinese government’s ‘Hukou’ policy limiting rural migrants' access to urban services,’ he explains. In response, the government has implemented various policy measures — such as regional development programs and relaxed Hukou restrictions — to address disparities and support internal migrants.
In Brazil, the mapping shows a potential correlation between reduced inequality and a regional cash transfer programme providing cash to poor families on condition of their children attending school and receiving vaccinations.
‘Overall, being able to visualise these success stories and pinpoint the changing trends in time could help decision-makers see what works,’ says Chrisendo.
Income inequality rising for half the world’s people
Relative income growth for the world’s poorest 40 percent is one of the UN’s Sustainable Development Goals (SDGs), yet the study confirms the collective failure to meet this goal by 2030. ‘Unfortunately, not only are we quite far from that goal, but the trend for rising inequality is actually stronger than we thought,’ says Kummu.
The researchers are now expanding the data visualisation to encompass a vast range of other socio-economical indicators, from how populations are aging, to life expectancy and time spent in schooling, to improved access to drinking water — with the extensive new datasets slated for public launch in 2026.
As an expert in global food systems and sustainable use of natural resources, Kummu hopes the new datasets can be used to better understand, for example, the linkages between development and environmental changes. The recent study revealed links between more unequal regions and lower ecological diversity, which he would like to explore further.
‘It’s ambitious, but to have subnational, high quality data spanning over three decades is crucial to understand different social responses to environmental changes and vice versa. It gives us the means to start understanding the causalities, not just the correlations — and with that comes the power to make better decisions,’ he concludes.
Matti Kummu - Professori - T213 Built Environment - matti.kummu@aalto.fi - +358504075171
More:
- Full article in Nature Sustainability
- Datasets: Income inequality and gross national income per capita 1990-2023
- Online tool: Income inequality explorer
Read next:
• Why many Americans avoid negotiating, even when it costs them
• How U.S. Employees Report Using AI at Work
by External Contributor via Digital Information World
Sunday, December 21, 2025
Why many Americans avoid negotiating, even when it costs them
Would you pay thousands of dollars more for a car just to skip the negotiation process? According to new research by David Hunsaker, clinical associate professor of management at the IU Kelley School of Business Indianapolis, many Americans would—and do.
How common is this mindset?
“Across five studies, we found that 95% of individuals choose not to negotiate up to 51% of the time,” Hunsaker explained. This means negotiation avoidance is not the exception, rather the norm.
The research, published in Negotiation and Conflict Management Research, was conducted by Hunsaker in collaboration with Hong Zhang of Leuphana University and Alice J. Lee of Cornell University. Their work explores why people avoid negotiating, what it costs them, and how organizations can respond.
Negotiation avoidance is the norm, not the exception
This study spans five large-scale experiments exploring why people avoid negotiating and what it costs them. The research examines:
- How often individuals forgo negotiation opportunities
- The Threshold for Negotiation Initiation (TFNI)—the minimum savings people need to justify negotiating
- The Willingness to Pay to Avoid Negotiation (WTP-AN)—how much extra people will pay to skip negotiating
- Whether interventions, such as utility comparisons or social norm prompts, can reduce avoidance
“Our work focuses on how much individuals are willing to sacrifice, or even pay, to avoid negotiating altogether,” David explained.
The idea for this research emerged at a negotiation conference in Israel. Hunsaker and his colleagues visited a market where bargaining is expected, yet none of them negotiated. “We asked ourselves: Why don’t people negotiate even when the opportunity is clear?” Hunsaker recalled.
“We framed this research around a simple question: When you have the chance to negotiate, will you?” Hunsaker said. “Even in traditional contexts like buying a car, companies now advertise ‘no-haggle pricing’ as a selling point. Businesses can raise prices by 5% to 11%, and more than half of consumers will pay it.”
The research also revealed that people judge negotiation value by percentage saved, not the absolute dollar amount.
“On average, participants needed savings of 21% to 36% of an item’s price before considering negotiation worthwhile,” Hunsaker noted. “This shows that decisions are driven by perceived proportional value—not absolute dollars.”
Hunsaker hopes the findings spark awareness. “Negotiation aversion is real, but at key points in your career, negotiation skills matter,” he emphasized. “Recognizing these tendencies is the first step toward overcoming them.”
Negotiation tips from the expert
To help you become a better negotiator, here are three tips from Dr. Hunsaker:
Preparation is everything
“Most of the work happens before the conversation begins,” Hunsaker said. “Information is power. Know your options and be honest about whether you have strong alternatives. If you don’t, you’ll enter with less leverage. Many people overlook this step—understand your position before you negotiate.”
Start higher than your target
“This is hard for a lot of people because you don’t want to sound selfish, but there needs to be room for concessions. If you don’t make that room, the other party will become upset. Start with an offer better than your goal and it will help the other party feel more satisfied with the deal.” Hunsaker shared.
Focus on relationships, not victory
“It’s about developing strong relationships. People that go into negotiation with a winning mindset end up burning bridges or hurting feelings. The people you most often negotiate with will be repeat customers or longtime clients. If you burn those bridges, you will miss out on deals later. Focus on doing well but also focus on listening to the other party and creating a foundation of trust,” Hunsaker said.
David Hunsaker is a clinical associate professor of management at the Kelley School of Business Indianapolis. He joined the faculty in 2024 and specializes in organizational behavior and negotiation.
This article was first published on the Indiana University Kelley School of Business website on December 16, 2025. Republished with permission.
Read next:
• Most Data Centers Are Located Outside Recommended Temperature Ranges
• How U.S. Employees Report Using AI at Work
by External Contributor via Digital Information World
Saturday, December 20, 2025
How U.S. Employees Report Using AI at Work
A Gallup workforce survey conducted in 2025 found that employees who used artificial intelligence (AI) at work reported using it for information-related and idea-generation purposes. Among U.S. employees surveyed in the second quarter of 2025 who said they used AI at least yearly, 42% reported using it to consolidate information, while 41% said they used it to generate ideas. Another 36% reported using AI to support learning new things. Gallup noted that these reported uses did not change meaningfully from its initial measurement in the second quarter of 2024.
When asked about the types of AI tools they used in their role, more than six in ten AI-using employees reported using chatbots or virtual assistants. AI-powered editing and writing tools were the next most commonly reported, followed by AI coding assistants. Use of more specialized tools, including those designed for data science or analytics, was less common overall but more frequently reported by employees who used AI at work more often.
| AI Use | Percentage Selected |
|---|---|
| To consolidate information or data | 42% |
| To generate ideas | 41% |
| To learn new things | 36% |
| To automate basic tasks | 34% |
| To identify problems | 20% |
| To interact/transact with customers | 13% |
| To collaborate with coworkers | 11% |
| Other | 11% |
| To make predictions | 9% |
| To set up, operate, or monitor complex equipment or devices | 8% |
| AI Use | Percentage Selected |
|---|---|
| Chatbots or virtual assistants | 61% |
| AI writing and editing tools | 36% |
| AI coding assistants | 14% |
| Image, video, or audio generators | 13% |
| Data science or analytics tools | 13% |
| Task, scheduling, or project management tools | 13% |
| Meeting assistants or transcription tools | 12% |
| Presentation or slide deck tools | 10% |
| AI-powered search or research tools | 10% |
| Email or communication management tools | 9% |
| Knowledge or information management tools | 8% |
| Automation or robotic process automation (RPA) tools | 5% |
| Other | 4% |
Gallup also reported that in the third quarter of 2025, 45% of U.S. employees said they used AI at work at least a few times a year, while daily use remained limited to about 10% of the workforce.
When tools make it easier to learn, solve problems, or work more effectively, they earn their place in daily practice.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, facted-checked and published by humans.
Read next:
• Most Data Centers Are Located Outside Recommended Temperature Ranges
• Resolve to stop punching the clock: Why you might be able to change when and how long you work
by Asim BN via Digital Information World
Most Data Centers Are Located Outside Recommended Temperature Ranges
An analysis by Rest of World found that a majority of the world’s operational data centers are located in climates outside the industry’s recommended temperature range.
The analysis combined climate records from the Copernicus Climate Data Store with facility location data from Data Center Map, covering 8,808 operational data centers worldwide as of October 2025.
Industry standards recommend average operating temperatures between 18°C and 27°C. Nearly 7,000 data centers were located outside that range, with most situated in regions cooler than recommended. About 600 data centers, representing less than 10% of the total, were located in areas with average annual temperatures above 27°C. In 21 countries, including Nigeria, Singapore, Thailand, and the United Arab Emirates, all operational data centers were located in regions exceeding the recommended temperature range.
The findings draw attention to the operational strain associated with cooling data centers in hotter climates.
Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Read next: Image: DIW-Aigen
Read next: Resolve to stop punching the clock: Why you might be able to change when and how long you work
by Ayaz Khan via Digital Information World
Friday, December 19, 2025
Resolve to stop punching the clock: Why you might be able to change when and how long you work
Image: Luis Villasmil / Unsplash
About 1 in 3 Americans make at least one New Year’s resolution, according to Pew Research. While most of these vows focus on weight loss, fitness and other health-related goals, many fall into a distinct category: work.
Work-related New Year’s resolutions tend to focus on someone’s current job and career, whether to find a new job or, if the timing and conditions are right, whether to embark on a new career path.
We’re an organizational psychologist and a philosopher who have teamed up to study why people work – and what they give up for it. We believe that there is good reason to consider concerns that apply to many if not most professionals: how much work to do and when to get it done, as well as how to make sure your work doesn’t harm your physical and mental health – while attaining some semblance of work-life balance.
How we got here
Most Americans consider the 40-hour workweek, which calls for employees being on the job from nine to five, to be a standard schedule.
This ubiquitous notion is the basis of a hit Dolly Parton song and 1980 comedy film, “9 to 5,” in which the country music star had a starring role. Microsoft Outlook calendars by default shade those hours with a different color than the rest of the day.
This schedule didn’t always reign supreme.
Prior to the Great Depression, which lasted from 1929-1941, 6-day workweeks were the norm. In most industries, U.S. workers got Sundays off so they could go to church. Eventually, it became customary for employees to get half of Saturday off too.
Legislation that President Franklin D. Roosevelt signed into law as part of his sweeping New Deal reforms helped establish the 40-hour workweek as we know it today. Labor unions had long advocated for this abridged schedule, and their activism helped crystallize it across diverse occupations.
Despite many changes in technology as well as when and how work gets done, these hours have had a surprising amount of staying power.
Americans work longer hours
In general, workers in richer countries tend to work fewer hours. However, in the U.S. today, people work more on average than in most other wealthy countries.
For many Americans, this is not so much a choice as it is part of an entrenched working culture.
There are many factors that can interfere with thriving at work, including boredom, an abusive boss or an absence of meaning and purpose. In any of those cases, it’s worth asking whether the time spent at work is worth it. Only 1 in 3 employed Americans say that they are thriving.
What’s more, employee engagement is at a 10-year low. For both engaged and disengaged employees, burnout increased as the number of work hours rose. People who were working more than 45 hours per week were at greatest risk for burnout, according to Gallup.
However, the average number of hours Americans spend working has declined from 44 hours and 6 minutes in 2019 to just under 43 hours per week in 2024. The reduction is sharper for younger employees.
We think this could be a sign that younger Americans are pushing back after years of being pressured to embrace a “hustle culture” in which people brag about working 80 and even 100 hours per week.
Fight against a pervasive notion
Anne-Marie Slaughter, a lawyer and political scientist who wears many hats, coined the term “time macho” more than a decade ago to convey the notion that someone who puts in longer hours at the office automatically will outperform their colleagues.
Another term, “face time,” describes the time that we are seen by others doing our work. In some workplaces, the quantity of an employee’s face time is treated as a measure of whether they are dependable – or uncommitted.
It can be easy to jump to the conclusion that putting in more hours at the office automatically boosts an employee’s performance. However, researchers have found that productivity decreases with the number of hours worked due to fatigue.
Even those with the luxury to choose how much time they devote to work sometimes presume that they need to clock as many hours as possible to demonstrate their commitment to their jobs.
To be sure, for a significant amount of the workforce, there is no choice about how much to work because that time is dictated, whether by employers, the needs of the job or the growing necessity to work multiple jobs to make ends meet.
4-day workweek experiments
One way to shave hours off the workweek is to get more days off.
A multinational working group has examined experiments with a four-day workweek: an arrangement in which people work 80% of the time – 32 hours over four days – while getting paid the same as when they worked a standard 40-hour week. Following an initial pilot in the U.S. and Ireland in 2022, the working group has expanded to six continents. The researchers consistently found that employers and employees alike thrive in this setup and that their work didn’t suffer.
Most of those employees, who ranged from government workers to technology professionals, got Friday off. Shifting to having a three-day weekend meant that employees had more time to take care of themselves and their families. Productivity and performance metrics remained high.
Waiting for technology to take a load off
Many employment experts wonder whether advances in artificial intelligence will reduce the number of hours that Americans work.
Might AI relieve us all of the tasks we dread doing, leaving us only with the work we want to do – and which, presumably, would be worth spending time on? That does sound great to both of us.
But there’s no guarantee that this will be the case.
We think the likeliest scenario is one in which the advantages of AI are unevenly distributed among people who work for a living. Economist John Maynard Keynes predicted almost a century ago that “technological unemployment” would lead to 15-hour workweeks by 2030. As that year approaches, it’s become clear that he got that wrong.
Researchers have found that for every working hour that technology saves us, it increases our work intensity. That means work becomes more stressful and expectations regarding productivity rise.
Deciding when and how much time to work
Many adults spend so much time working that they have few waking hours left for fitness, relationships, new hobbies or anything else.
If you have a choice in the matter of when and how much you work, should you choose differently?
Even questioning whether you should stick to the 40-hour workweek is a luxury, but it’s well worth considering changing your work routines as a new year gets underway if that’s a possibility for you. To get buy-in from employers, consider demonstrating how you will still deliver your core work within your desired time frame.
And, if you are fortunate enough to be able to choose to work less or work differently, perhaps you can pass it on: You probably have the power and privilege to influence the working hours of others you employ or supervise.![]()
Jennifer Tosti-Kharas, Professor of Management, Babson College and Christopher Wong Michaelson, Professor of Ethics and Business Law, University of St. Thomas
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Read next:
• What the hyperproduction of AI slop is doing to science
• Task scams are up 485% in 2025 and job seekers are losing millions
by External Contributor via Digital Information World










