Tuesday, October 14, 2025

Google Updates Search Ads with a New “Sponsored Results” Design

Google is rolling out a new look for search advertising that changes how sponsored listings appear on its pages. The update adds a larger “Sponsored results” label, grouping text and shopping ads together under a single heading. The company says the redesign will make it easier for users to recognize paid content while keeping navigation clear on both desktop and mobile.

Easier to See, Harder to Ignore

Sponsored results now sit at the top of each search page in a single block. The section can show up to four text ads, similar in size to the previous format. Once a user scrolls past them, a small control appears that lets them hide the ad group with one click. A similar block also appears at the bottom of the page and can only be hidden after it has been viewed.


The change is meant to make it obvious where promotional listings begin and end. By grouping ads together, Google gives users a consistent structure instead of scattered placements across the page. The company says this helps people find what they need faster without guessing which results are paid.

A Step Toward Clearer Labeling

Earlier versions of Search marked each ad individually. The new design replaces that with one shared label that remains visible as users scroll through the top section. It also extends to shopping ads, giving a unified appearance to all paid formats.

The move represents one of Google’s biggest changes to ad presentation in years. It aligns with ongoing efforts to balance transparency with advertiser visibility. While users gain more control over what they see, Google maintains space for businesses that depend on Search traffic.

Impact on Advertisers

The option to hide ad groups could influence how often users engage with sponsored content. Clearer labeling may reduce accidental clicks, but it can also create more meaningful traffic for ads that attract genuine interest. Advertisers may need to focus more on creative quality and relevance to encourage voluntary engagement.

Marketing specialists expect the update to shift attention toward ad effectiveness rather than volume. The ability for users to bypass entire sections means less space for low-value campaigns and a greater emphasis on trust. This could benefit brands that invest in informative, well-targeted messaging.

Linked to AI Overviews

The update arrives as Google expands ads inside its AI Overviews feature to new English-speaking markets. These AI-generated summaries appear when users enter complex or multi-part questions. Ads placed alongside them blend into the generative results rather than appearing in a separate section.

The connection between AI Overviews and Sponsored results shows how search is evolving into a more blended environment. Ads, organic results, and AI summaries now appear closer together, giving users multiple layers of information in one place. For advertisers, this means adjusting to placements that depend not just on ranking but on how AI presents the overall response.

Search with More Control

Together, the new layout and AI integration highlight Google’s attempt to keep user experience and advertising revenue in balance. Users can now identify paid listings more easily and choose when to hide them, while advertisers retain a prominent presence at both ends of the results page.

These updates suggest that search is moving toward a model built around user choice and transparency. Google’s challenge is to make ads feel informative rather than intrusive, and the new Sponsored results format is its latest step toward that balance.

Notes: This post was edited/created using GenAI tools.

Read next:

• Microsoft Builds Its First AI Image Generator From the Ground Up

• The Digital Coin Revolution: Who Really Controls the Future of Currency?


by Irfan Ahmad via Digital Information World

The Digital Coin Revolution: Who Really Controls the Future of Currency?

Throughout history, control over money has been one of the most powerful levers of state authority. Rulers have long understood that whoever issues and manages the currency also commands the economy and, by extension, society itself.

In Tudor England, Henry VIII’s “Great Debasement” between 1542 and 1551 reduced the silver content of coins from more than 90% to barely one-third, while leaving the king’s portrait shining on the surface, of course. The policy financed wars and courtly extravagance, but also fuelled inflation and public distrust in coinage.

Centuries earlier, Roman emperors had resorted to similar tricks with the denarius, steadily reducing its silver content until, by the 3rd century AD, it contained little more than trace amounts, undermining its credibility and contributing to economic instability.

Outside Europe, the same pattern held. In 11th-century China, the Song dynasty pioneered paper money, extending state control over taxation and trade. This was a groundbreaking innovation, but later dynasties such as the Ming over-issued notes, sparking inflation and loss of trust in the currency.

Such episodes underline a timeless truth: money is never neutral. It has always been an instrument of governance – whether to project authority, consolidate control or disguise fiscal weakness. The establishment of central banks, from the Bank of England in 1694 to the US Federal Reserve in 1913, formalised that authority.

Today, the same story is entering a new digital chapter. As Axel van Trotsenburg, senior managing director of the World Bank, wrote in 2024: “Embracing digitalisation is no longer a choice. It’s a necessity.” By this he meant not simply switching to online banking, but making the currencies we use, and the mechanisms for regulating them, entirely digital.

Just as rulers once clipped coins or over-printed notes, governments are now testing how far digital money can extend their reach – both within and beyond national boundaries. Of course, different governments and political systems have very different ideas about how the money of the future should be designed.

In March 2024, Donald Trump, then a former president back on the campaign trail, declared: “As your president, I will never allow the creation of a central bank digital currency.” It was a campaign moment, but also a salvo in a much larger battle – not just over the future of money, but over who controls it.

In the US, the issuance of currency – whether in the form of physical cash or digital bank deposits and electronic payments – has traditionally been monopolised by the Federal Reserve (more commonly known as “the Fed”), a technocratic institution designed to operate independently of the elected government and the houses of Congress. But Trump’s hostility toward the Fed is well documented, and noisy.

During his second term, Trump has publicly berated the Fed’s chair, Jerome Powell, calling him “a stubborn MORON” over his interest rate policies, and even floating the idea of replacing him. Trump’s discomfort with the Fed’s autonomy echoes earlier populist movements such as President Andrew Jackson’s 1830s crusade against the Second Bank of the United States, when federal financial elites were portrayed as obstacles to democratic control of money.

In March 2025, when Trump issued an executive order establishing a Strategic Bitcoin Reserve, he signalled the opening of a new front in this institutional battle. By incorporating bitcoin into an official US reserve, the world’s largest economy is, for the first time, sanctioning its use as part of state financial infrastructure.

For a leader like Trump, who has consistently sought to break, bypass or dominate independent institutions – from the judiciary to intelligence agencies – the idea of replacing the Fed’s influence with a state-aligned crypto ecosystem may represent the ultimate act of executive assertion.

Such a step reframes bitcoin as more than an investment fad or criminal fallback; it is being drawn into the formal monetary system – in the US, at least.

America’s crypto future?

Bitcoin is, by a distance, the world’s most valuable cryptocurrency (at the time of writing, one coin is worth just shy of US$120,000), having hit a record high in August 2025. Like gold, its value is ensured in part by its finite supply, and its security by the blockchain technology that makes it extremely hard to tamper with.


Image: engin akyurt / Unsplash

For most who buy bitcoin, its key value is not as a currency but as a speculative investment product – a kind of “digital gold” or high-risk stock that investors buy hoping for big returns. Many people have indeed made millions from their purchases.

But now, thanks in particular to Trump’s aggressively pro-crypto, anti-central bank approach, bitcoin’s potential role as part of a new form of state-controlled digital currency is in the spotlight like never before.

Trump’s framing of bitcoin as “freedom money” reflects its traditional sales pitch as being censorship-resistant, unreviewable, and free from state control. At the same time, his blurring of public authority and private financial interest, when it comes to cryptocurrencies, has raised some serious ethical and governance concerns.

But the crucial point is that Trump is not proposing a truly libertarian system. It is a hybrid model: one where the issuance of money may become privatised while control of the US’s financial reserve strategy – and associated political and economic narratives – remains firmly in state hands.

This raises provocative questions about the future of the Federal Reserve. Could it be sidelined not through legal abolition, but by the growing relevance of parallel monetary systems blessed by the executive? The possibility is no longer far-fetched.

According to a 2023 paper published by the Bank for International Settlements, a powerful if little-known organisation that coordinates central bank policy globally: “The decentralisation of monetary functions across public and private actors introduces a new era of contestable monetary sovereignty.”

In plain English, this means money is no longer the sole domain of states. Tech firms, decentralised communities and even AI-powered platforms are now building alternative value systems that challenge the monopoly of national currencies.

Calls to diminish the role of central banks in shaping macroeconomic outcomes are closely tied to the rise of what the University of Cambridge’s Bennett School of Public Policy calls “crypto populism” – a movement that shifts legitimacy away from unelected technocrats towards “the people”, whether they are retail investors, cryptocurrency miners or politically aligned firms.

Supporters of this agenda argue that central banks have too much unchecked power, from manipulating interest rates to bailing out financial elites, while ordinary savers bear the costs through inflation or higher borrowing charges.

In the US, Trump and his advisers have become the most visible proponents, tying bitcoin and also so-called “stablecoins” (cryptocurrencies designed to maintain a stable value by being pegged to an external asset) to a broader populist narrative about wresting control from elites.

The emergence of this dual monetary system is causing deep unease in traditional financial institutions. Even the economist-activist Yanis Varoufakis – a long-time critic of central banks – has warned of the dangers of Trump’s approach, suggesting that US private stablecoin legislation could deliberately weaken the Fed’s grip on money, while “depriving it of the means to clean up the inevitable mess” that will follow.

Weaponisation of the dollar

Some of the US’s rivals also feel deep unease about its approach to money – in part because of what analysts call the “weaponisation of the dollar”. This describes how US financial dominance, via Swift and correspondent banking systems, has long enabled sanctions that effectively exclude targeted governments, companies or individuals from global finance.

These tools have been used extensively against Iran, Russia, Venezuela and others – triggering efforts by countries including China, Russia and even some EU states to build alternative payment systems and digital currencies, aimed at reducing dependency on the dollar. As the Atlantic put it in 2023, the US appeared to be “pushing away allies and adversaries alike by turning its currency into a geopolitical bludgeon”.

Spurred on by these concerns and an increasing desire to delink from the dollar as the world’s anchor currency, many countries are now moving towards creating their own central bank digital currencies (CBDCs) – government-issued digital currencies backed and regulated by state institutions.

While fully live CBDCs are already in use in countries ranging from the Bahamas and Jamaica to Nigeria, many more are in active pilot phases – including China’s digital yuan (e-CNY). Having been trialled in multiple cities since 2019, the e-CNY now has millions of domestic users and, by mid-2024, had processed nearly US$1 trillion in retail transactions.

A key part of Beijing’s ambition is to use the digital yuan as a strategic hedge against dollar-based clearance systems, positioning it as part of a wider plan to reduce China’s reliance on the US dollar in international trade. Likewise, the European Central Bank has framed its digital euro – which entered its preparation phase in October 2023 – as essential to future European monetary sovereignty, stating that it would reduce reliance on non-European (often US-controlled) digital payment providers such as Visa, Mastercard and PayPal.

In this way, CBDCs are becoming a new front in global competition over who sets the rules of money, trade and financial sovereignty in the digital age. As governments rush to build and test these systems, technologists, civil libertarians and financial institutions are clashing over how best to do this – and whether the world should embrace or fear the rise of central bank digital currencies.

Trojan horses for surveillance?

The experience of using a CBDC will be much like today’s mobile banking apps: you’ll receive your salary directly into a digital wallet, make instant payments in shops or online, and transfer money to friends in seconds. The key difference is that all of that money will be a direct claim on the central bank, guaranteed by the state, rather than on a private bank.

In many countries, CBDCs are being pitched as more efficient tools for economic inclusion and societal benefit. A 2023 Bank of England consultation paper emphasised that its proposal for a digital pound would be “privacy-respecting by design” and “non-programmable by the state”. It would not replace cash but sit alongside it, the BoE suggested, with each citizen allowed to hold digital pounds only up to a capped limit (suggested at £10,000-£20,000) to avoid destabilising commercial bank deposits.

However, some critics see CBDCs as Trojan horses for surveillance. In 2019, a report by the professional services network PwC suggested that CBDCs, if unchecked, could entrench executive power by removing intermediary financial institutions and enabling programmable, direct government control over citizen transactions. According to the report, this could mean stimulus payments that expire if not spent within 30 days, or taxes deducted at the moment of transaction. In other words, CBDCs could be tools of efficiency – but also of unprecedented oversight.

A 2024 CFA Institute paper warned that digital currencies could allow governments to trace, tax or block payments in real time – tools that authoritarian regimes might embrace. The Bank for International Settlements (BIS) has called the advent of this “programmable money” inevitable.

Imagine, for example, a parent transferring 20 digital pounds to their child’s CBDC wallet, but with a rule that this money can only be spent on food, not video games. When the child uses it at a supermarket, their payment is programmed so that the retailer’s suppliers and the tax authority are paid instantly (£15 to the shop, £3 to wholesalers, £2 straight to the tax office) with no extra steps. In theory, at least, everyone is happy: the parent sees the child spent the money responsibly, the suppliers are paid immediately, and the retailer’s tax bill is settled automatically.
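
To make the mechanics concrete, here is a minimal Python sketch of how such a programmable transfer might be modelled. Every name and rule in it is a hypothetical illustration of the worked example above, not the design of any actual CBDC.

```python
from dataclasses import dataclass

@dataclass
class ProgrammableTransfer:
    """A hypothetical CBDC transfer carrying machine-enforced rules."""
    amount: float
    allowed_categories: set   # e.g. {"food"} - the spending restriction
    splits: dict              # payee -> share of each payment

    def spend(self, category: str, total: float) -> dict:
        """Attempt a purchase; enforce the rule, then route the split."""
        if category not in self.allowed_categories:
            raise PermissionError(f"Spending on '{category}' is not permitted")
        if total > self.amount:
            raise ValueError("Insufficient programmable balance")
        self.amount -= total
        # Route each party's share in a single step, as in the worked example.
        return {payee: round(total * share, 2) for payee, share in self.splits.items()}

# The parent's 20 digital pounds: food only, with an automatic
# 75/15/10 split between retailer, wholesalers and the tax office.
pocket_money = ProgrammableTransfer(
    amount=20.0,
    allowed_categories={"food"},
    splits={"retailer": 0.75, "wholesalers": 0.15, "tax_office": 0.10},
)

print(pocket_money.spend("food", 20.0))
# {'retailer': 15.0, 'wholesalers': 3.0, 'tax_office': 2.0}
# pocket_money.spend("video_games", 5.0) would raise PermissionError.
```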

In technical terms, programmable payments such as this are straightforward for CBDCs. But such a system raises big questions about privacy and personal freedom. Some critics fear that programmable CBDCs might be used to restrict spending on disapproved categories such as alcohol and fuel, create expiry dates for unemployment benefits, or enforce climate targets through money flow limits. The BIS has warned that CBDCs should be “designed with safeguards” to preserve user privacy, financial inclusion and interoperability across borders.

Even well-intentioned digital systems can create tools of surveillance. CBDC architecture choices, such as default privacy settings, tiered access or transaction expiry, can all shape the extent of executive control embedded in the system. If designed without democratic oversight, these infrastructures risk institutional capture.

Some CBDC pilots – including China’s e-CNY, the Sand Dollar and the eNaira – have been criticised for omitting clear privacy guarantees, with their respective central banks deferring decisions on privacy protections to future legislation. According to Norbert Michel, director of the Cato Institute’s Center for Monetary and Financial Alternatives and one of the most prominent US voices warning about the risks of CBDCs:

A fully implemented CBDC gives the government complete control over the money going into, and coming out of, every person’s account. It’s not difficult to see that this level of government control is incompatible with both economic and political freedom.

Fears of mission creep

The concerns being raised about central bank digital currencies extend beyond personal payment controls. A recent analysis by the RAND Corporation highlighted how law enforcement capabilities could dramatically increase with the introduction of CBDCs. While this could strengthen efforts to stop money laundering and the financing of terrorism, it also raises fears of “mission creep”, whereby the same tools could be used to police ordinary citizens’ spending or political activities.

Concerns about mission creep – the idea that a system introduced for limited goals (efficiency, anti-money laundering) gradually expands into broader tools of control – extend into other areas of digital authoritarianism. The Bennett School has cautioned that without legal and political safeguards, CBDCs risk empowering state surveillance and undermining democratic oversight, especially in an interconnected global system.

It is not anti-technology or overly conspiratorial to ask hard questions about the design, governance and safeguards built into our future money. The legitimacy of CBDCs will hinge on public trust, and that trust must be earned. As has been highlighted by the OECD, democratic values like privacy, civic trust and rights protection must all be integral to CBDC design.

The future of money

Predictably, public opinion on what our money should look like in future is mixed. The tensions we see between centralised CBDCs and decentralised alternatives reflect fundamentally different philosophies.

In the US, populist rhetoric has found a strong base among cryptocurrency investors and libertarian movements. At the same time, surveys in Europe suggest many people remain sceptical of replacing a central bank’s authority, associating it with stability and trustworthiness.

For the US Federal Reserve, the debate over bitcoin, decentralised finance (“DeFi”) and stablecoins goes to the heart of American financial power. Behind closed doors, some US officials worry that both the unchecked use of stablecoins and a widespread adoption of foreign CBDCs like China’s e‑CNY will erode the dollar’s central role and weaken the US’s monetary policy apparatus.

In this context, Trump’s push to elevate crypto into a US Strategic Bitcoin Reserve carries serious implications. While US officials generally avoid direct comment on partisan moves, their policy documents make the stakes clear: if crypto expands outside regulatory boundaries, this could undermine financial stability and weaken the very tools – from monetary policy to sanctions – that sustain the dollar’s global dominance.

Meanwhile, the Bank of England’s governor, Andrew Bailey, writing in the Financial Times this week, sounded more accommodating of a financial future that includes stablecoins, suggesting: “It is possible, at least partially, to separate money from credit provision, with banks and stablecoins coexisting and non-banks carrying out more of the credit provision role.” He has previously stressed that stablecoins must “pass the test of singleness of money”, ensuring that one pound always equals one pound (something that cannot be guaranteed if a currency is backed by risky assets).

This isn’t just caution for caution’s sake – it’s grounded in both history and recent events.

During the US’s Free Banking Era in the middle of the 19th century, state-chartered banks could issue their own paper money (banknotes) with little oversight. These “wildcat banks” often issued more notes than they could redeem, especially when economic stress hit – meaning people holding those notes found they weren’t worth the paper they were printed on.

A much more recent example is the collapse of TerraUSD (UST) in May 2022. Terra was a so-called stablecoin that was supposed to keep its value pegged 1:1 with the US dollar. In practice, it relied on algorithms and reserves that turned out to be fragile. When confidence cracked, UST lost its peg, dropping from $1 to as low as 10 cents in a matter of days. The crash wiped out over US$40 billion (around £29 billion) in value and shook trust in the whole stablecoin sector.

But Bailey’s crypto caution extends to CBDCs too. In his most recent Mansion House speech, the Bank of England governor said he remains unconvinced of the need for a “Britcoin” CBDC, so long as improvements to bank payment systems (such as making bank transfers faster, cheaper and more user-friendly) prove effective.

Ultimately, the form our money takes in future is not a question of technology so much as trust. In its latest guidance, the IMF underscores the necessity of earning public trust, not assuming it, by involving citizens, watchdog groups and independent experts in CBDC design, rather than allowing central banks or big tech to shape it unilaterally.

If done right, digital money could be more inclusive, more transparent, and more efficient than today’s systems. But that future is not guaranteed. The code is already being written – the question is: by whom, and with what values?

Rafik Omar, Lecturer in Finance, Cardiff Metropolitan University and Vinden Wylde, Lecturer in Computer Sciences at Gulf College, Oman, and PhD Candidate in Big Data, AI and Visualisation, Cardiff Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


by Web Desk via Digital Information World

Microsoft Builds Its First AI Image Generator From the Ground Up

Microsoft has introduced its first fully self-developed image generation model, marking a notable shift toward building its own AI infrastructure rather than leaning on external partners. The new system, called MAI-Image-1, has already made its debut among the top ten text-to-image models on the public testing platform LMArena.


Unlike earlier creative tools that often carried traces of shared frameworks, MAI-Image-1 represents a step toward independence for Microsoft’s AI division. The company describes it as a model designed to capture the subtleties of lighting, texture, and visual realism more effectively than typical generators. It can reproduce details such as soft reflections, natural sunlight gradients, or complex environments like forest landscapes and city streets, aiming for a quality that aligns closely with real-world photography.

Behind its development lies a focus on usability and diversity rather than spectacle. Microsoft’s engineers said they concentrated on curating cleaner and more representative training data, limiting the kind of repetitive, overly stylized imagery that has plagued many existing models. The evaluation process involved testing how well the system handled realistic creative tasks, including concept development for design work and content creation for digital artists. That testing extended to professionals within visual fields, whose input helped refine the system’s flexibility.

MAI-Image-1’s structure allows it to produce results faster without compromising visual depth, offering an efficiency balance often difficult for larger and slower models. This speed is intended to help users cycle through multiple drafts or creative variations in less time, making it easier to transfer results into other editing tools for further refinement.

While the model’s visual strength has drawn attention, Microsoft has equally emphasized its commitment to safe deployment. For the moment, MAI-Image-1 remains in public testing on LMArena, a community leaderboard where participants can generate images and provide feedback. This phase allows the company to monitor how the model performs in everyday scenarios and gather data to guide updates before a wider release.

The company plans to integrate MAI-Image-1 into Copilot and Bing Image Creator, expanding its reach across Microsoft’s ecosystem of productivity and search tools. This inclusion would make photorealistic image generation available to a broad base of users directly inside products many people already use daily.

Internally, the model also signals a wider ambition. Microsoft has been gradually moving toward a portfolio of in-house AI systems capable of standing alongside the models it sources from partners. Earlier in the year, it unveiled its first two proprietary models aimed at text and multimodal understanding. MAI-Image-1 extends that trajectory into visual creation, reinforcing a long-term plan to align AI capabilities with the company’s broader software ecosystem.

In essence, this release represents both a technological and strategic step: a more autonomous Microsoft AI stack designed to evolve independently while maintaining compatibility with existing tools. The model’s blend of speed, realism, and control suggests the company is not only refining how AI images are produced but also how such tools fit into the creative process itself.

As testing continues, MAI-Image-1’s eventual rollout across Microsoft’s platforms will likely determine whether this internal direction can match or surpass the established players in generative imagery. For now, its top-tier ranking on LMArena indicates that Microsoft’s shift toward home-grown AI systems is beginning to find traction.

Notes: This post was edited/created using GenAI tools. 

Read next: Americans Face a Global Fraud Storm as AI Erodes Consumer Trust
by Asim BN via Digital Information World

Monday, October 13, 2025

Americans Face a Global Fraud Storm as AI Erodes Consumer Trust

New research shows that Americans are navigating more scams than anyone else in the world, reflecting a broader global shift toward what experts are calling a “trust nothing” era. The Ping Identity 2025 Consumer Survey, based on responses from more than 10,000 people across 11 countries, reveals how artificial intelligence is reshaping the fraud landscape and undermining confidence in digital security.

America Leads the World in Scam Exposure

The survey found that the average American encounters roughly 100 scam attempts every month, far higher than the global average. Each week, people in the United States receive about nine scam calls, nine fraudulent emails, and seven suspicious texts. That pace leaves Americans dealing with about 25 scam contacts per week.
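
Those weekly figures square with the monthly estimate; here is a quick back-of-envelope check (an illustration only, not the survey's own methodology):

```python
# Back-of-envelope check of the survey's US figures (illustration only,
# not Ping Identity's methodology).
weekly_contacts = 9 + 9 + 7                    # scam calls + emails + texts per week
monthly_estimate = weekly_contacts * 52 / 12   # average weeks per month ~ 4.33
print(weekly_contacts)                         # 25 contacts per week
print(round(monthly_estimate))                 # ~108, i.e. "roughly 100" per month
```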

By comparison, the United Kingdom averages 84 scam attempts per month, while Australians handle around 52. Singapore reports the lowest levels, with only 40. These figures suggest the United States now sits at the epicenter of global fraud activity, with both human deception and AI-generated manipulation increasing the risk.

Spam inboxes illustrate how bad the problem has become. Americans and Brits each have more than 350 unread messages flagged as spam, while Indonesians have fewer than 160.

The Daily Flood of Fraud

Scam messages arrive through almost every channel imaginable — phone calls, emails, texts, and social media platforms. People around the world now receive an average of five spam messages per week on their social media accounts, adding yet another layer to the problem.

When scam messages appear, most people act quickly: 53 percent delete them immediately, and 52 percent block the sender. However, a significant minority in India and the United Arab Emirates prefer to verify the sender’s address before taking any action, showing different regional habits in dealing with fraud.

Despite widespread caution, phone calls remain a key weak spot. Nearly half of Indians (46 percent) and more than a third of Brits (35 percent) admit they sometimes answer calls marked “potential spam.” In the U.S., 31 percent still do, despite knowing the risks.

Confidence Is Collapsing

The research paints a worrying picture of declining public confidence. Only 23 percent of global respondents said they feel very confident in recognizing a scam. Among Americans, that number aligns closely with the global average.

Trust in institutions and brands is also in decline. Just 17 percent of respondents worldwide said they fully trust organizations that manage their identity data. More than a quarter said they have little or no trust at all. Only 14 percent trust large global enterprises, while 20 percent favor regional or local brands.

France reported the lowest levels of trust, with just 8 percent of respondents expressing full confidence in data-handling organizations. The United Arab Emirates stood out as the most trusting country, with 37 percent saying they have full confidence in those who manage their identity data.

AI Intensifies Fraud and Fear

Artificial intelligence is reshaping not only the types of scams people face but also how they perceive digital safety. According to the survey, 68 percent of respondents now use AI in their daily lives (a sharp increase from 41 percent the previous year) and this familiarity has brought new anxieties.

About three-quarters of respondents said they are more concerned about their personal data than they were five years ago. Among their top fears are AI-driven phishing, voice cloning, and deepfake impersonations.

Thirty-nine percent listed AI-generated phishing as the most concerning emerging fraud type. Fake apps that imitate legitimate services followed closely at 38 percent. Deepfake video and audio attacks ranked third at 32 percent, while voice cloning scams came in at 31 percent. Nearly 30 percent cited synthetic identity fraud, where criminals combine real and fake data to create entirely new identities.

Different Fears in Different Places

The survey shows striking differences across countries. Australians expressed the greatest worry over how companies use and store personal data with AI systems, with 34 percent citing transparency concerns. In Singapore, nearly four in ten respondents were most afraid of deepfake impersonations and AI-generated voice cloning. Swedes, in contrast, were among the least concerned about AI impersonation, with just 14 percent mentioning it.

Across all regions, financial fraud remains the top fear at 46 percent, followed by personal data breaches at 25 percent. A quarter of respondents said storing passwords or payment details on social platforms made them feel especially vulnerable.

Password Fatigue and the Rise of Passkeys

Weak password habits continue to drive much of the risk. The average respondent uses 12 passwords for work and 17 for personal accounts, spreading their security thin. Forgetting or misplacing passwords (38 percent) happens more often than using multi-factor authentication (30 percent).

The study points to passkeys and biometric authentication as safer options. About 34 percent said fingerprint or facial recognition would make them feel more secure, while 33 percent favored multi-factor authentication. In Indonesia, preference for passkeys reached 44 percent, second only to biometric methods, which topped 60 percent.

A Reluctance to Stay Online

As digital risks rise, many people are willing to give up parts of their online lives to protect themselves. Globally, 40 percent said they would leave social media altogether rather than risk identity theft. One in three would stop online shopping, and more than a quarter would quit online banking.

In Australia, 26 percent said they would abandon streaming services to stay safe. Meanwhile, 22 percent of Germans would stop using travel planning apps, while 36 percent of Dutch respondents said they would give up nothing — reflecting lower overall anxiety levels in the Netherlands.

The Demand for Regulation

Three-quarters of respondents said they believe governments should regulate AI to protect personal identity data. Support for regulation is strongest in Indonesia (74 percent) and lowest in Sweden (31 percent). Yet fewer than half of people worldwide believe they are sufficiently informed or protected by government or online safety organizations.

This gap between public expectation and institutional response underscores how much uncertainty surrounds AI and digital identity. Even as people expect stronger protections, they remain skeptical about whether governments or corporations can provide them.

Toward a Fragile Future of Trust

Behind the statistics lies a clear global mood: anxiety, exhaustion, and distrust. Consumers are navigating an online world that feels increasingly unsafe, with AI transforming not only how scams are created but how believable they appear.

Yet the research also shows signs of resilience. While full trust is rare, 61 percent of respondents said they have at least some level of trust in organizations managing their data, a sign that improvement is possible. Biometric logins, passkeys, and transparent data policies could help rebuild this fragile confidence.

For Americans, however, the path forward looks steep. Facing nearly twice as many scams as people in most other countries, they are living at the forefront of the global fraud problem. With AI accelerating deception and trust in free fall, the question now is not just how to stop the scams, but how to restore faith in the digital world itself.

Notes: This post was edited/created using GenAI tools.

Read next: Under Pressure, Even Trained Users Miss the Signs of Phishing
by Irfan Ahmad via Digital Information World

Hebbia Transforms Financial Analysis Through Strategic Microsoft Azure AI Partnership

The artificial intelligence landscape continues to shift as specialized platforms forge critical infrastructure partnerships to deliver enterprise-ready solutions. A significant development emerged when Hebbia, a leading AI platform for finance, announced the integration of GPT-5, available through Microsoft Azure AI Foundry, into its flagship platform. This collaboration between Hebbia and Microsoft Azure represents more than a technical partnership: it signals a fundamental transformation in how financial institutions process complex information and make strategic decisions.

Breaking Down the Partnership Architecture

The technical foundation of this collaboration centers on GPT-5's advanced reasoning capabilities combined with Hebbia's intuitive AI interface, creating a system that fundamentally changes how financial professionals interact with vast document repositories. By pairing that interface with Microsoft's secure Azure infrastructure, the platform eliminates time-consuming document review, enabling finance teams to supercharge their workflows with enterprise-grade reliability and security.

Danny Wheller, VP of Business and Strategy at Hebbia, articulated the partnership's strategic value: "Integrating Microsoft Azure AI Foundry into Hebbia is about more than speed — it's about giving financial professionals a new edge in generating alpha. By cutting through noise to surface the numbers and drivers that truly matter, teams can build and test investment cases in hours instead of days, with every step traceable, secure, and grounded in real market data."

The partnership leverages GPT-5 in Azure AI Foundry, which pairs frontier reasoning with high-performance generation and cost efficiency, delivered on Microsoft Azure's enterprise-grade platform. This combination enables organizations to transition confidently from pilot programs to full-scale production deployments, addressing a critical need in the financial services sector for scalable AI solutions.
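
For developers curious about the plumbing, the sketch below shows the general pattern of calling a model deployed in Azure AI Foundry using the openai SDK's Azure client. The endpoint, deployment name ("gpt-5"), and prompt are assumptions for illustration; Hebbia's actual integration code is not public.

```python
# A minimal sketch of calling a model deployed in Azure AI Foundry via the
# openai SDK's Azure client. Endpoint, deployment name, and prompt are
# hypothetical; this is not Hebbia's integration.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="gpt-5",  # the Azure *deployment* name, assumed here
    messages=[
        {"role": "system", "content": "You extract key figures from financial filings."},
        {"role": "user", "content": "Summarize the revenue drivers in this 10-K excerpt: ..."},
    ],
)
print(response.choices[0].message.content)
```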

Strategic Benefits for Financial Services

The partnership delivers concrete advantages across multiple dimensions of financial operations. With advanced AI embedded in Hebbia's Matrix platform, professionals can uncover critical insights they'd otherwise miss and accelerate high-value tasks — from due diligence and market intelligence to deal sourcing, contract analysis, and regulatory compliance.

Zia Mansoor, CVP of Cloud & AI Platforms at Microsoft, emphasized the transformative potential: "Combining Microsoft Azure AI Foundry with Hebbia's platform exemplifies how generative AI is reshaping the future of financial services. By joining together secure, scalable infrastructure and cutting-edge AI, we're helping financial institutions move beyond manual analysis and toward more strategic, insight-driven decision-making."

The platform's capabilities extend beyond simple document processing. With GPT-5's advanced reasoning in Hebbia, financial teams can pinpoint critical figures across thousands of documents and structure complex financial analysis with speed and accuracy. This precision enables them to tackle increasingly sophisticated analytical challenges while maintaining the transparency and auditability required in regulated environments.

The Power of Strategic Technology Partnerships

This collaboration exemplifies broader trends in the AI ecosystem where companies are using AI, both generative and analytical, as a catalyst for new ways to work together. The partnership model has become increasingly critical as AI development requires substantial infrastructure, diverse data sets, and specialized expertise that few organizations can develop independently.

Recent industry analysis highlights how "These partnerships will provide them with diverse data sets that will help them to train their AI models better and generate more accurate outputs", according to Sameer Patil, director of the Centre for Security, Strategy & Technology at Observer Research Foundation. This collaborative approach accelerates innovation while distributing development costs and risks across multiple stakeholders.

The financial services industry particularly benefits from such partnerships: because AI agents are only partly autonomous, they require a human-led management model. By combining Microsoft's infrastructure expertise with Hebbia's domain-specific knowledge, the partnership creates solutions that balance automation with human oversight, a critical requirement in financial decision-making.

Understanding the AI Platform's Capabilities and Growth

Founded in 2020 by George Sivulka, Hebbia has raised $130 million in Series B funding at a roughly $700 million valuation led by Andreessen Horowitz, with participation from Index Ventures, Google Ventures, and Peter Thiel. The company's rapid ascent reflects the pressing need for sophisticated AI tools in financial services.

The platform's Matrix product represents a significant advancement in financial AI applications. Users can upload documents or integrate with data sources to instantly structure, analyze, and surface insights, enabling rapid, citation-backed research, deal sourcing, diligence, memo drafting, portfolio monitoring, credit underwriting, credit agreement analysis, and risk assessment.

Customer adoption has been remarkable, with Hebbia powering AI-driven decisions for BlackRock, KKR, Carlyle, and 40% of the largest asset managers by AUM. The platform currently helps manage over $15 trillion in assets globally, demonstrating its critical role in modern financial infrastructure.

Expanding Capabilities Through Strategic Acquisitions

The company's growth strategy extends beyond partnerships to strategic acquisitions. In June 2025, Hebbia announced its acquisition of FlashDocs, a leader in generative AI slide deck creation. This acquisition addresses what CEO George Sivulka described as a "last-mile problem" in financial workflows.

The acquisition expands Hebbia's platform beyond information retrieval and agentic workflows into content generation, with FlashDocs currently automating 10,000+ slides per day for leading AI and enterprise companies. Adam Khakhar, CTO and co-founder of FlashDocs, explained the strategic value: "Now Hebbia is not just surfacing insights but generating the final outputs that matter most in finance: investment memos, board decks, diligence summaries."

Financial Performance and Market Position

The company's financial trajectory has been exceptional. "Over the last 18 months, we grew revenue 15X, quintupled headcount, and drove over 2% of OpenAI's daily volume," according to founder George Sivulka. Hebbia had ARR of $13 million, and the company was profitable at the time of its Series B funding, demonstrating sustainable business fundamentals alongside rapid growth.

The platform serves a diverse client base, including KKR, MetLife, and the U.S. Air Force, extending beyond traditional financial institutions to government and military applications. This diversity reflects the platform's versatility in handling complex document analysis across various domains.

Future Implications for Financial Technology

The Microsoft Azure AI Foundry partnership positions Hebbia at the forefront of a fundamental shift in financial services technology. AI stands out from earlier inventions because it offers more than access to information: it can summarize, code, reason, engage in dialogue, and make choices. The technology promises to democratize sophisticated financial analysis capabilities.

Looking ahead, the availability of GPT-5 through Azure AI Foundry represents just the beginning. As developers increasingly need an end-to-end platform that seamlessly connects code, collaboration, and cloud, partnerships like this one establish the foundation for next-generation financial applications that combine human expertise with AI capabilities.

Navigating the Competitive Landscape

The financial AI sector has become increasingly competitive, with multiple players vying for market share. However, Hebbia's approach of combining deep financial domain expertise with cutting-edge infrastructure partnerships creates significant competitive advantages. The platform's ability to handle dense files and answer users' inquiries concisely, accurately, and in precisely the form needed differentiates it from more generic AI solutions.

Industry observers note that customers are redefining how they work through the platform, using Hebbia to gain insights that were never before possible. During the SVB crisis, for instance, asset managers instantly mapped exposure to regional banks across millions of documents, demonstrating the platform's value in time-critical scenarios.

Shaping the Future of Financial Analysis

The strategic partnership between Hebbia and Microsoft Azure AI Foundry exemplifies how specialized AI companies can leverage infrastructure alliances to deliver transformative solutions. By combining domain expertise with enterprise-grade infrastructure, the collaboration enables financial institutions to navigate increasingly complex markets with unprecedented speed and accuracy.

As the financial services industry continues its digital transformation, partnerships that balance innovation with security, scalability with specialization, will determine which solutions ultimately succeed. This collaboration demonstrates how strategic alliances can accelerate the deployment of AI technologies while maintaining the rigorous standards required in financial services, setting a blueprint for future industry partnerships.


by Web Desk via Digital Information World

Under Pressure, Even Trained Users Miss the Signs of Phishing

People are more likely to fall for phishing scams when their attention is split across several tasks. New research led by Milena Head at McMaster University shows that distraction, not ignorance, often causes these errors.

The study, published in the European Journal of Information Systems, looked at how mental workload affects people’s ability to judge whether an email is legitimate. Participants who had to remember longer sets of numbers were less accurate in spotting phishing attempts. Those under heavier mental load were also less confident in their decisions.

Researchers say phishing detection is a thinking task, not an automatic reaction. When the mind is busy, the mental reminder to “check this message carefully” often fades before a person can decide what to trust.

Mental Load Reduces Accuracy

The experiments involved more than 900 participants who reviewed both real and fraudulent emails. Each person performed a memory task before judging the messages. When the task was simple, detection accuracy was higher. When it was harder, accuracy dropped.

Data from the first experiment showed that high memory load had a measurable negative effect on detection accuracy (β = −.124, p = .049) and decision quality (β = −.066, p = .008). This pattern confirmed what many workplaces see in practice: multitasking reduces focus and leads to quick, sometimes wrong, decisions.
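
For readers unfamiliar with the notation, coefficients like β and p typically come from regressing detection outcomes on the load manipulation. The minimal sketch below, using synthetic data and the statsmodels library, shows the general shape of such an analysis; it is not the study's dataset, model specification, or code.

```python
# A minimal sketch of the kind of regression behind figures like
# "beta = -.124, p = .049". Synthetic data; not the study's dataset or model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 900                                # roughly the study's sample size
memory_load = rng.integers(0, 2, n)    # 0 = low load, 1 = high load
# Simulate a small negative effect of load on detection accuracy.
accuracy = 0.75 - 0.12 * memory_load + rng.normal(0, 0.3, n)

X = sm.add_constant(memory_load)       # intercept plus the load predictor
fit = sm.OLS(accuracy, X).fit()
print(fit.params[1], fit.pvalues[1])   # estimated beta and p-value for load
```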

People who were confident in their cybersecurity skills did not necessarily perform better. Some overestimated their ability and became less cautious. Messages that looked familiar also reduced attention, especially when participants were juggling other tasks. The researchers observed that mental effort from one activity can spill into another, making it harder to focus. “When cognitive demands are high, users may never retrieve the goal of phishing detection at all,” the study explains.

Simple Cues Help Refocus the Mind

The second experiment tested whether a short reminder could offset this problem. After reading a short memo, half of the participants saw a quick message reminding them to watch for phishing before they checked their inbox.

That short prompt improved accuracy and decision quality (β = .230, p < .001). It acted as a mental cue, helping people recall their security goal at the right moment. The negative effect of memory load was weaker when reminders appeared, which suggests that a well-timed message can restore focus even under pressure.

These reminders worked best for emails framed around rewards or refunds, known as “gain-framed” messages. Such messages often escape suspicion because they appear positive. Loss-framed messages, like account warnings, already triggered more caution and showed smaller improvement.

Gender differences also appeared. Male participants showed a larger boost from reminders, though the researchers said this pattern needs more investigation.

What the Findings Mean for Training

The research challenges how most organizations train people to detect phishing. Many awareness sessions happen in quiet settings, far from the fast-paced reality of everyday work. The study suggests that detection exercises should include distractions to reflect real conditions.

Practical systems could also help. A context-aware tool might track when a user is switching tasks or typing rapidly, then deliver a subtle alert before they open new emails. Training programs could schedule phishing simulations during peak work hours to capture how attention works under stress.
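
As a rough illustration of what such a tool might look like, here is a minimal Python sketch. The class name, thresholds, and triggering logic are all hypothetical; the study does not specify an implementation.

```python
# Hypothetical context-aware reminder: names, thresholds, and triggering
# logic are illustrative; the study does not prescribe a design.
import time

class PhishingReminder:
    """Show a brief security cue when recent task-switching suggests
    the user's attention is divided."""

    def __init__(self, switch_threshold: int = 3, window_seconds: float = 60.0):
        self.switch_threshold = switch_threshold
        self.window_seconds = window_seconds
        self.switch_times: list[float] = []

    def record_task_switch(self) -> None:
        """Call whenever the user changes windows or applications."""
        now = time.monotonic()
        cutoff = now - self.window_seconds
        # Keep only switches inside the rolling window, then log this one.
        self.switch_times = [t for t in self.switch_times if t >= cutoff]
        self.switch_times.append(now)

    def on_open_inbox(self) -> str | None:
        """Return a reminder only under high load, so cues stay rare."""
        if len(self.switch_times) >= self.switch_threshold:
            return "Reminder: check the sender and links before trusting this email."
        return None

reminder = PhishingReminder()
for _ in range(3):
    reminder.record_task_switch()   # simulate a burst of task switching
print(reminder.on_open_inbox())    # the cue fires; otherwise returns None
```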

The study’s data show that even small reminders can make a measurable difference. They don’t need to interrupt work or appear constantly. Timing is more important than volume.

With billions of phishing emails circulating every day, small improvements in detection can have a broad effect. As the researchers conclude, mental overload, not lack of awareness, is often the cause of these mistakes. Understanding how attention works under strain may help organizations protect employees at the moments they are most likely to slip.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

The AI Boss Effect: How ChatGPT Is Quietly Replacing Workplace Guidance

People Struggle to Tell AI from Doctors, and Often Trust It More


by Irfan Ahmad via Digital Information World

Sunday, October 12, 2025

The AI Boss Effect: How ChatGPT Is Quietly Replacing Workplace Guidance

A growing number of employees are turning to artificial intelligence for answers they once sought from their managers. What began as a curiosity has become a daily routine that shapes how people think, communicate, and make decisions at work. A recent survey of U.S. workers shows that this reliance is no longer limited to tech-savvy staff or specific roles. It has become a cross-industry behavior many describe as the “AI Boss Effect,” where workers treat tools like ChatGPT as a trusted adviser.

The survey, conducted by Resume Now in mid-2025, included 968 employees across different fields. Nearly everyone questioned (97 percent) said they had asked ChatGPT for advice instead of turning to their boss. Around 63 percent said they do this regularly. The responses suggest that AI is filling gaps in communication, confidence, and trust that once existed between managers and their teams.

Why Workers Now Ask ChatGPT Instead of Their Boss

Many employees find it easier to approach AI than a human supervisor. The reasons are varied but often stem from workplace tension and fear of judgment. About 57 percent said they worry about possible retaliation for asking sensitive questions. Another 38 percent admitted they avoid asking their manager for help because they do not want to appear incompetent.

At the same time, 70 percent of those surveyed said ChatGPT seems to understand their work challenges better than their manager does. Roughly half said AI tools are faster and more convenient when they need a quick answer. These responses show that employees are not necessarily rejecting their managers, but they are looking for safer and more efficient ways to get guidance.

For many, the appeal lies in the privacy and neutrality of AI. There is no visible hierarchy, no office politics, and no social discomfort. It gives employees space to think through problems without the pressure of being watched or judged.

How Workers Are Using ChatGPT Day to Day

Beyond seeking advice, many employees are using ChatGPT as a practical assistant for everyday communication and planning. According to the survey data, 93 percent have used it to prepare for a conversation with their boss. About 61 percent have sent a message written with ChatGPT’s help. Another 57 percent rely on it for writing or editing work-related documents, from reports to routine emails.

More than half said they use ChatGPT for creative thinking or brainstorming, while 52 percent turn to it for coding or debugging. About 40 percent rely on it for research or summarizing information, and 35 percent said they use it to draft a message before revising it themselves. These figures show how AI is no longer just an optional productivity tool. It has become part of the professional thought process for many people, shaping how they write, reason, and solve problems throughout the day.

Emotional Support from an Unlikely Source

Another notable finding is that employees are beginning to see ChatGPT as a source of emotional balance. A majority said they would feel comfortable talking about stress or mental health with an AI assistant. Almost half of the respondents (49 percent) said ChatGPT has provided more emotional support than their manager during times of work-related stress.

This kind of use signals a subtle but important shift. It suggests that AI is becoming a stand-in for emotional safety at work, especially in environments where employees feel unheard or under pressure. Workers appear to be using AI not only for guidance but for reassurance and composure when human empathy feels distant.

The Link Between Productivity and AI Access

For many workers, productivity now depends heavily on access to ChatGPT. The survey shows that 77 percent believe losing access would harm their output, and 44 percent think it would seriously affect their ability to perform. About 72 percent said the advice they get from ChatGPT is better than what they receive from their boss. More than half (56 percent) believe it has doubled their productivity, while 26 percent said it improves their performance significantly. Only 2 percent said it has no impact at all.

These results reveal how central AI tools have become in the modern workplace. Many employees treat ChatGPT as both a problem solver and a thinking companion that helps them stay organized and efficient.

A Growing Shift in Workplace Trust

The widespread use of AI also raises new questions about transparency and fairness. Around 91 percent of respondents said they have suspected that an AI system made an unfair decision affecting their job. This shows that while workers rely on AI, they also want greater clarity about how it operates.

It appears employees are willing to trust AI as a personal tool, but they remain cautious about how companies apply it in decision-making. They want openness from their employers about where and when AI systems are being used.

What This Means for Leaders

For managers, this growing trend highlights an important gap. Employees are not using ChatGPT because they dislike their supervisors; they use it because it feels easier, faster, and safer. The data reflects a growing need for reassurance and consistency. AI provides those qualities instantly, but good management requires them too.

This pattern offers a lesson rather than a warning. Leaders who adapt by being more available, more empathetic, and more transparent can rebuild the kind of trust that prevents workers from turning to machines for human understanding. The “AI Boss Effect” is less about machines taking over and more about what employees are missing.

Workplaces that recognize this change early may find that the most effective approach is not competition between managers and technology but collaboration between the two. When AI handles structure and clarity, human leadership can focus on what it does best: building trust and supporting people through the parts of work that technology cannot feel.

Read next: People Struggle to Tell AI from Doctors, and Often Trust It More


by Web Desk via Digital Information World