Wednesday, October 15, 2025

YouTube Refreshes Its Look and Experiments With AI for Realistic Lip-Syncing

YouTube has introduced a series of updates aimed at improving both the user experience and creator tools. The platform’s changes span interface adjustments, expanded interactive features, and early experiments with AI-driven video translation technology.

The interface updates focus on making content more accessible and engaging. The playback display has been redesigned to provide a more immersive viewing experience. Translucent buttons now overlay videos, while double-tap gestures have been refined. Users can quickly skip forward or backward, with the on-screen text showing the exact seconds moved. Skip durations can be customized in five-second increments. Video descriptions have also been refreshed, adopting color accents drawn from the video content itself. The changes are particularly intended to enhance Shorts and Connected TV viewing.

Comment sections have also been upgraded. YouTube is now rolling out threaded comments, allowing up to three levels of replies. Additional responses beyond the third level appear as flattened comments. This adjustment aims to make conversations beneath videos easier to follow. In conjunction with threading, the platform has introduced custom like animations. Reactions now vary based on content type, such as musical notes for music videos or sports-themed animations for athletics content.
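The capped-nesting behaviour described above can be sketched in a few lines of Python. This is purely illustrative (not YouTube's actual code): replies indent up to a maximum depth, and anything deeper renders flattened at that same level.

```python
# Illustrative sketch (not YouTube's implementation): render a comment
# tree that nests replies up to three levels, flattening anything deeper.
from dataclasses import dataclass, field

MAX_NESTING = 3  # assumption drawn from the article: three levels of replies

@dataclass
class Comment:
    text: str
    replies: list["Comment"] = field(default_factory=list)

def render(comment: Comment, depth: int = 0) -> list[str]:
    """Indent by depth up to MAX_NESTING; deeper replies keep the max indent."""
    indent = "  " * min(depth, MAX_NESTING)
    lines = [f"{indent}{comment.text}"]
    for reply in comment.replies:
        lines.extend(render(reply, depth + 1))
    return lines

thread = Comment("Great video!", [
    Comment("Agreed.", [
        Comment("Same here.", [
            Comment("Me too.", [
                Comment("This deep reply is flattened."),
            ]),
        ]),
    ]),
])
print("\n".join(render(thread)))
```

The fourth-level reply gets the same indentation as the third, which is the flattening behaviour the rollout describes.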

Voice replies are another feature receiving wider access. Previously available to a limited number of creators, the tool now allows several hundred thousand creators to respond to comments using short voice notes of up to 30 seconds. Voice replies can be recorded both in the main app and through Studio Mobile, offering a more personal way for creators to engage with their audiences.


YouTube’s courses feature is expanding as well. Initially tested with a small group of creators, this feature allows channels to offer free or paid learning programs. Courses now display a dedicated badge on the Watch page and may appear on youtube.com/courses for broader visibility. Creators with access to advanced features can monitor course performance through detailed analytics, including views, watch time, and revenue. This expansion opens additional monetization opportunities for channels that produce educational or specialist content.

In terms of content moderation, YouTube is refining its fixable violations process. Creators with advanced features can now revise videos that received an official warning, providing a way to address minor issues without facing full removals or penalties. Limits apply, including one fix attempt per video and exclusion for more severe policy violations.

Alongside these updates, YouTube is testing an AI-powered lip-sync feature designed to enhance its auto-dubbing capabilities. The tool adjusts facial movements to align with translated audio, improving the visual consistency of dubbed videos. Early testing shows the feature works best in Full HD and supports translations in English, French, German, Spanish, and Portuguese. Over time, the goal is to extend lip-syncing to all languages covered by YouTube’s auto-dubbing system, including Hindi, Japanese, Korean, and many others. Access is currently limited to select creators as the platform evaluates performance, compute requirements, and quality. YouTube also plans to clearly disclose when videos have been synthetically altered.

Together, these updates reflect YouTube’s focus on enhancing interactivity, creator engagement, and accessibility across its platform. Users can expect a refreshed visual experience, more nuanced social interactions, new monetization and educational tools, and the beginnings of AI-assisted translation and lip-syncing, signaling a continued evolution of the platform.

Read next: 

• Meta Expands Teen Protections With Stricter Content Rules

• Signals From Space: Study Finds Unencrypted Military, Telecom, and Retail Traffic Across U.S. Skies
by Web Desk via Digital Information World

Signals From Space: Study Finds Unencrypted Military, Telecom, and Retail Traffic Across U.S. Skies

A large share of satellite data moving above North America has been found unprotected, exposing communications from mobile carriers, corporations, and even military networks. The discovery came from a joint investigation by researchers at the University of California, San Diego, and the University of Maryland, who spent seven months scanning the sky with low-cost satellite equipment.

The team examined signals from 39 geostationary satellites and 411 transponders visible from La Jolla, California, using a consumer-grade motorized dish and custom-built software that cost under $800.


Their scans revealed that about half of all links to high-orbit satellites were transmitting data in cleartext, leaving large volumes of voice calls, text messages, and operational data accessible to anyone with similar hardware.

How the discovery happened

The researchers built an automated ground station capable of aligning to each satellite and decoding raw traffic. By developing a universal parser that could handle seven different and often proprietary communication stacks, they overcame a major obstacle that limited earlier studies. This allowed them to recover six times more data packets than previous research tools could handle.

Across their recordings, they found that link-layer encryption, long used in satellite television, was rarely enabled for internet or private network connections. Many organizations treated satellite backhaul as an internal network path, assuming the signal was secure by nature of being in space. Instead, the study showed that such signals could be intercepted from Earth with off-the-shelf equipment.
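One standard heuristic for flagging cleartext at scale (a generic technique, not the researchers' actual parser) is Shannon entropy: properly encrypted payloads look like uniform random bytes, close to 8 bits per byte, while plaintext protocols such as SIP or LDAP score far lower.

```python
# Generic entropy heuristic for telling cleartext from encrypted payloads.
# Not the study's tooling -- just the standard byte-entropy check.
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not payload:
        return 0.0
    counts = Counter(payload)
    total = len(payload)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    # Threshold is a common rule of thumb, not a value from the study.
    return byte_entropy(payload) >= threshold

cleartext = b"INVITE sip:alice@example.com SIP/2.0\r\nVia: SIP/2.0/UDP host\r\n"
print(byte_entropy(cleartext))  # ASCII protocol text scores well below 7.5
```

A scanner applying a check like this to recovered link-layer frames would quickly surface the kind of unencrypted backhaul the study found.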

What the researchers uncovered

The investigation exposed unencrypted traffic from a range of industries and government bodies. Telecommunications companies were among the first identified. In one case, the team intercepted T-Mobile cellular backhaul data from rural tower links, including unprotected text messages, call metadata, and voice data sent through the IP Multimedia Subsystem. T-Mobile confirmed the finding and stated that the problem affected less than 0.1% of its remote sites. Encryption has since been applied.

Similar unencrypted traffic was found from AT&T Mexico, which revealed control signals and internet sessions routed through its satellite backhaul. Another carrier, KPU Telecommunications in Alaska, was found to be transmitting unencrypted VoIP data during backup link operation.

Beyond telecom systems, the team recorded internal communications from major companies and institutions. Walmart Mexico’s satellite links exposed login credentials, inventory records, and internal email traffic. Grupo Santander, Banorte, and Banjército were also affected, with plaintext LDAP and ATM network data visible across satellite channels. The researchers traced these signals through identifiable IP ranges and organizational domains.

Government and military exposure

The analysis also uncovered sensitive government transmissions. Two Mexican government and military links were found to be broadcasting unencrypted operational data, including personnel files, military asset locations, and live surveillance records. Some traffic even contained web application data related to law enforcement and narcotics tracking systems.

From another satellite, the researchers intercepted communication originating from U.S. military vessels. That traffic included plaintext DNS and SIP signaling data, which identified ship names that matched known naval assets. While some encryption was present in isolated links, several channels still carried unencrypted packets mixed with ordinary network traffic.

Why encryption is missing

The study found that the lack of encryption was not due to technical limits. Most satellite terminals and hubs include encryption options at the physical, link, or network layer. In practice, many organizations disable it to save bandwidth, reduce latency, or avoid troubleshooting difficulties. In some cases, encryption licenses for satellite systems carry extra costs, leading operators to rely on trust in isolation rather than protection.

Another factor is operational inertia. Satellite systems often run for years without full audits, and responsibility for encryption can shift between providers, resellers, and end users. As a result, critical communications (covering everything from industrial control systems to financial data) remain open to interception.

Broader implications

The researchers disclosed their findings between December 2024 and mid-2025 to affected organizations, including T-Mobile, AT&T, Intelsat, Panasonic Avionics, and several government agencies. Many have since applied fixes, but not all. The study warns that the same patterns likely exist beyond North America, given that the same satellite equipment and protocols are used globally.

The team has made its scanning tools publicly available to encourage independent verification and stronger encryption adoption. They note that while low-Earth orbit networks like Starlink already employ modern cryptographic frameworks, traditional geostationary links remain a significant blind spot in network security, one that sits quietly above most of the planet.

Notes: This post was edited/created using GenAI tools.

Read next:

• How Technical Glitches Quietly Drain U.S. Developer Productivity

• The Truth About Dopamine Detoxes: Can You Really Reset Your Brain?


by Asim BN via Digital Information World

Tuesday, October 14, 2025

How Technical Glitches Quietly Drain U.S. Developer Productivity

Ask most teams how they measure developer productivity, and you’ll likely hear the usual suspects: lines of code, features shipped, sprint goals met. But those metrics don’t tell the whole story. Behind that visible progress is a quieter problem, one that doesn’t show up in dashboards or JIRA boards.

According to Lokalise’s Developer Delay Report, technical issues are quietly eating into developers’ time, energy, and focus. And while the impact might not be obvious at first glance, it adds up fast, and gets expensive even faster.

The Hidden Time Suck Developers Deal With Every Week

Everyone knows bugs happen. Downtime? Sure, that’s part of the game. But what Lokalise found is that these "expected" disruptions aren’t rare exceptions; they’re a regular part of the work week for most developers.

Their survey of 500 U.S. devs found that engineers lose an average of three hours per week to avoidable issues, stuff like broken tools, flaky workflows, or missing documentation. That may not seem outrageous in isolation, but when you do the math, it stings: that’s about 20 full workdays per year, per developer.

Now tie that to a salary. If a developer earns $100,000 annually, those 20 days of lost productivity equate to around $8,000 flushed away per person. For a 10-person team, that’s $80,000. For a team of 100, you’re burning close to a million dollars a year on technical friction alone.
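The arithmetic behind those figures is easy to check. The snippet below uses the report's inputs (3 hours/week, a $100,000 salary, and an assumed 260-workday U.S. year); the exact result lands near $7,500 per developer, which the article rounds, via 19.5 → 20 lost days, to roughly $8,000.

```python
# Back-of-the-envelope cost of lost developer time, using the article's
# figures. The 260-workday year is an assumption (52 weeks x 5 days).
HOURS_LOST_PER_WEEK = 3
WEEKS_PER_YEAR = 52
HOURS_PER_DAY = 8
SALARY = 100_000
WORKDAYS_PER_YEAR = 260

hours_lost = HOURS_LOST_PER_WEEK * WEEKS_PER_YEAR      # 156 hours/year
days_lost = hours_lost / HOURS_PER_DAY                 # 19.5, ~20 workdays
cost_per_dev = SALARY * days_lost / WORKDAYS_PER_YEAR  # ~$7,500/year

for team_size in (1, 10, 100):
    print(f"{team_size:>3} devs: ${cost_per_dev * team_size:,.0f}/year")
```

Scaling linearly, a 100-person team is indeed burning on the order of three-quarters of a million to a million dollars a year on avoidable friction.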

The top culprits?

  • Software bugs and glitches (55%)
  • Downtime from platforms or services (47%)
  • Incomplete or bad documentation (35%)
  • Tool integration problems (24%)
  • Slow code reviews (23%)

What’s striking is how persistent these issues are. It’s not that teams are unprepared, it’s that the issues are baked into the day-to-day experience.

Workarounds Are Just More Work

When something breaks, developers don’t usually wait around. They get scrappy. According to the report, 60% of them say they build their own workarounds. That might sound resourceful, but it just kicks the problem down the road. Nearly all of those devs then spend about an hour a week maintaining those fixes, patches that were never meant to last.

That’s more time lost. More mental energy spent on duct tape instead of delivery.

And then there’s AI. With all the buzz around generative tools, many devs have turned to AI for help. But 42% say AI tools actually slowed them down instead of speeding things up.

Why? Because context matters. AI-generated code often misses the bigger picture: the nuance, the architecture, the naming conventions, the edge cases. So now the team spends extra time reviewing and rewriting what the bot suggested. You’ve swapped one bottleneck for another.

Even when AI-generated output is decent, it’s uneven. Junior developers may lean on it too much, skipping the deeper thinking behind the code. Senior engineers end up cleaning up the mess. And so begins another feedback loop of inefficiency.

When Developers Become Support Staff

It’s bad enough when your own tools break. It’s worse when you’re constantly fixing someone else’s.

According to Lokalise, 61% of developers regularly get pulled into tech support roles that aren’t part of their job. In some cases, the time they spend helping others rivals what they spend fixing their own blockers.

Common support tasks developers get dragged into:

  • Troubleshooting network or connectivity issues (48%)
  • Explaining workflows and documentation (39%)
  • Resolving permissions or access issues (35%)
  • Setting up or configuring tools (34%)
  • Helping others with local environment setup (24%)

These issues are fixable with better documentation, better training, or clearer systems, but instead, they fall on devs already swamped with their own priorities.

Only 16% of developers said they were given proper resources to handle these support asks. One in three got nothing at all. No training. No documentation. Just figure it out and move on.

It’s no surprise that 66% said this extra support work hurts their focus. Nearly half (47%) said they hold off on asking for help themselves, just to avoid looking like they’re not competent.

That’s a culture problem, and a big one. Developers working under pressure, juggling support and delivery, and avoiding help for fear of judgment? That’s how you lose people.

Delays Aren’t Just Common, They’re Costly

Almost half of developers (44%) say they’ve missed a deadline in the past year because of tech-related issues. And these aren’t small blips.

Here’s how much time a single issue can eat up:

  • 30% lost 1–3 hours
  • 29% lost half a workday
  • 18% lost an entire day
  • 11% lost 2–3 days
  • 1% lost a full week, or more

That kind of disruption derails everything. Sprints slip. Dependencies pile up. Product launches push back. It’s not just one person missing a target, it’s entire teams scrambling to re-align.

Then you start rushing. You skip proper testing. You delay writing documentation. You half-fix bugs. And suddenly you’ve created more of the same mess that caused the delays in the first place.

It’s a vicious cycle. And companies that don’t actively manage it pay for it twice, first in lost time, then in tech debt.

Regional Patterns Reveal Deeper Gaps

Lokalise also pulled Google Trends data to see where developers are searching for help the most. The results showed clear differences across states.

Top states for dev troubleshooting searches (per capita):

  • Washington
  • Vermont
  • Massachusetts

Bottom states:

  • Mississippi
  • Louisiana
  • Alabama

The most-searched terms?

  • “segmentation fault”
  • “git revert merge”
  • “Python TypeError”
  • “nginx 502 bad gateway”
  • “macOS kernel panic”
  • “JavaScript undefined”

It’s not just Silicon Valley dealing with cryptic error messages. Developers across the country are hitting the same walls. The difference is, some have better systems in place to fix them, and some are left Googling in the dark.

And in lower-search states, the numbers might reflect something else entirely: less visibility into problems, or fewer support structures to surface them in the first place.

Developers Are Piecing Together Help

When devs get stuck, they don’t go to internal IT. Most of the time, they’re on their own.

Here’s how developers look for help:

  • 41% use a mix of forums, AI, and internal docs
  • 25% turn to public forums like Stack Overflow
  • 19% rely on AI tools
  • 15% use internal documentation or team support

The result? A fragmented, inconsistent support ecosystem.

You end up with devs bouncing from Reddit to ChatGPT to Slack, cross-referencing answers, and guessing what’ll break the least. That’s not a system, it’s a scavenger hunt.

Worse, companies often treat this DIY support as a strength. But it’s not. It’s a failure of internal systems to give developers what they need.

Senior engineers become the default knowledge base. Junior devs fall behind. Knowledge becomes tribal, siloed, and fragile.

What Engineering Leaders Should Actually Do

If you’re leading a team, or a company, and you think this doesn’t apply to you, think again.

This isn’t a niche frustration. It’s a widespread, structural problem. And fixing it isn’t just about morale, it’s about money.

To make progress, teams need:

  • Reliable tools that don’t break the flow
  • Smooth integrations across the stack
  • Simple, intuitive UX for faster ramp-up
  • Real documentation that’s actually helpful
  • Fast, human support when things go sideways

But tooling is just the start. Leadership needs to dig deeper.

  • Run regular workflow reviews
  • Identify friction points before they become culture problems
  • Encourage help-seeking, not silence
  • Build documentation into dev workflows
  • Track internal issues like product bugs
  • Rotate support roles so the burden is shared
  • Budget time and headcount for cleaning up tech debt
  • Set internal SLAs for support, not just for customers

Final Thought: Friction Is the Silent Killer

Your developers aren’t just writing code. They’re context-switching, firefighting, fixing documentation, building bandaids, and cleaning up after AI.

They’re wasting hours not because they’re slow, but because the system is.

And that’s the part most companies miss: developer friction isn’t just annoying, it’s expensive.

If you want your team to move fast, ship confidently, and actually enjoy their work, start by removing the stuff that’s slowing them down.


Read next: The Truth About Dopamine Detoxes: Can You Really Reset Your Brain?
by Irfan Ahmad via Digital Information World

Google Updates Search Ads with a New “Sponsored Results” Design

Google is rolling out a new look for search advertising that changes how sponsored listings appear on its pages. The update adds a larger “Sponsored results” label, grouping text and shopping ads together under a single heading. The company says the redesign will make it easier for users to recognize paid content while keeping navigation clear on both desktop and mobile.

Easier to See, Harder to Ignore

Sponsored results now sit at the top of each search page in a single block. The section can show up to four text ads, similar in size to the previous format. Once a user scrolls past them, a small control appears that lets them hide the ad group with one click. A similar block also appears at the bottom of the page and can only be hidden after it has been viewed.


The change is meant to make it obvious where promotional listings begin and end. By grouping ads together, Google gives users a consistent structure instead of scattered placements across the page. The company says this helps people find what they need faster without guessing which results are paid.

A Step Toward Clearer Labeling

Earlier versions of Search marked each ad individually. The new design replaces that with one shared label that remains visible as users scroll through the top section. It also extends to shopping ads, giving a unified appearance to all paid formats.

The move represents one of Google’s biggest changes to ad presentation in years. It aligns with ongoing efforts to balance transparency with advertiser visibility. While users gain more control over what they see, Google maintains space for businesses that depend on Search traffic.

Impact on Advertisers

The option to hide ad groups could influence how often users engage with sponsored content. Clearer labeling may reduce accidental clicks, but it can also create more meaningful traffic for ads that attract genuine interest. Advertisers may need to focus more on creative quality and relevance to encourage voluntary engagement.

Marketing specialists expect the update to shift attention toward ad effectiveness rather than volume. The ability for users to bypass entire sections means less space for low-value campaigns and a greater emphasis on trust. This could benefit brands that invest in informative, well-targeted messaging.

Linked to AI Overviews

The update arrives as Google expands ads inside its AI Overviews feature to new English-speaking markets. These AI-generated summaries appear when users enter complex or multi-part questions. Ads placed alongside them blend into the generative results rather than appearing in a separate section.

The connection between AI Overviews and Sponsored results shows how search is changing into a more mixed environment. Ads, organic results, and AI summaries now appear closer together, giving users multiple layers of information in one place. For advertisers, this means adjusting to placements that depend not just on ranking but on how AI presents the overall response.

Search with More Control

Together, the new layout and AI integration highlight Google’s attempt to keep user experience and advertising revenue in balance. Users can now identify paid listings more easily and choose when to hide them, while advertisers retain a prominent presence at both ends of the results page.

These updates suggest that search is moving toward a model built around user choice and transparency. Google’s challenge is to make ads feel informative rather than intrusive, and the new Sponsored results format is its latest step toward that balance.

Notes: This post was edited/created using GenAI tools.

Read next:

• Microsoft Builds Its First AI Image Generator From the Ground Up

• The Digital Coin Revolution: Who Really Controls the Future of Currency?


by Irfan Ahmad via Digital Information World

The Digital Coin Revolution: Who Really Controls the Future of Currency?

Throughout history, control over money has been one of the most powerful levers of state authority. Rulers have long understood that whoever issues and manages the currency also commands the economy and, by extension, society itself.

In Tudor England, Henry VIII’s “Great Debasement” between 1542 and 1551 reduced the silver content of coins from more than 90% to barely one-third, while leaving the king’s portrait shining on the surface, of course. The policy financed wars and courtly extravagance, but also fuelled inflation and public distrust in coinage.

Centuries earlier, Roman emperors had resorted to similar tricks with the denarius, steadily reducing its silver content until, by the 3rd century AD, it contained little more than trace amounts, undermining its credibility and contributing to economic instability.

Outside Europe, the same pattern held. In 11th-century China, the Song dynasty pioneered paper money, extending state control over taxation and trade. This was a groundbreaking innovation, but later dynasties such as the Ming over-issued notes, sparking inflation and loss of trust in the currency.

Such episodes underline a timeless truth: money is never neutral. It has always been an instrument of governance – whether to project authority, consolidate control or disguise fiscal weakness. The establishment of central banks, from the Bank of England in 1694 to the US Federal Reserve in 1913, formalised that authority.

Today, the same story is entering a new digital chapter. As Axel van Trotsenburg, senior managing director of the World Bank, wrote in 2024: “Embracing digitalisation is no longer a choice. It’s a necessity.” By this he meant not simply switching to online banking, but making the currencies we use, and the mechanisms for regulating them, entirely digital.

Just as rulers once clipped coins or over-printed notes, governments are now testing how far digital money can extend their reach – both within and beyond national boundaries. Of course, different governments and political systems have very different ideas about how the money of the future should be designed.

In March 2024, Donald Trump, then a former president back on the campaign trail, declared: “As your president, I will never allow the creation of a central bank digital currency.” It was a campaign moment, but also a salvo in a much larger battle – not just over the future of money, but over who controls it.

In the US, the issuance of currency – whether in the form of physical cash or digital bank deposits and electronic payments – has traditionally been monopolised by the Federal Reserve (more commonly known as “the Fed”), a technocratic institution designed to operate independently of the elected government. But Trump’s hostility toward the Fed is well-documented, and noisy.

During his second term, Trump has publicly berated the Fed’s chair, Jerome Powell, calling him “a stubborn MORON” over his interest rate policies, and even floating the idea of replacing him. Trump’s discomfort with the Fed’s autonomy echoes earlier populist movements such as President Andrew Jackson’s 1830s crusade against the Second Bank of the United States, when federal financial elites were portrayed as obstacles to democratic control of money.

In March 2025, when Trump issued an executive order establishing a Strategic Bitcoin Reserve, he signalled the opening of a new front in this institutional battle. By incorporating bitcoin into an official US reserve, the world’s largest economy is, for the first time, sanctioning its use as part of state financial infrastructure.

For a leader like Trump, who has consistently sought to break, bypass or dominate independent institutions – from the judiciary to intelligence agencies – the idea of replacing the Fed’s influence with a state-aligned crypto ecosystem may represent the ultimate act of executive assertion.

Such a step reframes bitcoin as more than an investment fad or criminal fallback; it is being drawn into the formal monetary system – in the US, at least.

America’s crypto future?

Bitcoin is, by a distance, the world’s most valuable cryptocurrency (at the time of writing, one coin is worth just shy of US$120,000), having established a record high in August 2025. Like gold, its value is ensured in part by its finite supply, and its security by the blockchain technology that makes its ledger extremely difficult to tamper with.



For most who buy bitcoins, its key value is not as a currency but a speculative investment product – a kind of “digital gold” or high-risk stock that investors buy hoping for big returns. Many people have indeed made millions from their purchases.

But now, thanks in particular to Trump’s aggressively pro-crypto, anti-central bank approach, bitcoin’s potential role as part of a new form of state-controlled digital currency is in the spotlight like never before.

Trump’s framing of bitcoin as “freedom money” reflects its traditional sales pitch as being censorship-resistant, unreviewable, and free from state control. At the same time, his blurring of public authority and private financial interest, when it comes to cryptocurrencies, has raised some serious ethical and governance concerns.

But the crucial innovation here is that Trump is not proposing a truly libertarian system. It is a hybrid model: one where the issuance of money may become privatised while control of the US’s financial reserve strategy – and associated political and economic narratives – remains firmly in state hands.

This raises provocative questions about the future of the Federal Reserve. Could it be sidelined not through legal abolition, but by the growing relevance of parallel monetary systems blessed by the executive? The possibility is no longer far-fetched.

According to a 2023 paper published by the Bank for International Settlements, a powerful if little-known organisation that coordinates central bank policy globally: “The decentralisation of monetary functions across public and private actors introduces a new era of contestable monetary sovereignty.”

In plain English, this means money is no longer the sole domain of states. Tech firms, decentralised communities and even AI-powered platforms are now building alternative value systems that challenge the monopoly of national currencies.

Calls to diminish the role of central banks in shaping macroeconomic outcomes are closely tied to the rise of what the University of Cambridge’s Bennett School of Public Policy calls “crypto populism” – a movement that shifts legitimacy away from unelected technocrats towards “the people”, whether they are retail investors, cryptocurrency miners or politically aligned firms.

Supporters of this agenda argue that central banks have too much unchecked power, from manipulating interest rates to bailing out financial elites, while ordinary savers bear the costs through inflation or higher borrowing charges.

In the US, Trump and his advisers have become the most visible proponents, tying bitcoin and also so-called “stablecoins” (cryptocurrencies designed to maintain a stable value by being pegged to an external asset) to a broader populist narrative about wresting control from elites.

The emergence of this dual monetary system is causing deep unease in traditional financial institutions. Even the economist-activist Yanis Varoufakis – a long-time critic of central banks – has warned of the dangers of Trump’s approach, suggesting that US private stablecoin legislation could deliberately weaken the Fed’s grip on money, while “depriving it of the means to clean up the inevitable mess” that will follow.

Weaponisation of the dollar

Some rival US nations also feel deep unease about its approach to money – in part because of what analysts call the “weaponisation of the dollar”. This describes how US financial dominance, via Swift and correspondent banking systems, has long enabled sanctions that effectively exclude targeted governments, companies or individuals from global finance.

These tools have been used extensively against Iran, Russia, Venezuela and others – triggering efforts by countries including China, Russia and even some EU states to build alternative payment systems and digital currencies, aimed at reducing dependency on the dollar. As the Atlantic put it in 2023, the US appeared to be “pushing away allies and adversaries alike by turning its currency into a geopolitical bludgeon”.

Spurred on by these concerns and an increasing desire to delink from the dollar as the world’s anchor currency, many countries are now moving towards creating their own central bank digital currencies (CBDCs) – government-issued digital currencies backed and regulated by state institutions.

While fully live CBDCs are already in use in countries ranging from the Bahamas and Jamaica to Nigeria, many more are in active pilot phases – including China’s digital yuan (e-CNY). Having been trialled in multiple cities since 2019, the e-CNY now has millions of domestic users and, by mid-2024, had processed nearly US$1 trillion in retail transactions.

A key part of Beijing’s ambition is to use the digital yuan as a strategic hedge against dollar-based clearance systems, positioning it as part of a wider plan to reduce China’s reliance on the US dollar in international trade. Likewise, the European Central Bank has framed its digital euro – which entered its preparation phase in October 2023 – as essential to future European monetary sovereignty, stating that it would reduce reliance on non-European (often US-controlled) digital payment providers such as Visa, Mastercard and PayPal.

In this way, CBDCs are becoming a new front in global competition over who sets the rules of money, trade and financial sovereignty in the digital age. As governments rush to build and test these systems, technologists, civil libertarians and financial institutions are clashing over how best to do this – and whether the world should embrace or fear the rise of central bank digital currencies.

Trojan horses for surveillance?

The experience of using a CBDC will be much like today’s mobile banking apps: you’ll receive your salary directly into a digital wallet, make instant payments in shops or online, and transfer money to friends in seconds. The key difference is that all of that money will be a direct claim on the central bank, guaranteed by the state, rather than on a private bank.

In many countries, CBDCs are being pitched as more efficient tools for economic inclusion and societal benefit. A 2023 Bank of England consultation paper emphasised that its proposal for a digital pound would be “privacy-respecting by design” and “non-programmable by the state”. It would not replace cash but sit alongside it, the BoE suggested, with each citizen allowed to hold digital pounds only up to a capped limit (suggested at £10,000-£20,000) to avoid destabilising commercial bank deposits.

However, some critics see CBDCs as Trojan horses for surveillance. In 2019, a report by the professional services network PwC suggested that CBDCs, if unchecked, could entrench executive power by removing intermediary financial institutions and enabling programmable, direct government control over citizen transactions. According to the report, this could mean stimulus payments that expire if not spent within 30 days, or taxes deducted at the moment of transaction. In other words, CBDCs could be tools of efficiency – but also of unprecedented oversight.

A 2024 CFA Institute paper warned that digital currencies could allow governments to trace, tax or block payments in real time – tools that authoritarian regimes might embrace. The Bank for International Settlements (BIS) has called the advent of this “programmable money” inevitable.

Imagine, for example, a parent transferring 20 digital pounds to their child’s CBDC wallet, but with a rule that this money can only be spent on food, not video games. When the child uses it at a supermarket, their payment is programmed so that the retailer’s suppliers and the tax authority are paid instantly (£15 to the shop, £3 to wholesalers, £2 straight to the tax office) with no extra steps. In theory, at least, everyone is happy: the parent sees the child spent the money responsibly, the suppliers are paid immediately, and the retailer’s tax bill is settled automatically.
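The logic of the scenario above can be made concrete with a short sketch. This is purely illustrative: the function, its names and the settlement rules are hypothetical, and do not reflect any real CBDC design.

```python
# Hypothetical sketch of a programmable CBDC payment rule.
# All names and amounts are illustrative, not any real CBDC implementation.

def programmable_payment(amount, category, allowed_categories, splits):
    """Reject payments outside the allowed spending categories, then
    split the amount among payees (shop, wholesaler, tax office) as a
    single atomic settlement."""
    if category not in allowed_categories:
        raise ValueError(f"spending on '{category}' is not permitted")
    if sum(splits.values()) != amount:
        raise ValueError("splits must sum to the payment amount")
    # In a real system, each transfer would settle directly on the
    # central bank's ledger with no intermediary steps.
    return dict(splits)

# The worked example above: a 20-pound payment restricted to food,
# settled instantly as 15 to the shop, 3 to wholesalers, 2 in tax.
result = programmable_payment(
    amount=20,
    category="food",
    allowed_categories={"food"},
    splits={"shop": 15, "wholesaler": 3, "tax_office": 2},
)
print(result)  # {'shop': 15, 'wholesaler': 3, 'tax_office': 2}
```

The point of the sketch is that the restriction (“food, not video games”) and the instant three-way settlement are just a few lines of conditional logic once money itself is programmable.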

In technical terms, programmable payments such as this are straightforward for CBDCs. But such a system raises big questions about privacy and personal freedom. Some critics fear that programmable CBDCs might be used to restrict spending on disapproved categories such as alcohol and fuel, create expiry dates for unemployment benefits, or enforce climate targets through money flow limits. The BIS has warned that CBDCs should be “designed with safeguards” to preserve user privacy, financial inclusion and interoperability across borders.

Even well-intentioned digital systems can create tools of surveillance. CBDC architecture choices, such as default privacy settings, tiered access or transaction expiry, can all shape the extent of executive control embedded in the system. If designed without democratic oversight, these infrastructures risk institutional capture.

Some CBDC pilots – including China’s e-CNY, the Sand Dollar and the eNaira – have been criticised for omitting clear privacy guarantees, with their respective central banks deferring decisions on privacy protections to future legislation. According to Norbert Michel, director of the Cato Institute’s Center for Monetary and Financial Alternatives and one of the most prominent US voices warning about the risks of CBDCs:

A fully implemented CBDC gives the government complete control over the money going into, and coming out of, every person’s account. It’s not difficult to see that this level of government control is incompatible with both economic and political freedom.

Fears of mission creep

The concerns being raised about central bank digital currencies extend beyond personal payment controls. A recent analysis by the RAND Corporation highlighted how law enforcement capabilities could dramatically increase with the introduction of CBDCs. While this could strengthen efforts to stop money laundering and the financing of terrorism, it also raises fears of “mission creep”, whereby the same tools could be used to police ordinary citizens’ spending or political activities.

Concerns about mission creep – the idea that a system introduced for limited goals (efficiency, anti-money laundering) gradually expands into broader tools of control – extend into other areas of digital authoritarianism. The Bennett School has cautioned that without legal and political safeguards, CBDCs risk empowering state surveillance and undermining democratic oversight, especially in an interconnected global system.

It is not anti-technology or overly conspiratorial to ask hard questions about the design, governance and safeguards built into our future money. The legitimacy of CBDCs will hinge on public trust, and that trust must be earned. As has been highlighted by the OECD, democratic values like privacy, civic trust and rights protection must all be integral to CBDC design.

The future of money

Predictably, the public view of what we want our money to look like in future is mixed. The tensions we see between centralised CBDCs and decentralised alternatives reflect fundamentally different philosophies.

In the US, populist rhetoric has found a strong base among cryptocurrency investors and libertarian movements. At the same time, surveys in Europe suggest many people remain sceptical of replacing a central bank’s authority, associating it with stability and trustworthiness.

For the US Federal Reserve, the debate over bitcoin, decentralised finance (“DeFi”) and stablecoins goes to the heart of American financial power. Behind closed doors, some US officials worry that both the unchecked use of stablecoins and a widespread adoption of foreign CBDCs like China’s e‑CNY will erode the dollar’s central role and weaken the US’s monetary policy apparatus.

In this context, Trump’s push to elevate crypto into a US Strategic Bitcoin Reserve carries serious implications. While US officials generally avoid direct comment on partisan moves, their policy documents make the stakes clear: if crypto expands outside regulatory boundaries, this could undermine financial stability and weaken the very tools – from monetary policy to sanctions – that sustain the dollar’s global dominance.

Meanwhile, the Bank of England’s governor, Andrew Bailey, writing in the Financial Times this week, sounded more accommodating of a financial future that includes stablecoins, suggesting: “It is possible, at least partially, to separate money from credit provision, with banks and stablecoins coexisting and non-banks carrying out more of the credit provision role.” He has previously stressed that stablecoins must “pass the test of singleness of money”, ensuring that one pound always equals one pound (something that cannot be guaranteed if a currency is backed by risky assets).

This isn’t just caution for caution’s sake – it’s grounded in both history and recent events.

During the US’s Free Banking Era in the middle of the 19th century, state-chartered banks could issue their own paper money (banknotes) with little oversight. These “wildcat banks” often issued more notes than they could redeem, especially when economic stress hit – meaning people holding those notes found they weren’t worth the paper they were printed on.

A much more recent example is the collapse of TerraUSD (UST) in May 2022. Terra was a so-called stablecoin that was supposed to keep its value pegged 1:1 with the US dollar. In practice, it relied on algorithms and reserves that turned out to be fragile. When confidence cracked, UST lost its peg, dropping from $1 to as low as 10 cents in a matter of days. The crash wiped out over US$40 billion (around £29 billion) in value and shook trust in the whole stablecoin sector.

But Bailey’s crypto caution extends to CBDCs too. In his most recent Mansion House speech, the Bank of England governor said he remains unconvinced of the need for a “Britcoin” CBDC, so long as improvements to bank payment systems (such as making bank transfers faster, cheaper and more user-friendly) prove effective.

Ultimately, the form our money takes in future is not a question of technology so much as trust. In its latest guidance, the IMF underscores the necessity of earning public trust, not assuming it, by involving citizens, watchdog groups and independent experts in CBDC design, rather than allowing central banks or big tech to shape it unilaterally.

If done right, digital money could be more inclusive, more transparent, and more efficient than today’s systems. But that future is not guaranteed. The code is already being written – the question is: by whom, and with what values?

Rafik Omar, Lecturer in Finance, Cardiff Metropolitan University and Vinden Wylde, Lecturer in Computer Sciences at Gulf College, Oman, and PhD Candidate in Big Data, AI and Visualisation, Cardiff Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


by Web Desk via Digital Information World

Microsoft Builds Its First AI Image Generator From the Ground Up

Microsoft has introduced its first fully self-developed image generation model, marking a notable shift toward building its own AI infrastructure rather than leaning on external partners. The new system, called MAI-Image-1, has already made its debut among the top ten text-to-image models on the public testing platform LMArena.


Unlike earlier creative tools that often carried traces of shared frameworks, MAI-Image-1 represents a step toward independence for Microsoft’s AI division. The company describes it as a model designed to capture the subtleties of lighting, texture, and visual realism more effectively than typical generators. It can reproduce details such as soft reflections, natural sunlight gradients, or complex environments like forest landscapes and city streets, aiming for a quality that aligns closely with real-world photography.

Behind its development lies a focus on usability and diversity rather than spectacle. Microsoft’s engineers said they concentrated on curating cleaner and more representative training data, limiting the kind of repetitive, overly stylized imagery that has plagued many existing models. The evaluation process involved testing how well the system handled realistic creative tasks, including concept development for design work and content creation for digital artists. That testing extended to professionals within visual fields, whose input helped refine the system’s flexibility.

MAI-Image-1’s structure allows it to produce results faster without compromising visual depth, offering an efficiency balance often difficult for larger and slower models. This speed is intended to help users cycle through multiple drafts or creative variations in less time, making it easier to transfer results into other editing tools for further refinement.

While the model’s visual strength has drawn attention, Microsoft has equally emphasized its commitment to safe deployment. For the moment, MAI-Image-1 remains in public testing on LMArena, a community leaderboard where participants can generate images and provide feedback. This phase allows the company to monitor how the model performs in everyday scenarios and gather data to guide updates before a wider release.

The company plans to integrate MAI-Image-1 into Copilot and Bing Image Creator, expanding its reach across Microsoft’s ecosystem of productivity and search tools. This inclusion would make photorealistic image generation available to a broad base of users directly inside products many people already use daily.

Internally, the model also signals a wider ambition. Microsoft has been gradually moving toward a portfolio of in-house AI systems capable of standing alongside its partnership models. Earlier in the year, it unveiled its first two proprietary models aimed at text and multimodal understanding. MAI-Image-1 extends that trajectory into visual creation, reinforcing a long-term plan to align AI capabilities with the company’s broader software ecosystem.

In essence, this release represents both a technological and strategic step: a more autonomous Microsoft AI stack designed to evolve independently while maintaining compatibility with existing tools. The model’s blend of speed, realism, and control suggests the company is not only refining how AI images are produced but also how such tools fit into the creative process itself.

As testing continues, MAI-Image-1’s eventual rollout across Microsoft’s platforms will likely determine whether this internal direction can match or surpass the established players in generative imagery. For now, its top-tier ranking on LMArena indicates that Microsoft’s shift toward home-grown AI systems is beginning to find traction.

Notes: This post was edited/created using GenAI tools. 

Read next: Americans Face a Global Fraud Storm as AI Erodes Consumer Trust
by Asim BN via Digital Information World

Monday, October 13, 2025

Americans Face a Global Fraud Storm as AI Erodes Consumer Trust

New research shows that Americans are navigating more scams than anyone else in the world, reflecting a broader global shift toward what experts are calling a “trust nothing” era. The Ping Identity 2025 Consumer Survey, based on responses from more than 10,000 people across 11 countries, reveals how artificial intelligence is reshaping the fraud landscape and undermining confidence in digital security.

America Leads the World in Scam Exposure

The survey found that the average American encounters roughly 100 scam attempts every month... far higher than the global average. Each week, people in the United States receive about nine scam calls, nine fraudulent emails, and seven suspicious texts. That pace leaves Americans dealing with about 25 scam contacts per week.

By comparison, the United Kingdom averages 84 scam attempts per month, while Australians handle around 52. Singapore reports the lowest levels, with only 40. These figures suggest the United States now sits at the epicenter of global fraud activity, with both human deception and AI-generated manipulation increasing the risk.

Spam inboxes illustrate how bad the problem has become. Americans and Brits each have more than 350 unread messages flagged as spam, while Indonesians have fewer than 160.

The Daily Flood of Fraud

Scam messages arrive through almost every channel imaginable — phone calls, emails, texts, and social media platforms. People around the world now receive an average of five spam messages per week on their social media accounts, adding yet another layer to the problem.

When scam messages appear, most people act quickly: 53 percent delete them immediately, and 52 percent block the sender. However, a significant minority in India and the United Arab Emirates prefer to verify the sender’s address before taking any action, showing different regional habits in dealing with fraud.

Despite widespread caution, phone calls remain a key weak spot. Nearly half of Indians (46 percent) and more than a third of Brits (35 percent) admit they sometimes answer calls marked “potential spam.” In the U.S., 31 percent still do, despite knowing the risks.

Confidence Is Collapsing

The research paints a worrying picture of declining public confidence. Only 23 percent of global respondents said they feel very confident in recognizing a scam. Among Americans, that number aligns closely with the global average.

Trust in institutions and brands is also in decline. Just 17 percent of respondents worldwide said they fully trust organizations that manage their identity data. More than a quarter said they have little or no trust at all. Only 14 percent trust large global enterprises, while 20 percent favor regional or local brands.

France reported the lowest levels of trust, with just 8 percent of respondents expressing full confidence in data-handling organizations. The United Arab Emirates stood out as the most trusting country, with 37 percent saying they have full confidence in those who manage their identity data.

AI Intensifies Fraud and Fear

Artificial intelligence is reshaping not only the types of scams people face but also how they perceive digital safety. According to the survey, 68 percent of respondents now use AI in their daily lives (a sharp increase from 41 percent the previous year) and this familiarity has brought new anxieties.

About three-quarters of respondents said they are more concerned about their personal data than they were five years ago. Among their top fears are AI-driven phishing, voice cloning, and deepfake impersonations.

Thirty-nine percent listed AI-generated phishing as the most concerning emerging fraud type. Fake apps that imitate legitimate services followed closely at 38 percent. Deepfake video and audio attacks ranked third at 32 percent, while voice cloning scams came in at 31 percent. Nearly 30 percent cited synthetic identity fraud... where criminals combine real and fake data to create entirely new identities.

Different Fears in Different Places

The survey shows striking differences across countries. Australians expressed the greatest worry over how companies use and store personal data with AI systems, with 34 percent citing transparency concerns. In Singapore, nearly four in ten respondents were most afraid of deepfake impersonations and AI-generated voice cloning. Swedes, in contrast, were among the least concerned about AI impersonation, with just 14 percent mentioning it.

Across all regions, financial fraud remains the top fear at 46 percent, followed by personal data breaches at 25 percent. A quarter of respondents said storing passwords or payment details on social platforms made them feel especially vulnerable.

Password Fatigue and the Rise of Passkeys

Weak password habits continue to drive much of the risk. The average respondent uses 12 passwords for work and 17 for personal accounts, spreading their security thin. Forgetting or misplacing passwords (38 percent) happens more often than using multi-factor authentication (30 percent).

The study points to passkeys and biometric authentication as safer options. About 34 percent said fingerprint or facial recognition would make them feel more secure, while 33 percent favored multi-factor authentication. In Indonesia, preference for passkeys reached 44 percent, second only to biometric methods, which topped 60 percent.

A Reluctance to Stay Online

As digital risks rise, many people are willing to give up parts of their online lives to protect themselves. Globally, 40 percent said they would leave social media altogether rather than risk identity theft. One in three would stop online shopping, and more than a quarter would quit online banking.

In Australia, 26 percent said they would abandon streaming services to stay safe. Meanwhile, 22 percent of Germans would stop using travel planning apps, while 36 percent of Dutch respondents said they would give up nothing — reflecting lower overall anxiety levels in the Netherlands.

The Demand for Regulation

Three-quarters of respondents said they believe governments should regulate AI to protect personal identity data. Support for regulation is strongest in Indonesia (74 percent) and lowest in Sweden (31 percent). Yet fewer than half of people worldwide believe they are sufficiently informed or protected by government or online safety organizations.

This gap between public expectation and institutional response underscores how much uncertainty surrounds AI and digital identity. Even as people expect stronger protections, they remain skeptical about whether governments or corporations can provide them.

Toward a Fragile Future of Trust

Behind the statistics lies a clear global mood: anxiety, exhaustion, and distrust. Consumers are navigating an online world that feels increasingly unsafe, with AI transforming not only how scams are created but how believable they appear.

Yet the research also shows signs of resilience. While full trust is rare, 61 percent of respondents said they have at least some level of trust in organizations managing their data... a sign that improvement is possible. Biometric logins, passkeys, and transparent data policies could help rebuild this fragile confidence.

For Americans, however, the path forward looks steep. Facing nearly twice as many scams as people in most other countries, they are living at the forefront of the global fraud problem. With AI accelerating deception and trust in free fall, the question now is not just how to stop the scams... but how to restore faith in the digital world itself.

Notes: This post was edited/created using GenAI tools.

Read next: Under Pressure, Even Trained Users Miss the Signs of Phishing
by Irfan Ahmad via Digital Information World