Friday, October 3, 2025

Meta's Threads Introduces Communities to Group Conversations Around Interests

Meta is adding communities to Threads, dedicated spaces built around topics such as sports, music, books, television, and technology. The feature is being tested on the web and mobile versions of the app, and it marks the latest step in Meta’s effort to give the platform more structure as its user base grows.

How the feature works

Users can join a community, see posts inside a dedicated feed, and display their membership on their profiles. Each community is listed in the menu so it is easy to move between them. More than 100 groups are already active, including NBA Threads, Book Threads, and Tech Threads. Within these spaces, posts are arranged to highlight material most relevant to the theme, rather than showing a mix of tagged content.

Personalization and design changes


Joining communities also affects the main feed. The app takes those choices into account when recommending other posts. Meta says this should make feeds less random and more focused on what people want to follow. Some communities use custom emoji for likes, such as a basketball symbol in NBA Threads.

Why Meta is adding communities

Threads passed 400 million users this year. Its feed has been criticized for relying heavily on suggested posts from unrelated accounts, which makes it harder for users to keep track of ongoing discussions. Communities are intended to reduce that problem by offering consistent spaces that stay tied to specific subjects.

Comparison with other platforms

The idea is not new. Reddit has long been built on topic-based groups, and X introduced a communities feature in 2021. Meta is now adapting that approach to make Threads more cohesive and to give users more control over the flow of content they see.

Notes: This post was edited/created using GenAI tools.

Read next: Cracking Bcrypt: Is New Gen Hardware and AI Making Password Hacking Faster?

by Irfan Ahmad via Digital Information World

Thursday, October 2, 2025

Cracking Bcrypt: Is New Gen Hardware and AI Making Password Hacking Faster?

In the last two years, the boom in artificial intelligence has fueled an arms race in computing power, graphics performance, and consumer hardware. On the surface this sounds beneficial, but there is a flip side: the same boom is helping cybercriminals crack passwords faster and more efficiently. With new generation hardware, such as Nvidia’s recent 50-series and AMD’s upcoming transition to ‘UDNA’ architecture, high-performance computing is becoming more affordable and more widely available. As a result, the barrier to entry for cybercriminals is lowered, and attackers can run password-cracking computations much faster and much more often.

What is Password Hashing?

It is standard practice for enterprises to protect their users’ passwords with hashing algorithms. Storing passwords as plain text is considered bad practice, because anyone who can access the database – authorized or not – can simply read the passwords and take them. Hashed passwords, by contrast, are designed so that nobody can read the original password. Think of a hash as a mask over the real password: the only way to discover what lies beneath is to guess candidate passwords, hash each guess, and compare the results – in other words, brute force.
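As a simple illustration of the idea (not the scheme any particular vendor uses), the sketch below hashes a password with Python’s standard-library hashlib. The password value is made up for the example; the point is that the digest cannot be reversed, and verification works by re-hashing a candidate and comparing.

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way transform: the digest cannot be converted back into the password.
    # (Real systems add a salt and use a slow hash such as bcrypt, discussed below.)
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

stored = hash_password("hunter2")
print(stored)  # 64 hex characters, nothing like the original password

# Verification: re-hash the login attempt and compare digests.
print(hash_password("hunter2") == stored)   # True
print(hash_password("hunter3") == stored)   # False
```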

Brute force hacking is when attackers, aided by powerful hardware, work through possible password combinations by trial and error. Done one guess at a time this would be impractical, given how long it takes to cover every possible combination; with today’s hardware, however, hackers can run billions of these computations in parallel, and therefore far faster.
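A minimal sketch of the idea, assuming the attacker already holds a stolen, unsalted, fast hash: the loop simply enumerates every candidate over a small character set until one hashes to the same value. Lengthening the password or widening the character set makes the search space grow exponentially.

```python
import hashlib
import itertools
import string

def brute_force(target_hash: str, charset: str = string.digits, max_len: int = 4):
    """Try every combination up to max_len and return the match, if any."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None

# Example: recover a 4-digit numeric "password" from its leaked hash.
leaked = hashlib.sha256(b"4821").hexdigest()
print(brute_force(leaked))  # 4821
```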

The Bcrypt Hashing Algorithm

There are many different types of hashing algorithms, and some are far stronger than others. MD5, for example, is older and was designed for speed rather than for resisting cracking, which makes MD5-protected passwords comparatively cheap to brute force; it remains one of the hash algorithms most frequently found in leaked datasets.

Bcrypt is another hashing algorithm, developed in 1999. It turns a user’s password into a fixed-length string of characters through a one-way hashing function, meaning the output cannot be converted back into the original password. When a user logs in, the algorithm re-hashes the submitted password and compares the result with the value already stored in the database to see whether they match. Bcrypt also applies key stretching, repeating its internal work so that even a short, simple password takes a meaningful amount of time to hash. In addition, it adds a random piece of data, the salt, to each password before hashing, ensuring that identical passwords produce different hashes and making dictionary and precomputed attacks far harder. In bcrypt’s output format, this 22-character salt is stored directly in front of the hash itself.
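For illustration, here is how that flow might look using the third-party Python bcrypt package, one common implementation of the algorithm; the library and the example password are our choices, not anything specified by the study.

```python
import bcrypt  # pip install bcrypt

password = b"S3cure-Example!"  # bcrypt operates on bytes

# gensalt() produces a fresh random salt; the result also embeds the cost factor.
salt = bcrypt.gensalt(rounds=12)
hashed = bcrypt.hashpw(password, salt)
print(hashed)  # e.g. b'$2b$12$<22-char salt><31-char hash>'

# At login, checkpw re-hashes the attempt with the salt stored inside `hashed`
# and compares the results.
print(bcrypt.checkpw(password, hashed))        # True
print(bcrypt.checkpw(b"wrong-guess", hashed))  # False
```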

What also differentiates bcrypt from other algorithms is its cost factor. This value records how many rounds of internal work are performed before the hash is generated – each increment of the cost doubles the effort – and it is stored in front of the salt in the resulting string, significantly increasing the time and resources an attacker must spend on every single guess.
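A quick way to see the cost factor in action, again using the Python bcrypt package as an illustrative stand-in (timings will vary by machine):

```python
import time
import bcrypt

password = b"S3cure-Example!"

for rounds in (10, 12, 14):
    start = time.perf_counter()
    bcrypt.hashpw(password, bcrypt.gensalt(rounds=rounds))
    elapsed = time.perf_counter() - start
    # Each +2 in the cost factor should roughly quadruple the time per hash,
    # and that same slowdown applies to every guess an attacker makes.
    print(f"cost={rounds}: {elapsed:.3f}s per hash")
```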

How Does Bcrypt Stand Up Against New Generation Hardware?

While bcrypt hashing is generally considered effective, the boom in artificial intelligence and the increased affordability, capability, and availability of new generation hardware have improved the performance of brute force attacks against hashed passwords and datasets. In a recent study, Specops Software researchers used newer, more powerful hardware to determine exactly how long it takes to crack bcrypt. The findings were compared with a similar study conducted two years ago on weaker hardware, to show how quickly attackers are advancing alongside the hardware they use.

In the Specops Software study, a sample of 750,000 hashes taken from well-known data leaks (RockYou2024, among others) was subjected to brute force attacks. The findings showed that the mass investment in artificial intelligence infrastructure by major enterprises has significantly increased the availability of heavy compute hardware. A couple of years ago, brute force attacks might have been run on hardware like the RTX 4090 graphics card; today’s flagship RTX 5090 cards are approximately 65% faster against bcrypt hashing.

The study found that short, non-complex passwords could be cracked relatively quickly by both the older, less powerful hardware and the new generation of AI-oriented hardware. Passwords like ‘password’, ‘123456’, and ‘admin’ have always been easy to crack, ever since the early days of the internet. Unfortunately, they remain very commonly used, largely because many users, both in and out of the workplace, suffer password fatigue from the number of unique credentials they must remember across their accounts.


Even so, the newer hardware cracked slightly more complex passwords far faster. The older study found that bcrypt-hashed passwords of 6 or 7 characters made up of numbers only could be cracked instantly. The new generation hardware was able to instantly crack hashed passwords of 4 to 6 characters containing numbers, uppercase, and lowercase letters.

Password Best Practices

From the study’s findings we can conclude that the longer and more complex the password, the better. As complexity increases, so does the time it takes to crack a hashed dataset. Once a password exceeds 12 characters and mixes character types, cracking it becomes practically impossible for attackers.

For this reason, individuals and organizations should always follow a few key practices to layer their protections. Passwords should ideally be at least 18 characters long and should include characters from each of the following groups: lowercase letters, uppercase letters, numbers, and special characters.
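To see why length and character variety matter so much, the back-of-the-envelope calculation below estimates the search space for different password policies. The guess rate is purely hypothetical, chosen only to illustrate the scaling; it is not a figure from the study.

```python
# Rough keyspace estimate: charset_size ** length candidate passwords.
GUESSES_PER_SECOND = 100_000  # hypothetical bcrypt rate; real rates depend on hardware and cost factor
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

policies = {
    "6 digits": 10 ** 6,
    "8 lowercase": 26 ** 8,
    "12 mixed-case + digits": 62 ** 12,
    "18 chars, all four groups (~94 symbols)": 94 ** 18,
}

for name, keyspace in policies.items():
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{name}: ~{keyspace:.2e} candidates, worst case ~{years:.2e} years")
```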

Additional protections include using passphrases of at least 18 characters that mix character types (uppercase, lowercase, digits, and special characters). Adding complexity to a passphrase runs slightly counter to the usual advice that length matters more than complexity, but it does make the phrase harder to crack. It is best to avoid well-known lines from songs, poems, and films, while deliberately misspelling a word can be good practice. Organizations may also implement a custom dictionary that blocks words associated with the organization itself, such as the company name or its products.

The Problem: Known, Compromised Passwords

Implementing strong password protocols is the first step in protecting against brute force attacks. However, it is important to understand that once an attacker already has a password or the dataset in question – whether through re-use or because it has been leaked by infostealers – it is too late for hashing to help. At that point it does not matter how complex the password is or how well it was hashed. If someone in an organization reuses passwords across multiple accounts, a single compromised password can be the difference between an entire company being breached or not.

Complex hashing protocols should never be treated as a replacement for good password hygiene. To maintain that hygiene, passwords should be unique and never reused. The Specops Software study, after all, found that the time to crack known, compromised passwords was effectively zero, regardless of the hardware used and regardless of how well they were hashed. To reduce this risk, organizations and individuals must stay aware of good password practices, never allow the re-use of passwords, and continuously check that their credentials have not been compromised.

Darren James is a Senior Product Manager at Specops Software, an Outpost24 company. Darren is a seasoned cybersecurity professional with more than 20 years of experience in the IT industry. He has worked as a consultant across various organizations and sectors, including central and local governments, retail and energy. His areas of specialization include identity and access management, Active Directory, and Azure AD. Darren has been with Specops Software for more than 12 years and brings his expertise to the support and development of world-class password security and authentication solutions.

Read next: AI Chatbots Use Emotional Pressure to Keep People From Logging Off


by Web Desk via Digital Information World

Israel Pours Millions Into AI and Influencer Campaigns to Shape Online Narratives

Israel is putting significant money into shaping how it appears on digital platforms and in artificial intelligence systems. Documents filed under US foreign agent rules show contracts worth millions aimed at building online campaigns, working with influencers, and even steering the way tools like ChatGPT respond to questions.

One of the biggest deals, as reported by Responsible Statecraft, involves Clock Tower X, a US firm linked to former Trump campaign manager Brad Parscale. The company has a $6 million contract to produce material for Israel. At least four-fifths of what it creates must focus on younger audiences using TikTok, Instagram, YouTube, podcasts, and other channels. Targets in the contract require at least 50 million impressions each month.

Part of the plan is to build websites that feed into the data used by AI systems, so that responses to political subjects reflect positions that Israel wants highlighted. To help the material rise in search results, Clock Tower is using MarketBrew, an AI platform that predicts how Google and Bing rank content. The contract also gives the firm scope to place narratives through Salem Media Network, a conservative Christian broadcaster in the US where Parscale now serves as chief strategist.

The filings say the project is framed as a campaign against antisemitism. Few details are given about the specific themes of the material, but Israel’s foreign ministry is closely involved, with senior adviser Eran Shayovich named as the main contact. He has previously described his work as expanding Israel’s public diplomacy under a project labeled 545.

Alongside this effort, a separate program has paid social media influencers large sums to post supportive content. Invoices from Bridges Partners, another firm linked to Israel’s ministry, show that around $900,000 was budgeted between June and November. After production and legal costs, more than half a million dollars went directly to influencers. The documents suggest each post on platforms like TikTok or Instagram brought in between $6,000 and $7,000 for those taking part. The campaign, called the Esther Project, was designed to reach Western audiences through lifestyle-style media.

Other moves point to wider spending. In June, Google began a $45 million advertising program on behalf of Israel’s prime minister’s office. The ads, spread through YouTube and the company’s display network, were listed as government-backed public relations. TikTok also recently hired Erica Mendel, a former Israeli army instructor and US State Department contractor, to oversee its hate-speech policy, raising questions about possible alignment with Israel’s approach.

All of this comes at a time when US polling shows weakening support for Israel. A Gallup survey over the summer found that only nine percent of Americans aged 18 to 34 backed Israeli military actions in Gaza. A New York Times and Siena poll later showed more respondents supporting Palestinians than Israel for the first time in that survey’s history. Quinnipiac University found that fewer than half of Americans think supporting Israel is in Washington’s interest, while only one in five hold a favorable view of Prime Minister Benjamin Netanyahu.

Netanyahu has underlined how vital he sees online communication in this struggle. He has said digital platforms are central to influencing opinion, comparing them to weapons that replace older tools of conflict. Investors close to Israel, including Oracle founder Larry Ellison, are also involved in bids to buy TikTok, a platform Netanyahu has suggested could become a decisive tool in shaping perception.

Taken together, the contracts show how Israel is concentrating resources on digital space, mixing influencer partnerships, targeted media buys, and AI-driven search manipulation. The effort reflects both the scale of its investment and the challenge it faces with younger audiences, where opinion polls reveal attitudes have shifted sharply.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Meta Denies Microphone Tracking As It Expands AI Ad Targeting
by Irfan Ahmad via Digital Information World

Meta Denies Microphone Tracking As It Expands AI Ad Targeting

Instagram chief Adam Mosseri has stepped in once again, this time with a video addressing the claim that Meta secretly listens to people’s conversations. The video appeared on the same day Meta confirmed it will start using data from its AI tools to fine-tune advertising, a move that has already raised fresh privacy concerns.

A Rumor That Won’t Fade

The idea that Meta, formerly Facebook, listens through phone microphones has been around for years. People often point to moments when they talk about a product and soon afterwards see an ad for it. The theory looks convincing because the ads sometimes feel too accurate to be coincidence.

The company has rejected the idea several times before. A blog post in 2016 said the microphone wasn’t used for ads or news feeds. Mark Zuckerberg repeated that denial at a Senate hearing in 2018. Meta’s help pages still tell users the same: microphones are only switched on when someone gives permission and opens a feature that needs sound input.

Why Ads Seem To Match Conversations

Mosseri tried to explain why people link ads to their private talks. One reason is that browsing activity is already shared with advertisers. If someone looks at a product online, the retailer may pay Meta to show that same item later. Another factor is that the system places ads based on the interests of friends or of people who behave in similar ways.

Sometimes the ad has already been shown before the conversation, but people don’t always notice. They scroll past quickly and only recall it later when the subject comes up. And sometimes, Mosseri added, it is nothing more than coincidence. He also pointed out that continuous microphone use would quickly drain a phone’s battery, something people would spot right away.

New Data From AI Tools

What gives the debate fresh weight is Meta’s plan to change how it collects information. A new privacy policy due in December will allow the company to use details from chats with its AI products to shape advertising. The update covers Instagram, Facebook, WhatsApp, and Messenger.

That means ads will no longer depend only on web searches or patterns in friends’ activity. Conversations with chatbots can include personal thoughts, plans, and preferences, giving the company a deeper view than a single search or website visit ever could. The targeting may feel more direct as a result, even if microphones aren’t involved.

Ongoing Distrust

Despite repeated denials, many people still believe the company listens in. Comments under Mosseri’s post showed continued doubt. The concern may only grow as AI data is folded into the system, since ads could appear even more precise than before.

Meta insists that microphones aren’t the source. The technology it already has, powered by data sharing and now AI, seems enough to create the effect that has kept the rumor alive for nearly a decade.


Image: DIW-Aigen

Note: This post was edited/created using GenAI tools

Read next:

• Meta Expands Political Fight on AI Rules While Supplying U.S. Military with Its Tech

• Which Country Reigns Supreme in Global Crypto Growth?
by Irfan Ahmad via Digital Information World

Wednesday, October 1, 2025

Which Country Reigns Supreme in Global Crypto Growth?

A new international index from Henleyglobal assessing the growth of cryptocurrency ecosystems places Singapore at the top, reflecting the city-state’s continued role as a hub for digital assets. The ranking measures 29 jurisdictions across six categories: adoption, infrastructure, innovation, regulation, economic conditions, and tax treatment.

Singapore achieved the highest overall score of 48.4 out of 60, performing strongly in innovation with 9.4 points, and maintaining high marks across taxation, regulation, and economic readiness. Its infrastructure score, at 6.8, was lower than some rivals, but this was balanced by consistently strong performance in other areas.

Hong Kong and the United States Close Behind

Hong Kong secured second place with a total score of 45.7, boosted by a leading infrastructure rating of 8.2 and a tax-friendliness score of 9.0, one of the highest in the index. Public engagement, however, was weaker at 5.1, showing limited uptake among the wider population despite strong institutional support.

The United States ranked third with 43.4 points, reflecting widespread public adoption at 7.7 and a high innovation score of 8.6, supported by its start-up ecosystem and government-backed initiatives. Its relatively low tax rating, 5.9, dragged down the overall total.

Europe and the Middle East in the Mix

Switzerland came in fourth with 43.1 points, continuing its role as a financial hub with balanced scores across infrastructure, regulation, and taxation. The United Arab Emirates followed closely at 42.9, standing out with the maximum score of 10.0 for tax-friendliness, although its infrastructure score of 3.4 showed that the domestic ecosystem for day-to-day use remains limited compared to its policy appeal.

Malta and the United Kingdom also placed in the top ten, with 40.9 and 40.4 respectively. Malta scored well on public adoption, regulation, and tax-friendliness but lagged on innovation, while the UK was strongest on innovation and economic factors and was held back by its tax score.

Asia-Pacific’s Expanding Role

Canada, Thailand, and Australia rounded out the top ten. Canada, at 39.6, had a high economic factor score of 8.5, reflecting strong connectivity and financial inclusion. Thailand ranked ninth with 37.1, notable for its regulatory score of 7.4, one of the strongest among emerging economies. Australia followed with 36.0, performing evenly but with weaker results in tax and infrastructure.

Several smaller jurisdictions scored surprisingly high in niche areas. Cyprus, for instance, ranked eleventh overall with 35.2, helped by strong tax advantages, while Monaco, despite low public adoption at 2.7, achieved a maximum tax score of 10.0 and strong infrastructure at 7.6.

Mixed Results in Emerging Economies

Further down the index, Malaysia, Austria, Italy, and Portugal all scored between 31 and 34 points, showing partial progress but lacking balance across the six measures. Mauritius and Antigua and Barbuda, both known for offering tax advantages, also appeared in the top 20 with scores of 30.2 and 29.9, though weak innovation and infrastructure limited their performance.

Notably, El Salvador, despite making headlines for adopting Bitcoin as legal tender, ranked only 21st with 26.7 points. While it had relatively strong infrastructure at 6.6, its regulatory environment and broader economic conditions were rated significantly lower.

Lower Scores Across Latin America and Southern Europe

The bottom of the ranking included several Latin American and southern European countries. St. Kitts and Nevis, Türkiye, Latvia, Panama, and Grenada all fell below 30 points, largely due to limited infrastructure, inconsistent regulation, or restrictive taxation. Costa Rica, Uruguay, and Greece also scored in the lower range.

Uruguay’s 20.4 and Costa Rica’s 20.1 were the lowest totals, reflecting modest adoption and weak innovation indicators, despite moderate economic conditions.

Global Trends in Adoption

The index highlights a clear divide between global financial hubs with supportive regulation and tax frameworks, and countries with limited infrastructure but strong economic potential. Singapore and Hong Kong illustrate how clear regulation and favorable taxation can push countries to the top, while nations like El Salvador demonstrate that symbolic policy moves, such as making Bitcoin legal tender, are not enough to secure broad adoption without a stronger ecosystem.

Which Nations Are Winning the Cryptocurrency Race?

Country TOTAL Public Adoption Infrastructure Adoption Innovation and Technology Regulatory Environment Economic Factors Tax-Friendliness
Singapore 48.4 7.2 6.8 9.4 7.6 8.9 8.5
Hong Kong (SAR China) 45.7 5.1 8.2 7.8 6.2 9.4 9
USA 43.4 7.7 6.6 8.6 6.2 8.4 5.9
Switzerland 43.1 6.8 7.1 6.6 6.3 8.7 7.6
UAE 42.9 7.6 3.4 7.5 5.8 8.6 10
Malta 40.9 7.4 6.1 4.2 7.1 8.2 7.9
UK 40.4 6.7 6.3 7.1 6.5 8.2 5.6
Canada 39.6 6.8 6 6 7 8.5 5.3
Thailand 37.1 6 4.7 3.6 7.4 8.8 6.6
Australia 36 6.2 4.6 5.7 7.6 7.6 4.3
Cyprus 35.2 7.1 3.4 3.1 6.2 7.5 7.9
Luxembourg 34.6 6.4 3.4 3.7 5.6 8.5 7
Monaco 34.4 2.7 7.6 2.5 3.7 7.9 10
Malaysia 33.8 5 2.4 3.9 6.5 7.8 8.2
Austria 33.6 5.4 4.5 5.2 5.4 8.1 5
Italy 31.7 4.8 4.5 4.9 4.8 7.7 5
Portugal 31.1 5.5 2.4 3 5.1 8.3 6.8
Mauritius 30.2 4.9 2.8 1.7 4.7 7 9.1
Antigua and Barbuda 29.9 5.2 1.9 1.6 7 5.7 8.5
New Zealand 29 5.2 3 1.2 4.8 8.5 6.3
El Salvador 26.7 3.8 6.6 2.7 4.3 4.5 4.8
St. Kitts and Nevis 25.7 4.7 1.2 2.2 4.9 4.4 8.3
Türkiye 25.2 6 3.1 3 6.3 5.9 0.9
Latvia 24.6 5.6 1.9 2 4.6 7.8 2.7
Panama 23.4 3.7 1.3 1.2 2.9 5.8 8.5
Grenada 22.8 4.2 0.9 1.3 5.6 6 4.8
Greece 22.2 4.6 1 1.9 4.7 7.2 2.8
Uruguay 20.4 3.7 0.7 1.6 3.6 7.3 3.5
Costa Rica 20.1 3.6 3.2 0.8 2.8 5.8 3.9

Read next:

• Survey Finds Platforms, Not Governments, Should Decide Online Rules

• Families Lose Billions in Remittance Fees Every Year, Stablecoins Could Change That

by Asim BN via Digital Information World

Convenience vs. Control: Weighing the Benefits and Risks of Facial Recognition Technology

Walk into a shop, board a plane, log into your bank, or scroll through your social media feed, and chances are you might be asked to scan your face. Facial recognition and other kinds of face-based biometric technology are becoming an increasingly common form of identification.


Image: yousef samuil / Unsplash

The technology is promoted as quick, convenient and secure – but at the same time it has raised alarm over privacy violations. For instance, major retailers such as Kmart have been found to have broken the law by using the technology without customer consent.

So are we seeing a dangerous technological overreach or the future of security? And what does it mean for families, especially when even children are expected to prove their identity with nothing more than their face?

The two sides of facial recognition

Facial recognition tech is marketed as the height of seamless convenience.

Nowhere is this clearer than in the travel industry, where airlines such as Qantas tout facial recognition as the key to a smoother journey. Forget fumbling for passports and boarding passes – just scan your face and you’re away.

In contrast, when big retailers such as Kmart and Bunnings were found to be scanning customers’ faces without permission, regulators stepped in and the backlash was swift. Here, the same technology is not seen as a convenience but as a serious breach of trust.

Things get even murkier when it comes to children. Due to new government legislation, social media platforms may well introduce face-based age verification technology, framing it as a way to keep kids safe online.

At the same time, schools are trialling facial recognition for everything from classroom entry to paying in the cafeteria.

Yet concerns about data misuse remain. In one incident, Microsoft was accused of mishandling children’s biometric data.

For children, facial recognition is quietly becoming the default, despite very real risks.

A face is forever

Facial recognition technology works by mapping someone’s unique features and comparing them against a database of stored faces. Unlike passive CCTV cameras, it doesn’t just record: it actively identifies and categorises people.

This may feel similar to earlier identity technologies. Think of the check-in QR code systems that quickly sprang up at shops, cafes and airports during the COVID pandemic.

Facial recognition may be on a similar path of rapid adoption. However, there is a crucial difference: where a QR code can be removed or an account deleted, your face cannot.

Why these developments matter

Permanence is a big issue for facial recognition. Once your – or your child’s – facial scan is stored, it can stay in a database forever.

If the database is hacked, that identity is compromised. In a world where banks and tech platforms may increasingly rely on facial recognition for access, the stakes are very high.

What’s more, the technology is not foolproof. Mis-identifying people is a real problem.

Age-estimating systems are also often inaccurate. One 17-year-old might easily be classified as a child, while another passes as an adult. This may restrict their access to information or place them in the wrong digital space.

A lifetime of consequences

These risks aren’t just hypothetical. They already affect lives. Imagine being wrongly placed on a watchlist because of a facial recognition error, leading to delays and interrogations every time you travel.

Or consider how stolen facial data could be used for identity theft, with perpetrators gaining access to accounts and services.

In the future, your face could even influence insurance or loan approvals, with algorithms drawing conclusions about your health or reliability based on photo or video.

Facial recognition does have some clear benefits, such as helping law enforcement identify suspects quickly in crowded spaces and providing convenient access to secure areas.

But for children, the risks of misuse and error stretch across a lifetime.

So, good or bad?

As it stands, facial recognition would seem to carry more risks than rewards. In a world rife with scams and hacks, we can replace a stolen passport or driver’s licence, but we can’t change our face.

The question we need to answer is where we draw the line between reckless implementation and mandatory use. Are we prepared to accept the consequences of the rapid adoption of this technology?

Security and convenience are important, but they are not the only values at stake. Until robust, enforceable rules around safety, privacy and fairness are firmly established, we should proceed with caution.

So next time you’re asked to scan your face, don’t just accept it blindly. Ask: why is this necessary? And do the benefits truly outweigh the risks – for me, and for everyone else involved?

Joanne Orlando, Researcher, Digital Wellbeing, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


by Web Desk via Digital Information World

Google Removes Years of EU Political Ad Data Ahead of New Rules

Google has taken down access to political advertising records in the European Union, removing seven years of material that had tracked campaigns in 27 countries, as first reported by The Briefing. The archive once covered ads placed on Google Search, YouTube, and its wider display network.

Archive No Longer Available

Until late September, the Ad Transparency Center let visitors look up political ads from EU countries back to 2018. The database showed how much parties spent, what kind of audiences they tried to reach, and which candidates or groups were behind each campaign. That section has now disappeared. The tool only shows political ads for a handful of countries such as the United States, United Kingdom, Brazil, India, Israel, and Australia.

Link to EU Transparency Law

The removal comes ahead of new EU legislation, the Regulation on Transparency and Targeting of Political Advertising. It comes into effect on October 10 and introduces stricter rules for how online political campaigns are run. Ads will have to be clearly marked, include details of sponsors, and explain if targeting tools were used. Certain personal data, such as ethnicity or political opinion, cannot be used to shape these ads.

Google announced last year that it would stop running political ads in the EU once the regulation applied. At the time, the company said old ads would still remain visible in the Transparency Center. Instead, the archive for EU campaigns is no longer available.

Impact on Research and Accountability

The loss of these records means there is no longer a way to study how campaigns across Europe used Google platforms during recent elections. Researchers had depended on the archive to follow spending patterns, review campaign messaging, and track how voters were targeted. Without it, the history of political advertising in the region has become harder to examine.

The EU plans to launch its own repository of political ads, but that system has not yet gone live. Until then, a gap remains in public access to past political campaign data.

Broader Industry Response

Meta has also adjusted its policies. In July it said it would no longer accept political, electoral, or social issue ads in EU countries. Unlike Google, it still allows access to earlier campaign material in its ad library.

Transition Period

The new rules were designed to address concerns about election interference and hidden campaign tactics. But the shift has also created uncertainty about how platforms should handle material already published. For now, Google’s withdrawal means a large part of Europe’s recent political advertising history is no longer accessible.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen

Read next:

• 2.3 Billion Hungry, One Billion Tonnes Wasted: The Paradox Defining Global Food Security

• AI Answers in Crisis: Reliable at the Extremes, Risky in the Middle
by Asim BN via Digital Information World