Tuesday, November 11, 2025

The Future of Insights in 2026: How AI is Evolving Researchers’ Roles

By Erica Parker, Managing Director, The Harris Poll

A new study finds that 98% of researchers now use AI as part of their day-to-day workflow. What does this mean for the future of the insights industry? Is job security under threat? Or is automation empowering researchers?

Artificial intelligence has been subtly reshaping the role of researchers for some time now. The true extent of this new world of insights has now been revealed in research from QuestDIY and The Harris Poll.

AI is embedded into every aspect of our lives

AI has permeated all aspects of our lives, and for researchers the reality is no different. A study of more than 200 research professionals found that the use of AI is omnipresent and on the rise, working its way into every aspect of their plans and protocols.


The vast majority of researchers (98%) reported using AI at least once in their work over the past year, with 72% saying they use it at least once a day (39% daily, 33% several times per day or more).

Welcoming a brave new world of insights

This widespread integration has been welcomed on the whole. A large majority view the proliferation of AI as positive, with 89% saying AI has made their work lives better (64% somewhat; 25% significantly).

The research finds that AI is mostly being used to speed up how research is carried out and delivered. Researchers report using AI mainly for jobs such as analysis and summarizing.

What are researchers mainly using AI for?

  • Analyzing multiple data sources (58%)
  • Analyzing structured data (54%)
  • Automating reports (50%)
  • Coding / analyzing open-ends (49%)
  • Summarizing findings (48%)

AI as a ‘co-analyst’

However, there are concerns around data privacy, accuracy, and trust. Research professionals recognize AI’s potential, but also its limitations. The industry doesn’t view AI as a replacement, but rather as an apprentice of sorts.

“Researchers view AI as a junior analyst, capable of speed and breadth, but needing oversight and judgment,” says Gary Topiol, Managing Director, QuestDIY.

Freeing up time for strategy and innovation

Despite needing oversight and careful management, the efficiency gains are real. More than half (56%) say AI saves them five or more hours per week. AI enables faster analysis, with 43% saying it speeds up the delivery of insights. Many researchers also say it improves accuracy (44%) and surfaces insights that might otherwise be missed (43%).

This extra time has empowered researchers to spend more time on strategy and innovation. More than a third of researchers (39%) said that this freed-up time has made them more creative.

Human-led, AI-supported

AI is not only accelerating tasks for insight professionals, but also enriching the quality and impact of the insights delivered. The ideal model is human-led research supported by AI: AI tackles the repetitive tasks (coding, cleaning, reporting) while researchers focus on interpretation, strategy, and impact. Humans remain in charge, with AI doing the heavy lifting.


Despite this, there are legitimate barriers to adoption, including data privacy and security (33%), effective training (32%), and having the time to learn and experiment with these tools (32%).

Quality insights, not just data volume

This suggests it’s more of an enablement and governance issue than a tooling problem, i.e. it’s not about layering on more tools, but about ensuring the data is credible and researchers are trained to spot abnormalities. Indeed, the number one frustration leveled at AI by the researchers surveyed was accuracy and the risk of hallucinations. Almost a third (31%) say they have had to spend extra time validating outputs due to concerns around validity.

But the more researchers rely on AI to speed up deliverables, the more acutely errors such as hallucinations will be felt. As the report highlights, at the macro level, AI is revolutionizing decision-making, personalizing customer experiences, and speeding up product development.

For researchers, this creates both pressure and opportunity. Businesses now expect agile, real-time insights – and researchers must adapt their skills and workflows to meet that demand.

Rather than focusing on the sheer volume of research that insight professionals can deliver with these tools, we should instead be looking at quality. This includes QAing data, but it could also mean bringing insight professionals into the C-suite more: not just relying on research to tell organizations what is happening and why, but also what they should do next.

This is where the humans take center stage.

The researcher of 2030

If AI can be relied on to handle the grunt work (cleaning data, coding, first-pass insights, and much more), the researcher role can shift up the value chain: interpreting the data, defining the context, strategic storytelling, building ethical models, and being the voice of reason.

By 2030, researchers expect that AI will be helping them with a myriad of tasks that would otherwise take up their time: generating survey drafts and proposals (56%), supplying synthetic or augmented data (53%), automated cleaning, setup, and dashboards (48%), and predictive analytics (44%). To do this effectively, they’ll need AI embedded in their workflow, treating it not as a plugin but as core infrastructure for analysis, research, reporting, survey builds, and analyzing open-ended questions.

As Topiol says, “The future is human-led, AI-supported. AI can surface missed insights – but it still needs a human to judge what really matters.”

‘More opportunity than threat’

That may be why many researchers aren’t concerned about AI coming for their jobs. Just 29% cite job security as an issue. On balance, researchers see AI as more of an opportunity than a threat: the majority (59%) view it primarily as a support, and 36% see it as an opportunity. Importantly, 89% say AI has already improved their work lives.

And arguably it may even lead to fresh opportunities and elevated roles as strategic leaders within businesses and organizations. As researchers become unburdened by analysis-heavy workloads, it’s time for them to step out from the shadows and take the spotlight.

Translating data into decisions that shape organizations

The researcher of the future won’t be defined by technical execution alone, but by strategic judgment, adaptability, and storytelling. Their role will be to supervise AI systems, ensuring rigor, accuracy, and fairness. They’ll be expected to guide stakeholders with culturally sensitive, ethically grounded narratives, and to translate data into decisions that shape business strategy.

Research teams of the future will require ‘AI Insights Agents’ to work alongside human Research Supervisors and Insight Advocates, complementing their roles.

As we look ahead to 2030, the researcher of the future needs AI not to do their job, but to make them more efficient and strategic in it. Those who use AI well will find it frees them from the day-to-day legwork of analysis so they can become more strategic and creative in their output. They’ll evolve into leaders who use the insights they’ve gleaned to influence decision-making upstream, uplifted by their AI co-analysts rather than replaced by them.

Read next: Study Reveals a Triple Threat: Explosive Data Growth, AI Agent Misuse, and Human Error Driving Data Loss


by Web Desk via Digital Information World

Websites Will Lose Facebook’s Like and Comment Plugins Next Year as Meta Ends Support

Meta has announced plans to end two long-standing features that once defined Facebook’s presence across the wider web. The company confirmed that its external Like and Comment plugins will be discontinued on February 10, 2026, marking the quiet closure of a chapter that helped shape how users interacted with online content in the early 2010s.

The two plugins allowed visitors to show approval for web pages or leave comments using their Facebook accounts without leaving the site. Both features became common across blogs and news outlets (including Digital Information World) when social sharing was at its peak. Over time, though, the landscape shifted. Social activity moved inside apps, third-party integrations faded, and Facebook’s influence on external web traffic gradually waned.

Meta says the removal is part of a broader effort to streamline its developer platform. The company describes it as a technical update rather than a disruptive change. After the cutoff date, the plugins will no longer appear but will not break site functionality. Each will simply render as an invisible 0x0 element instead of showing the familiar buttons or comment sections. Website owners are not required to act, though they can remove the old code to keep pages clean.

The company’s note positions this decision as part of ongoing modernization. It signals a shift in focus toward tools that reflect how businesses and developers use Meta’s ecosystem today rather than how they did a decade ago. The move also mirrors a wider industry trend where large platforms continue to retire older web integrations that no longer align with user behavior or advertising priorities.

The Like and Comment buttons, introduced around 2010, once drove massive engagement loops between publishers and Facebook feeds. For years, they helped the platform dominate referral traffic. But as algorithms evolved and sharing patterns changed, those widgets lost their place on many sites. The quiet sunset in 2026 closes a once-central feature that defined an earlier phase of social connectivity online.


Notes: This post was edited/created using GenAI tools. Image: Eyestetix Studio/Unsplash.

Read next: Apple’s Next iPhones May Gain Smarter Satellite Capabilities
by Asim BN via Digital Information World

Top Digital Solutions for Improving Operational Efficiency in Hotels

In the hospitality industry, leveraging digital solutions is crucial for enhancing operational efficiency and guest satisfaction. Modern hotels must integrate technology to streamline operations, reduce costs and provide superior service to remain competitive.

Digital transformation in the hospitality sector has become essential to meet the evolving expectations of travelers. By implementing a sophisticated hotel booking system, hotels can ensure seamless operations and accurate room availability across all sales platforms. This not only minimizes the risk of overbooking but also builds trust with guests who rely on instant reservation confirmations. Advanced digital solutions enable hotels to optimize their operations and enhance the guest experience through effective use of technology.

Strategies for Optimizing Hotel Operations

To excel in operational efficiency, you must employ strategies that optimize room availability while reducing overbooking risks. One such strategy involves implementing channel managers that connect your property management system with various online travel agencies (OTAs). These tools ensure uniform data dissemination across all platforms where your hotel is listed, maintaining consistent availability information.
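
To make the channel manager idea concrete, here is a minimal Python sketch, assuming hypothetical channel names and a stand-in push function rather than any real OTA integration: a single availability update is broadcast to every connected channel so all listings show the same figures.

```python
# Minimal channel-manager sketch: broadcast one availability update to every
# connected distribution channel. Channel names and the push function are
# hypothetical placeholders, not a real OTA API.
from dataclasses import dataclass

@dataclass
class AvailabilityUpdate:
    room_type: str
    date: str        # ISO date, e.g. "2026-07-14"
    rooms_left: int

CONNECTED_CHANNELS = ["own_website", "ota_alpha", "ota_beta"]  # hypothetical

def push_to_channel(channel: str, update: AvailabilityUpdate) -> None:
    # A real system would call each channel's API here; we just log the update.
    print(f"[{channel}] {update.room_type} on {update.date}: {update.rooms_left} rooms left")

def broadcast(update: AvailabilityUpdate) -> None:
    """Send the same availability figure to every connected channel."""
    for channel in CONNECTED_CHANNELS:
        push_to_channel(channel, update)

broadcast(AvailabilityUpdate(room_type="double", date="2026-07-14", rooms_left=3))
```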

Another effective approach is adopting yield management techniques that allow you to adjust prices based on demand forecasts and market conditions. By analyzing booking patterns and seasonality trends, you can anticipate periods of high demand and set rates accordingly to maximize occupancy without compromising profitability. This proactive stance enables you to stay ahead in a competitive market while delivering exceptional value to guests.
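
As a rough illustration of the yield management idea, the toy Python function below nudges a base nightly rate up or down with forecast occupancy; the thresholds and multipliers are invented for illustration and are not an industry formula.

```python
# Toy dynamic-pricing rule: adjust the nightly rate by forecast occupancy.
# Thresholds and multipliers are illustrative assumptions only.
def dynamic_rate(base_rate: float, forecast_occupancy: float) -> float:
    """Adjust a base nightly rate according to forecast occupancy (0.0 to 1.0)."""
    if forecast_occupancy >= 0.9:
        multiplier = 1.25   # peak demand
    elif forecast_occupancy >= 0.7:
        multiplier = 1.10   # strong demand
    elif forecast_occupancy <= 0.4:
        multiplier = 0.85   # soft demand, discount to fill rooms
    else:
        multiplier = 1.00
    return round(base_rate * multiplier, 2)

for occ in (0.95, 0.75, 0.55, 0.30):
    print(f"forecast occupancy {occ:.0%}: nightly rate {dynamic_rate(120.0, occ)}")
```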

Furthermore, developing a comprehensive cancellation policy that includes flexible options for guests can mitigate potential losses from last-minute cancellations or no-shows. Encouraging early bookings with incentives like discounts or package deals ensures higher occupancy rates well in advance while providing guests with added value for committing early.

Integration of mobile check-in and keyless entry systems represents another crucial strategy for operational optimization. These technologies not only reduce front desk workload but also provide guests with a contactless, efficient arrival experience. By allowing guests to bypass traditional check-in procedures, hotels can significantly reduce wait times during peak arrival periods while simultaneously decreasing staffing requirements. This modernization of the check-in process also provides valuable data about guest arrival patterns and preferences, enabling further operational refinements.

Real-Time Data Integration for Enhanced Efficiency

Real-time data integration is a cornerstone of operational efficiency in hotels. By synchronizing information across various platforms, hotels can maintain consistent room availability and pricing, ensuring guests have reliable information when booking. This integration helps prevent double bookings, which can negatively impact guest satisfaction and lead to revenue loss.

Incorporating real-time data into a hotel's operational framework ensures that any change in room status is updated instantly across all platforms. Whether a booking is made via a hotel's website or an external travel agency, the system reflects this change in real time. Such synchronization eliminates discrepancies that might arise from manual updates, which are prone to error and delay. Guests appreciate this level of accuracy, knowing their bookings are confirmed instantly without the risk of unexpected cancellations or overbookings.

Moreover, real-time data integration enhances the ability to implement dynamic pricing strategies effectively. By continuously analyzing demand fluctuations, hotels can adjust their rates to optimize occupancy and revenue. This approach not only maximizes profit but also ensures guests receive competitive pricing, further enhancing their overall experience and perception of value.

The implementation of cloud-based solutions further enhances real-time data integration capabilities. Cloud systems enable hotels to access and manage their data from anywhere, facilitating remote management and decision-making. This technological advancement proves particularly valuable during peak seasons when quick responses to market changes are crucial. Additionally, cloud-based systems offer robust backup solutions, ensuring business continuity even in the event of local system failures or technical issues.

Building Guest Trust and Loyalty Through Technology

Advanced digital solutions play a significant role in building guest trust and loyalty. A seamless experience begins with accurate information during the booking process and extends through every touchpoint in a guest's stay. Prioritizing transparent communication and reliable service delivery fosters a relationship of trust with clientele.

One major benefit of employing real-time integrated systems is the reduction of human error. Manual processes often result in mistakes that can lead to guest dissatisfaction. With automated systems managing inventory and reservations, these errors are minimized, ensuring smoother operations and happier guests. Additionally, by providing immediate confirmation and updates on reservation status, guests feel more secure and valued by your establishment.

Trust is further reinforced when technology facilitates personalized experiences for guests. By leveraging data analytics within your reservation system, you can tailor services to meet individual preferences and needs. This could range from offering customized room settings to suggesting local attractions based on previous stays or interests expressed by the guest during booking. These personalized touches not only enhance satisfaction but also encourage repeat visits and positive word-of-mouth referrals.

Modern digital solutions also enable hotels to implement sophisticated loyalty programs that track guest preferences and reward frequent stays. These systems can automatically identify returning guests, apply earned benefits and suggest personalized upgrades or special offers. By maintaining detailed guest profiles and preference histories, hotels can create memorable experiences that demonstrate attention to detail and commitment to guest satisfaction, ultimately fostering long-term loyalty and increased lifetime customer value.

Emerging Trends in Hotel Digital Solutions

As we look towards future developments in hotel digital solutions, several emerging trends are set to redefine how you manage operations and guest interactions. Artificial intelligence (AI) plays an increasingly important role in predicting customer behavior patterns based on historical data analysis. By understanding these patterns better, hotels can make informed decisions about pricing strategies or promotional campaigns tailored specifically for target audiences.

The rise of mobile technology also influences how guests interact with booking platforms today. More travelers prefer using smartphones over desktops for making reservations online due to the convenience factors associated with mobile access. Ensuring your system accommodates mobile users seamlessly becomes imperative to remain competitive in the marketplace.

Finally, sustainability considerations are gaining traction within the industry, prompting hoteliers to explore eco-friendly solutions to reduce the environmental impact of operations. Incorporating green practices into the design and functionality of digital platforms not only supports business values and global responsibility initiatives but also attracts environmentally conscious consumers seeking accommodations that align with their personal beliefs and values.

[Partner Content]


by Web Desk via Digital Information World

Monday, November 10, 2025

Study Reveals a Triple Threat: Explosive Data Growth, AI Agent Misuse, and Human Error Driving Data Loss

A new study by Proofpoint shows that data protection is being tested from several directions at once. The findings highlight how fast data volumes are rising, how AI tools are introducing fresh exposure, and how human habits remain at the core of many breaches. Together, these trends have made the task of securing information far harder than before.

The 2025 Data Security Landscape study gathered views from a thousand security professionals across ten countries. It found that 85 percent of organizations faced at least one data loss event in the past year. Many experienced repeated incidents, showing that leaks have become routine rather than exceptional. Human behavior continues to play the biggest part in these losses. Fifty-eight percent of cases were linked to careless employees or outside contractors, while forty-two percent involved compromised accounts. Only one percent of users caused three-quarters of all data loss incidents, confirming how a small group of risky users can have a large effect.

Proofpoint’s internal data supports this pattern. Its systems record that even in firms with strong policies, a handful of people are often responsible for repeated leaks. The company says the most common cause is simple error, such as sharing files to the wrong channel or emailing information to unintended contacts. In many cases, these mistakes go unnoticed until damage has already been done.

The amount of information under management is adding to the pressure. Among large enterprises with more than ten thousand staff, forty-one percent now store over a petabyte of data. Nearly a third saw their total data increase by thirty percent or more within a year. For smaller firms, cloud use is expanding at a similar pace. The study found that forty-six percent of organizations view data spread across cloud and hybrid platforms as their main problem. Almost a third said outdated or duplicated data creates risk by increasing the number of files that need to be monitored. Proofpoint’s analysis of major cloud systems revealed that about twenty-seven percent of stored material is abandoned and no longer used.

Artificial intelligence is introducing a second layer of risk. Many companies have deployed generative tools and automated agents without enough oversight. Two out of five respondents listed data leaks through AI tools among their top concerns. Forty-four percent admitted they lack full visibility of what these systems can access. Roughly a third said they were worried about automated agents that operate with high-level permissions and can move information without supervision. These views were strongest in Germany and Brazil, where half of surveyed organizations ranked AI data loss as their top security issue. In the United Arab Emirates, forty-six percent said the use of confidential data for model training was their main fear.

The problem is worsened by security operations that are already stretched. Sixty-four percent of organizations rely on at least six different security vendors. This creates overlaps and makes investigations slower. One in five teams reported that resolving a data loss incident can take up to four weeks. Around a third said they do not have enough skilled staff to manage their systems and often depend on partial or temporary support.

Even with these constraints, many companies are beginning to reorganize their security setups. About sixty-five percent are now using AI-based tools to classify data, while nearly six in ten apply automated systems to flag unusual user activity. Half of all respondents believe that a unified data protection platform would help them manage information more safely and allow responsible use of AI.

Proofpoint concludes that organizations can no longer rely on scattered systems or manual monitoring. The combination of growing data stores, increased AI access, and the human element has turned data protection into a continuous process rather than a response to single events. The report suggests that firms will need clearer oversight, simpler toolsets, and stronger control of both human and automated actions to prevent small errors from becoming wide exposures.




Notes: This post was edited/created using GenAI tools.

Read next:

• Why Your Doctor Seems Rushed: The Hidden Strain of Modern Healthcare

• China’s AI Growth Challenges U.S. Supremacy, Nvidia Executive Says
by Irfan Ahmad via Digital Information World

Study Finds AI Can Mimic Grammar but Not Feeling in the Way Humans Communicate Online

Artificial intelligence has become fluent in nearly every structured task it touches. It can compose essays, generate code, and even craft marketing slogans with uncanny precision. Yet, when it steps into the messy world of online discussion, it still sounds slightly off.

A new international study reveals that large language models, despite their sophistication, continue to struggle with the one quality that defines human communication: emotion.

Researchers from the University of Zurich, University of Amsterdam, Duke University, and New York University used a “computational Turing test” to measure how human-like AI text really is. Instead of relying on people’s guesses, they used algorithms to compare linguistic and emotional features across thousands of social media posts from Reddit, Bluesky, and X. The team tested nine open-weight models from families such as Llama, Mistral, DeepSeek, Qwen, and Apertus to see whether machines could truly replicate human tone and spontaneity. The results were clear: AI-generated replies were detected as artificial in 70 to 80 percent of cases, even after applying advanced optimization techniques.

The findings highlight a recurring pattern in today’s large models: they can imitate structure but not sentiment. AI posts tend to follow grammatical rules flawlessly yet sound emotionally muted. They respond with balanced politeness where humans lean into sarcasm, irritation, or humor. Toxicity, often a hallmark of authentic online talk, became the most reliable clue separating people from machines. When classifiers analyzed tone, emotionally flat or overly polite replies nearly always turned out to be AI-made.

Across the three platforms, the differences remained visible. The models performed slightly better on X, where short and stylized exchanges dominate, but stumbled on Reddit, where argument, humor, and personal storytelling mix in unpredictable ways. Bluesky fell somewhere in between. The research also found that smaller non-instruction-tuned versions such as Llama 3.1 8B and Mistral 7B reproduced human style more effectively than fine-tuned variants designed for politeness or safety. Larger models, including Llama 70B, showed no consistent advantage, suggesting that scaling alone doesn’t bring emotional realism.

One of the most striking aspects of the study lies in how emotion and meaning pull in opposite directions. When researchers optimized the models for lower detectability, their semantic similarity with real human replies dropped. In other words, the more human the responses sounded, the less they actually matched what people said. Models trained to be accurate lost some of their conversational rhythm, while those fine-tuned for style began drifting from factual or contextually correct replies.

Attempts to close this gap through prompt design and fine-tuning didn’t produce the expected improvements. Complex strategies such as persona prompting, contextual retrieval, and fine-tuning often made text more uniform and easier to identify as machine-generated. Simple adjustments worked better. Providing stylistic examples or short snippets of authentic replies helped the models capture certain nuances of user language. Even then, emotional expressiveness (especially sarcasm and empathy) remained beyond their reach.

The research also uncovered subtle linguistic fingerprints that persist even after optimization. Average word length, lexical variety, and sentiment polarity continued to separate AI text from human writing. These markers changed shape across platforms, but the emotional gap held steady. When emotion-related terms such as “affection,” “optimism,” or “anger” appeared, they followed mechanical patterns rather than the fluid shifts seen in human exchanges.
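
To give a sense of what such surface markers look like in practice, the Python sketch below computes average word length, lexical variety (type-token ratio), and a crude emotion-word rate for two short texts. The toy lexicon and feature set are assumptions for illustration, not the researchers’ actual classifier.

```python
# Illustrative text "fingerprint" features: average word length, lexical
# variety, and a crude emotion-word rate. The emotion lexicon is a toy list.
from collections import Counter

EMOTION_WORDS = {"love", "hate", "angry", "lol", "wow", "ugh", "amazing", "awful"}

def text_features(text: str) -> dict:
    words = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    words = [w for w in words if w]
    counts = Counter(words)
    return {
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "lexical_variety": len(counts) / len(words),   # type-token ratio
        "emotion_rate": sum(counts[w] for w in EMOTION_WORDS) / len(words),
    }

human = "lol that take is awful, I love how confidently wrong it is"
bot = "That is an interesting perspective. There are valid points on both sides."
print("human:", text_features(human))
print("bot:  ", text_features(bot))
```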

For ordinary readers, these findings explain why AI comments often feel too polished, cautious, or context-blind. They mirror the syntax of online talk without its volatility. That distinction makes AI-generated dialogue easy to spot, even without expert tools. For developers, the study underlines a deeper limitation: current models excel at copying the form of communication but not its intention. True human language involves affective tension, inconsistency, and risk, all qualities machines still handle poorly.

The Zurich-led team’s conclusion is both reassuring and sobering. It shows how far natural language systems have come and how far they remain from sounding truly alive. Despite billions of parameters and countless training samples, today’s chatbots cannot convincingly reproduce the emotional unpredictability of human conversation. They have mastered grammar, but feeling remains out of reach. And for now, that gap ensures the internet still sounds unmistakably human.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• 2025’s Most Common Passwords Show Users Still Haven’t Learned the Cybersecurity Basics

• Scrolling Without Thinking: Data Shows TikTok’s Ease and Accuracy Fuel Addictive Engagement
by Asim BN via Digital Information World

Sunday, November 9, 2025

Scrolling Without Thinking: Data Shows TikTok’s Ease and Accuracy Fuel Addictive Engagement

TikTok has become the default destination for short videos, but its power goes far beyond entertainment. A new Baylor University study shows how the app’s effortless design and algorithmic precision create engagement levels that easily slide into addiction.

Researchers James Roberts and Meredith David asked 555 college students to compare TikTok, Instagram Reels, and YouTube Shorts. They examined three traits that define how these platforms work: ease of use, how accurately they recommend content, and how often they surprise users with new material. These are called technological affordances: in simple terms, the ways a platform invites people to act.

The findings were clear. TikTok scored higher than both rivals on every measure. On average, participants rated TikTok’s ease of use at 6.6 out of 7, while Instagram reached 6.3 and YouTube 5.5. TikTok’s recommendation accuracy averaged 5.9, ahead of Instagram’s 5.0 and YouTube’s 4.5. For surprise and variety, TikTok again led the field with 5.5 compared to 4.8 for Instagram and 4.5 for YouTube. These gaps explain why people often say the app feels effortless and strangely personal.

The research showed that design plays a major role in how people engage. TikTok opens directly into a video that begins playing instantly. No search, no click, no pause. Every swipe feeds the algorithm new data, which quickly learns preferences and fine-tunes what appears next. The system feels intuitive but quietly removes decision-making friction, turning interaction into reflex.

This simplicity matters. The study found that TikTok’s easy interface increases engagement, and that engagement predicts addiction. In plain terms, the more natural the scrolling feels, the harder it becomes to stop. Instagram and YouTube also use recommendation systems, but their design still demands more effort — users must choose a video before watching. That small step slows down the feedback loop and makes self-control slightly easier.

Numbers from the study and outside sources underline the difference. TikTok has 1.6 billion active users who spend about 54 minutes a day watching videos. YouTube Shorts attracts 1.5 billion users averaging 49 minutes, and Instagram Reels, despite reaching a larger audience overall, holds attention for roughly 33 minutes. Roughly one in four TikTok users shows signs of addiction based on the same psychological scale used for gaming and social media studies.

The design rewards the brain with constant small wins. Each new clip either confirms what someone enjoys or surprises them with something unexpected but still satisfying. That mix of predictability and novelty creates a loop of reward and anticipation. The result is what psychologists call time distortion: users think only a few minutes have passed when half an hour has slipped away.

Beyond the numbers, the implications reach into daily life. The more time people spend scrolling, the less they devote to activities that build real connections or require focus. The researchers link heavy use to shorter attention spans, lower self-control, and reduced well-being. The harm isn’t just emotional; it’s about opportunity cost. Every hour spent inside the feed replaces an hour of sleep, study, or face-to-face interaction.

TikTok’s design success also highlights a paradox. What makes it popular is the same thing that makes it difficult to quit. The algorithm grows more accurate the longer someone stays on the app, learning behaviors in fine detail. This tightens the feedback cycle, keeping people engaged even when they intend to stop. Other platforms have tried to copy this structure, but TikTok’s mix of speed, accuracy, and surprise remains unmatched.

The study doesn’t point fingers. It simply shows that engagement is built into the system. When platforms compete for watch time, they naturally evolve toward designs that keep attention locked in. TikTok happens to have refined that formula first and most effectively.

For users, awareness is the only real safeguard. Checking screen-time data or setting reminders may sound simple, but even those small steps can interrupt the scrolling trance. The researchers suggest paying attention not just to how often you use these apps, but how they make you feel afterward.

TikTok’s rise has changed how billions consume video, shaping habits that now feel instinctive. The Baylor study exposes the quiet engineering behind that habit, the combination of ease, accuracy, and novelty that keeps fingers swiping long after intention fades. It’s not magic or mystery. It’s design working exactly as planned.


Notes: This post was edited/created using GenAI tools. Image: Unsplash

Read next:

• Website Loading Animations Work Best At Mid-Range Speeds, Research Finds

• ChatGPT and Copilot Lead the Corporate AI Race as Claude, Perplexity, and DeepSeek Lag Behind
by Irfan Ahmad via Digital Information World

2025’s Most Common Passwords Show Users Still Haven’t Learned the Cybersecurity Basics

Researchers analyzing billions of leaked credentials this year found that users are still clinging to the same weak passwords that have circulated for over a decade. Despite countless warnings, words like “password” and simple number strings remain among the most used combinations online in 2025.

Comparitech’s team examined over two billion real account passwords exposed through data breaches across forums and Telegram channels. The results show a disappointing pattern: “123456” appeared more than 7.6 million times, securing the top spot yet again, followed by “12345678” with 3.6 million and “123456789” not far behind. Simple sequences such as “1234”, “12345”, and “1234567890” continued to dominate the global chart, while “admin” and “password” still ranked inside the top ten.

Despite global breaches revealing billions of passwords, users in 2025 still rely on weak, decade-old combinations.

Beyond these predictable entries, some users added weak variations like “Pass@123”, “P@ssw0rd”, or “Aa123456”. Familiar terms such as “qwerty123”, “welcome”, and “minecraft” also surfaced repeatedly. The word “minecraft” alone appeared around 70,000 times, plus another 20,000 with different letter casing. Among the more regional results, “India@123” stood out, ranking 53rd among the most common passwords.

The data reveals deeper behavioral trends that haven’t shifted much over the years. One quarter of the top 1,000 passwords contained only numbers, while 38.6% featured the sequence “123” and 3.1% included “abc”. Short numeric strings still dominate because they’re easy to remember, yet they remain the easiest to crack. Most passwords analyzed were shorter than recommended: 65.8% had fewer than 12 characters, and 6.9% had fewer than 8. Only a small fraction (just above 3%) stretched to 16 characters or more.

Short Passwords Under 8 Characters Still in Use, Posing Serious Security Risks

Modern brute-force tools exploit that weakness instantly. According to strength estimates from Hive Systems, a password made only of numbers can be broken almost immediately. Add a mix of uppercase and lowercase letters, numbers, and symbols, and a 12-character password could take billions of years to decode. At 16 characters, the cracking time expands to astronomical scales. A 12-digit numeric password, on the other hand, may last only three months before falling to an automated attack, while a 16-digit number-only one might survive a couple of thousand years, proof that even length alone adds considerable resistance.
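
To show where such estimates come from, the short Python sketch below does the underlying arithmetic: the number of possible passwords (character-set size raised to the length) divided by an assumed guess rate. The guess rate here is an illustrative assumption; published figures such as Hive Systems’ depend heavily on the hashing algorithm and attacker hardware, so the exact numbers will differ.

```python
# Worst-case brute-force time = (charset size ** length) / guesses per second.
# The guess rate is an illustrative assumption; real rates vary enormously
# with the password-hashing algorithm and the attacker's hardware.
import string

GUESSES_PER_SECOND = 1e10  # assumed attacker throughput, for illustration only

def crack_time_seconds(length: int, charset_size: int) -> float:
    """Time to exhaust every combination of the given length."""
    return charset_size ** length / GUESSES_PER_SECOND

def pretty(seconds: float) -> str:
    for unit, size in [("years", 31_557_600), ("days", 86_400), ("hours", 3_600), ("minutes", 60)]:
        if seconds >= size:
            return f"{seconds / size:,.1f} {unit}"
    return f"{seconds:,.2f} seconds"

digits = len(string.digits)                                             # 10
mixed = len(string.ascii_letters + string.digits + string.punctuation)  # 94

print("12-digit numeric:  ", pretty(crack_time_seconds(12, digits)))
print("12-char mixed set: ", pretty(crack_time_seconds(12, mixed)))
print("16-char mixed set: ", pretty(crack_time_seconds(16, mixed)))
```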

The recurring issue, however, is repetition. Many users recycle old logins or apply the same structure across multiple accounts. That habit fuels credential-stuffing attacks, where one leaked password can expose several services at once. Security experts continually advise creating unique passwords for every platform, but convenience still outweighs caution for most.

There are simple ways to fix this. A strong password should include at least 12 to 16 characters, mixing symbols and letters in no predictable order. Instead of inventing one manually, users can generate them automatically using free tools like Digital Information World’s Password Generator. This kind of randomness removes human bias and greatly limits exposure. Adding two-factor authentication further reduces the risk of account takeovers even when a password leaks.
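
As a minimal illustration of what such a generator does (this is not the tool linked above, just a generic sketch), Python’s secrets module can produce a random 16-character password in a few lines:

```python
# Minimal random-password generator using Python's secrets module,
# which is intended for security-sensitive randomness.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # output differs on every run
```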

The findings suggest that password hygiene in 2025 remains as careless as ever. Technology keeps evolving, yet human habits seem frozen in place. Until users prioritize security over simplicity, the same familiar strings (123456, admin, and password) will keep returning to the top of the world’s weakest password lists.

Notes: This post was edited/created using GenAI tools.

Read next: Website Loading Animations Work Best At Mid-Range Speeds, Research Finds


by Irfan Ahmad via Digital Information World