Saturday, October 4, 2025

Survey Finds Few Americans Turn to Chatbots for News

Artificial intelligence chatbots are gaining users in the United States, but news is not the main reason people use them. A recent Pew Research Center survey shows that most adults still avoid relying on tools like ChatGPT or Gemini for news updates.

How Often People Use Chatbots for News

Only a small share of U.S. adults report getting news this way. About 2% say they do it often and 7% say they do it sometimes. Another 16% say they rarely use chatbots for news, while 75% say they never do. Fewer than 1% prefer chatbots over other options such as television, websites, or social platforms.

Younger Adults Engage More

People under 50 are more likely to check news with chatbots. Twelve percent in that age group say they use them at least sometimes, compared with 6% of those 50 and older. Younger users also report seeing more inaccurate stories. Nearly six in ten of those aged 18 to 29 say they have come across news from chatbots that seemed wrong, compared with just over a third of those 65 and older.

Mixed Views on Reliability

Experiences with chatbot news vary. Around one-third of users say it is hard to judge what is true, while a quarter say they find it easier to sort out facts. Four in ten are unsure. About half of chatbot news users say they sometimes encounter information they think is inaccurate. A smaller group say they see it often, while another group say it rarely or never happens.

Context in the Media Landscape

Search engines and social platforms remain far more influential for news. A Pew survey last year showed nearly a quarter of U.S. adults often get news through search engines, most commonly Google. TikTok has also expanded its role, with one in five adults now getting news there, up from 3% five years ago.

By comparison, studies of ChatGPT use show that people mainly turn to it for practical help. Many use it for learning, homework, or everyday advice rather than following the news cycle.




Notes: This post was edited/created using GenAI tools.

Read next: 

• TikTok Faces Scrutiny Over Exposure of Minors to Pornographic Content in UK

• AI Chatbots Use Emotional Pressure to Keep People From Logging Off
by Irfan Ahmad via Digital Information World

Gmail Users Face Security Cut-Off as Google Retires Old Email Tools in 2026

From January 2026, Gmail will stop delivering mail through two long-standing features that many people still rely on. The change targets Gmailify, which added Google’s own filters and search tools to outside accounts, and POP, a decades-old protocol that let emails be pulled into Gmail from other providers.

For everyday users, the announcement means some familiar options will simply vanish. Anyone who set up Gmail to fetch messages from Outlook, Yahoo, or other services using POP will lose that connection. Those who upgraded outside accounts with Gmailify will also find that Google’s spam protection and inbox sorting no longer apply. Nothing already stored in Gmail will be removed, but the flow of new mail will break unless settings are updated in advance.

The reason behind this shift lies in security and standards. POP, the older of the two features, dates back to a time when email systems were far simpler and less protected. It sends login details and content in ways that can expose information if not shielded properly, and it has never supported modern safeguards such as multifactor checks. IMAP, which most providers now offer, is more flexible and secure, and Google is steering everyone toward it. Gmailify, on the other hand, was more about convenience than safety, and Google appears ready to retire it in order to streamline Gmail around one consistent model.

For those affected, the fix is not complicated but it requires action. External accounts need to be reconnected using IMAP, which most major services already support. Mobile users can still attach Outlook, Yahoo, or other accounts inside the Gmail app, but the extra Gmail-only perks will no longer be available. People using work or education accounts may also need help from administrators to ensure continuity.

Google’s decision may frustrate those who valued the simplicity of POP or the enhancements of Gmailify, but the direction is clear: Gmail is consolidating around modern protocols and moving away from older systems that no longer meet its security standards. With the deadline set for January 2026, users have a few months to prepare, and those who do not adjust risk finding that their Gmail inbox suddenly goes quiet.


Image: appshunter/unsplash

Read next: Apple’s Removal of ICEBlock Highlights Growing U.S. Government Influence Over Big Tech
by Asim BN via Digital Information World

Friday, October 3, 2025

Apple’s Removal of ICEBlock Highlights Growing U.S. Government Influence Over Big Tech

Apple’s decision to remove the immigration-tracking app ICEBlock from its App Store has placed the United States in a debate more commonly associated with other nations. The move followed a request from the Department of Justice, led by Attorney General Pam Bondi, who argued that the tool endangered federal agents.

ICEBlock, created by developer Joshua Aaron earlier this year, enabled users to crowdsource reports of Immigration and Customs Enforcement activity. Supporters framed it as a form of public accountability, while critics viewed it as a direct obstacle to law enforcement. The app had no Android version because, according to its developers, anonymity and push notifications could not be supported on that platform without maintaining user data. This meant iOS was the sole channel for the service, leaving Apple’s action effectively decisive.

The significance extends beyond the fate of one app. In recent months, the federal government has leaned on Apple and Google twice: once to remove TikTok amid disputes over its ownership, and now with ICEBlock. Each time, the platforms complied. This reliance on private companies to execute policy decisions shows how easily Washington can shape digital access when it chooses to act through the technology sector.

The comparison with Apple’s earlier removals abroad is difficult to avoid. In 2019, the company took down an application used during Hong Kong’s protests after Beijing expressed concern. Similar cases have occurred in Saudi Arabia and Russia, where governments pressured Apple to remove politically sensitive content. Critics now point to the parallels, warning that the United States is adopting tactics it once condemned in other countries.

What makes ICEBlock different from the Hong Kong case is that no alternative path remains open. When HKmap.live was pulled, the service could still be used as a website saved to an iPhone’s home screen. ICEBlock lacks that option, and without an Android version, the removal cuts off all new users. Existing installations continue to work, but updates are blocked and redownloads are impossible.

From Apple’s perspective, the decision follows a long pattern of risk calculation. The company has resisted governments in the past, most notably during its standoff with the FBI in 2016 over access to a locked iPhone, but it has also shown willingness to bend under pressure when the political or commercial costs of resistance appear too high. Observers argue that this case falls into the latter category, particularly given recent security incidents involving federal officers that have heightened sensitivities around public tracking.

The broader issue is structural. Apple’s control of distribution through its App Store gives the government an indirect but powerful lever. When officials apply pressure, Apple has limited room to maneuver without risking confrontation that could damage its business. For critics of concentrated corporate power, this episode reinforces the concern that a handful of firms hold the gateways through which civic information flows.

It also highlights the limits of expecting moral stands from corporations. Their primary obligation lies with shareholders, and their responses tend to align with reputational and financial considerations rather than abstract principles. In practice, that means decisions like the removal of ICEBlock are framed less by questions of rights or liberties, and more by calculations of risk, liability, and long-term business stability.

The outcome is that the government now knows it can lean on large platforms to implement controversial measures without passing new laws. Once such leverage has been demonstrated, there is little reason to assume it will not be used again. Whether that influence remains confined to security matters or extends further into civic and political disputes will determine the long-term consequences of Apple’s decision.


Notes: This post was edited/created using GenAI tools and reviewed by human editor(s) for accuracy. Image: DIW-Aigen.

Read next:

• What Worries Parents The Most When It Comes To Teens And Technology

• Buffer Study Finds X Premium Users Gain Clear Reach Advantage
by Irfan Ahmad via Digital Information World

What Worries Parents The Most When It Comes To Teens And Technology

The mobility of wearable tech, smartphone enhancements, and advances in social media and AI have increased younger generations' technological literacy. Each generation has been more exposed to technology than the last, inside and outside the house. According to a study by Pew Research, 95% of teens have access to a smartphone. Pair that with the fact that many high schools now have kids learning on laptops, and you’ll be hard-pressed to find a time when teens aren’t interacting with some form of technology. Parents have to be more vigilant than ever when it comes to monitoring their teens’ usage of various devices.

A 2025 All About Cookies survey found that screen time reduction is an ongoing challenge for most families. The AAC team asked parents at what age they feel their child should be allowed to use certain technologies, such as AI, own a phone, or manage a public social media account.

Parents' Feelings About Their Teens Using Certain Tech

With the rise of AI chatbots, many people are turning to tools like ChatGPT and Grok for assistance with a variety of things. These tools have their benefits, but usage at a young age could lead to overreliance on them. There have been numerous cases in recent months of students using AI to complete assignments, as well as many people using AI chatbots to replace human interaction. More serious cases involve people turning to AI for mental health advice, which has led to recent lawsuits from parents who lost teens after they relied on ChatGPT in place of a licensed mental health professional.

The same report found that roughly one in five parents surveyed (22%) felt that their teen should never use an AI chatbot like Grok or ChatGPT. Even parents who were more forgiving of AI usage felt that their child should be at least 16 years old before interacting with LLMs, the highest median age among all categories, tied with having a personal social media account.
Parents were more opposed to their teens using AI than to them having their own social media accounts or smart devices, showing that AI use has not yet been accepted by the parent generation as a normal part of the digital world the way social media has.

The Most Dangerous Social Media Platforms According to Parents

While social media has been accepted as something that teens will use at some point, that doesn’t mean parents are without concerns about their teens putting themselves out there on social media platforms.

By a wide margin, the app that parents felt posed the highest risk to their teens was TikTok.

Thirty-eight percent of parents identified the video-based social media platform as the most dangerous. That is more than double the share recorded for any other platform. Snapchat, a messaging and media sharing app, was the second biggest concern for parents, chosen by 14%. This concern could stem from the app's disappearing message system as well as its emphasis on sharing pictures.
For months, it seemed as if TikTok was a privacy concern for many, with various congressional hearings in America calling for it to be banned from U.S. app stores due to how the app collected users' data. It seems that concern has died down in the current news cycle, but parents haven’t seemed to forget, as evidenced by the above data.

Another recent development surrounding TikTok is the executive order signed by the president readying the app to be sold to a group of American investors. The long-theorized TikTok deal could shape how parents view the app moving forward, but as of earlier this month, parents felt that TikTok posed more danger to their teens than any other app by a large margin.

Apps like Discord and Twitch, which are also popular among younger generations, came in quite low on the parent-worry scale, with only 5% of parents feeling Discord was a danger and 2% having worries about Twitch.

Parents Weigh In On Whether Their Teen Is Addicted to Their Phone

TikTok is well known for its addictive algorithm, designed to keep people scrolling and engaged with its platform. This goal of keeping users on their devices could be one reason parents worry about these apps. According to the AACP, children are spending 7.5 hours a day watching or using screens. This prompted the All About Cookies team to see how parents felt about the amount of time their teens were spending on their phones.
Research shows younger generations could use a digital detox, and parents are recognizing a pattern. An alarming 60% say they worry that their teen is addicted to their phone.

With most children averaging 8 hours in school and 7.5 hours on their phones, essentially all of their remaining free time goes to their phones rather than to other activities or time with friends and family. So it comes as no surprise that many parents feel their children may be addicted to their phones.

How Parents Feel About Phone Bans

With many parents fearing that their teen could be addicted to their phone, it's no surprise that many of them support the phone bans cropping up in high schools across America. To date, 14 states have active laws or executive orders that prohibit or limit phone use in schools.

Research indicates two-thirds (68%) of parents support banning phones in high schools to some extent.


Children spend 9-10 months of the year in school, which means parents have next to no control over what they're doing for the majority of the day, most of the year. The lack of insight and control over their child during this time may lead parents to support phone bans in an attempt to keep their teens from becoming distracted in school, as well as limit their time using these devices.

Potentially more striking is that only 13% of parents at least somewhat oppose these phone bans. Many parents choose to give their children a cell phone as a way to stay connected to them during times when they’re apart. 2025 figures reinforce that parents are becoming concerned that their teens’ phone use will hinder their learning. These findings could point to a generational shift not only in current Gen-Z/Alpha teens' phone usage, but in their millennial parents' opinions on phone access as well.

Final Thoughts

The technological shifts of the current generation have parents feeling more concerned than optimistic about their children’s phone usage. Statistics from the survey conducted by All About Cookies show that a majority of parents worry about how their teens interact with technology. That matters, because larger conversations are underway about how to regulate the technologies parents are worried about, from chatbots to the phone bans discussed above. This increased worry could lead parents to take more precautions with their teens' phone usage, such as closer screen-time monitoring or parental control apps. Technological advancements have strengthened communication and technological literacy, but they are also something parents and legislators need to weigh when deciding how to navigate teens' use of tech.

Read next: 

• Study Finds X Premium Users Gain Clear Reach Advantage

• Survey Finds Platforms, Not Governments, Should Decide Online Rules

by Irfan Ahmad via Digital Information World

Meta's Threads Introduces Communities to Group Conversations Around Interests

Meta is adding a new option to Threads that creates communities, spaces built around topics such as sports, music, books, television, and technology. The feature is being tested on the web and mobile versions of the app. It marks the latest step in Meta’s effort to give the platform more structure as its user base grows.

How the feature works

Users can join a community, see posts inside a dedicated feed, and display their membership on their profiles. Each community is listed in the menu so it is easy to move between them. More than 100 groups are already active, including NBA Threads, Book Threads, and Tech Threads. Within these spaces, posts are arranged to highlight material most relevant to the theme, rather than showing a mix of tagged content.

Personalization and design changes


Joining communities also affects the main feed. The app takes those choices into account when recommending other posts. Meta says this should make feeds less random and more focused on what people want to follow. Some communities use custom emoji for likes, such as a basketball symbol in NBA Threads.

Why Meta is adding communities

Threads passed 400 million users this year. Its feed has been criticized for relying heavily on suggested posts from unrelated accounts, which makes it harder for users to keep track of ongoing discussions. Communities are intended to reduce that problem by offering consistent spaces that stay tied to specific subjects.

Comparison with other platforms

The idea is not new. Reddit has long been built on topic-based groups, and X introduced a communities feature in 2021. Meta is now adapting that approach to make Threads more cohesive and to give users more control over the flow of content they see.

Notes: This post was edited/created using GenAI tools.

Read next: Cracking Bcrypt: Is New Gen Hardware and AI Making Password Hacking Faster?

by Irfan Ahmad via Digital Information World

Thursday, October 2, 2025

Cracking Bcrypt: Is New Gen Hardware and AI Making Password Hacking Faster?

In the last two years, the boom of artificial intelligence has resulted in an arms race in computing power, graphics performance, and consumer hardware. While on the surface this sounds beneficial, there is also a flip side. The artificial intelligence boom is also benefiting cybercriminals who aim to hack passwords better, faster, and more efficiently. With new generation hardware, such as Nvidia’s recent 50-series and AMD’s upcoming transition to ‘UDNA’ architecture, high-quality computing is becoming more affordable and more available. As a result, the barrier to entry for cybercriminals is reduced, and hackers are able to run the computations needed to crack passwords much faster and much more often.

What is Password Hashing?

It is standard practice for enterprises to protect their users’ passwords with hashing algorithms. Storing passwords as plain text is considered bad practice, because anyone who can access the database, authorized or unauthorized, can simply read the passwords and take them. Hashed passwords, however, aim to prevent anyone from reading the passwords directly. Think of a hashed password as a mask over the true thing. The only way to recover the true password beneath the mask is to guess it through brute-force techniques.

Brute force hacking is when hackers, with the assistance of powerful hardware, guess possible password combinations through systematic trial and error. Done one guess at a time, this would be impractical because of how long it takes to work through every possible combination; with access to modern hardware, however, hackers can run billions of these computations in parallel, and thus far faster.

The Bcrypt Hashing Algorithm

There are many different types of hashing algorithms, some stronger than others. MD5, for example, is older and considered cryptographically weak, yet it is still frequently cited as one of the most common hash algorithms found in leaks, showing how widely it remains in use despite its shortcomings.

Bcrypt is another hashing algorithm, developed in 1999. It turns a user’s password into a string of characters through a one-way function, meaning the output cannot be reversed back into the original password. When a user logs in, the algorithm re-hashes the submitted password and compares the result to the value already stored to see if they match. Bcrypt also applies key stretching, repeating the hashing work many times so that even short, simple passwords are expensive to attack. In addition, it adds a random piece of data, known as a salt, to each password before hashing, ensuring every hash is unique; the 22-character salt is stored in front of the hash itself. This makes it far harder to guess passwords with dictionaries or brute-force attacks.
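The hash-then-compare flow described above can be sketched in a few lines. Bcrypt itself requires a third-party library, so this minimal example uses Python’s standard-library PBKDF2 to illustrate the same pattern: a random salt per password, a one-way hash, and verification by re-hashing the login attempt. The function names and iteration count are illustrative choices, not part of bcrypt.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt; returns (salt, digest)."""
    salt = os.urandom(16)  # unique per password, analogous to bcrypt's salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-hash the login attempt and compare, as bcrypt does at login."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```

Note that the server never stores or recovers the plain password; it only stores the salt and digest and repeats the one-way computation at each login.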

What also differentiates bcrypt from other algorithms is its cost factor. This value, recorded in the hash string in front of the salt, determines how many rounds of hashing are performed, doubling with each increment, and thus significantly increases the time, effort, and resources required to test each password guess.
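The layout described above, cost factor in front of the salt, salt in front of the hash, can be seen by splitting a bcrypt string in its standard modular-crypt format. The hash below is purely illustrative (it is not tied to any real password), and the parser is a sketch of the format, not a bcrypt implementation.

```python
def parse_bcrypt(hash_str: str) -> dict:
    """Split a bcrypt hash in modular-crypt format into its parts."""
    _, version, cost, rest = hash_str.split("$")
    return {
        "version": version,   # algorithm variant, e.g. "2b"
        "cost": int(cost),    # work factor: 2**cost hashing rounds
        "salt": rest[:22],    # 22-character salt stored in front of the hash
        "digest": rest[22:],  # 31-character hashed password
    }

# Illustrative bcrypt string, not derived from any real password
example = "$2b$12$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy"
parts = parse_bcrypt(example)
print(parts["cost"])       # 12 -> 2**12 = 4096 rounds of hashing
print(len(parts["salt"]))  # 22
```

Because the cost and salt travel with the hash, a server can verify old passwords even after it raises the cost factor for new ones.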

How Does Bcrypt Stand Up Against New Generation Hardware?

While bcrypt hashing is generally considered effective, the boom of artificial intelligence and the increased affordability, capability, and availability of new generation hardware have only improved the performance of brute-force attacks against hashed passwords and datasets. In a recent study by Specops Software, researchers used newer, more powerful hardware to determine exactly how long it takes to crack bcrypt. The findings were compared with a similar study from two years ago, run on weaker hardware, to show exactly how fast hackers are advancing alongside the hardware they use.

In the Specops Software study, a sample of 750,000 hashes was taken from well-known data leaks (RockYou2024, etc.) and subjected to brute-force attacks. The findings showed that the mass investment in artificial intelligence infrastructure by major enterprises has significantly increased the availability of heavy compute hardware. A couple of years ago, one might have expected brute-force attacks to be conducted on hardware like the RTX 4090 graphics card. Today’s RTX 5090 flagship cards are approximately 65% faster against bcrypt hashing.

The study found that short, non-complex passwords could be cracked relatively quickly by both the older, less powerful hardware and the new generation hardware. Passwords like ‘password’, ‘123456’, and ‘admin’ have been easily crackable since the early days of the internet. Unfortunately, they remain very commonly used to this day, because many users, both in and out of the workplace, experience password fatigue from all the unique credentials they must remember across their different accounts.


However, even so, the newer hardware was able to crack slightly more complex passwords much faster. The older study found that bcrypt hashed passwords with 6 or 7 characters that were made of numbers only could be cracked instantly. The new generation hardware, however, was able to instantly crack hashed passwords of 4 to 6 characters including numbers, uppercase, and lowercase.

Password Best Practices

From the study’s findings, we can conclude that the longer and more complex the password, the better. As complexity increases, so does the time it takes to crack a hashed dataset. Once a password exceeds 12 characters and mixes several character types, cracking it becomes practically impossible for hackers with current hardware.
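The relationship between length, character variety, and cracking time is simple exponential arithmetic, which a short back-of-envelope calculation makes concrete. The guess rate below is an assumed round number for illustration only; it is not a figure from the Specops study, and real bcrypt hashing would slow attackers far below it.

```python
def years_to_exhaust(length: int, charset_size: int, guesses_per_second: float) -> float:
    """Worst-case time, in years, to try every combination of a given length."""
    keyspace = charset_size ** length          # total possible passwords
    seconds = keyspace / guesses_per_second    # time to try them all
    return seconds / (3600 * 24 * 365)

RATE = 1e9  # assumed guesses per second; illustrative, not from the study

print(years_to_exhaust(6, 10, RATE))   # 6 digits: cracked in a blink
print(years_to_exhaust(8, 62, RATE))   # 8 letters+digits: days at this rate
print(years_to_exhaust(12, 95, RATE))  # 12 printable chars: millions of years
```

Each extra character multiplies the keyspace by the charset size, which is why length dominates: going from 8 to 12 full-charset characters multiplies the work by 95 to the fourth power, roughly 81 million times.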

For this reason, it is important that individuals and organizations follow a few key practices to ensure layered protection. Passwords should ideally be at least 18 characters long and should include all of the following: lowercase letters, uppercase letters, numbers, and special characters.

Passphrases of at least 18 characters that mix character types (uppercase, lowercase, digits, and special characters) are another strong option. Adding complexity to a passphrase runs counter to the usual advice that length matters more than complexity, but it does make the phrase harder to crack. It is best to avoid lines from songs, poems, and films, while deliberately misspelling a word can be good practice. Organizations may also implement a custom dictionary for their employees that blocks words associated with the organization itself, such as parts of the company name or its products.

The Problem: Known, Compromised Passwords

Implementing strong password protocols is the first step to protecting against the brute force hacking of passwords. However, it is important to understand that once an attacker already has access to a password or dataset in question – whether because of re-use or because it has been leaked through infostealers – it becomes too late. In this case, it does not matter how complex a password is or how well it has been hashed. If someone in an organization reuses passwords across multiple accounts, then their single compromised password could be the difference between an entire company being hacked or not.

Complex hashing protocols should never be considered a replacement for good password security hygiene, which means passwords should be unique and never reused. The Specops Software study, after all, found that the time to crack known, compromised passwords was instantaneous, regardless of the hardware used and regardless of how well they were hashed. To reduce risk, organizations and individuals must continuously practice appropriate password hygiene, never allow the reuse of passwords, and stay on guard against their passwords becoming compromised.

Darren James is a Senior Product Manager at Specops Software, an Outpost24 company. Darren is a seasoned cybersecurity professional with more than 20 years of experience in the IT industry. He has worked as a consultant across various organizations and sectors, including central and local governments, retail and energy. His areas of specialization include identity and access management, Active Directory, and Azure AD. Darren has been with Specops Software for more than 12 years and brings his expertise to the support and development of world-class password security and authentication solutions.

Read next: AI Chatbots Use Emotional Pressure to Keep People From Logging Off


by Web Desk via Digital Information World

Israel Pours Millions Into AI and Influencer Campaigns to Shape Online Narratives

Israel is putting significant money into shaping how it appears on digital platforms and in artificial intelligence systems. Documents filed under US foreign agent rules show contracts worth millions aimed at building online campaigns, working with influencers, and even steering the way tools like ChatGPT respond to questions.

One of the biggest deals, as per ResponsibleStateCraft, involves Clock Tower X, a US firm linked to former Trump campaign manager Brad Parscale. The company has a $6 million contract to produce material for Israel. At least four-fifths of what it creates must focus on younger audiences using TikTok, Instagram, YouTube, podcasts, and other channels. Targets in the contract require at least 50 million impressions each month.

Part of the plan is to build websites that feed into the data used by AI systems, so that responses to political subjects reflect positions that Israel wants highlighted. To help the material rise in search results, Clock Tower is using MarketBrew, an AI platform that predicts how Google and Bing rank content. The contract also gives the firm scope to place narratives through Salem Media Network, a conservative Christian broadcaster in the US where Parscale now serves as chief strategist.

The filings say the project is framed as a campaign against antisemitism. Few details are given about the specific themes of the material, but Israel’s foreign ministry is closely involved, with senior adviser Eran Shayovich named as the main contact. He has previously described his work as expanding Israel’s public diplomacy under a project labeled 545.

Alongside this effort, a separate program has paid social media influencers large sums to post supportive content. Invoices from Bridges Partners, another firm linked to Israel’s ministry, show that around $900,000 was budgeted between June and November. After production and legal costs, more than half a million dollars went directly to influencers. The documents suggest each post on platforms like TikTok or Instagram brought in between $6,000 and $7,000 for those taking part. The campaign, called the Esther Project, was designed to reach Western audiences through lifestyle-style media.

Other moves point to wider spending. In June, Google began a $45 million advertising program on behalf of Israel’s prime minister’s office. The ads, spread through YouTube and the company’s display network, were listed as government-backed public relations. TikTok also recently hired Erica Mendel, a former Israeli army instructor and US State Department contractor, to oversee its hate-speech policy, raising questions about possible alignment with Israel’s approach.

All of this comes at a time when US polling shows weakening support for Israel. A Gallup survey over the summer found that only nine percent of Americans aged 18 to 34 backed Israeli military actions in Gaza. A New York Times and Siena poll later showed more respondents supporting Palestinians than Israel for the first time in that survey’s history. Quinnipiac University found that fewer than half of Americans think supporting Israel is in Washington’s interest, while only one in five hold a favorable view of Prime Minister Benjamin Netanyahu.

Netanyahu has underlined how vital he sees online communication in this struggle. He has said digital platforms are central to influencing opinion, comparing them to weapons that replace older tools of conflict. Investors close to Israel, including Oracle founder Larry Ellison, are also involved in bids to buy TikTok, a platform Netanyahu has suggested could become a decisive tool in shaping perception.

Taken together, the contracts show how Israel is concentrating resources on digital space, mixing influencer partnerships, targeted media buys, and AI-driven search manipulation. The effort reflects both the scale of its investment and the challenge it faces with younger audiences, where opinion polls reveal attitudes have shifted sharply.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Meta Denies Microphone Tracking As It Expands AI Ad Targeting
by Irfan Ahmad via Digital Information World