Friday, August 1, 2025

Tech Debt and Brand Trust: Travis Schreiber on Why Old Systems Erode Your Reputation

Your tech stack isn’t just about productivity. It’s tied directly to how customers see your business. Slow systems, broken workflows, and outdated tools frustrate users and quietly erode trust. Over time, these problems add up, and they show up in reviews, complaints, and even security risks.

Travis Schreiber, Director of Operations at Erase, has seen this play out repeatedly. He’s spent years helping companies connect their backend processes with their reputation strategies. “Most of the time, people don’t think about how their tech impacts perception until it’s too late,” he says. “You get a few bad reviews because your customer portal is clunky or an integration fails, and suddenly it’s a pattern that anyone Googling you can see.”

Here’s how old tech stacks chip away at trust, why it matters, and what businesses can do to fix it.

Tech Debt Isn’t Just Internal

When most teams talk about tech debt, they treat it as an internal issue: an IT headache or a project they’ll get to later. But customers notice it long before leadership does.

A 2024 Salesforce study found that 88% of customers say experience matters as much as the product itself. Laggy checkout flows, outdated design, or broken automations don’t just annoy people; they push them toward competitors.

Schreiber recalls working with a mid-sized car insurance company that ran on a legacy billing system. “It was fine until it wasn’t,” he says. “When their invoices started going out late, support tickets piled up, and people started posting screenshots of errors on social media. It wasn’t just about fixing the billing tool anymore. It became a reputation problem.”

How Security Risks Amplify

Old tools aren’t just clunky; they’re vulnerable. Legacy systems often miss modern security patches or require custom fixes that get deprioritized.

“Outdated CRMs are one of the biggest risks we see,” Schreiber notes. “We had a massive healthcare client whose internal communication platform accidentally indexed private internal chat logs on Google.”

The reputational damage from a single breach can outlast the technical fix. According to IBM’s 2023 Cost of a Data Breach Report, 51% of consumers say they won’t do business with a company after a breach.

Customers rarely care if the root cause was outdated middleware or an API misconfiguration. They care that their data wasn’t safe, and they’ll say so publicly.

The Link Between Tech and Perception

Even simple annoyances tie back to brand trust. Poor mobile optimization, email errors from bad automations, or slow response times due to clunky ticketing systems all create a paper trail online.

“Negative reviews almost never say, ‘Your backend API failed,’” Schreiber explains. “They say, ‘I couldn’t log in,’ or, ‘They didn’t get back to me for a week.’ The tech problem turns into a trust problem instantly.”

Over time, these touchpoints stack. You don’t just lose a sale. You lose credibility. Search results start surfacing complaints. Prospects see screenshots in forums. AI summaries and reputation tools pick up that chatter.

Fixing Tech Debt Before It Hits Reviews

The good news: this isn’t just an IT problem. It’s operational. It’s fixable if you treat tech debt as part of brand protection, not a separate track.

1. Audit Your Stack

Review every tool and integration that touches customers. “Look at it like a customer would,” Schreiber says. “Sign up for your own service. Click every email. Use your own support system. If it feels slow or clunky, they feel it too.”
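One way to make that walkthrough repeatable is to script the slow-page check. Here is a minimal sketch in Python, assuming the requests package is installed; the URLs and the two-second threshold are placeholders to swap for your own endpoints and tolerance:

```python
# Minimal latency audit for customer-facing pages (sketch).
# The endpoint list and threshold below are illustrative placeholders.
import requests

ENDPOINTS = [
    "https://example.com/login",
    "https://example.com/checkout",
    "https://example.com/support",
]

SLOW_THRESHOLD_SECONDS = 2.0  # tune to what your customers will tolerate

for url in ENDPOINTS:
    try:
        response = requests.get(url, timeout=10)
        elapsed = response.elapsed.total_seconds()
        flag = "SLOW" if elapsed > SLOW_THRESHOLD_SECONDS else "ok"
        print(f"{flag:>4}  {elapsed:5.2f}s  HTTP {response.status_code}  {url}")
    except requests.RequestException as exc:
        print(f"FAIL  {url}: {exc}")
```

Run on a schedule, a check like this turns the slowness customers feel into a number your team can track and escalate.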

2. Prioritize Patches Over New Features

Don’t ignore updates for the tools you already use. Companies obsess over adding flashy features while their login process still takes 45 seconds to load. Fix the basics before building anything new.

3. Secure Automations

Automated workflows save time, but unsecured or misconfigured ones expose data. Audit permissions and remove any stale connections.
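What a stale-connection audit might look like in practice: a minimal sketch, assuming your workflow tool can export a connection inventory (here a hypothetical connections.json where each entry carries a name and an ISO-8601 last_used timestamp with a timezone offset):

```python
# Flag automation connections that haven't been used recently (sketch).
# Assumes a hypothetical export of the form:
# [{"name": "billing-webhook", "last_used": "2025-05-01T12:00:00+00:00"}, ...]
import json
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)  # illustrative cutoff
now = datetime.now(timezone.utc)

with open("connections.json") as f:
    connections = json.load(f)

for conn in connections:
    last_used = datetime.fromisoformat(conn["last_used"])
    if now - last_used > STALE_AFTER:
        print(f"STALE: {conn['name']} last used {last_used.date()}, review or revoke")
```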

4. Embed Reputation Monitoring

Set up alerts for complaints about broken systems. Tools like Brand24 or even simple Google Alerts help you catch issues early. If your billing portal is glitching and three people mention it on Reddit, you want to know before it’s on page one of your search results.
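Google Alerts can deliver to an RSS feed, which makes this easy to script. A minimal sketch using the feedparser package, with a placeholder feed URL and an illustrative keyword list for triage:

```python
# Poll a Google Alerts RSS feed and surface likely systems complaints (sketch).
import feedparser

ALERT_FEED_URL = "https://www.google.com/alerts/feeds/YOUR_FEED_ID"  # placeholder

COMPLAINT_KEYWORDS = ("error", "down", "broken", "glitch", "can't log in")

feed = feedparser.parse(ALERT_FEED_URL)
for entry in feed.entries:
    # Match complaint-style language in the headline or summary.
    text = f"{entry.title} {entry.get('summary', '')}".lower()
    if any(word in text for word in COMPLAINT_KEYWORDS):
        print(f"Possible issue: {entry.title}\n  {entry.link}")
```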

Bake Reputation Into Operations

Tech debt isn’t just about code. It’s about how your operational processes either protect or damage your reputation.

Automate Review Monitoring

If a system failure triggers a wave of bad reviews, you should know immediately. Integrate review tracking into your workflows. Assign someone ownership of responding quickly with context and resolution.

Standardize Communication Scripts

When tech fails, the response matters as much as the fix. Build scripts for customer-facing teams that explain outages or errors clearly. “The worst thing you can do is go silent,” Schreiber says. “Even a quick post saying, ‘We know, we’re fixing it,’ buys you goodwill.”

Document and Train

Tech fixes don’t stick if your team doesn’t know how to use them. Build simple documentation, and train staff on every major system.

Why Reputation Starts With Infrastructure

Reputation management is often seen as PR. In reality, it’s operational. The tools you use and how you maintain them directly shape how customers talk about you.

“You can spend six figures on brand campaigns, but if your login page times out, none of that matters,” Schreiber says.

Modern search amplifies this. AI summaries and review aggregators don’t care how strong your marketing is; they scrape whatever complaints or praise are most visible. If old tech is creating new problems, that’s what will surface first.

The Bottom Line

Your tech stack isn’t invisible. Customers feel it every time they interact with your business. When outdated systems or ignored fixes get in the way, they don’t just hurt efficiency. They quietly chip away at trust.

By treating tech maintenance and process design as part of reputation management, businesses can stay ahead. Audit systems, fix what customers feel first, and embed safeguards that keep problems from leaking into public view.

Because once it’s out there, it’s not just an IT ticket; it’s a Google result. And that’s a much harder fix.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: 

• AI Models Write Code That Works, But Often Miss Security Basics

• Cybercrime Grows More Aggressive in 2025 as Identity Becomes a Central Target
by Asim BN via Digital Information World

Cybercrime Grows More Aggressive in 2025 as Identity Becomes a Central Target

A spike in infostealers and ransomware reveals how stolen credentials are now central to large-scale attacks.

The first half of 2025 has brought major changes to how cyberattacks are launched and carried out. Threat groups have begun relying more on tools that steal personal data from browsers and devices. This shift has helped them carry out more damaging attacks against companies and individuals around the world.

Flashpoint's latest analysis shows that identity-based intrusions have become the fastest-growing threat this year. The report tracks a steep rise in the use of infostealers, malware designed to extract saved passwords, login cookies, and payment details from everyday devices. These tools now serve as a starting point for more serious threats like ransomware and large data breaches.

Over six months, the number of stolen credentials has jumped by 800 percent. Analysts say this has allowed attackers to move quickly between targets using stolen access, rather than trying to break through with brute-force methods.

How infostealers shape larger breaches

Once malware gains access to a browser or device, it often retrieves saved account data. This can include email logins, work credentials, and session tokens. If attackers can get hold of even a single active session, they may be able to bypass multi-factor security or access internal systems without raising alarms.

This kind of access lets attackers explore deeper layers of a network. In many recent cases, a single compromised device has led to a full-scale breach of a company’s data.

Flashpoint identifies several malware types leading the surge. Lumma and RedLine remain the most active, although other families like StealC and Acreed are appearing more often on cybercrime forums. The tools are often sold at low cost and used repeatedly across different targets.

Ransomware spreads through the same infection points

The same stolen credentials often help ransomware groups break into corporate systems. This type of malware locks files, demands payment, and can also leak sensitive data. Since January, the number of ransomware incidents has risen 179 percent.

Many of these attacks trace back to earlier infostealer infections. The initial access gained through stolen logins often opens a path to internal systems, where attackers then install ransomware. This two-step approach has become a common pattern this year.

Security teams now face threats that combine multiple tools and stages, rather than relying on a single method. The ability to link these threats early is becoming essential.

Public vulnerabilities grow faster than defenses can keep up

Another issue making things worse is the sharp rise in known software flaws. Since February, public disclosures of vulnerabilities have grown by 246 percent. Exploit code for many of these flaws is widely available, up 179 percent over the same period.

Researchers also point to a major lag in public databases that track vulnerabilities. Tens of thousands of issues remain unanalyzed in sources like the National Vulnerability Database. This leaves security teams without critical information as they try to manage growing exposure.

The speed at which attackers take advantage of newly published exploits continues to shrink. In some cases, malware begins using a vulnerability within hours of it appearing online.

Data breaches reflect a wider failure to contain access

Data breaches have also spiked. So far in 2025, their frequency has climbed 235 percent. In 78 percent of the cases tracked, attackers got in through unauthorized access, most often by using stolen credentials.

The United States has been the most affected, with two-thirds of global breaches recorded there. Much of the stolen data includes personal information, which is often used for fraud or resold on dark web platforms. Once released, this kind of data tends to circulate for years.

Some of the biggest breaches in recent months have been linked to logs from infostealers. These logs are often posted on underground sites shortly after collection and then reused in follow-up attacks. Industries like healthcare, telecommunications, and legal services remain especially vulnerable.

Geographic spread of infostealer infections


Flashpoint’s research lists the countries where the most infostealer logs have been uploaded. India ranks first, followed by the United States, Brazil, and Indonesia. Other nations with high infection rates include Pakistan, Mexico, Egypt, the Philippines, Vietnam, and Argentina.

These countries have become prime sources of stolen credentials now circulating online. In many cases, the malware behind these logs was never detected by the original user.

Broader patterns in a shifting landscape

This year’s attacks show a move toward layered threats. A typical campaign might begin with a cheap malware infection, move into credential theft, and end in ransomware or data extortion. This structure allows attackers to cause more damage without increasing effort.

At the same time, the boundary between cybercrime and global conflict is becoming less clear. Threat actors tied to state interests or working in politically unstable regions are using similar tools and tactics. This makes it harder for defenders to separate criminal groups from state-aligned campaigns.

Security teams now face both technical and strategic challenges. Many organizations are still focused on incident response, but that approach no longer matches the speed or complexity of current threats.

A shift toward early detection, attack surface reduction, and more timely intelligence will be critical for stopping these threats before they spread.

Notes: This post was edited/created using GenAI tools. 

Read next: AI Models Write Code That Works, But Often Miss Security Basics


by Web Desk via Digital Information World

OpenAI Pulls ChatGPT Search Feature After User Chats Appear in Google

OpenAI has taken down a feature that briefly made ChatGPT conversations searchable on Google after users began discovering that private discussions, some containing personal or sensitive corporate information, had become publicly visible online. The change followed a surge in attention on social media where people shared how entire conversations, including prompts and responses, could be found using a targeted search format. These shared links had been generated by users through ChatGPT’s own interface, where they had the option to make a conversation public. Once the public link was placed somewhere accessible to search engines, it became indexed just like any other webpage.

Although the feature required an explicit opt-in, many users either misunderstood its reach or failed to realize that clicking a simple checkbox would allow search engines to index the full content of a chat. As a result, people found examples that revealed names, job roles, locations, and even internal planning notes. In some cases, the content involved real business data, including references to client work or strategic decisions. One widely circulated example showed details about a consultant, including their name and job title, which had been picked up by Google's crawler and appeared in the open web results.


The company moved quickly to pull the feature within hours of the issue gaining traction online. But the incident highlighted a growing tension between collaborative AI use and the risks that come with publishing generated content, especially when privacy expectations are not made fully clear at the point of sharing. Even though the interface technically required users to go through multiple steps to make a conversation shareable, the design failed to convey the full extent of the consequences. A checkbox and a share link proved too easy to overlook, especially when users were focused on sharing something helpful or interesting.

Image: @wavefnx / X

This event is not the first time AI tools have allowed sensitive content to leak into public view. In previous cases, platforms like Google’s Bard and Meta’s chatbot tools also saw user conversations appear in search results or on public feeds. While those companies eventually responded with changes to their systems, the pattern remains familiar. AI products often launch with limited controls, and only after issues arise do the developers begin closing the gaps. What’s become clear is that privacy needs to be a core part of the design process rather than an afterthought fixed in response to public backlash.

In this case, OpenAI stated that enterprise users were not affected, since those accounts include different data protections. But the broader exposure still created risks for regular users, including those working in professional settings who use ChatGPT for early-stage writing, content drafting, or even internal planning. If a team member shared a conversation without understanding the public nature of the link, their company’s ideas could have been made accessible to anyone who knew where to look.

Some experts urged users to take action by checking their ChatGPT settings and reviewing which conversations had been shared in the past. Users can visit the data controls menu, view the shared links, and delete any that remain active. Searching for a brand name using Google’s “site:chatgpt.com/share” format can also reveal whether any indexed material is still visible. In many cases, people shared content innocently, but once those links are indexed, they become part of the searchable web until removed manually or delisted by the platform.
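For anyone auditing old links, a quick status check can confirm whether a shared page is still publicly reachable. A minimal sketch, assuming you have copied your share URLs out of the data controls menu (the IDs below are placeholders):

```python
# Check whether previously shared chat links still resolve publicly (sketch).
import requests

SHARED_LINKS = [
    "https://chatgpt.com/share/example-id-1",  # placeholder
    "https://chatgpt.com/share/example-id-2",  # placeholder
]

for url in SHARED_LINKS:
    try:
        status = requests.get(url, timeout=10).status_code
        state = "still public" if status == 200 else f"HTTP {status}"
        print(f"{url}: {state}")
    except requests.RequestException as exc:
        print(f"{url}: request failed ({exc})")
```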

The situation also pointed to a wider challenge for companies adopting generative AI tools in business settings. Many organizations have begun integrating AI into daily work, whether to brainstorm marketing strategies or write client-facing drafts. But they may not always realize that a single act of sharing could expose internal knowledge far beyond its intended audience. Without strict internal policies or staff training, mistakes can happen quickly and remain unnoticed until they show up in a search result.

OpenAI’s swift response likely limited the spread of these conversations, though some content had already been cached or archived by the time the feature was taken offline. What remains uncertain is how many users were affected, or how widely their shared material circulated before the links were removed. Regardless of the numbers, the case has prompted new questions about how AI tools handle public visibility, and whether existing safeguards are enough to protect users from accidental exposure.

While the original intention behind the share feature may have been to encourage collaboration or allow useful chats to be viewed by others, its rollout showed how easily privacy can be compromised when interface design does not match the complexity of real-world use. Even when technical consent is given, it may not be informed. That gap between what users intend and what systems permit has now created a reputational cost for the company, and a learning moment for anyone deploying AI at scale.

For businesses, the incident serves as a reminder that data shared with AI tools should be treated with the same care as internal documents. Conversations with chatbots may feel informal or experimental, but once shared, they can end up outside the company’s control. To avoid similar issues, enterprises should conduct audits, clarify usage policies, and establish guardrails before allowing employees to rely on AI for confidential or strategic work. The risks are not always visible at first, but when exposed, the impact can be immediate and difficult to reverse.

This episode has shown how even a small checkbox can open the door to unintended consequences. As AI tools become more powerful and widely used, both companies and users will need stronger frameworks to ensure that privacy, once granted, isn’t quietly lost along the way.

Notes: This post was edited/created using GenAI tools.

Read next: AI-Powered Apps Are Redefining Mobile Categories in 2025
by Web Desk via Digital Information World

Thursday, July 31, 2025

AI-Powered Apps Are Redefining Mobile Categories in 2025

Artificial intelligence is now a regular part of mobile software. In 2025, more app developers have built AI features into their tools. Users have responded by applying those features across a wide range of tasks, many outside traditional work or learning environments.

ChatGPT Sees Broader Use Outside Work Hours

One sign of this shift is how people are using AI assistants like ChatGPT. Last year, most usage happened during weekdays. This was common among tools designed for jobs or school. But this year, ChatGPT's weekend usage has grown. The pattern looks more like general search apps, which people rely on whether they are working or not.

Lifestyle Prompts Overtake Educational Tasks


Prompt data shows a rise in lifestyle and entertainment queries. Topics such as wellness, travel, shopping, and meal planning have grown. These prompt types now take up a larger share compared to 2024. Educational and technical queries remain popular, but their share has declined. People are using AI in more personal ways.

Functional Apps Face Pressure from General AI Tools

Many category-specific apps are now competing with flexible AI platforms. Apps for budgeting, nutrition tracking, and study help are seeing some users shift to general chatbots. These chatbots can answer many kinds of questions, so people explore them for a broader range of needs.

AI Mentions Surge Across App Stores

Thousands of mobile apps launched with AI-related keywords this year. Software tools remain the most active category for these updates. Apps in wellness, employment, learning, and finance also added AI references. Developers are adapting to the demand for smarter app experiences.

Mixed Outcomes for Traditional App Subgenres

Some mobile app subgenres are growing despite the presence of AI competition. A few are even seeing better performance on user metrics. The difference seems to depend on how they use AI. Apps that add task-specific AI tools tend to hold their ground. For example, nutrition apps now include image-based calorie tracking powered by AI. This gives users quicker results than traditional logging.

Tailored Features Help Apps Stay Competitive

Generic chatbot tools can handle many tasks, but often miss the fine detail users expect from niche apps. To compete, developers are building features that solve narrow problems more efficiently. Apps that respond to this need may retain their users. Those that don’t may be replaced.

H/T: Sensor Tower.

Read next: ChatGPT and Google AI Give Different Answers to the Same Questions, Study Finds


by Irfan Ahmad via Digital Information World

YouTube Relaxes Its Rules on Swear Words in Early Video Content

YouTube has loosened its restrictions around how bad language affects video monetization, making it easier for creators to earn money even if their clips include profanity in the opening seconds. The company has updated its Advertiser Friendly Guidelines, easing one of the more contentious policies that had caused frustration among content creators in recent years.

Reversal of Previous Tightening

This change rolls back a stricter rule introduced in 2023, which had made any video that featured strong language in its first few seconds ineligible for full advertising revenue. That earlier revision had followed an even broader update in 2022, when YouTube first tightened its rules to limit the use of violence and offensive language in monetized content. The policy especially impacted gaming creators, whose streams often include in-game violence and spontaneous speech that could contain swear words.

After a wave of criticism, YouTube softened its approach slightly in 2023 by narrowing the restriction window to the first seven seconds of a video. But even that adjustment didn’t fully address creators’ concerns, as many videos were still receiving limited monetization, marked by the platform’s yellow icon that indicates reduced ad income.

What’s Changing Now

Under the latest update, videos that include profanity within the first seven seconds will no longer be automatically penalized. This means that creators can now retain full advertising revenue, even if strong language appears near the start of their content. While this adjustment makes the rules more flexible, it does not entirely lift all limits related to language.

Creators should still be aware that titles and thumbnails containing bad language will continue to trigger monetization restrictions. In addition, if profanity appears too frequently within a video, even if the early seconds are allowed, monetization may still be reduced under the platform’s guidelines.

The Role of Ad Placement

The earlier policy around bad language in a video’s opening moments had mainly stemmed from concerns about how close a brand’s advertisement appeared to offensive material. Advertisers typically prefer a buffer between their message and any strong language. But changes in advertiser tools now allow brands to fine-tune where and how their ads appear, including setting limits on content sensitivity. That flexibility has given YouTube more room to relax its own rules without risking ad relationships.

By shifting the responsibility onto advertisers to control the kinds of content they want to appear next to, YouTube can now allow creators more freedom in how they speak, without necessarily hurting its advertising model.

Limitations Still Apply

Although this update gives creators more breathing room, it’s not a free pass for excessive swearing. Videos that rely heavily on profanity, or repeatedly use strong language throughout, may still see limited monetization. And inappropriate language in text elements like video titles and thumbnails remains a red flag for YouTube’s ad systems.

So while early swearing will no longer automatically lead to reduced income, creators still need to moderate how much strong language they use if they want to fully benefit from the change.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen

Read next: Meta Reshapes AI Strategy Amid Talent Surge, Billion-Dollar Bets, and Rising Caution on Openness
by Web Desk via Digital Information World

Meta Reshapes AI Strategy Amid Talent Surge, Billion-Dollar Bets, and Rising Caution on Openness

Mark Zuckerberg has introduced new language around Meta's AI development plans that indicates a potential shift in the company’s direction. Over the last several months, Meta has invested heavily in both infrastructure and talent. At the same time, internal communications and earnings remarks have started to reflect a more cautious approach to openness, particularly as the company begins building what it calls “personal superintelligence.”

A recently published note from Zuckerberg laid out this vision. It described a future where highly capable AI systems assist individuals in their personal goals. He did not define superintelligence in technical terms or explain how such systems would be built. Instead, the memo outlined a broad idea of personalized tools that could support users across creative, social, and productivity tasks.

This approach contrasts with other companies that are developing general-purpose models aimed at automating workflows. Zuckerberg's message suggested that Meta wants to keep the focus on individual empowerment. This vision also connects to the company’s longer-term goal of moving away from reliance on mobile devices. Meta has spent years developing smart glasses and has signaled interest in making them central to future computing experiences.

Talent Shifts and Billion-Dollar Recruiting

Meta has increased its hiring of AI experts since early this year. In one move, it brought in the founder of Scale AI through a $14.8 billion investment. That deal placed Alexandr Wang as the company’s new Chief AI Officer. The company has also recruited individuals from OpenAI and Apple, including engineers who contributed to major large language models.

Recent reporting indicated that Meta extended multiyear offers worth hundreds of millions, and in one case more than $1 billion, to staff from Thinking Machines Lab. The startup was founded by a former OpenAI executive. Meta did not confirm the financial details but acknowledged the interest in expanding the team.

Alongside those moves, Meta has committed over $72 billion to AI infrastructure. That includes compute power, model training capacity, and scaling systems. These steps suggest the company is preparing to build more advanced AI models, even as it evaluates how much of that work to make public.

Open Source Remains Unclear

For years, Meta positioned open-source AI as a safer and more inclusive approach. Company leaders argued that transparency could prevent misuse and help governments understand how models work. More recently, Zuckerberg indicated that the company may not share some of its largest models in the future.

His recent statements during Meta’s second-quarter earnings call included references to safety and practicality. According to him, some of the models now being developed are so complex that releasing them would have little benefit for outside developers. In some cases, he added, sharing might give an advantage to competing firms.

These comments followed a memo that suggested Meta would remain a leader in open-source work but would be more selective about what gets released. While this is not a reversal of past policy, it shows a growing awareness inside Meta that some advanced AI models may carry risks that make full transparency harder to justify.

Usage Gains Tied to AI Integration

Meta’s recent product performance also reflects increased use of AI to drive engagement. Time spent on Facebook rose 5 percent in the second quarter. Instagram saw a 6 percent gain. Both trends were attributed to updates in recommendation systems, which now use large language models to present more relevant content.

The company also noted that video viewership grew by 20 percent over the past year. Instagram played a large role in that growth, as Meta has focused on promoting original material and improving content ranking methods. Threads, its text-based social app, has seen an increase in daily use following the integration of new AI tools.


All in all, Meta reported 3.4 billion family daily active people across its platforms in June. That figure included Facebook, Instagram, Messenger, and WhatsApp. It marked a 6 percent increase from the previous year and supported a 22 percent rise in revenue across those apps, reaching $47.1 billion in the quarter.

Broader Shifts in AI Positioning

Zuckerberg’s internal memo came just before Meta’s Q2 earnings report. The timing appeared aligned with the company’s efforts to frame its AI investments to investors. With delays affecting the launch of its larger Llama 4 model, internal reports suggested that Meta’s leadership had been reevaluating its approach. Some concerns were raised about the tradeoff between openness and competitive advantage.

There has also been tension around the slow progress of Meta’s generative AI roadmap. Executives inside the company have reportedly questioned whether its development pipeline can keep pace with external labs. These concerns may have shaped the more cautious stance reflected in the memo and earnings discussion.

At the same time, Meta seems to be preparing for a future in which its computing platforms are less dependent on Apple and Google. Smart glasses, which the company continues to develop, were described as key devices in future AI use. Zuckerberg pointed to this shift as an opportunity to move users toward platforms owned and controlled directly by Meta.

The company’s strategy remains focused on scaling up its AI capabilities, refining its product experience, and adjusting its messaging around transparency. While the long-term details remain unclear, the recent changes suggest that Meta is actively shaping its next phase of AI development around tighter control, personal devices, and internal platforms.

Notes: This post was edited/created using GenAI tools.

Read next: Most Adults Use AI Without Realizing, But True Power Remains Untapped


by Irfan Ahmad via Digital Information World

Wednesday, July 30, 2025

Most Adults Use AI Without Realizing, But True Power Remains Untapped

A new poll has found that most adults in the United States have used artificial intelligence for online searches. Younger people report using it more frequently than older age groups, and for more types of tasks.

Online Search Remains the Main Use

Among all surveyed adults, 6 in 10 said they use AI at least sometimes to find information online. That rate rises to nearly three-quarters for people under the age of 30. Searching is the most common AI-related activity, based on the eight categories included in the poll.

This may understate its true usage, since many search engines now include AI-generated summaries automatically. People may be receiving answers produced by AI without realizing it.

Work-Related Use Is Still Limited

The data also shows that AI has not become a major part of most workplaces. About 4 in 10 adults said they have used AI to assist with work tasks. A smaller share mentioned using it for email writing, creative projects, or entertainment. Fewer than one in four reported using AI for shopping.

Younger users are more likely to include AI in their work. Some use it to plan meals or generate ideas, while others rely on it to help write or code. Still, this type of usage remains less common among the general public.

Generational Differences Are Clear

Younger adults are more engaged with AI overall. Around 6 in 10 of those under 30 said they have used it to brainstorm. Only about 2 in 10 older adults said the same. Daily use for idea generation is more frequent among people in their twenties.

Older users show less interest in applying AI beyond basic information lookups. They tend to avoid using it for more personal or technical tasks.

AI Companionship Is Rare

The least common form of interaction with AI was companionship. Fewer than 2 in 10 adults overall reported using AI for that purpose. Among people under 30, the rate rises to about a quarter.

The survey results suggest that this type of usage remains outside the mainstream. Most people do not view AI as a substitute for personal interaction, although some younger users said they understand why others might explore it.

Overall Usage Remains Focused

The findings indicate that while AI tools have entered public use, they are still seen as limited-purpose systems. Most interactions involve information searches, and regular use beyond that is less frequent. Adoption has grown, but remains uneven across tasks and age groups.

The poll was conducted by the Associated Press-NORC Center for Public Affairs Research between July 10 and July 14. It included 1,437 adults drawn from a representative national sample, with a margin of error of 3.6 percentage points.


Notes: This post was edited/created using GenAI tools.

Read next: How Hidden Bluetooth and WiFi Signals Let Mobile Apps Track You Indoors
by Irfan Ahmad via Digital Information World