Saturday, July 26, 2025

Think Your Data Stays Private? AI Tools Are Proving Otherwise

AI tools are appearing in nearly every corner of daily life. Phones, apps, search engines, and even drive-throughs have started embedding some form of automation. What used to be a straightforward browser is now bundled with built-in assistants that try to answer questions, summarize tasks, and streamline routines. But these conveniences come at a growing cost: your data.

The requests for access from AI apps have grown broader and more aggressive. Where once people questioned why a flashlight app needed their location or contacts, similar requests are now made under the banner of productivity. Only now, the data these apps ask for cuts far deeper.

One recent case involved a web browser called Comet, developed by Perplexity. It includes an AI system designed to handle tasks like reading calendar entries or drafting emails. To do that, it asks users to connect their Google account. But the list of permissions it seeks goes far beyond what many would expect. It asks for the ability to manage email drafts, send messages, download contact lists, and view or edit every event across calendars. In some cases, it even tries to access entire employee directories from workplace accounts.

Perplexity claims that this data remains on a user’s device, but the terms still hand over a wide range of control. The fine print often includes the right to use this information to improve their AI systems. That benefit flows back to the company, not necessarily to the person who shared their data in the first place.

Other apps are following similar patterns. Some record voice calls or meetings for transcription. Others need access to real-time calendars, contacts, and messaging apps. Meta has also tested features that sift through a phone’s camera roll, including photos that haven’t been shared.

The permissions these tools request aren't always obvious, yet once granted, the decision is hard to reverse. From a single tap, an assistant can view years of emails, messages, calendar entries, and contact history. All of that gets absorbed into a system designed to learn from what it sees.

Security experts have flagged this trend as a risk. Some liken it to giving a stranger keys to your entire life, hoping they won’t open the wrong door. There’s also the issue of reliability. AI tools still make mistakes, sometimes guessing wrong or inventing details to fill in gaps. And when that happens, the companies behind the technology often scan user prompts to understand what went wrong, putting even private interactions under review.

Some AI products even act on behalf of users. That means the app could open web pages, fill in saved passwords, access credit card info, and use the browser history. It might also mark dates on a calendar or send a booking to someone in your contact list. Each of these actions requires trust, both in the technology and the company behind it.

Even when companies promise that your personal data stays on the device, the reality is more complicated, as highlighted by u/robogame_dev on Reddit. Most people assume this means photos, messages, or location logs remain untouched. But what often slips under the radar is how that raw information gets transformed into something else, something just as personal.

Modern AI tools extract condensed representations from your data. These might look like numerical vectors, interest segments, or hashed signals. While the raw voice clip or image may stay local, the fingerprint it generates (a voice embedding, a cohort ID, or a face vector) often gets sent back to the server. These compact data points can still identify you or be linked with other datasets across apps, devices, and even companies.
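To make that idea concrete, here's a minimal sketch of the pattern described above. Everything here is illustrative (the function names and the toy "model" are invented for this example, not any vendor's actual pipeline): the raw photo never leaves the device, but the short hashed identifier derived from it does, and that identifier is stable enough to match across services.

```python
import hashlib

def face_embedding(image_pixels):
    """Stand-in for an on-device model: reduces raw pixels to a short
    numeric vector. Real systems use a neural network; this toy version
    just averages blocks of pixels so the example stays self-contained."""
    block = len(image_pixels) // 4 or 1
    return [sum(image_pixels[i:i + block]) / block
            for i in range(0, len(image_pixels), block)]

def cohort_id(embedding):
    """Hash the rounded vector into a stable identifier. The photo stays
    local, but this short string can still be linked across apps."""
    digest = hashlib.sha256(repr([round(x, 2) for x in embedding]).encode())
    return digest.hexdigest()[:16]

photo = [0.1, 0.4, 0.9, 0.3, 0.7, 0.2, 0.8, 0.5]  # fake "image" data
vector = face_embedding(photo)
print(cohort_id(vector))  # only this compact fingerprint is uploaded
```

The point of the sketch is that "on-device processing" and "nothing leaves the device" are different claims: the derived fingerprint is small, cheap to transmit, and still personally identifying.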

Over time, that creates a shadow profile. It doesn’t need your full browsing history or photo albums to be useful. A few attributes, like the categories of content you read, the way you speak, or your heart rate trends, can reveal more than expected. Advertisers, insurers, or third-party brokers may use this information to shape pricing, predict preferences, or infer sensitive traits.

So while on-device processing helps limit exposure, it doesn’t erase the risk. Much like measuring your face without keeping the photo, what gets extracted and exported can still follow you around the digital world.

If an app/tool asks for too much, it may be worth stepping back. The logic is simple: just because a tool can help with a task doesn’t mean it should get full access to your digital life. Think about the trade. What you’re getting is usually convenience. What you’re giving up is your data, your habits, and sometimes, control.

When everyday tools become entry points for deep data collection, it's important to pause and ask whether the exchange feels fair. As more of these apps blur the line between helpful and invasive, users may need to draw that line themselves.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Study Finds Most Marketers Use GenAI, But Few Have Worked with Agentic AI
by Irfan Ahmad via Digital Information World

Study Finds Most Marketers Use GenAI, But Few Have Worked with Agentic AI

Cannes Lions this year marked an important shift in the discourse around AI adoption in marketing: no longer just promises, but actual implementation and performance. A common theme across panels and brand showcases was that generative AI should not be separated from creative and strategic work.

From efficient content creation to personalizing campaigns at scale, AI is clearly helping marketers achieve unprecedented productivity. But how well do they really understand their tools? What comes next for AI? And are they ready for this new wave?

To answer these questions, Outcomes Rocket conducted a survey of 1,299 marketers across industries, roles, and organization sizes to gauge the state of the field under this new wave of AI. The findings show that generative AI now assists a large share of marketers' daily work; however, only a third of respondents have had exposure to other forms of AI, such as agentic AI. The survey revealed not only the positive side of this adoption but also fears, particularly among newcomers, about job security in the field over the next two or three years.

Widespread AI Adoption in Marketing

The data showed that the vast majority of participants (89.5%) use AI in their work. This holds across industries, job levels, and organization sizes, and is especially true of small businesses that lack large marketing budgets but must compete with big corporations.


Generative AI is dominating the AI market in marketing, with 93.5% of marketers utilizing the tools for content creation, including blog posts, advertisement copy, social media content, and other creative brainstorming purposes.

Standing at the top of the list is ChatGPT, with 94.8% of users choosing it as their main platform. This result is expected due to its ease of use, versatility, real-time interaction, and the ability to generate output across a wide range of formats and tones. The biggest differentiator for ChatGPT is that OpenAI was the first mover in the game, allowing this tool to be tested and experimented with by the public before any other competitor rolled out their own. As a result, ChatGPT will always be mentioned in any AI conversation, so naturally, it became the first pick for many users when it comes to generative AI.

Early Stage but Growing Interest in Agentic AI

However, generative AI is not the only answer for marketing. Agentic AI could be the path to a fully autonomous solution: a single system capable of running a complete marketing campaign with minimal human intervention, from developing strategy and segmenting the audience to creating content, distributing it across channels, and analyzing performance. The model analyzes historical data and competitor trends to determine the most suitable next action, whether a new campaign, a new ad strategy, or a full round of A/B testing. Unlike traditional automation, agentic AI learns from its results to converge on the most effective course of action.
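The "learns from its results" loop described above can be sketched in a few lines. This is a deliberately simplified toy (an epsilon-greedy bandit over made-up campaign actions with simulated metrics), not how any commercial agentic-AI product is built, but it shows the core act-measure-adjust cycle that separates agentic systems from fixed automation rules:

```python
import random

# Toy "act, measure, learn" loop. Each action is a campaign choice; the
# agent gradually shifts toward whichever action performs best, while
# occasionally exploring alternatives.
actions = ["new_campaign", "new_ad_strategy", "ab_test"]
avg_reward = {a: 0.0 for a in actions}
counts = {a: 0 for a in actions}

def simulated_performance(action):
    # Stand-in for real campaign metrics (e.g. click-through rate).
    base = {"new_campaign": 0.30, "new_ad_strategy": 0.50, "ab_test": 0.40}
    return base[action] + random.uniform(-0.1, 0.1)

random.seed(0)
for step in range(200):
    # Mostly exploit the best-known action; explore 10% of the time.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: avg_reward[a])
    reward = simulated_performance(action)
    counts[action] += 1
    # Incremental average: update the running estimate for this action.
    avg_reward[action] += (reward - avg_reward[action]) / counts[action]

print(max(actions, key=lambda a: avg_reward[a]))
```

A traditional automation rule would run the same action forever; here the choice itself is updated by measured outcomes, which is the distinction the survey's respondents are only beginning to encounter.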

Nonetheless, despite the immense potential, adoption of agentic AI is still at an early stage, with only 33.3% of marketers having experimented with it. The prevailing attitude toward the tool is reserved or simply unfamiliar. That is not necessarily a bad sign; it suggests agentic AI has room for promising growth and could transform the field over the next 12 to 24 months. Once more marketers and organizations have experimented with the model, agentic AI could well become a top contender in marketing technology.

Trust Issues and Training Gaps in AI Use

Accuracy has always been a top concern for every AI model, and marketing applications are no exception. Over 93% of marketers report common problems with AI-generated content: inaccuracy, bias, or irrelevance. As a result, around 70% spend significant time revising or proofreading output before publishing. Surprisingly, despite working alongside AI so frequently, only 42% are confident they can spot AI-generated content. A lack of formal guidance and education may explain this: the survey revealed that 80% of participants have had no formal AI training from their company. The figure not only demonstrates a lack of preparedness in the workforce but also highlights how inefficiently this new tool has been integrated.

Job Security Concerns Are Rising

The survey also explored the sentiment towards the effects of AI regarding employment, which turned out to be quite negative. The pressure is present and high, with nearly 89% of marketers believing that AI will result in job losses in the next two or three years.

The statistics indicate that one-third of marketing activities could be automated in the near future, with junior positions most likely to be affected. Automation mostly targets routine, repetitive work, which is the main responsibility of junior roles, so fears among less experienced marketers about long-term career security are reasonable and expected.

While most respondents believe AI will take over many marketing roles, its advantages are undeniable: 63% view AI as a helpful assistant rather than a potential substitute, one that boosts productivity and cuts time spent on routine work, freeing more time for creative and strategic tasks. Only a small share (16%) believe AI will take away all marketing jobs. And although concern about job security is high, the vast majority (over 70%) have not yet experienced any direct impact from these AI-driven threats. In the big picture, advocacy for AI is stronger, suggesting a transformation of the field for the better rather than a takeover.

Future Outlook: Continued Growth and Investment

Despite the undeniable ambivalence about job security that accompanies this high rate of adoption, marketers remain open and eager to take full advantage of the new technology. Overall sentiment about AI's possibilities is very positive: almost eight out of ten marketers believe generative AI will be the biggest game-changer in the near future, bringing significant change to content creation, audience engagement, and campaign development.

In addition to content creation, predictive analytics and hyper-personalization are gaining rapid attention. Over 50% of the respondents believe that these data-powered tools will be used more often in the future to get a closer look at customer behavior, thus allowing the team to create highly personalized experiences.

In the meantime, 41.7% of marketers believe their organizations will invest more in AI tools and technologies in the coming year. That planned investment is further evidence of AI's perceived transformative power: companies are willing to bet money on it in the hope that AI will drive growth and innovation.

Read next: Financial Cybercrime Risks Vary Sharply Across U.S. States, Report Finds


by Irfan Ahmad via Digital Information World

Friday, July 25, 2025

Apple Updates App Store Age Ratings to Strengthen Parental Controls

Apple has introduced new age categories on the App Store, changing how apps are rated for children and teenagers. From now on, apps will be classified under five age brackets: 4+, 9+, 13+, 16+, and 18+. The previous 12+ and 17+ labels have been dropped.

All apps and games have been automatically updated to match the new system. The changes are live in beta versions of Apple’s upcoming software releases, including iOS 26, iPadOS 26, and macOS Tahoe. A full public rollout is expected in September.

App developers are now being asked to complete new questions covering areas such as in-app features, medical or wellness content, and themes involving violence. This will allow Apple to assign age ratings more precisely. Developers can see and, if needed, revise their app’s rating through App Store Connect.

Parents browsing the App Store will begin to see more information about each app. Details will include whether it contains user-generated content, shows adverts, or has tools for parental control. These additions are designed to make it easier for families to decide whether an app is suitable.

Apps that fall outside a user’s allowed age range will be less visible on the platform. For example, they won’t appear in featured sections such as Today or Games if the account belongs to a child. This could influence how developers build and promote their apps, especially if they’re targeting younger audiences.

As part of the same update, Apple has improved the setup process for child accounts. Parents can now enter a child’s age during setup, which will be shared with developers using a new API. The API gives developers access only to the age range, not the exact birthdate, which Apple says helps personalise content without compromising privacy.

For this to work, developers must integrate the API into their apps. If they don’t, the system won’t adjust the experience based on the user’s age.

The timing of Apple’s update comes as lawmakers in the United States continue to propose legislation aimed at protecting children online. Some states are calling for app stores to confirm user ages and collect parental consent before downloads are allowed. Apple and other major platforms, including Google, have argued that app developers should handle this responsibility.

The revised rating system is Apple’s way of addressing those concerns. While it won’t stop all misuse, the company believes that giving parents better tools, and making developers more accountable, can help reduce the risks children face online.



Read next: UK Begins Online Age Checks to Limit Children’s Access to Harmful Content
by Asim BN via Digital Information World

UK Begins Online Age Checks to Limit Children’s Access to Harmful Content

New rules aimed at keeping children away from harmful online material have taken effect in the United Kingdom. The measures apply to websites and apps that display content involving pornography, violence, or subjects like suicide, self-harm, or eating disorders. Companies operating these services are now required to check users’ ages through approved methods such as credit card verification or facial image analysis.

The law assigns enforcement responsibilities to the country’s media regulator. Platforms that don’t follow the rules may face fines of up to £18 million or 10% of global revenue, whichever is higher. If companies ignore official information requests, senior managers may face legal consequences.

The requirement follows the 2023 Online Safety Act, which outlined duties for digital platforms to reduce harm for both children and adults. After a preparation period, the enforcement phase has started. Regulators have confirmed that thousands of adult websites are now using age checks. Social media platforms are being monitored for compliance with the same standards.

Recent findings from the regulator show that about half a million children between the ages of eight and fourteen viewed online pornography in the last month. The figures have drawn concern from child protection groups and public officials. The changes are intended to reduce the chances of similar exposure going forward.

While some gaps in enforcement remain, the introduction of mandatory checks is seen as a shift toward a more controlled online environment for minors. The aim is to create fewer pathways for children to reach dangerous or inappropriate content.

Additional measures are being considered. Officials have mentioned the possibility of setting time limits for how long children can spend on social apps each day. Any future changes will be introduced through separate decisions or legislative updates.

Digital platforms are now expected to meet technical and procedural requirements to show they are protecting young users. Oversight will continue as the regulator reviews how well the new rules are being followed.



Read next: Financial Cybercrime Risks Vary Sharply Across U.S. States, Report Finds
by Irfan Ahmad via Digital Information World

Financial Cybercrime Risks Vary Sharply Across U.S. States, Report Finds

A new analysis of online financial crime data shows wide variation in how states are affected by scams, fraud, and digital theft. According to 2024 figures compiled from FBI reports, Mississippi ranks as the safest state, followed by Texas, Minnesota, Alabama, and South Dakota. Each of these states reported low victim counts, lower financial losses per resident, or smaller increases in scam activity over recent years.

Mississippi had the fewest victims per million people, and its total financial losses remained among the lowest nationwide. Even with fewer cybercrime laws in place, it showed relative stability, which may suggest that a lower rate of targeting plays a role. Texas performed well despite its population size, aided by a stronger legal framework. Minnesota’s ranking came from a balance of low victim numbers and strong legislative presence.

High Losses Concentrated in D.C., Iowa, and Nevada

Washington, D.C. stood out with the highest per-capita losses in the country. Between 2022 and 2024, its total financial damages rose sharply, exceeding $400 million, the nation's worst when adjusted for population. Most of the losses were linked to high-value scams, including investment fraud and business email compromise. Iowa also showed a steep rise in both victims and losses. Nevada experienced similarly high numbers, driven by identity theft and phishing attacks.

In these states, legislative measures have been slow to match the growing risk. D.C. and Iowa, in particular, have limited policy protections in place. Nevada’s high urban population and large number of visitors may also contribute to its increased exposure.

Rising Trend in Cybercrime Across the Country

The report confirms a national rise in financial cybercrime. Between 2022 and 2024, most states recorded double-digit growth in either victim count or total losses. In some cases, both increased. The overall impact has expanded rapidly, with total U.S. financial losses from online scams more than doubling in just two years.

Many scams follow familiar formats. Phishing remains the most widespread. Tech support scams, SIM swapping, and investment fraud continue to cause substantial damage. Some states report high financial losses even with fewer victims, which points to the growing value of individual attacks.

Not All Safe States Have Strong Cybercrime Laws

Legislation plays a role, but its effect varies. Several top-ranked states, including Mississippi and South Dakota, have minimal cybercrime laws. Still, they recorded relatively low exposure. On the other hand, some states with more active legislative efforts still reported high losses. Illinois, for example, had more cybersecurity rules than average but did not rank among the safer states.

This gap between laws and outcomes suggests that other factors, such as demographic patterns, scammer focus, or digital awareness, may influence risk more than legislation alone.

Five Key Metrics Used for State Rankings

The ranking system used five measures. Victim count and total financial losses each made up 30 percent of the overall score. The percentage change in those two figures from 2022 to 2024 accounted for another 30 percent. The number of state-level cybercrime laws contributed the final 10 percent.

While legislation was included, it carried less weight because many laws do not directly address financial scams. The intent was to reflect actual exposure and risk, not just legal preparedness.
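The weighting scheme described above (30% victims, 30% losses, 30% growth, 10% legislation) can be sketched as a simple composite score. Note this is a hedged reconstruction: the report does not publish its normalization method or field names, so everything below other than the weights themselves is an assumption made for illustration.

```python
# Illustrative weights from the report; field names and the 0..1
# normalization are assumptions, not the report's actual methodology.
WEIGHTS = {
    "victims": 0.30,   # victim count per capita
    "losses":  0.30,   # total financial losses
    "growth":  0.30,   # % change in victims and losses, 2022-2024
    "laws":    0.10,   # number of state-level cybercrime laws
}

def risk_score(normalized):
    """Combine metrics normalized to 0..1. Higher exposure metrics raise
    the score; stronger legal coverage ('laws') is assumed to lower it."""
    score = sum(WEIGHTS[k] * normalized[k]
                for k in ("victims", "losses", "growth"))
    score -= WEIGHTS["laws"] * normalized["laws"]
    return round(score, 3)

example = {"victims": 0.2, "losses": 0.1, "growth": 0.3, "laws": 0.5}
print(risk_score(example))  # lower score = safer state
```

Treating legislation as only 10% of the score matches the report's reasoning that laws correlate weakly with actual exposure, as the Mississippi and Illinois examples above suggest.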

Most Common Forms of Financial Cybercrime

The report categorized a wide range of scams. These included phishing, data breaches, SIM swap fraud, credit card theft, business email compromise, and ransomware. Confidence scams, like romance fraud and fake job offers, were also common. In many cases, the largest losses came from high-value scams concentrated in fewer but more severe incidents.

States with higher tourist traffic or dense urban centers appeared to attract more of these schemes. Fraud involving fake support services, lottery scams, and overpayment traps was especially active in Nevada, Arizona, and Florida.

Arizona and Oregon Among the Fastest-Rising Risk States

Arizona showed a high number of victims and significant financial losses in 2024. Its rate of legislative action remained low, despite its exposure. Oregon saw the sharpest increase in losses of any state, with a jump of over 400 percent in two years. These changes pushed both states further down the safety rankings.

Oregon, in particular, experienced a rise in investment-related scams and phishing attacks. This was coupled with a lack of robust legal protections, suggesting a mismatch between the scale of the problem and available countermeasures.

Report Reflects Ongoing Digital Threats

Although state-level variation is significant, the broader trend is clear. Financial cybercrime continues to spread across the country. While some areas show signs of control or resilience, others remain vulnerable. The study points to a need for coordinated efforts that combine legal tools, public awareness, and individual digital safety habits.

No state is fully immune. Even in places with low current exposure, changing attack patterns could raise future risk. For now, a mix of laws, education, and technology use appears to make the most difference.


H/T: Cloudwards.

Notes: This post was edited/created using GenAI tools.

Read next: Sam Altman Sees Short Video Apps, Not AI, as Bigger Threat to Kids' Minds
by Asim BN via Digital Information World

Thursday, July 24, 2025

Google Tests Web Guide, a New Way to Sort Search Results

Google has introduced a new experimental feature that reshapes how search results are displayed. The feature, named Web Guide, is part of Search Labs and allows users to view web results in a grouped layout instead of a long list of links. It uses artificial intelligence to sort search results based on specific parts of the user’s query.

Search results arranged by themes


Web Guide works by identifying different directions a question might take. Then it displays sections based on those distinctions. For example, if someone asks about traveling alone in Japan, one part of the results may cover safety tips while another includes planning resources or personal travel stories. The content is still drawn from the web, but it is displayed with more structure.

Focus on broad and complex searches

This system is aimed at people who enter long or open-ended searches. These kinds of queries often bring mixed results, which can be harder to interpret quickly. Web Guide uses Gemini, Google's AI model, to read the intent behind the search and match it with distinct categories that reflect the variety of responses across the internet.

Limited release with opt-in access

Right now, Web Guide is only available in the Web tab of Google Search for select users in some regions. Users who turn it on will see the grouped layout when they search. If someone prefers the original format, they can switch back from the same place without leaving the experiment.

Plans to expand to more parts of Search

Over time, Google plans to bring Web Guide to other parts of its platform, including the "All" tab. The company has not shared a timeline for this. For now, the experiment joins other features in Search Labs, such as AI Mode, NotebookLM, and several smaller tests that focus on creative or informational tools.

Read next: Giving Smartphones to Children Too Early May Be Harming Mental Health in Adulthood


by Irfan Ahmad via Digital Information World

Giving Smartphones to Children Too Early May Be Harming Mental Health in Adulthood

A major new global study has found strong evidence that early smartphone ownership during childhood is linked with significant mental health problems later in life. The findings point to a consistent pattern across regions, languages, and age groups, with younger users facing sharper declines in mental wellbeing as they reach adulthood.

Early Smartphone Ownership Shows Clear Mental Health Decline

The research draws from the Global Mind Project, which includes data from nearly two million people across 163 countries. When the team focused on over 100,000 participants aged 18 to 24, a clear trend emerged: the earlier someone received a smartphone, the worse their mental health score as a young adult. Those who had a phone at age five scored far lower on emotional, social, and cognitive wellbeing compared to peers who received theirs at age 13 or later.

This mental health shift wasn’t isolated to one place. The same effect was found in every region, but was especially intense in English-speaking countries. Girls appeared to suffer more than boys, reporting much higher rates of suicidal thoughts, emotional detachment, and reduced self-worth.

Social Media Appears to Be the Biggest Trigger

What matters most isn't simply owning a smartphone; it's what those phones connect children to. Early access to social media plays a major role in weakening emotional resilience. The study shows that around 40 percent of the negative impact from early smartphone use can be traced back to social media access alone.

The platforms use machine learning systems that adaptively feed content to users to keep them engaged. This constant cycle of comparison, pressure, and exposure to extreme or inappropriate material can shape a child’s thinking in damaging ways. The report highlights that these digital spaces often displace vital real-world development, including in-person relationships and healthy sleep routines.

Cyberbullying and Family Disconnection Follow Early Exposure

Among the most significant secondary factors were disrupted sleep, cyberbullying, and poor family relationships. In English-speaking countries especially, those who accessed social media early were much more likely to experience toxic online interactions and deteriorating family bonds. In some regions, early access also correlated with higher exposure to sexual abuse, especially among girls.

The study found that these negative experiences were often not directly linked to total screen time, but instead to the simple fact of having early, unsupervised access to algorithm-driven platforms. That alone increased the odds of entering harmful digital environments before a child had the tools or maturity to cope with them.

The Mental Health Impact Is Measurable and Widespread

Researchers used the Mind Health Quotient (MHQ), a broad tool that evaluates 47 emotional, cognitive, and social functions. For those who got smartphones at age 13, the average score was 30. For those who received their first phone by age five, the average dropped to just 1.

Almost half of females who had a smartphone by age six reported severe suicidal thoughts, compared to fewer than a third among those who got a phone at 13. Across the board, earlier access correlated with higher rates of aggression, hallucinations, emotional instability, and reduced self-image.

These patterns suggest a deep shift in mental functioning, not just a spike in stress or sadness. Many of those affected reported a persistent sense of disconnection from reality and found it hard to regulate their emotions or maintain confidence.

The Impact Is Growing Faster Than Most Adults Realize

As more children are handed smartphones even before middle school, the effects may be intensifying. Most social platforms officially restrict access for children under 13, but the study shows that enforcement is weak, and many kids gain access years earlier. In English-speaking countries, the average age of first smartphone use is now around 11.

The researchers stress that these findings are not isolated to a few outliers. Across the global dataset, each year earlier a child receives a smartphone is linked with a steady drop in mental wellbeing scores. This trajectory, they warn, could mean a generation increasingly at risk of chronic emotional distress, lower resilience, and disrupted development.

Experts Recommend a Public Health Approach, Not Parental Burden

The study’s authors argue that expecting parents to individually police smartphone access is unrealistic. Children whose access is restricted may feel excluded if their peers are already online. Worse, even children with limited access at home may still be exposed to the effects of aggressive behavior from peers shaped by early digital environments.

To deal with this, the team recommends public policies similar to those that govern alcohol and tobacco. This includes restrictions on under-13 smartphone ownership, mandatory digital literacy education, stronger enforcement against age violations by tech firms, and the introduction of basic phones that lack internet apps.

These measures are aimed not at blocking progress, but at protecting critical stages of mental development. Without such intervention, the report warns, the decline in mind health could carry long-term consequences for learning, social participation, and even economic productivity.

Policy Proposals Focus on Four Key Areas

The researchers outlined four actions that governments could adopt immediately:

  1. Introduce mandatory mental health and digital literacy education in schools, so children understand the risks of social media before they use it.
  2. Hold technology companies accountable for underage users, including enforcing age verification and introducing penalties for non-compliance.
  3. Restrict all social media access under age 13, across devices and platforms, through improved enforcement and technical controls.
  4. Create age-appropriate phones for younger children, offering basic functions like calling and texting but no access to algorithmic or user-generated content.

Action Urged Before Long-Term Damage Becomes Permanent

With current trends moving quickly, the authors emphasize that delaying action may close the window for meaningful prevention. If the current age of first phone ownership keeps falling, projections show that up to 20 percent of the next generation could face regular suicidal thoughts, and as much as a third may struggle with core emotional and cognitive functions in adulthood.

The researchers aren’t calling for bans, but for smarter timing. They suggest the world treat smartphones and social media as tools that can be harmful when introduced before key developmental milestones. If society can accept age limits for driving, drinking, and smoking, then the same logic should apply to digital access that directly affects a child’s brain.



Read next: Teens Increasingly Use AI to Make Personal Decisions, Prompting Industry Attention
by Irfan Ahmad via Digital Information World