Thursday, July 31, 2025

AI-Powered Apps Are Redefining Mobile Categories in 2025

Artificial intelligence is now a regular part of mobile software. In 2025, more app developers have built AI features into their tools. Users have responded by applying those features across a wide range of tasks, many outside traditional work or learning environments.

ChatGPT Sees Broader Use Outside Work Hours

One sign of this shift is how people are using AI assistants like ChatGPT. Last year, most usage happened on weekdays, a pattern typical of tools designed for work or school. This year, ChatGPT's weekend usage has grown, and its usage pattern now looks more like that of general search apps, which people rely on whether they are working or not.

Lifestyle Prompts Overtake Educational Tasks


Prompt data shows a rise in lifestyle and entertainment queries. Topics such as wellness, travel, shopping, and meal planning have grown. These prompt types now take up a larger share compared to 2024. Educational and technical queries remain popular, but their share has declined. People are using AI in more personal ways.

Functional Apps Face Pressure from General AI Tools

Many category-specific apps are now competing with flexible AI platforms. Apps for budgeting, nutrition tracking, and study help are seeing some users shift to general chatbots. These chatbots can answer many kinds of questions, so people explore them for a broader range of needs.

AI Mentions Surge Across App Stores

Thousands of mobile apps launched with AI-related keywords this year. Software tools remain the most active category for these updates. Apps in wellness, employment, learning, and finance also added AI references. Developers are adapting to the demand for smarter app experiences.

Mixed Outcomes for Traditional App Subgenres

Some mobile app subgenres are growing despite the presence of AI competition. A few are even seeing better performance on user metrics. The difference seems to depend on how they use AI. Apps that add task-specific AI tools tend to hold their ground. For example, nutrition apps now include image-based calorie tracking powered by AI. This gives users quicker results than traditional logging.

Tailored Features Help Apps Stay Competitive

Generic chatbot tools can handle many tasks, but often miss the fine detail users expect from niche apps. To compete, developers are building features that solve narrow problems more efficiently. Apps that respond to this need may retain their users. Those that don’t may be replaced.

H/T: Sensor Tower.

Read next: ChatGPT and Google AI Give Different Answers to the Same Questions, Study Finds


by Irfan Ahmad via Digital Information World

YouTube Relaxes Its Rules on Swear Words in Early Video Content

YouTube has loosened its restrictions around how bad language affects video monetization, making it easier for creators to earn money even if their clips include profanity in the opening seconds. The company has updated its Advertiser Friendly Guidelines, easing one of the more contentious policies that had caused frustration among content creators in recent years.

Reversal of Previous Tightening

This change rolls back a stricter rule introduced in 2023, which had made any video that featured strong language in its first few seconds ineligible for full advertising revenue. That earlier revision had followed an even broader update in 2022, when YouTube first tightened its rules to limit the use of violence and offensive language in monetized content. The policy especially impacted gaming creators, whose streams often include in-game violence and spontaneous speech that could contain swear words.
After a wave of criticism, YouTube softened its approach slightly in 2023 by narrowing the restriction window to the first seven seconds of a video. But even that adjustment didn’t fully address creators’ concerns, as many videos were still receiving limited monetization, marked by the platform’s yellow icon that indicates reduced ad income.

What’s Changing Now

Under the latest update, videos that include profanity within the first seven seconds will no longer be automatically penalized. This means that creators can now retain full advertising revenue, even if strong language appears near the start of their content. While this adjustment makes the rules more flexible, it does not entirely lift all limits related to language.
Creators should still be aware that titles and thumbnails containing bad language will continue to trigger monetization restrictions. In addition, if profanity appears too frequently within a video, even if the early seconds are allowed, monetization may still be reduced under the platform’s guidelines.

The Role of Ad Placement

The earlier policy around bad language in a video’s opening moments had mainly stemmed from concerns about how close a brand’s advertisement appeared to offensive material. Advertisers typically prefer a buffer between their message and any strong language. But changes in advertiser tools now allow brands to fine-tune where and how their ads appear, including setting limits on content sensitivity. That flexibility has given YouTube more room to relax its own rules without risking ad relationships.

By shifting the responsibility onto advertisers to control the kinds of content they want to appear next to, YouTube can now allow creators more freedom in how they speak, without necessarily hurting its advertising model.

Limitations Still Apply

Although this update gives creators more breathing room, it’s not a free pass for excessive swearing. Videos that rely heavily on profanity, or repeatedly use strong language throughout, may still see limited monetization. And inappropriate language in text elements like video titles and thumbnails remains a red flag for YouTube’s ad systems.

So while early swearing will no longer automatically lead to reduced income, creators still need to moderate how much strong language they use if they want to fully benefit from the change.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen

Read next: Meta Reshapes AI Strategy Amid Talent Surge, Billion-Dollar Bets, and Rising Caution on Openness
by Web Desk via Digital Information World

Meta Reshapes AI Strategy Amid Talent Surge, Billion-Dollar Bets, and Rising Caution on Openness

Mark Zuckerberg has introduced new language around Meta's AI development plans that indicates a potential shift in the company’s direction. Over the last several months, Meta has invested heavily in both infrastructure and talent. At the same time, internal communications and earnings remarks have started to reflect a more cautious approach to openness, particularly as the company begins building what it calls “personal superintelligence.”

A recently published note from Zuckerberg laid out this vision. It described a future where highly capable AI systems assist individuals in their personal goals. He did not define superintelligence in technical terms or explain how such systems would be built. Instead, the memo outlined a broad idea of personalized tools that could support users across creative, social, and productivity tasks.

This approach contrasts with other companies that are developing general-purpose models aimed at automating workflows. Zuckerberg's message suggested that Meta wants to keep the focus on individual empowerment. This vision also connects to the company’s longer-term goal of moving away from reliance on mobile devices. Meta has spent years developing smart glasses and has signaled interest in making them central to future computing experiences.

Talent Shifts and Billion-Dollar Recruiting

Meta has increased its hiring of AI experts since early this year. In one move, it brought in the founder of Scale AI through a $14.8 billion investment. That deal placed Alexandr Wang as the company’s new Chief AI Officer. The company has also recruited individuals from OpenAI and Apple, including engineers who contributed to major large language models.

Recent reporting indicated that Meta extended multiyear offers worth hundreds of millions, and in one case more than $1 billion, to staff from Thinking Machines Lab. The startup was founded by a former OpenAI executive. Meta did not confirm the financial details but acknowledged the interest in expanding the team.

Alongside those moves, Meta has committed over $72 billion to AI infrastructure. That includes compute power, model training capacity, and scaling systems. These steps suggest the company is preparing to build more advanced AI models, even as it evaluates how much of that work to make public.

Open Source Remains Unclear

For years, Meta positioned open-source AI as a safer and more inclusive approach. Company leaders argued that transparency could prevent misuse and help governments understand how models work. More recently, Zuckerberg indicated that the company may not share some of its largest models in the future.

His recent statements during Meta’s second-quarter earnings call included references to safety and practicality. According to him, some of the models now being developed are so complex that releasing them would have little benefit for outside developers. In some cases, he added, sharing might give an advantage to competing firms.

These comments followed a memo that suggested Meta would remain a leader in open-source work but would be more selective about what gets released. While this is not a reversal of past policy, it shows a growing awareness inside Meta that some advanced AI models may carry risks that make full transparency harder to justify.

Usage Gains Tied to AI Integration

Meta’s recent product performance also reflects increased use of AI to drive engagement. Time spent on Facebook rose 5 percent in the second quarter. Instagram saw a 6 percent gain. Both trends were attributed to updates in recommendation systems, which now use large language models to present more relevant content.

The company also noted that video viewership grew by 20 percent over the past year. Instagram played a large role in that growth, as Meta has focused on promoting original material and improving content ranking methods. Threads, its text-based social app, has seen an increase in daily use following the integration of new AI tools.


All in all, Meta reported 3.4 billion family daily active people across its platforms in June. That figure included Facebook, Instagram, Messenger, and WhatsApp. It marked a 6 percent increase from the previous year and supported a 22 percent rise in revenue across those apps, reaching $47.1 billion in the quarter.

Broader Shifts in AI Positioning

Zuckerberg’s internal memo came just before Meta’s Q2 earnings report. The timing appeared aligned with the company’s efforts to frame its AI investments to investors. With delays affecting the launch of its larger Llama 4 model, internal reports suggested that Meta’s leadership had been reevaluating its approach. Some concerns were raised about the tradeoff between openness and competitive advantage.

There has also been tension around the slow progress of Meta’s generative AI roadmap. Executives inside the company have reportedly questioned whether its development pipeline can keep pace with external labs. These concerns may have shaped the more cautious stance reflected in the memo and earnings discussion.

At the same time, Meta seems to be preparing for a future in which its computing platforms are less dependent on Apple and Google. Smart glasses, which the company continues to develop, were described as key devices in future AI use. Zuckerberg pointed to this shift as an opportunity to move users toward platforms owned and controlled directly by Meta.

The company’s strategy remains focused on scaling up its AI capabilities, refining its product experience, and adjusting its messaging around transparency. While the long-term details remain unclear, the recent changes suggest that Meta is actively shaping its next phase of AI development around tighter control, personal devices, and internal platforms.

Notes: This post was edited/created using GenAI tools.

Read next: Most Adults Use AI Without Realizing, But True Power Remains Untapped


by Irfan Ahmad via Digital Information World

Wednesday, July 30, 2025

Most Adults Use AI Without Realizing, But True Power Remains Untapped

A new poll has found that most adults in the United States have used artificial intelligence for online searches. Younger people report using it more frequently than older age groups, and for more types of tasks.

Online Search Remains the Main Use

Among all surveyed adults, 6 in 10 said they use AI at least sometimes to find information online. That rate rises to nearly three-quarters for people under the age of 30. Searching is the most common AI-related activity, based on the eight categories included in the poll.

This may understate its true usage, since many search engines now include AI-generated summaries automatically. People may be receiving answers produced by AI without realizing it.

Work-Related Use Is Still Limited

The data also shows that AI has not become a major part of most workplaces. About 4 in 10 adults said they have used AI to assist with work tasks. A smaller share mentioned using it for email writing, creative projects, or entertainment. Fewer than one in four reported using AI for shopping.

Younger users are more likely to include AI in their work. Some use it to plan meals or generate ideas, while others rely on it to help write or code. Still, this type of usage remains less common among the general public.

Generational Differences Are Clear

Younger adults are more engaged with AI overall. Around 6 in 10 of those under 30 said they have used it to brainstorm. Only about 2 in 10 older adults said the same. Daily use for idea generation is more frequent among people in their twenties.

Older users show less interest in applying AI beyond basic information lookups. They tend to avoid using it for more personal or technical tasks.

AI Companionship Is Rare

The least common form of interaction with AI was companionship. Fewer than 2 in 10 adults overall reported using AI for that purpose. Among people under 30, the rate rises to about a quarter.

The survey results suggest that this type of usage remains outside the mainstream. Most people do not view AI as a substitute for personal interaction, although some younger users said they understand why others might explore it.

Overall Usage Remains Focused

The findings indicate that while AI tools have entered public use, they are still seen as limited-purpose systems. Most interactions involve information searches, and regular use beyond that is less frequent. Adoption has grown, but remains uneven across tasks and age groups.

The poll was conducted by The Associated Press-NORC Center for Public Affairs Research between July 10 and July 14. It included 1,437 adults drawn from a representative national sample, with a margin of error of 3.6 percentage points.


Notes: This post was edited/created using GenAI tools.

Read next: How Hidden Bluetooth and WiFi Signals Let Mobile Apps Track You Indoors
by Irfan Ahmad via Digital Information World

Walmart Tops Global 500 Again as U.S., China, and Tech Titans Dominate $41.7 Trillion List

Walmart has landed in first place again, as it has for over a decade now. The U.S. retail giant sits at the head of this year’s Fortune Global 500, the annual list ranking companies by revenue. Amazon took second. Behind them came State Grid of China, followed by Saudi Aramco, and then China National Petroleum. It’s a familiar line-up, but the weight behind these names keeps growing. Taken together, just the ten highest-ranked companies earned more than $4.7 trillion last year. Most of that came from retail, oil, healthcare, and finance, sectors that continue to stretch across borders and markets.

UnitedHealth rises, Apple slips

Apple is still inside the top ten. But not quite where it was. It dropped one spot this year, pushed aside by UnitedHealth Group, which moved from eighth to seventh. That same reshuffle appeared in the U.S.-only Fortune 500 a month earlier, so the shift didn’t come as a surprise. Apple’s fall wasn’t dramatic, but it did reflect the fact that healthcare, as an industry, isn’t slowing. CVS Health and Berkshire Hathaway also stayed strong in the upper tier, keeping U.S. firms in control of most of the top ten.

Global revenue grows slowly, but steadily

The full list of 500 companies brought in $41.7 trillion in revenue last year. That’s about 1.8 percent higher than the year before. Profits came in just under $3 trillion, the second highest total Fortune has ever recorded for its global list. Saudi Aramco, once again, took the lead on earnings. It posted $105 billion in profit alone, the fourth straight year it has held the top position in that category.

Sectors with staying power

Finance continues to dominate in size. There are 121 financial companies in the Global 500. Energy, long a staple of the list, came next with 79. Then came motor vehicles and parts, with 35 firms. The tech sector wasn’t far behind, landing 34. Healthcare had 33. These five areas together made up most of the list and drove nearly two-thirds of total revenue. So while innovation is constant, the largest corporate machines (oil, banks, manufacturers) still hold most of the weight.

China and the U.S. still lead, close together

There were 138 American companies on this year’s list. Greater China, including Hong Kong, Macau, and Taiwan, came in just behind, with 130. Nine out of the ten most profitable companies came from these two countries. That balance hasn’t changed much lately. It’s still China and the United States shaping what this list looks like. Between them, they hold the pulse of global corporate power, even as growth slows in some industries and picks up in others.

The biggest tech names earned big, even with lower ranks

Amazon stayed at number two. Apple remained in the top ten. Alphabet landed at 13, Microsoft showed up at 22. Then came Meta at 41, Nvidia at 66, and Tesla at 106. None of them topped the revenue rankings, but their profits stood out. The group brought in $2 trillion in revenue and earned $484 billion in net income. That’s more than most countries generate in GDP. They didn’t move much in rank, but their financial output still dwarfs what most businesses achieve in years. No other group, outside energy, posted returns that high.

Record number of women CEOs

There are 33 women now leading companies on the Fortune Global 500 list. That’s about 6.6 percent of the total. It's still low, but it’s the highest count so far. Most of them are in the U.S., though China, France, the U.K., and Brazil have some as well. Some names stood out more than others: Mary Barra at GM, Jane Fraser at Citigroup, Sarah London at Centene, and Sandy Xu at JD.com. But the broader pattern is a slow, steady increase.

Where these companies are based

Companies on the list are spread across 243 cities and 36 countries. Beijing, Tokyo, New York, Shanghai, and London are home to more of them than anywhere else. London re-entered the top five after a few years off the leaderboard. These cities, often finance hubs or government capitals, keep drawing the biggest corporations. Their infrastructure, access, and labor markets still offer scale.

New companies join, even as others fall away

Nine companies made it onto the list for the first time. Among them were QNB Group, ICICI Bank, and Lithia Motors. They entered from different industries and countries, but their arrival signals movement in the sectors they belong to, especially banking and auto retail. Their exact rankings weren’t near the top, but entry alone is a sign of growth or change worth noting.

What the numbers don’t show

Revenue climbed. Profits held steady. But the backdrop is far less calm. Geopolitical shifts, trade disputes, and fast-moving changes in AI policy are already starting to affect how companies grow, spend, and plan. These issues haven’t upended the rankings just yet, but the next few years may look different. The Fortune Global 500 still captures the biggest players, as it always has, but what happens next may depend less on size and more on how these firms navigate what's coming.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Study Finds SpaceX’s Starlink Satellites Leak Radio Noise Into Protected Bands, Disrupting Earth Telescopes
by Irfan Ahmad via Digital Information World

YouTube Begins Using Age-Guessing Technology in the U.S.

YouTube has started testing a new system in the United States that tries to figure out how old users are based on how they use the platform. This technology looks at different pieces of information to estimate whether someone might be a teenager, even if they said something else when they created their account.

The new system is being introduced in stages. Only a small group of users will see it at first. YouTube plans to check how it performs before making it more widely available.

Why This Matters for Teen Users

If the system decides that someone is under 18, YouTube will automatically switch on features that are meant to offer a safer experience. This includes turning off personalized ads and reducing how often certain types of videos are shown repeatedly. It will also activate reminders for screen time and bedtime, along with other tools that support healthy use of the platform.

These settings already exist, but until now, they were only applied to users who had confirmed their age. Many teens skip that step or enter a different date when signing up. YouTube is trying to cover that gap by using patterns in user behavior instead of relying on what people enter.

How Mistakes Are Handled

Some adults may be flagged by mistake. If that happens, they’ll be asked to prove their age. They can do this by uploading a photo of a government ID, using a credit card, or submitting a live selfie. Once verified, they can access content that is only available to users over 18.

If the system believes someone is an adult based on their account history or usage habits, they won’t need to go through the verification process.

Background on YouTube’s Plans

YouTube had already mentioned its plan to use this kind of technology earlier this year. The move fits into a broader effort to add more safety features for young users. In the past, the platform launched a separate app for children and introduced supervised accounts for teens.

The new system builds on that approach and focuses on users who are signed in. Those who aren’t logged in already face limits on what they can watch, especially when it comes to age-restricted videos.

What Data Is Being Used

YouTube hasn’t listed every detail, but it said it will look at how long an account has been active and how people interact with videos. The goal is to make a reasonable guess without asking for too much personal data upfront.
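
YouTube hasn't disclosed its model, so any concrete example is speculation; but estimating age from behavioral signals is typically framed as a classification problem. Here is a toy sketch with entirely invented features, using scikit-learn only to show the general shape of that approach:

```python
from sklearn.linear_model import LogisticRegression

# Invented features per account: [account age in years,
# share of views on teen-skewing content, average session minutes].
X = [
    [8.0, 0.05, 35.0],   # long-standing account, adult-leaning viewing
    [0.5, 0.70, 120.0],  # new account, teen-leaning viewing
    [5.0, 0.10, 40.0],
    [1.0, 0.60, 90.0],
]
y = [0, 1, 0, 1]  # 0 = likely adult, 1 = likely under 18

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.8, 0.65, 100.0]]))  # class probabilities
```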

This is part of a larger trend where companies are being pushed to do more to protect minors online. Lawmakers in several U.S. states are working on or have passed new rules that make age checks or parental consent a requirement. These include places like Texas, Georgia, Florida, Utah, Maryland, and Connecticut.

Some of those laws are already being challenged in court, and a few haven’t taken effect yet. But the direction is clear. Governments want more responsibility from platforms when it comes to younger users.

Other Countries Are Moving Too

In the United Kingdom, a new law passed in 2023 has also started to take effect. It requires websites to check the age of their users. Platforms that don’t follow the rules could face penalties.

YouTube’s update is one example of how companies are responding to this shift. The use of machine learning to estimate age is becoming more common, even though the full list of signals being used is usually kept private.

For now, the rollout remains small. The company said it will expand once it’s sure the system works as intended.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Facebook Most Cited in Online Abuse Reports from Environmental Activists
by Irfan Ahmad via Digital Information World

Tuesday, July 29, 2025

From Coders to Cleaners: Which Jobs AI Is Supporting, and Which Are Out of Reach

A recent study by researchers at Microsoft examined how artificial intelligence is being used in real work conversations. By analyzing 200,000 anonymized chats between people and Microsoft Copilot, the team created a detailed picture of where generative AI fits into modern work, and where it does not.

They tracked how often users asked AI for help with work tasks, how well AI completed those tasks, and how broadly those tasks applied to each job. Using that data, they created a numerical score for over 900 jobs. A high score meant AI was frequently used for important parts of the job and performed those tasks well. A low score meant little to no overlap.
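
The researchers' exact scoring formula isn't reproduced in this report, so the sketch below is only a conceptual illustration: it combines the three measured ingredients (task overlap, completion quality, and occupation coverage) with a simple product, and the function name and numbers are invented.

```python
# Conceptual sketch only: the study's real formula and weights are not
# public here. Each input is a fraction between 0 and 1 per occupation.
def ai_applicability(task_overlap: float, success_rate: float,
                     coverage: float) -> float:
    """Toy composite of: how often AI is asked to help with a job's tasks,
    how well it completes them, and how much of the job those tasks span."""
    return task_overlap * success_rate * coverage

# Hypothetical inputs for the two ends of the spectrum described below.
print(ai_applicability(0.7, 0.8, 0.6))  # language-heavy role -> 0.336
print(ai_applicability(0.0, 0.0, 0.0))  # hands-on role       -> 0.0
```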

The results showed a sharp divide. Some occupations matched closely with the way people already use AI. Others showed almost no connection. This report focuses on both sides by listing the 40 most and least AI-compatible jobs based on actual user behavior.

Where AI Works Well

The top scoring roles mostly involve language, information, or communication. These are jobs that depend on gathering details, answering questions, drafting content, or presenting knowledge to others. In many cases, the AI served as a digital assistant that helped users write, explain, translate, or summarize.

At the top of the list were interpreters and translators. Their work involves transforming written or spoken language across contexts, and AI has already shown strength in performing these tasks quickly and accurately. Writers, editors, and proofreaders also scored high, as many people are already using AI tools to generate, revise, or polish documents.

Other top-ranked roles include customer service agents, sales representatives, journalists, PR specialists, and educators. These jobs often require giving people information, preparing written materials, or guiding others through a process. These are areas where AI responses are more likely to be useful and well received.

The AI was not replacing these workers. Instead, the study showed that people were turning to AI for help with parts of their tasks. This distinction was key to the way the research was designed. It separated what the user was trying to achieve from what the AI actually did during the conversation.

The 40 Most AI-Compatible Occupations

Each of these roles scored high in three areas: the share of job tasks that overlapped with AI usage, how well AI completed those tasks, and how much of the occupation those tasks covered.

  • Interpreters and Translators
  • Historians
  • Passenger Attendants
  • Sales Representatives (Services)
  • Writers and Authors
  • Customer Service Representatives
  • CNC Tool Programmers
  • Telephone Operators
  • Ticket Agents and Travel Clerks
  • Broadcast Announcers and Radio DJs
  • Brokerage Clerks
  • Farm and Home Management Educators
  • Telemarketers
  • Concierges
  • Political Scientists
  • News Analysts, Reporters, and Journalists
  • Mathematicians
  • Technical Writers
  • Proofreaders and Copy Markers
  • Hosts and Hostesses
  • Editors
  • Business Teachers (Postsecondary)
  • Public Relations Specialists
  • Demonstrators and Product Promoters
  • Advertising Sales Agents
  • New Accounts Clerks
  • Statistical Assistants
  • Counter and Rental Clerks
  • Data Scientists
  • Personal Financial Advisors
  • Archivists
  • Economics Teachers (Postsecondary)
  • Web Developers
  • Management Analysts
  • Geographers
  • Models
  • Market Research Analysts
  • Public Safety Telecommunicators
  • Switchboard Operators
  • Library Science Teachers (Postsecondary)

Most of these jobs involve structured knowledge work. Some include writing technical guides, while others involve answering questions or responding to common customer issues. The overlap with AI in these jobs was not just frequent, but successful. Conversations where AI helped with these tasks often ended with a completed goal or positive user feedback.

Where AI Has No Role So Far

On the other end, the researchers found dozens of jobs where AI showed no real connection to the work being done. These occupations had an AI applicability score of zero. That meant no significant overlap between their daily tasks and what AI was used for in the dataset.

In nearly every case, these jobs required physical skills, specialized equipment, or real-world handling. Many involved cleaning, operating machinery, preparing food, or providing in-person care. Even if AI could offer instructions, the actual task still had to be done by a person, on site, using physical tools or touch.

These occupations also tended to be hands-on in a way that language models are not designed for. They required moving, lifting, installing, or interacting with the environment in ways that AI cannot simulate. Some jobs required high precision, others involved safety risks or regulatory requirements. In all cases, the study found no practical use of AI for their work.

The 40 Least AI-Compatible Occupations

These jobs showed no measurable overlap with AI use in the study. They had zero coverage, meaning none of their key work activities appeared in AI-assisted conversations with users.

  • Water Treatment Plant and System Operators
  • Pile Driver Operators
  • Dredge Operators
  • Bridge and Lock Tenders
  • Foundry Mold and Coremakers
  • Rail-Track Laying and Maintenance Equipment Operators
  • Floor Sanders and Finishers
  • Orderlies
  • Motorboat Operators
  • Logging Equipment Operators
  • Paving, Surfacing, and Tamping Equipment Operators
  • Maids and Housekeeping Cleaners
  • Roustabouts, Oil and Gas
  • Roofers
  • Helpers, Roofers
  • Tire Builders
  • Surgical Assistants
  • Massage Therapists
  • Gas Compressor and Pumping Station Operators
  • Cement Masons and Concrete Finishers
  • Dishwashers
  • Machine Feeders and Offbearers
  • Packaging and Filling Machine Operators
  • Medical Equipment Preparers
  • Highway Maintenance Workers
  • Helpers, Production Workers
  • Prosthodontists
  • Tire Repairers and Changers
  • Ship Engineers
  • Automotive Glass Installers and Repairers
  • Oral and Maxillofacial Surgeons
  • Plant and System Operators (All Other)
  • Embalmers
  • Helpers, Painters and Plasterers
  • Hazardous Materials Removal Workers
  • Nursing Assistants
  • Phlebotomists
  • Ophthalmic Medical Technicians

These occupations span fields like healthcare, heavy industry, transportation, construction, and cleaning. Many involve specialized tools, patient care, or site-specific duties. For these jobs, AI was neither asked to help nor observed completing any relevant work activity.

A Divide Shaped by Task Type, Not Income or Industry

The researchers also examined whether salary or education level influenced AI applicability. They found only weak patterns. Some lower-wage jobs scored high, while some high-wage roles showed little AI overlap. There was a slight trend where jobs requiring a bachelor’s degree showed more applicability, but even that effect was modest.

The key factor was the type of task. If the job involved writing, explaining, organizing knowledge, or communicating, it was more likely to match how AI is currently being used. If the job involved physical motion, hands-on problem-solving, or direct care, it was unlikely to match.

Study Focused on Measured Use, Not Predictions

This study looked only at actual use. It did not attempt to forecast future changes to job markets or make claims about automation risk. It did not track how employers use AI internally, nor did it consider how jobs might evolve over time. The scores only reflect current patterns in how people used Copilot to help with tasks that align to occupations listed in federal labor data.

Still, the data offers a real-world snapshot of how AI is beginning to fit into everyday work. Some jobs already show clear patterns of use, while others remain disconnected. As AI tools grow and change, those patterns may shift. For now, the gap between roles where AI helps and those it doesn't remains wide.


Read next: Facebook Most Cited in Online Abuse Reports from Environmental Activists
by Irfan Ahmad via Digital Information World

PayPal Launches Crypto Checkout for U.S. Merchants, Enabling Instant Dollar Settlement from 100+ Tokens

PayPal has introduced a new payment system in the United States that lets businesses accept over 100 different cryptocurrencies. The update provides merchants with a direct way to receive digital asset payments without handling wallets or volatile tokens themselves.

Under the new setup, customers can pay with assets like Bitcoin, Ethereum, USDT, and Solana. Some smaller coins, including memecoins, are also supported. Merchants receive the equivalent amount in either U.S. dollars or PayPal’s own stablecoin, PYUSD, at the moment of transaction. There is no need to wait for network confirmations or manage exchange processes.

Lower Transaction Fees for Cross-Border Payments

PayPal is offering the service at a 0.99% fee for the first year. After that, the rate increases to 1.5%. That is still lower than international card payments, which usually cost around 2% to 4%. For small businesses that serve global buyers, the savings could make a noticeable difference.

Most traditional cross-border transactions move through several financial intermediaries. That often creates delays, raises costs, and causes currency conversion losses. PayPal’s crypto tool avoids those steps by converting the crypto to dollars in real time. The funds then appear in the merchant’s PayPal account without additional processing.
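
To put those percentages in concrete terms, here is a back-of-the-envelope comparison on a hypothetical $1,000 cross-border sale. The two PayPal rates come from the announcement; 3% stands in for the middle of the 2% to 4% card range cited above.

```python
# Back-of-the-envelope fee comparison on a hypothetical $1,000 sale.
sale = 1_000.00

paypal_year_one = sale * 0.0099  # 0.99% introductory rate -> $9.90
paypal_after    = sale * 0.015   # 1.5% standard rate      -> $15.00
card_typical    = sale * 0.03    # midpoint of the cited 2-4% card range

print(f"PayPal, first year: ${paypal_year_one:.2f}")
print(f"PayPal, afterward:  ${paypal_after:.2f}")
print(f"Typical card fee:   ${card_typical:.2f}")
```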

Wallet Support and Settlement Options

To use the service, customers can connect wallets from platforms like MetaMask, Coinbase, Binance, Kraken, and others. The checkout accepts payments made from any of the supported wallets and tokens. On the merchant side, the system handles the conversion and settles the funds instantly.

If a payment comes in a coin that is either not supported or thinly traded, PayPal’s system may route the transaction through decentralized exchanges. From there, the funds are converted to PYUSD or USD before being deposited. The process is automated and does not require merchants to take action.
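
PayPal has not published its routing logic, so the following is nothing more than a guess at its shape: a hypothetical settlement function that converts supported coins directly and sends thinly traded ones through a decentralized exchange first. The token list and function are invented for illustration.

```python
# Hypothetical control flow; PayPal's actual routing rules are not public.
DIRECTLY_CONVERTIBLE = {"BTC", "ETH", "SOL", "USDT", "PYUSD"}  # invented set

def settle(token: str, amount: float, payout: str = "USD") -> str:
    if token in DIRECTLY_CONVERTIBLE:
        return f"convert {amount} {token} to {payout} in-house"
    # Unsupported or thinly traded coin: hop through a DEX into PYUSD first.
    return f"route {amount} {token} via DEX -> PYUSD -> {payout}"

print(settle("ETH", 0.5))
print(settle("SOMEMEMECOIN", 10_000))
```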

Regulatory Limits Still Apply

While the tool is active across most of the U.S., it is currently unavailable to merchants in New York. The state’s regulators have not yet cleared the use of PYUSD for local businesses or residents. This limits the service in one of the country’s largest financial markets.

In addition, like most digital assets, PYUSD and other supported coins do not carry federal protections. They are not insured by the FDIC or the SIPC. In the event of wallet compromise, insolvency, or technical failure, funds could be lost without reimbursement.

Legislation Prompted Changes to Stablecoin Use

The feature was released after the GENIUS Act became law. This legislation restricts how stablecoins can earn interest and steers their design toward payments and trading use cases. For platforms like PayPal, that shift led to changes in how stablecoins fit into their services.

Since early 2025, PYUSD’s market capitalization has grown sharply. This suggests more users and businesses are starting to adopt it for transactions instead of only holding it. While other platforms like Stripe and Coinbase are also rolling out crypto-based tools for merchants, PayPal’s approach focuses on built-in conversion and direct access for sellers.

Market Reach and System Growth

PayPal’s network now connects with wallets used by hundreds of millions of crypto holders worldwide. By allowing them to pay with crypto while settling in fiat, the system gives U.S. merchants a way to reach overseas customers without expanding banking infrastructure.

The company said its service connects to more than $3 trillion in crypto market value. It supports over 100 tokens and plugs into major wallet platforms. The system is designed to make use of APIs and automated agents, which means the payments can be triggered by software, not only human users.

PYUSD also includes the option to hold funds within PayPal and earn yield on balances. That feature may appeal to sellers who prefer keeping earnings inside the system instead of moving them to external accounts.

Future Will Depend on Regulatory Stability

The crypto payments tool offers faster settlement and lower costs than many existing methods. Even so, its growth will likely depend on how regulators shape stablecoin policies in the months ahead. PayPal’s wider crypto plans are tied to approval in key markets and broader trust in PYUSD as a payment asset.

The company is continuing to build out the system and extend wallet integrations. If adoption continues, more platforms and merchants may treat digital currency not just as a speculative asset, but as part of day-to-day business.


Notes: This post was edited/created using GenAI tools.

Read next: How to Write Better Prompts for AI Chatbots That Actually Do What You Want
by Irfan Ahmad via Digital Information World

Monday, July 28, 2025

AI Jobs Are Paying More Than Ever as Startups Compete for Top Engineers

A wave of competition among artificial intelligence startups is changing how much tech professionals earn, with some roles now paying well into the upper six figures. Federal disclosure documents, required for companies hiring foreign workers under the H-1B visa program, reveal how aggressively AI companies are offering high base salaries to attract skilled employees.

These records show only base compensation and leave out other incentives such as stock options, signing bonuses, or performance awards, which often make total pay significantly higher. Still, the raw numbers offer a rare look into how much top startups are paying to stay ahead in the race for AI talent.

OpenAI Offers Broad Salary Bands for Core Staff

OpenAI, the San Francisco-based research lab behind ChatGPT, reported some of the highest base salaries across its technical and operations teams. The role of software engineer appears in several listings, with salaries ranging from $200,000 to $440,000, depending on specialization. Research-focused roles, such as research engineers and research scientists, were reported at up to $440,000.

Hardware engineering roles also made the list, with pay reaching $360,000. Other technical jobs, including security engineers and intelligence investigators, were listed between $310,000 and $382,500. Non-technical positions such as program staff, finance, and community support carried base salaries between $220,000 and $270,000. In some cases, data scientists and design professionals were paid up to $385,000.

These salaries do not reflect the company's broader compensation package, which includes equity and variable bonuses. Some recent listings on OpenAI’s website suggest software engineers can earn as much as $590,000 annually when all components are included.

Anthropic Lists the Highest Research Engineering Compensation

Anthropic, another high-profile AI startup based in the U.S., disclosed particularly high pay for its research roles. A research engineer at the company can earn between $340,000 and $690,000. Some listings for members of the technical staff also reached $405,000.

The highest-paid technical staff manager received $690,000. Roles in finance and strategy were also well-compensated, typically falling between $230,000 and $285,000. Anthropic’s hiring includes other departments as well, with recruiters earning $170,000 and operations staff around $230,000. A regulatory lead position was listed at $210,000, while account executives in sales were paid about $126,000.

Cohere, Glean, and Abridge Are Paying Above Traditional Tech Levels

Other AI startups are competing with similar offers, even if their operations are smaller. Cohere, which builds large language models for business use, offered $240,000 to members of its technical staff.

Glean, a company focused on enterprise search tools, paid software engineers between $190,000 and $260,000. Its machine learning engineering roles came with base salaries of $210,000, while software engineering managers reached $250,000. A data science team lead at Glean was listed with a base salary of $230,000.

Abridge, a health tech company using AI to automate clinical documentation, reported software engineers earning between $235,000 and $240,000. These are senior roles with experience requirements, placing them near the upper salary range for most health-related AI positions.

Grammarly Offers Strong Compensation Across Teams

Grammarly, which integrates AI into writing and grammar tools, has expanded its technical hiring with salaries that reflect the sector's competition. Machine learning software engineers earned up to $318,000, while engineers working on data infrastructure could receive $315,000.

AI engineering positions were listed at $223,000. Other technical roles such as analytics engineers and platform developers were typically paid between $250,000 and $280,000. Even non-engineering roles in user research, marketing, and product design received high salaries. Product designers, for instance, were paid up to $230,000, while user researchers earned more than $221,000.

Mistral AI and Safe Superintelligence Use Wide Salary Ranges

Mistral AI, based in Europe but hiring in the U.S., listed salary bands for AI scientists between $280,000 and $350,000. A spokesperson later clarified that salaries in the U.S. could fall anywhere between $150,000 and $450,000.

Safe Superintelligence Inc., a new company co-founded by former OpenAI researchers, showed a wide range for its technical staff as well. Listed salaries ranged from $150,000 to $500,000, which reflects a diverse hiring strategy across seniority levels.

Smaller Startups Show Upward Trends in Design and Strategy Roles

Anysphere, the team behind Cursor (an AI code assistant), offered $250,000 to a product designer. Synthesia, which creates AI avatars for video generation, listed a customer success manager role at $120,000. Despite being smaller firms, these companies appear to be aligning with the pay expectations set by larger AI labs.

Poolside, another startup with a focus on software and AI integration, listed a performance director at $230,000.

Thinking Machines Lab Matches Top-Level Compensation

Thinking Machines Lab, led by Mira Murati, has entered the high-salary tier with listings of $450,000 to $500,000 for core technical roles. The company also disclosed that a co-founder and machine learning scientist earned within the same range. These figures suggest a tight alignment with compensation levels offered by longer-established AI firms.

The Competitive Landscape Is Pushing Salaries Higher

These figures come from mandatory government filings, which companies submit when sponsoring employees through the H-1B visa system. While they exclude other elements of compensation, they still provide a clear benchmark.

Companies working in AI are consistently raising base pay to compete for a narrow slice of talent capable of building the next generation of machine learning models. From product design to platform engineering, salaries reflect a landscape where specialized knowledge continues to carry a premium.

Six-Figure Wars: AI Startups Battle for Top Tech Talent

| Company | Role | Annual Salary |
| --- | --- | --- |
| OpenAI | Software Engineer | $200K - $440K |
| OpenAI | Research Engineer | $210K - $440K |
| OpenAI | Member of Technical Staff | $210K - $530K |
| OpenAI | Hardware Engineer | $360K |
| OpenAI | Security Engineer | $310K |
| OpenAI | Finance Staff | $265K |
| OpenAI | Go To Market Staff | $220K - $280K |
| Anthropic | Research Engineer | $340K - $690K |
| Anthropic | Member of Technical Staff | $300K - $405K |
| Anthropic | Member of Technical Staff (Mgr) | $690K |
| Anthropic | Finance & Strategy | $230K |
| Anthropic | Recruiter | $170K |
| Cohere | Member of Technical Staff | $240K |
| Glean | Software Engineer | $190K - $260K |
| Glean | ML Engineer | $210K |
| Glean | Software Eng. Manager | $250K |
| Grammarly | Machine Learning Engineer | $318K |
| Grammarly | Software Engineer | $180K - $302.1K |
| Grammarly | Analytics Engineer | $250K |
| Grammarly | Data Scientist | $170K |
| Grammarly | Engineering Manager | $315K |
| Abridge | Software Engineer | $235K - $240K |
| Thinking Machines Lab | Member of Technical Staff | $450K - $500K |
| Mistral AI | AI Scientist | $280K - $350K |
| Mistral AI | General Technical Staff | $150K - $450K |
| Safe Superintelligence | Member of Technical Staff | $150K - $500K |
| Synthesia | Customer Success Manager | $120K |
| Poolside | Performance Director | $230K |
| Anysphere | Product Designer | $250K |

Methodology

To analyze compensation trends across top artificial intelligence startups, Insider examined publicly available salary disclosures submitted to the U.S. Department of Labor. These filings are part of the H-1B visa application process, which requires employers to report salary information when seeking to hire foreign workers for specialized roles. Only base annual salaries were considered for this analysis. Supplemental compensation such as equity grants, signing bonuses, or performance-based incentives was not included, as these details are not required in the federal disclosure process.

The data collection focused exclusively on AI-focused startups with private market valuations exceeding $2 billion, excluding companies that specialize in robotics, autonomous vehicles, or military-related technologies. Salary listings were drawn from the most recent filings available in 2024 and early 2025.

Where multiple salary entries existed for the same job title within a company, reported figures were expressed as ranges to reflect variation by experience, location, or specialization. In cases where company representatives provided updated or clarifying salary bands, those figures were included to provide more accurate context.
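
As a sketch of that range-building step, assuming the filings were loaded into a table with company, title, and base-salary columns (the column names and sample rows below are hypothetical, not the actual Department of Labor schema):

```python
import pandas as pd

# Hypothetical miniature of the disclosure data; real H-1B filings carry
# many more fields (worksite, wage level, filing dates, and so on).
filings = pd.DataFrame({
    "company": ["OpenAI", "OpenAI", "Glean", "Glean"],
    "title": ["Software Engineer"] * 4,
    "base_salary": [200_000, 440_000, 190_000, 260_000],
})

# Collapse repeated company/title entries into a min-max salary range.
ranges = filings.groupby(["company", "title"])["base_salary"].agg(["min", "max"])
print(ranges)  # e.g. OpenAI / Software Engineer -> 200000..440000
```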

This approach offers a rare look into base compensation practices within leading AI startups, given that most companies in the sector do not publicly share pay structures.

Read next: Trust Gaps Remain: Everyday Scenarios Where AI Chatbots Struggle to Deliver
by Irfan Ahmad via Digital Information World

How to Write Better Prompts for AI Chatbots That Actually Do What You Want

AI-powered chatbots have started showing up in writing, planning, coding, and research tasks across every kind of workspace. People are using them to outline reports, summarize documents, make to-do lists, and even troubleshoot code. But what most users eventually realize is that typing a vague request into the prompt box often leads to disappointing results.

The issue isn't just with the chatbot’s limitations. It’s also with the way the request is written. Chatbots, even the most advanced ones, don’t guess well. They depend entirely on the information they’re given in that moment. That’s why knowing how to write a clear, structured, and thoughtful prompt can make the difference between something useful and something you have to redo from scratch.

Here’s a breakdown of how to get the most out of chatbots like ChatGPT, Claude, Gemini and others, without needing to memorize any technical jargon or follow rigid templates.

Begin with a Clear, Simple Goal in Your Mind

Before typing anything, think through what you actually want the chatbot to produce. Do you need a short summary or a full article? Are you looking for step-by-step instructions, or do you just want ideas to build from? These differences matter more than most people expect. A chatbot doesn’t know your purpose unless you explain it. Even a small clue about your goal can steer it toward the answer you need.

It helps to take thirty seconds and sketch out your thoughts in a few quick notes. Once you know the format and the intended use of the response, you can steer the chatbot more clearly toward that goal.

Explain the Task Like You’re Talking to a New Assistant

Imagine explaining a task to someone who’s bright but unfamiliar with how you work. That’s roughly how you want to speak to a chatbot. They don’t have memory of past interactions (unless you’re using advanced setups), and they don’t know your personal preferences, writing tone, or goals unless you include them in your request.

If you’re asking it to write something, tell it who the audience is. If you need data sorted or rephrased, give context for why. If you want it written in a certain style or format, say so clearly. Otherwise, the response might be technically correct but stylistically off.

Use Lists to Lay Out What You Want

Long paragraphs packed with instructions tend to confuse chatbots. They might miss steps or misinterpret something halfway through. To avoid that, try laying out your instructions as bullet points or a simple numbered list. That creates structure, and structure helps the AI organize its response.

Think of it like handing someone a checklist instead of a paragraph. You’re not dumbing it down, you’re reducing the chance of error.
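
For instance, a request rewritten as a checklist might be sent through a script like this (the task and wording are invented purely for illustration):

```python
# Illustrative only: the task and requirements are made up for this example.
prompt = """Summarize the attached meeting notes.

Requirements:
1. Audience: executives who missed the meeting.
2. Length: 150 words or fewer.
3. Format: three bullet points, then a short action-item list.
4. Tone: neutral, no marketing language.
"""
```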

Add Examples When You Can

One of the most effective tools for better prompting is to include a short example of what you want. If you’re asking for a tone, a format, or a structure, giving a sample makes your expectations much clearer.

This approach doesn’t require perfect examples. Even rough sketches help the chatbot latch onto patterns. That could mean showing one version of a response, a headline style you like, or even a few phrases that capture your preferred tone. You don’t have to give many; one or two is often enough.

Give the System Time to Think Things Through

Sometimes users rush the system by expecting final answers right away. For more complex tasks, a better way is to ask the chatbot to reason through each part step-by-step. This means breaking things down into small phases.

If you're solving a problem, planning something complex, or evaluating options, ask the chatbot to write out its thinking process first. That structure helps the AI clarify its own logic before giving a final answer. It might not sound intuitive, but this step-by-step flow often leads to better outcomes.
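
A sketch of what that phased phrasing can look like, with an invented scenario:

```python
# Illustrative only: the scenario is made up for this example.
prompt = (
    "Help me choose between hosting plans A and B.\n"
    "Step 1: List the criteria you will judge them on.\n"
    "Step 2: Evaluate each plan against every criterion, one at a time.\n"
    "Step 3: Only then give a final recommendation with a one-line rationale."
)
```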

Assign a Role to Guide the Tone and Depth

If you're not happy with how a chatbot answers, try giving it a role to play. This doesn’t mean pretending; it means giving it a working perspective. Asking it to write like a teacher, financial advisor, editor, or consultant often improves the tone and structure.

Roles give the chatbot an anchor to guide how it responds. If you want short, fast, clear answers, try assigning the role of a help desk agent or journalist. If you need deeper analysis, try a role like researcher or strategist. The shift can be surprisingly effective.
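
In code, assigning a role is usually just a system message. Here is a minimal sketch assuming the OpenAI Python SDK (other chat APIs work the same way; the model name and wording are placeholders, not recommendations):

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # The system message assigns the role that anchors tone and depth.
        {"role": "system",
         "content": "You are a help desk agent. Answer in three short sentences."},
        {"role": "user",
         "content": "Why does my laptop fan run loudly when I open a browser?"},
    ],
)
print(response.choices[0].message.content)
```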

Let It Say "I Don’t Know" When Needed

Chatbots are more confident than accurate. If a question has no clear answer, the system might still generate something that sounds convincing but isn’t based on real information. That’s why it helps to give the AI permission to admit uncertainty.

You can add something like, “If you’re unsure, say so,” at the end of your prompt. This helps reduce the chance of made-up answers. You can also ask it to find a source or confirm claims before including them in its final reply. If it can’t confirm something, you can ask it to leave that part out.

Refining Prompts Is Normal, Don’t Expect Perfection on the First Try

Even a great prompt won’t always get you the exact result you had in mind. Sometimes the wording feels off. Other times the structure isn’t quite right. That’s normal. Most people revise their prompts once or twice before the result hits the mark.

You can treat your first prompt as a draft. After seeing what the chatbot returns, adjust your instructions to be clearer or more focused. This trial-and-error process doesn’t mean your prompt failed. It means you’re shaping the interaction more deliberately.

As explained by Google: "Prompting is a skill we can all learn. You will likely need to try a few different approaches for your prompt if you don’t get your desired outcome."

Strong Prompts Make AI Tools More Useful, Not Smarter

A well-written prompt won’t make a chatbot more intelligent, but it will make the output more aligned with your needs. Think of prompting as a form of teaching. You’re showing the AI how to respond within your context. The clearer your directions, the better the results.

For people who use AI tools often, whether for writing, planning, coding, or organizing, prompting becomes less about clever tricks and more about communication. And like any skill, it gets better with practice.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: The Overlooked Flaws of ChatGPT: The Hidden Costs Behind the Hype
by Irfan Ahmad via Digital Information World

Sunday, July 27, 2025

Trust Gaps Remain: Everyday Scenarios Where AI Chatbots Struggle to Deliver

AI chatbots are everywhere now, showing up in workplaces, homes, and phones. For people under pressure, they often seem like an easy way to save time or simplify a task. You can ask them to write an email, explain a concept, or plan your week. But even as the tools improve, some situations still call for human attention. Relying on AI in the wrong places can hurt your reputation, your finances, or even your peace of mind. Based on experience and repeated testing, here are nine common areas where chatbots aren’t as helpful as they seem.

1. Important, High-Stakes Tasks

Once you start using a chatbot regularly, it becomes easy to overextend its role. Many people find themselves relying on AI for everything from managing stress to interpreting medical symptoms, preparing tax forms, or handling legal paperwork. These are areas where wrong information can have lasting consequences.

AI chatbots work by predicting language, not verifying facts. That difference matters when decisions carry risk. People sometimes treat chatbots as if they’re as reliable as a doctor or lawyer, but the gap in expertise is wide. A tool that sounds confident can still be completely wrong, and that risk is bigger when users forget they’re talking to a program, not a trained professional. It helps to think of chatbots like a friend who talks a lot but doesn’t always know what they’re saying. They might sound convincing, but they’re not the person you’d want in charge of something serious.

2. Replacing a Real Personal Assistant

Some AI features advertise themselves as assistant-grade tools, but most can't deliver what they promise. ChatGPT and Gemini, for example, still can’t handle simple recurring tasks like scheduling calls, ordering groceries, or managing notifications effectively. They may offer itinerary suggestions or help answer questions, but that’s a far cry from managing real-time demands or working across multiple systems smoothly.

Even newer tools designed to function more like personal assistants, such as Gemini’s Gems or ChatGPT’s Custom GPTs, continue to hit technical walls. In many tests, they failed to perform routine tasks or showed inconsistent results. Some users report that AI helpers get stuck, misinterpret requests, or simply skip steps altogether. It might feel convenient to hand over a list, but the results don’t always match the promise. For now, using chatbots to manage daily logistics can create more mess than order.

3. Writing Personal or Professional Emails

AI can help improve grammar or suggest phrasing, but relying on it to compose personal emails creates distance. The tone often feels off. Sometimes it sounds robotic, other times it comes across as vague or generic. That disconnect matters, especially when a message is meant to build trust; people notice when your words don’t sound like you.

Some email platforms now use AI to build full messages that match past communication styles. On paper, that looks advanced. In reality, it can feel strange to receive a message that seems hollow. There’s also the question of privacy. Granting a chatbot access to your inbox means handing over sensitive conversations to a system that doesn’t understand context. When tone matters or when privacy is a concern, it’s safer to write your own messages. People know the difference, and how you say something often matters more than what you say.

4. Searching for Jobs

Asking a chatbot to help with a job search might seem efficient at first, but the follow-through is often weak. You might get a few tips or website links, but most chatbots don’t scan actual listings or filter opportunities based on your real qualifications. They rarely match experience with relevant roles and often skip the details that matter.

In practice, the results feel generic. For example, a prompt asking for writing jobs might bring up a basic list of job boards or refer you to outdated resources. You’re left with vague direction instead of practical leads. Platforms like LinkedIn or Indeed still do a better job surfacing up-to-date roles, filtering by skill or location, and highlighting legitimate openings. If you’re hoping AI can simplify the search process, it might save a few minutes early on, but it doesn’t replace targeted research or reliable job platforms.

5. Building Resumes or Cover Letters

Chatbots can offer structure and surface-level suggestions, but they don’t understand your experience. That matters when applying for jobs. A resume needs to reflect what you’ve done, how you’ve grown, and where you’re headed. The best versions are honest and sharp, and that’s difficult for a bot to produce.

AI-generated cover letters often miss the mark, too. They tend to repeat clichés or leave out the specifics that show why you're a fit. Recruiters read a lot of applications. It’s not hard to spot writing that feels stiff, lifeless, or padded with filler. While AI tools might help with formatting or refining individual sentences, creating your full application that way risks making you look careless or disengaged. Most hiring managers want to hear your own voice, even if it’s not perfect.

6. Finishing Homework or Academic Projects

For students, chatbots can be tempting shortcuts. A quick prompt can return a full essay, answer a math problem, or explain a historical event. But these answers aren’t always accurate. In science and math, AI often stumbles over logic. In creative writing, it produces generic results that are easy to flag. As schools grow more watchful of AI use, even honest students are getting caught up in detection efforts.

Academic tools are getting sharper at identifying AI content. That means even if you tweak the response, there's still a good chance a teacher, or the system, will recognize it. And when the content itself is flawed or misleading, you lose more time fixing the problem than you would’ve spent doing the work properly. When grades are on the line, it pays to double-check everything or start from scratch.

7. Comparing Products or Planning Purchases

AI features like ChatGPT’s shopping assistant or Gemini’s product-matching tool are still hit or miss. Sometimes the results are useful, but often they leave out top products or fail to explain how they ranked the items. When you’re making a purchase, especially an expensive one, unclear sourcing makes recommendations hard to trust.

In testing, ChatGPT missed several popular laptops in its suggestions. Gemini did a bit better, but the answers still lacked consistency. And with no clear explanation for the rankings, it’s hard to know whether the AI reviewed real data or just repeated outdated information. For shopping advice, review sites, comparison charts, or hands-on videos still provide better guidance. They’re also easier to fact-check. With your money on the line, solid research beats shortcuts every time.

8. Backing You Up in an Argument

It’s common to use a chatbot to check a fact or support a point in a disagreement. The problem is, chatbots are designed to mirror your question. If you come in with a bias, they often respond in a way that confirms it. That feedback loop can twist the truth and make you feel more right than you are.

In casual tests, people have prompted chatbots with flawed reasoning, and the bots still agreed. That might feel validating, but it doesn’t help when you're wrong. In heated discussions, this tendency can damage relationships, especially if you lean on AI to win rather than to understand. It’s smarter to stick with trusted sources when facts matter, and better to talk through disagreements than drag a chatbot into the middle.

9. Weighing In on Sensitive or Contested Topics

Language models often fall short when asked to navigate politically sensitive or emotionally charged topics, especially those involving conflict or oppression. During ongoing events like the Palestine-Israel war, responses have been shown to reflect uneven perspectives. The AI might avoid acknowledging war crimes, downplay civilian suffering, or echo only the dominant geopolitical narrative. These issues arise not from malice, but from how the model is trained on public internet data, which includes biases embedded in dominant media sources.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Teens Are Turning to AI Companions, But Many Still Feel Uneasy
by Web Desk via Digital Information World

Saturday, July 26, 2025

Think Your Data Stays Private? AI Tools Are Proving Otherwise

AI tools are appearing in nearly every corner of daily life. Phones, apps, search engines, and even drive-throughs have started embedding some form of automation. What used to be a straightforward browser is now bundled with built-in assistants that try to answer questions, summarize tasks, and streamline routines. But these conveniences come at a growing cost: your data.

The requests for access from AI apps have grown broader and more aggressive. Where once people questioned why a flashlight app needed their location or contacts, similar requests are now made under the banner of productivity. Only now, the data these apps ask for cuts far deeper.

One recent case involved a web browser called Comet, developed by Perplexity. It includes an AI system designed to handle tasks like reading calendar entries or drafting emails. To do that, it asks users to connect their Google account. But the list of permissions it seeks goes far beyond what many would expect. It asks for the ability to manage email drafts, send messages, download contact lists, and view or edit every event across calendars. In some cases, it even tries to access entire employee directories from workplace accounts.
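
To make the scale of that request concrete, here is a hypothetical sketch of the kind of Google OAuth scope list such permissions would correspond to. The article does not reproduce Comet’s exact scope list, so the entries below are real Google scope URIs chosen purely for illustration, paired with a narrower read-only alternative.

```python
# Hypothetical illustration only: real Google OAuth scope URIs that map to the
# permissions described above, not Comet's published scope list.
ASSISTANT_SCOPES = [
    "https://www.googleapis.com/auth/gmail.compose",      # create and manage email drafts
    "https://www.googleapis.com/auth/gmail.send",         # send messages on the user's behalf
    "https://www.googleapis.com/auth/contacts.readonly",  # download contact lists
    "https://www.googleapis.com/auth/calendar",           # view and edit events on every calendar
    # Workspace accounts only: read the organization's employee directory.
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
]

# A minimally scoped assistant that only reads mail and calendars would need far less:
READ_ONLY_SCOPES = [
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/calendar.readonly",
]
```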

Perplexity claims that this data remains on a user’s device, but the terms still hand over a wide range of control. The fine print often includes the right to use this information to improve the company’s AI systems. That benefit flows back to the company, not necessarily to the person who shared the data in the first place.

Other apps are following similar patterns. Some record voice calls or meetings for transcription. Others need access to real-time calendars, contacts, and messaging apps. Meta has also tested features that sift through a phone’s camera roll, including photos that haven’t been shared.

The permissions these tools request aren't always obvious, yet once granted, the decision is hard to reverse. From a single tap, an assistant can view years of emails, messages, calendar entries, and contact history. All of that gets absorbed into a system designed to learn from what it sees.

Security experts have flagged this trend as a risk. Some liken it to giving a stranger keys to your entire life, hoping they won’t open the wrong door. There’s also the issue of reliability. AI tools still make mistakes, sometimes guessing wrong or inventing details to fill in gaps. And when that happens, the companies behind the technology often scan user prompts to understand what went wrong, putting even private interactions under review.

Some AI products even act on behalf of users. That means the app could open web pages, fill in saved passwords, access credit card info, and use the browser history. It might also mark dates on a calendar or send a booking to someone in your contact list. Each of these actions requires trust, both in the technology and the company behind it.

Even when companies promise that your personal data stays on the device, the reality is more complicated, as highlighted by u/robogame_dev on Reddit. Most people assume this means photos, messages, or location logs remain untouched. But what often slips under the radar is how that raw information gets transformed into something else, something just as personal.

Modern AI tools extract condensed representations from your data. These might look like numerical vectors, interest segments, or hashed signals. While the raw voice clip or image may stay local, the fingerprint it generates (a voice embedding, a cohort ID, or a face vector) often gets sent back to the server. These compact data points can still identify you or be linked with other datasets across apps, devices, and even companies.
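
A minimal sketch of that pattern, using a toy feature-hashing function in place of a real on-device embedding model (the cohort label and upload step are invented for illustration): the raw text never leaves the device, but the derived vector does, and vectors can still be matched and joined server-side.

```python
import hashlib

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy stand-in for an on-device encoder: maps raw text to a fixed-length
    numerical vector via feature hashing. Real products use trained models,
    but the privacy shape is the same: raw data in, compact fingerprint out."""
    vec = [0.0] * dims
    for token in text.lower().split():
        bucket = int(hashlib.sha256(token.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    return vec

# The raw message stays on the device...
raw_message = "Reminder: cardiology appointment Tuesday at 9am"

# ...but the compact representation derived from it is what gets exported,
# and it can still be clustered, matched, and linked with other datasets.
payload = {
    "cohort_id": "health-interest-7",   # hypothetical interest segment
    "embedding": embed(raw_message),
}
# upload(payload)  # hypothetical server call
```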

Over time, that creates a shadow profile. It doesn’t need your full browsing history or photo albums to be useful. A few attributes, like the categories of content you read, the way you speak, or your heart rate trends, can reveal more than expected. Advertisers, insurers, or third-party brokers may use this information to shape pricing, predict preferences, or infer sensitive traits.

So while on-device processing helps limit exposure, it doesn’t erase the risk. Much like measuring your face without keeping the photo, what gets extracted and exported can still follow you around the digital world.

If an app or tool asks for too much, it may be worth stepping back. The logic is simple: just because a tool can help with a task doesn’t mean it should get full access to your digital life. Think about the trade. What you’re getting is usually convenience. What you’re giving up is your data, your habits, and sometimes, control.

When everyday tools become entry points for deep data collection, it's important to pause and ask whether the exchange feels fair. As more of these apps blur the line between helpful and invasive, users may need to draw that line themselves.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Study Finds Most Marketers Use GenAI, But Few Have Worked with Agentic AI
by Irfan Ahmad via Digital Information World

Study Finds Most Marketers Use GenAI, But Few Have Worked with Agentic AI

Cannes Lions this year marked an important shift in the conversation around AI adoption in marketing: no longer just promises, but actual implementation and performance. A common theme across panels and brand showcases was that generative AI should not be treated as separate from creative and strategic work.

From faster content creation to personalizing campaigns at scale, AI is clearly helping marketers reach new levels of productivity. But how well do marketers really know their tools? What comes next for AI? And are they ready for that next wave?

To answer these questions, Outcomes Rocket conducted a survey of 1,299 marketers across industries, roles, and organization sizes. The findings show that generative AI now assists a large share of marketers’ daily work, yet only about a third of respondents have any exposure to other forms of AI, such as agentic AI. The survey surfaced not only the positive side of this adoption but also fears, particularly among newcomers, about job security in the field over the next two to three years.

Widespread AI Adoption in Marketing

The data showed that nearly all participants (89.5%) use AI in their work. This holds across industries, job levels, and organization sizes, and especially among small businesses that lack large marketing budgets but still need to compete with bigger corporations.


Generative AI dominates AI use in marketing, with 93.5% of marketers using these tools for content creation, including blog posts, advertisement copy, social media content, and creative brainstorming.

Standing at the top of the list is ChatGPT, with 94.8% of users naming it their main platform. The result is unsurprising given its ease of use, versatility, real-time interaction, and ability to generate output across a wide range of formats and tones. ChatGPT’s biggest differentiator is that OpenAI moved first, letting the public test and experiment with the tool before any competitor rolled out its own. As a result, ChatGPT comes up in almost every AI conversation, and it has naturally become the first pick for many users of generative AI.

Early Stage but Growing Interest in Agentic AI

However, generative AI is not the only form of AI relevant to marketing. Agentic AI points toward a fully autonomous solution: a single system capable of running a complete marketing campaign with minimal human intervention, from developing strategy and segmenting the audience to creating content, distributing it across channels, and analyzing performance. Such a model weighs historical data and competitor trends to decide the most suitable next action, whether that is a new campaign, a new ad strategy, or an A/B test. Unlike traditional automation, agentic AI learns from its results to refine its next course of action.
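
As a rough illustration of that difference, here is a minimal, hypothetical sketch of the decide-act-learn loop in Python. The action names and the reward function are invented stand-ins; the point is only that observed results feed back into the next decision, which fixed automation never does.

```python
import random

# Invented action space for illustration; a real platform would plan richer campaigns.
ACTIONS = ["launch_new_campaign", "adjust_ad_strategy", "run_ab_test"]

def observe_metric(action: str) -> float:
    """Hypothetical stand-in for real campaign feedback such as CTR or conversions."""
    return random.random()

def agentic_loop(rounds: int = 20) -> dict[str, float]:
    """Schematic agent loop: choose an action, observe the outcome, and update
    the payoff estimate for each action so that later choices improve."""
    value = {a: 0.0 for a in ACTIONS}   # learned payoff estimates
    counts = {a: 0 for a in ACTIONS}
    for _ in range(rounds):
        # Decide: mostly exploit the best-known action, occasionally explore.
        if random.random() < 0.2:
            action = random.choice(ACTIONS)
        else:
            action = max(value, key=value.get)
        reward = observe_metric(action)   # act and observe
        counts[action] += 1               # learn: incremental running average
        value[action] += (reward - value[action]) / counts[action]
    return value

print(agentic_loop())
```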

Nonetheless, despite its immense potential, agentic AI adoption is still at an early stage: only 33.3% of marketers have experimented with it, and the prevailing attitude is reserved or simply unfamiliar. That is not necessarily a bad sign; it points to substantial room for growth, and the technology could prove transformative for the field in the next 12 to 24 months. Once more marketers and organizations have experimented with these systems, agentic AI could become a top contender in marketing technology.

Trust Issues and Training Gaps in AI Use

Accuracy has long been the top concern with every new AI model, and its application in marketing is no exception. Over 93% of marketers have encountered the common problems with AI-generated content: inaccuracy, bias, or irrelevance. As a result, around 70% spend considerable time revising or proofreading output before publishing. Surprisingly, despite working alongside AI so frequently, only 42% are confident they can spot AI-generated content. A lack of official guidance and education may explain this: the survey revealed that 80% of participants had received no formal AI training from their company. The figure points both to a lack of preparedness in the workforce and to how inefficiently new tools are being integrated.

Job Security Concerns Are Rising

The survey also explored sentiment about AI’s effects on employment, and it turned out to be quite negative. The pressure is real: nearly 89% of marketers believe AI will result in job losses in the next two or three years.

The statistics indicate that about one-third of marketing activities could become automated in the near term, with junior positions the most likely to be impacted, since automation mostly targets the routine, repetitive work that typically falls to them. Fears among less experienced marketers about long-term career security are therefore reasonable and expected.

While most respondents believe AI will take over many marketing roles, the advantages of using it are undeniable: 63% of participants view AI as a helpful assistant rather than a potential substitute, applying it to boost productivity and cut time spent on routine work, which frees more time for creative tasks and strategy. Only a small share (16%) believe AI will take away all marketing jobs, and although concern about job security is high, the vast majority (over 70%) have not experienced any direct impact from these threats. Looking at the big picture, advocacy for AI is the stronger current, suggesting a transformation of the field for the better rather than a takeover.

Future Outlook: Continued Growth and Investment

Despite the ambivalence about job security that accompanies this high rate of AI adoption, marketers remain open and eager to take full advantage of the technology. Overall sentiment about what AI has to offer is very positive: almost eight out of ten marketers believe generative AI will be the biggest near-term game-changer, bringing significant change to how content is created, audiences are engaged, and campaigns are developed.

In addition to content creation, predictive analytics and hyper-personalization are gaining rapid attention. Over 50% of respondents believe these data-powered tools will be used more often in the future to get a closer look at customer behavior, allowing teams to create highly personalized experiences.

Meanwhile, 41.7% of marketers believe their organizations will invest more in AI tools and technologies within the coming year. That willingness to spend is itself a signal: companies are betting that AI will lead to growth and innovation.

Read next: Financial Cybercrime Risks Vary Sharply Across U.S. States, Report Finds


by Irfan Ahmad via Digital Information World

Friday, July 25, 2025

Apple Updates App Store Age Ratings to Strengthen Parental Controls

Apple has introduced new age categories on the App Store, changing how apps are rated for children and teenagers. From now on, apps will be classified under five age brackets: 4+, 9+, 13+, 16+, and 18+. The previous 12+ and 17+ labels have been dropped.

All apps and games have been automatically updated to match the new system. The changes are live in beta versions of Apple’s upcoming software releases, including iOS 26, iPadOS 26, and macOS Tahoe. A full public rollout is expected in September.

App developers are now being asked to complete new questions covering areas such as in-app features, medical or wellness content, and themes involving violence. This will allow Apple to assign age ratings more precisely. Developers can see and, if needed, revise their app’s rating through App Store Connect.

Parents browsing the App Store will begin to see more information about each app. Details will include whether it contains user-generated content, shows adverts, or has tools for parental control. These additions are designed to make it easier for families to decide whether an app is suitable.

Apps that fall outside a user’s allowed age range will be less visible on the platform. For example, they won’t appear in featured sections such as Today or Games if the account belongs to a child. This could influence how developers build and promote their apps, especially if they’re targeting younger audiences.

As part of the same update, Apple has improved the setup process for child accounts. Parents can now enter a child’s age during setup, which will be shared with developers using a new API. The API gives developers access only to the age range, not the exact birthdate, which Apple says helps personalise content without compromising privacy.

For this to work, developers must integrate the API into their apps. If they don’t, the system won’t adjust the experience based on the user’s age.
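
As a generic illustration of that data-minimisation idea (this is not Apple’s actual API, just the pattern sketched in Python), the platform holds the exact birthdate while an app only ever receives the bracket:

```python
from datetime import date

# The five App Store brackets described above. The platform keeps the exact
# birthdate and exposes only a coarse label to apps. Illustration only; this
# does not reproduce Apple's API.
BRACKETS = [(18, "18+"), (16, "16+"), (13, "13+"), (9, "9+"), (4, "4+")]

def age_bracket(birthdate: date, today: date | None = None) -> str:
    today = today or date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    for minimum, label in BRACKETS:
        if age >= minimum:
            return label
    return "under 4"

# An app integrating such an API would see only the label, never the birthdate:
print(age_bracket(date(2015, 6, 1)))  # e.g. "9+", depending on today's date
```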

The timing of Apple’s update comes as lawmakers in the United States continue to propose legislation aimed at protecting children online. Some states are calling for app stores to confirm user ages and collect parental consent before downloads are allowed. Apple and other major platforms, including Google, have argued that app developers should handle this responsibility.

The revised rating system is Apple’s way of addressing those concerns. While it won’t stop all misuse, the company believes that giving parents better tools, and making developers more accountable, can help reduce the risks children face online.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: UK Begins Online Age Checks to Limit Children’s Access to Harmful Content
by Asim BN via Digital Information World

UK Begins Online Age Checks to Limit Children’s Access to Harmful Content

New rules aimed at keeping children away from harmful online material have taken effect in the United Kingdom. The measures apply to websites and apps that display content involving pornography, violence, or subjects like suicide, self-harm, or eating disorders. Companies operating these services are now required to check users’ ages through approved methods such as credit card verification or facial image analysis.

The law assigns enforcement responsibilities to the country’s media regulator. Platforms that don’t follow the rules may face fines of up to £18 million or 10% of global revenue, whichever is higher. If companies ignore official information requests, senior managers may face legal consequences.

The requirement follows the 2023 Online Safety Act, which outlined duties for digital platforms to reduce harm for both children and adults. After a preparation period, the enforcement phase has started. Regulators have confirmed that thousands of adult websites are now using age checks. Social media platforms are being monitored for compliance with the same standards.

Recent findings from the regulator show that about half a million children between the ages of eight and fourteen viewed online pornography in the last month. The figures have drawn concern from child protection groups and public officials. The changes are intended to reduce the chances of similar exposure going forward.

While some gaps in enforcement remain, the introduction of mandatory checks is seen as a shift toward a more controlled online environment for minors. The aim is to create fewer pathways for children to reach dangerous or inappropriate content.

Additional measures are being considered. Officials have mentioned the possibility of setting time limits for how long children can spend on social apps each day. Any future changes will be introduced through separate decisions or legislative updates.

Digital platforms are now expected to meet technical and procedural requirements to show they are protecting young users. Oversight will continue as the regulator reviews how well the new rules are being followed.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Financial Cybercrime Risks Vary Sharply Across U.S. States, Report Finds
by Irfan Ahmad via Digital Information World