"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Friday, July 25, 2025
Apple Updates App Store Age Ratings to Strengthen Parental Controls
All apps and games have been automatically updated to match the new system. The changes are live in beta versions of Apple’s upcoming software releases, including iOS 26, iPadOS 26, and macOS Tahoe. A full public rollout is expected in September.
App developers are now being asked to complete new questions covering areas such as in-app features, medical or wellness content, and themes involving violence. This will allow Apple to assign age ratings more precisely. Developers can see and, if needed, revise their app’s rating through App Store Connect.
Parents browsing the App Store will begin to see more information about each app. Details will include whether it contains user-generated content, shows adverts, or has tools for parental control. These additions are designed to make it easier for families to decide whether an app is suitable.
Apps that fall outside a user’s allowed age range will be less visible on the platform. For example, they won’t appear in featured sections such as Today or Games if the account belongs to a child. This could influence how developers build and promote their apps, especially if they’re targeting younger audiences.
As part of the same update, Apple has improved the setup process for child accounts. Parents can now enter a child’s age during setup, which will be shared with developers using a new API. The API gives developers access only to the age range, not the exact birthdate, which Apple says helps personalise content without compromising privacy.
For this to work, developers must integrate the API into their apps. If they don’t, the system won’t adjust the experience based on the user’s age.
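Apple exposes this capability to native apps through its developer frameworks; purely to illustrate the privacy idea of acting on a coarse age range rather than a birthdate, here is a hypothetical sketch in Python (all names and age buckets are invented for illustration and are not Apple's real interface):

```python
# Hypothetical sketch: why an age *range* is less revealing than a birthdate.
# The bucket names and tiers below are illustrative, not Apple's actual API.

from dataclasses import dataclass

# Coarse buckets an app might receive instead of a date of birth
AGE_RANGES = ["under_13", "13_15", "16_17", "18_plus"]

@dataclass
class UserProfile:
    age_range: str  # the only age signal the app ever sees

def content_tier(profile: UserProfile) -> str:
    """Choose a content experience from the coarse range alone."""
    if profile.age_range == "under_13":
        return "kids"
    if profile.age_range in ("13_15", "16_17"):
        return "teen"
    return "full"

print(content_tier(UserProfile(age_range="13_15")))  # teen
```

The point of the sketch is that the app can still tailor its experience, but never learns exactly how old the user is.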
The timing of Apple’s update comes as lawmakers in the United States continue to propose legislation aimed at protecting children online. Some states are calling for app stores to confirm user ages and collect parental consent before downloads are allowed. Apple and other major platforms, including Google, have argued that app developers should handle this responsibility.
The revised rating system is Apple’s way of addressing those concerns. While it won’t stop all misuse, the company believes that giving parents better tools, and making developers more accountable, can help reduce the risks children face online.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
Read next: UK Begins Online Age Checks to Limit Children’s Access to Harmful Content
by Asim BN via Digital Information World
UK Begins Online Age Checks to Limit Children’s Access to Harmful Content
The law assigns enforcement responsibilities to Ofcom, the country’s media regulator. Platforms that don’t follow the rules may face fines of up to £18 million or 10% of global revenue, whichever is higher. If companies ignore official information requests, senior managers may face legal consequences.
The requirement follows the 2023 Online Safety Act, which outlined duties for digital platforms to reduce harm for both children and adults. After a preparation period, the enforcement phase has started. Regulators have confirmed that thousands of adult websites are now using age checks. Social media platforms are being monitored for compliance with the same standards.
Recent findings from the regulator show that about half a million children between the ages of eight and fourteen viewed online pornography in the last month. The figures have drawn concern from child protection groups and public officials. The changes are intended to reduce the chances of similar exposure going forward.
While some gaps in enforcement remain, the introduction of mandatory checks is seen as a shift toward a more controlled online environment for minors. The aim is to create fewer pathways for children to reach dangerous or inappropriate content.
Additional measures are being considered. Officials have mentioned the possibility of setting time limits for how long children can spend on social apps each day. Any future changes will be introduced through separate decisions or legislative updates.
Digital platforms are now expected to meet technical and procedural requirements to show they are protecting young users. Oversight will continue as the regulator reviews how well the new rules are being followed.
Read next: Financial Cybercrime Risks Vary Sharply Across U.S. States, Report Finds
by Irfan Ahmad via Digital Information World
Financial Cybercrime Risks Vary Sharply Across U.S. States, Report Finds
A new analysis of online financial crime data shows wide variation in how states are affected by scams, fraud, and digital theft. According to 2024 figures compiled from FBI reports, Mississippi ranks as the safest state, followed by Texas, Minnesota, Alabama, and South Dakota. Each of these states reported low victim counts, lower financial losses per resident, or smaller increases in scam activity over recent years.
Mississippi had the fewest victims per million people, and its total financial losses remained among the lowest nationwide. Even with fewer cybercrime laws in place, it showed relative stability, which may suggest that a lower rate of targeting plays a role. Texas performed well despite its population size, aided by a stronger legal framework. Minnesota’s ranking came from a balance of low victim numbers and strong legislative presence.
High Losses Concentrated in D.C., Iowa, and Nevada
Washington, D.C. stood out with the highest per-capita losses in the country. Between 2022 and 2024, its total financial damages rose sharply, topping $400 million, an outsized sum for its small population. Most of the losses were linked to high-value scams, including investment fraud and business email compromise. Iowa also showed a steep rise in both victims and losses. Nevada experienced similarly high numbers, driven by identity theft and phishing attacks.
In these states, legislative measures have been slow to match the growing risk. D.C. and Iowa, in particular, have limited policy protections in place. Nevada’s high urban population and large number of visitors may also contribute to its increased exposure.
Rising Trend in Cybercrime Across the Country
The report confirms a national rise in financial cybercrime. Between 2022 and 2024, most states recorded double-digit growth in either victim count or total losses. In some cases, both increased. The overall impact has expanded rapidly, with total U.S. financial losses from online scams more than doubling in just two years.
Many scams follow familiar formats. Phishing remains the most widespread. Tech support scams, SIM swapping, and investment fraud continue to cause substantial damage. Some states report high financial losses even with fewer victims, which points to the growing value of individual attacks.
Not All Safe States Have Strong Cybercrime Laws
Legislation plays a role, but its effect varies. Several top-ranked states, including Mississippi and South Dakota, have minimal cybercrime laws. Still, they recorded relatively low exposure. On the other hand, some states with more active legislative efforts still reported high losses. Illinois, for example, had more cybersecurity rules than average but did not rank among the safer states.
This gap between laws and outcomes suggests that other factors, such as demographic patterns, scammer focus, or digital awareness, may influence risk more than legislation alone.
Five Key Metrics Used for State Rankings
The ranking system used five measures. Victim count and total financial losses each made up 30 percent of the overall score. The percentage change in those two figures from 2022 to 2024 accounted for another 30 percent. The number of state-level cybercrime laws contributed the final 10 percent.
While legislation was included, it carried less weight because many laws do not directly address financial scams. The intent was to reflect actual exposure and risk, not just legal preparedness.
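To make the weighting concrete, here is an illustrative calculation with made-up state figures and a simple min-max normalization, neither of which is specified in the report:

```python
# Illustrative weighted scoring, assuming min-max normalization (not the
# report's actual methodology; all state figures below are invented).
WEIGHTS = {
    "victims": 0.30,   # victim count
    "losses": 0.30,    # total financial losses
    "change": 0.30,    # 2022-2024 percentage change in those figures
    "laws": 0.10,      # state cybercrime laws (inverted: more laws = safer)
}

def normalize(value, lo, hi):
    """Scale a raw value into [0, 1]."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def risk_score(metrics, bounds):
    """Higher score = more exposure; 'laws' is inverted so more laws lower risk."""
    score = 0.0
    for key, weight in WEIGHTS.items():
        n = normalize(metrics[key], *bounds[key])
        if key == "laws":
            n = 1.0 - n
        score += weight * n
    return score

# Made-up figures for two hypothetical states
bounds = {"victims": (0, 5000), "losses": (0, 400e6), "change": (0, 200), "laws": (0, 20)}
state_a = {"victims": 500, "losses": 20e6, "change": 15, "laws": 12}
state_b = {"victims": 4200, "losses": 350e6, "change": 180, "laws": 3}

print(risk_score(state_a, bounds))  # low exposure
print(risk_score(state_b, bounds))  # high exposure
```

Under this sketch, a state with few victims, low losses, and more laws on the books ends up with a much lower risk score than one trending the other way, which mirrors how the rankings are described.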
Most Common Forms of Financial Cybercrime
The report categorized a wide range of scams. These included phishing, data breaches, SIM swap fraud, credit card theft, business email compromise, and ransomware. Confidence scams, like romance fraud and fake job offers, were also common. In many cases, the largest losses came from high-value scams concentrated in fewer but more severe incidents.
States with higher tourist traffic or dense urban centers appeared to attract more of these schemes. Schemes involving fake support services, lottery scams, and overpayment traps were especially active in Nevada, Arizona, and Florida.
Arizona and Oregon Among the Fastest-Rising Risk States
Arizona showed a high number of victims and significant financial losses in 2024. Its rate of legislative action remained low, despite its exposure. Oregon saw the sharpest increase in losses of any state, with a jump of over 400 percent in two years. These changes pushed both states further down the safety rankings.
Oregon, in particular, experienced a rise in investment-related scams and phishing attacks. This was coupled with a lack of robust legal protections, suggesting a mismatch between the scale of the problem and available countermeasures.
Report Reflects Ongoing Digital Threats
Although state-level variation is significant, the broader trend is clear. Financial cybercrime continues to spread across the country. While some areas show signs of control or resilience, others remain vulnerable. The study points to a need for coordinated efforts that combine legal tools, public awareness, and individual digital safety habits.
No state is fully immune. Even in places with low current exposure, changing attack patterns could raise future risk. For now, a mix of laws, education, and technology use appears to make the most difference.
H/T: Cloudwards.
Read next: Sam Altman Sees Short Video Apps, Not AI, as Bigger Threat to Kids' Minds
by Asim BN via Digital Information World
Thursday, July 24, 2025
Google Tests Web Guide, a New Way to Sort Search Results
Google has introduced a new experimental feature that reshapes how search results are displayed. The feature, named Web Guide, is part of Search Labs and allows users to view web results in a grouped layout instead of a long list of links. It uses artificial intelligence to sort search results based on specific parts of the user’s query.
Search results arranged by themes
Web Guide works by identifying different directions a question might take. Then it displays sections based on those distinctions. For example, if someone asks about traveling alone in Japan, one part of the results may cover safety tips while another includes planning resources or personal travel stories. The content is still drawn from the web, but it is displayed with more structure.
Focus on broad and complex searches
This system is aimed at people who enter long or open-ended searches. These kinds of queries often bring mixed results, which can be harder to interpret quickly. Web Guide uses Gemini, Google's AI model, to read the intent behind the search and match it with distinct categories that reflect the variety of responses across the internet.
Limited release with opt-in access
Right now, Web Guide is only available in the Web tab of Google Search for select users in some regions. Users who turn it on will see the grouped layout when they search. Anyone who prefers the original format can switch back from the same place without leaving the experiment.
Plans to expand to more parts of Search
Over time, Google plans to bring Web Guide to other parts of its platform, including the "All" tab. The company has not shared a timeline for this. For now, the experiment joins other features in Search Labs, such as AI Mode, NotebookLM, and several smaller tests that focus on creative or informational tools.
Read next: Giving Smartphones to Children Too Early May Be Harming Mental Health in Adulthood
by Irfan Ahmad via Digital Information World
Giving Smartphones to Children Too Early May Be Harming Mental Health in Adulthood
A major new global study has found strong evidence that early smartphone ownership during childhood is linked with significant mental health problems later in life. The findings point to a consistent pattern across regions, languages, and age groups, with younger users facing sharper declines in mental wellbeing as they reach adulthood.
Early Smartphone Ownership Shows Clear Mental Health Decline
The research draws from the Global Mind Project, which includes data from nearly two million people across 163 countries. When the team focused on over 100,000 participants aged 18 to 24, a clear trend emerged: the earlier someone received a smartphone, the worse their mental health score as a young adult. Those who had a phone at age five scored far lower on emotional, social, and cognitive wellbeing compared to peers who received theirs at age 13 or later.
This mental health shift wasn’t isolated to one place. The same effect was found in every region, but was especially intense in English-speaking countries. Girls appeared to suffer more than boys, reporting much higher rates of suicidal thoughts, emotional detachment, and reduced self-worth.
Social Media Appears to Be the Biggest Trigger
What matters most isn't simply owning a smartphone; it's what those phones connect children to. Early access to social media plays a major role in weakening emotional resilience. The study shows that around 40 percent of the negative impact from early smartphone use can be traced back to social media access alone.
The platforms use machine learning systems that adaptively feed content to users to keep them engaged. This constant cycle of comparison, pressure, and exposure to extreme or inappropriate material can shape a child’s thinking in damaging ways. The report highlights that these digital spaces often displace vital real-world development, including in-person relationships and healthy sleep routines.
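The adaptive-feeding mechanism described above can be illustrated with a deliberately simplified toy ranker (no platform's real system works this simply; the topics and engagement scores below are invented):

```python
# Toy sketch of engagement-driven feed ranking (not any platform's actual system).
def rank_feed(posts, user_engagement):
    """Order posts by how much the user has engaged with each topic before."""
    return sorted(posts, key=lambda p: user_engagement.get(p["topic"], 0), reverse=True)

posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "appearance"},
    {"id": 3, "topic": "news"},
]
# The more a user lingers on a topic, the more of it they are shown next time
engagement = {"appearance": 0.9, "sports": 0.4, "news": 0.1}
print([p["id"] for p in rank_feed(posts, engagement)])  # [2, 1, 3]
```

Even this crude loop shows the dynamic the report warns about: whatever a child dwells on, including comparison-driven or extreme material, is exactly what gets amplified.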
Cyberbullying and Family Disconnection Follow Early Exposure
Among the most significant secondary factors were disrupted sleep, cyberbullying, and poor family relationships. In English-speaking countries especially, those who accessed social media early were much more likely to experience toxic online interactions and deteriorating family bonds. In some regions, early access also correlated with higher exposure to sexual abuse, especially among girls.
The study found that these negative experiences were often not directly linked to total screen time, but instead to the simple fact of having early, unsupervised access to algorithm-driven platforms. That alone increased the odds of entering harmful digital environments before a child had the tools or maturity to cope with them.
The Mental Health Impact Is Measurable and Widespread
Researchers used the Mind Health Quotient (MHQ), a broad tool that evaluates 47 emotional, cognitive, and social functions. For those who got smartphones at age 13, the average score was 30. For those who received their first phone by age five, the average dropped to just 1.
Almost half of females who had a smartphone by age six reported severe suicidal thoughts, compared to fewer than a third among those who got a phone at 13. Across the board, earlier access correlated with higher rates of aggression, hallucinations, emotional instability, and reduced self-image.
These patterns suggest a deep shift in mental functioning, not just a spike in stress or sadness. Many of those affected reported a persistent sense of disconnection from reality and found it hard to regulate their emotions or maintain confidence.
The Impact Is Growing Faster Than Most Adults Realize
As more children are handed smartphones even before middle school, the effects may be intensifying. Most social platforms officially restrict access for children under 13, but the study shows that enforcement is weak, and many kids gain access years earlier. In English-speaking countries, the average age of first smartphone use is now around 11.
The researchers stress that these findings are not isolated to a few outliers. Across the global dataset, each year earlier a child receives a smartphone is linked with a steady drop in mental wellbeing scores. This trajectory, they warn, could mean a generation increasingly at risk of chronic emotional distress, lower resilience, and disrupted development.
Experts Recommend a Public Health Approach, Not Parental Burden
The study’s authors argue that expecting parents to individually police smartphone access is unrealistic. Children whose access is restricted may feel excluded if their peers are already online. Worse, even children with limited access at home may still be exposed to the effects of aggressive behavior from peers shaped by early digital environments.
To deal with this, the team recommends public policies similar to those that govern alcohol and tobacco. This includes restrictions on under-13 smartphone ownership, mandatory digital literacy education, stronger enforcement against age violations by tech firms, and the introduction of basic phones that lack internet apps.
These measures are aimed not at blocking progress, but at protecting critical stages of mental development. Without such intervention, the report warns, the decline in mind health could carry long-term consequences for learning, social participation, and even economic productivity.
Policy Proposals Focus on Four Key Areas
The researchers outlined four actions that governments could adopt immediately:
- Introduce mandatory mental health and digital literacy education in schools, so children understand the risks of social media before they use it.
- Hold technology companies accountable for underage users, including enforcing age verification and introducing penalties for non-compliance.
- Restrict all social media access under age 13, across devices and platforms, through improved enforcement and technical controls.
- Create age-appropriate phones for younger children, offering basic functions like calling and texting but no access to algorithmic or user-generated content.
Action Urged Before Long-Term Damage Becomes Permanent
With current trends moving quickly, the authors emphasize that delaying action may close the window for meaningful prevention. If the current age of first phone ownership keeps falling, projections show that up to 20 percent of the next generation could face regular suicidal thoughts, and as much as a third may struggle with core emotional and cognitive functions in adulthood.
The researchers aren’t calling for bans, but for smarter timing. They suggest the world treat smartphones and social media as tools that can be harmful when introduced before key developmental milestones. If society can accept age limits for driving, drinking, and smoking, then the same logic should apply to digital access that directly affects a child’s brain.
Read next: Teens Increasingly Use AI to Make Personal Decisions, Prompting Industry Attention
by Irfan Ahmad via Digital Information World
Wednesday, July 23, 2025
Google Search and YouTube Drive Alphabet’s 14% Revenue Surge, AI Overviews Hit 2 Billion Users
Beyond video and search advertising, Alphabet highlighted usage increases across its AI products. Google Search’s AI Overviews, which offer quick summaries for certain search results, are now available in 200 regions and are used by 2 billion people each month, up from 1.5 billion in May. AI Mode, a tool that provides responses in a conversational format within Search, has reached 100 million monthly users in the United States and India. Daily activity on Gemini, Google’s AI assistant app, rose by more than 50 percent since the first quarter, and it now has around 450 million active users. Google said these features are prompting people to make more searches overall, especially younger users, who appear more comfortable interacting with AI-driven systems.
In its video AI efforts, the company pointed to the growing role of its Veo model. Since May, users have generated more than 70 million videos using Veo 3. Developers working with Google’s Gemini models now number over 9 million, and within Workspace, the company’s video creation feature has gained nearly 1 million monthly users. Google Meet, the video conferencing product, also saw over 50 million people use AI-generated meeting notes during the quarter.
Token processing across all Google AI products and APIs hit 980 trillion per month, double the figure reported just two months earlier at the company’s developer conference. That spike in activity appears to be one reason behind Alphabet’s decision to raise its 2025 capital spending target to $85 billion. The report also confirmed that Google’s cloud division saw increases in revenue and profit, with its annual revenue run rate climbing above the $50 billion mark.
YouTube Shorts also received attention on the earnings call. The format now draws more than two hundred billion daily views, and in several countries, revenue per watch hour has matched what the core YouTube platform delivers. The company is also preparing to broadcast an NFL game globally this September without charging viewers. The game will be streamed live and marks a new step in YouTube’s long-term push into live sports.
Alphabet’s overall revenue for the quarter came in at 96.4 billion dollars, which represents a 14 percent rise from the same quarter in 2024. Net income increased by 19 percent, reaching 28.2 billion. Operating income grew by the same percentage as total revenue, and services revenue reached 82.5 billion. Compared with the first quarter of 2025, advertising and total revenue both rose by 6.6 percent, and profits were up by 2.17 percent.
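As a quick sanity check on those percentages, the prior-year figures implied by the growth rates can be backed out (a rough illustration; Alphabet's actual reported Q2 2024 numbers may differ slightly due to rounding):

```python
# Back-of-the-envelope check of the reported year-over-year figures.
revenue_q2_2025 = 96.4   # billions USD, up 14% year over year
net_income_2025 = 28.2   # billions USD, up 19% year over year

# Implied prior-year quarter: current figure divided by (1 + growth rate)
revenue_q2_2024 = revenue_q2_2025 / 1.14
net_income_2024 = net_income_2025 / 1.19

print(f"Implied Q2 2024 revenue: ${revenue_q2_2024:.1f}B")
print(f"Implied Q2 2024 net income: ${net_income_2024:.1f}B")
```

This kind of reverse calculation is a quick way to verify that reported growth percentages are internally consistent with the headline figures.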
The report shows Alphabet remains focused on expanding the scale of its AI infrastructure and growing adoption across its products, especially where it directly influences user engagement and search volume. AI is now integrated into nearly every part of its consumer-facing services. The numbers also reflect how Alphabet’s long-term investments in AI systems and cloud infrastructure are beginning to show measurable results in usage and revenue across multiple areas of its business.
Read next: AI Use Up 75%, Daily Usage at 43%, Yet 40% Say Complex Queries Fail
by Irfan Ahmad via Digital Information World
AI Use Up 75%, Daily Usage at 43%, Yet 40% Say Complex Queries Fail
The study, conducted by Yext and based on a survey of 2,237 adults across four countries (the United States, United Kingdom, France, and Germany), shows how people are blending AI search into their online routines. Three out of every four respondents (75%, to be exact) said they are using AI search more than they were a year ago. Meanwhile, 43% said they now use AI tools daily or more often. This signals a clear behavioral shift, especially as traditional search methods start to lose ground.
But the shift hasn’t translated into full trust. Many users say the experience falls short in specific, practical ways. When asked what frustrates them most about AI-powered search, 40% pointed to poor handling of complex or multi-part queries. These are not abstract complaints. For instance, a single travel plan involving multiple stops, price filters, or scheduling conditions often leads to inconsistent or shallow AI results. This leaves users forced to double-check information elsewhere or reconstruct questions to get anything useful back.
Another 37% of users cited a lack of clear, trustworthy answers with proper sources. When an AI model produces content, it often does so without visibly citing where that information came from, making it hard for people to verify what they’re reading. That absence of traceability affects not only personal confidence in the result but also the user’s willingness to act on it.
Beyond credibility and logic, usability came into question as well. Roughly one-third of respondents (34%) said AI tools do not provide actionable next steps, particularly when dealing with service-related queries such as “how do I switch mobile providers” or “what to do after applying for a loan.” Without clear direction or links to take further action, users are left with generic advice that lacks follow-through.
The difficulty in comparing local options was a common frustration for 31% of respondents. For local discovery, such as finding the best plumber nearby or comparing prices between local clinics, AI tools tend to return broad answers, often missing location-specific context. In these cases, users still rely more heavily on traditional search platforms or directory-style services to get detailed comparisons.
Personalization also remains a weak point. Thirty percent of users said the results don’t reflect their preferences or search history, which makes AI outputs feel disconnected or too generalized. The tools often provide a “one-size-fits-all” answer, even in cases where a returning user expects some continuity in recommendations.
Smaller but still significant issues were also flagged. One in five users (20%) noted that AI tools fail to summarize long-form content accurately, especially when the content requires interpretation or nuance, such as policy briefings, academic papers, or medical information sheets.
Across all these shortcomings, only 3% of respondents chose “Other,” suggesting that the main issues identified (complexity, trustworthiness, comparability, actionability, personalization, and summarization) capture the vast majority of user concerns today.
This disconnect between rising usage and persistent doubts has a direct impact on how brands show up in AI-driven environments. On one side, people are turning to AI with increasing frequency. On the other, they’re second-guessing the very results they receive. That tension offers both a warning and an opportunity.
The warning is straightforward: if the data used by AI tools to represent a brand is incomplete, inconsistent, or not updated in structured form, the brand risks being misrepresented, or worse, excluded entirely. A system that relies on pattern recognition and aggregated knowledge can easily skip over businesses that haven’t prepared their information in a machine-readable way. If an address is missing, a product spec is wrong, or a business category is unclear, AI systems may simply route users elsewhere.
The opportunity, however, lies in precision. Trust can be built by filling in the accuracy gaps. That starts by verifying that every piece of information, from store hours to product attributes to customer reviews, is both correct and formatted in a way that AI models can interpret cleanly. Structured data doesn’t just improve visibility, it directly improves the quality of answers that AI systems generate, which in turn shapes user trust.
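One widely used way to publish business facts in a machine-readable form is schema.org JSON-LD markup. A minimal sketch, with every business detail invented for illustration, might look like this (built in Python for readability):

```python
import json

# Minimal schema.org LocalBusiness record; all details are invented.
business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "postalCode": "00000",
    },
    "openingHours": "Mo-Fr 08:00-18:00",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128",
    },
}

# Serialized JSON-LD, ready to embed in a page's <script> tag
print(json.dumps(business, indent=2))
```

Because every field (hours, address, ratings) is explicitly typed, a system consuming this data does not have to guess at it from free-form page text, which is exactly the accuracy gap the article describes.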
In environments where AI tools generate summaries, compare listings, or offer direct responses instead of links, brands must take control of the raw data that fuels those outcomes. The more accurate the information is at the source, the less likely the system is to produce misleading summaries or omit a brand entirely.
As people use AI more, they’re expecting more. That means brands can no longer treat AI visibility as a bonus; it is fast becoming a baseline requirement. But usage alone doesn't equal loyalty. Accuracy, context, and trust are still the currency that determines whether people follow through after asking a question.
The takeaway is clear: while AI-powered search has become routine for many, satisfaction is still conditional. The next phase of competition won’t hinge solely on presence in AI tools, but on how trustworthy, complete, and actionable that presence feels to the person using it.
Read next: AI Chatbots Often Overconfident Despite Errors, Researchers Say
by Irfan Ahmad via Digital Information World