"Mr Branding" is an RSS-based blog covering everything related to website branding and website design. It collects its posts from many sites to keep readers up to date with the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Thursday, August 7, 2025
Google Defends AI Search Features as Publishers Report Traffic Losses
There’s no chart or public breakdown to support this, just a general explanation. “Quality clicks,” as Google calls them, are visits where users don’t immediately leave. It says those are going up. It also says people are running more queries, and those queries are getting longer. Users are asking different kinds of questions, and search results now show more links per page, according to the company.
But not everyone agrees with this take.
A Pew Research study, which tracked 900 users’ search sessions, found a drop in click activity when AI Overviews were present. Only 8% of those sessions led to a click on a traditional link. When AI Overviews didn’t appear, the number nearly doubled to 15%. Google responded by saying the sample size was too small and the data was unreliable.
Publishers have reported clear traffic losses. Business Insider, for example, saw a 55% drop in traffic from Google searches between April 2022 and April 2025, based on data from Similarweb. The site cut its staff by over 20% earlier this year. HuffPost and The Washington Post are facing similar declines.
There’s more. According to Digiday, the number of U.S. news-related searches that end without any site click jumped from 56% to 69% after AI Overviews launched in May 2024. Authoritas also reported that if a site used to rank first but now appears below an AI Overview, it could lose nearly 80% of the traffic it got for that query.
At the same time, other sites are gaining. Reddit and YouTube have both been showing up more often in search results, especially near or under AI-generated content. Google says users are now leaning toward content with a personal tone: forum threads, podcast clips, review videos. It says people want “authentic voices” and are more likely to click on material that sounds human or offers a unique perspective.
This trend overlaps with the growing appearance of Reddit links in results. Whether that’s because of Google's partnership with Reddit isn’t confirmed, but the shift is hard to ignore. Reddit content now regularly appears just beneath ads or summaries, often in one of the most visible spots on the page.
For publishers, these changes are forcing a re-think. Even if impressions are holding steady, clicks may not be. Many are noticing the split: users see the link, but they don’t click it. This pattern, called the “great decoupling” by some, means more traffic is stuck at the search level.
To make matters more confusing, Google also says AI referrals to websites have jumped 357% since June 2024. So while traditional publishers may be losing visits, other sites are seeing gains. This doesn’t necessarily point to less traffic, just a different shape. But for those losing ground, that’s little comfort.
Google encourages site owners to review their own numbers. It suggests checking click-through rates for complex queries and watching how content types are performing. Forums, original reviews, and media-rich posts appear to be doing better than standard informational pages.
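The kind of check Google is suggesting can be sketched in a few lines. The snippet below computes click-through rate per query from Search Console-style data and flags queries that are seen often but rarely clicked, the impressions-without-clicks pattern publishers describe. The rows are invented sample data, not a real export.

```python
# Made-up sample rows in the shape of a Search Console export:
# (query, impressions, clicks).
rows = [
    ("best website branding tools", 12000, 960),
    ("how to design a logo step by step", 8500, 170),
    ("brand style guide template", 4300, 430),
]

def ctr(clicks, impressions):
    """Click-through rate as a fraction; 0.0 when there were no impressions."""
    return clicks / impressions if impressions else 0.0

# Flag queries with many impressions but a low CTR: a possible sign of
# the "great decoupling" pattern, where users see the link but don't click.
flagged = [
    q for q, imp, clk in rows
    if imp >= 5000 and ctr(clk, imp) < 0.05
]

for q, imp, clk in rows:
    print(f"{q}: CTR {ctr(clk, imp):.1%}")
print("Flagged:", flagged)
```

The thresholds (5,000 impressions, 5% CTR) are arbitrary illustrations; in practice a site would pick cutoffs based on its own historical baselines.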
Right now, AI Overviews are only active in around 20% of desktop searches. That number is expected to climb. What happens then is anyone’s guess. Google says it supports the open web and understands the stakes. But without sharing full details, it’s hard for publishers to track where the traffic is going or how to win it back.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
Read next: Internet Boosts Solo Brainstorming but Reduces Idea Variety in Groups, Carnegie Mellon Researchers Report
by Irfan Ahmad via Digital Information World
Wednesday, August 6, 2025
Internet Boosts Solo Brainstorming but Reduces Idea Variety in Groups, Carnegie Mellon Researchers Report
Researchers at Carnegie Mellon examined how internet access influences creativity. The study, published in Memory & Cognition, looked at whether using Google helps or hinders people when brainstorming. While individuals using the internet produced more ideas in some cases, the researchers found that group creativity often dropped when everyone relied on search engines.
The experiment involved 244 undergraduate students who completed a three-minute brainstorming task. Each person had to think of alternative uses for either a shield or an umbrella. Half the group had internet access during the task. The rest were told to stay offline.
The umbrella prompt gave online users an edge. Google searches turned up long lists of creative ideas. Those users came up with more suggestions compared to the offline group. But when the object was a shield, which returned fewer useful results in search, there was no meaningful difference in idea count.
Groups Without Internet Performed Better
To see how group creativity compared, researchers used a method called nominal group analysis. This technique combines responses from individuals into simulated groups. The goal was to measure how many distinct ideas each group generated.
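The nominal-group idea can be sketched concretely: pool the idea lists of every possible combination of individuals into a simulated group, then score each group by its count of distinct ideas. The participant data below is invented for illustration; the actual study pooled responses from 244 students.

```python
import itertools

# Hypothetical idea sets from four participants (umbrella prompt).
participants = {
    "p1": {"rain shelter", "sun shade", "walking stick"},
    "p2": {"rain shelter", "sword prop", "sun shade"},
    "p3": {"parachute toy", "rain shelter", "drying rack"},
    "p4": {"sun shade", "cane", "rain shelter"},
}

def nominal_group_scores(ideas_by_person, group_size):
    """Distinct-idea count for every simulated group of the given size."""
    scores = []
    for combo in itertools.combinations(ideas_by_person, group_size):
        # Pool the group's ideas; duplicates across members collapse.
        pooled = set().union(*(ideas_by_person[p] for p in combo))
        scores.append(len(pooled))
    return scores

scores = nominal_group_scores(participants, 2)
print(sum(scores) / len(scores))  # average distinct ideas per pair
```

Overlap is what drives the study's result: when members repeat each other's ideas, the pooled set grows slowly, so groups whose members searched the same engine score lower on distinct ideas than offline groups with more varied lists.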
Larger groups without internet access performed better across the board. They produced more unique and less repetitive ideas. As group size increased, the benefit of staying offline became more obvious. People who used Google often repeated the same ideas and listed them in similar order.
Even when the internet led to more suggestions per person, those ideas tended to overlap across the group. This led to less variety overall. In contrast, participants working without online help offered a wider spread of ideas, some of which stood out as more original.
Ratings Confirm Offline Advantage
To rate quality, independent coders scored each idea on creativity, novelty, and effectiveness. Ideas judged to be more original or useful were counted separately. Across multiple comparisons, the offline groups produced higher-scoring ideas more consistently.
In one part of the analysis, researchers re-examined a separate dataset from an earlier study. Even with a five-idea cap in that version, the pattern held. Larger nominal groups without Google still outperformed those with it. Among the 20 best-rated ideas across both studies, 19 came from users who stayed offline.
This finding adds weight to concerns about digital tools shaping how people think. When multiple users rely on the same search engine, they often land on the same information. That overlap can stifle variety, especially in group settings where idea diversity matters.
Fixation Linked to Search Engine Use
The researchers connected their findings to a cognitive phenomenon known as fixation. This happens when people get stuck on a familiar example and fail to think beyond it. Seeing a few common ideas in a search result may cause others to fade into the background. That effect can limit creative thinking, especially when many people see the same prompts.
Even though Google can boost idea quantity for individuals, it seems to limit originality when used by a group. The internet serves up popular suggestions first. As a result, people often travel down the same mental paths. The study found that in online groups, responses tended to cluster around those shared routes.
Human Thinking Still Has an Edge
Study author Danny Oppenheimer emphasized that the findings don't mean the internet makes people less intelligent. Instead, he pointed out that how people use tools like Google matters more than the tools themselves. “The internet isn’t making us dumb,” he told Smithsonian Magazine. “But we may be using it in ways that aren’t helpful.”
Coauthor Mark Patterson also stressed the value of human thought in solving complex problems. He said that even though search engines and AI tools keep evolving, individuals bring unique perspectives that can’t be replicated. “It feels like every week there’s some sort of mind-blowing, new advance,” Patterson said. “But our own thinking, unaided by tech, still has serious value.”
The researchers pointed out that search results tend to direct people toward conventional solutions. This behavior can limit creative options, especially in group settings. As a way to avoid these “fixation effects,” they suggest doing a round of offline brainstorming before turning to the internet.
The team is now exploring whether different prompt strategies, sometimes called prompt engineering, can help people use digital tools more effectively. Their goal is to find approaches that preserve creativity while making smart use of online resources.
For everyday tasks, fixation may not cause much harm. But for broader challenges that require original solutions, Patterson noted that encouraging more diversity in thought could make a difference. “Solving big problems often means finding solutions that others haven’t thought of yet,” he said.
Study Limitations and Next Steps
The authors noted several constraints in their research. All participants were university students, and the study only used two objects, an umbrella and a shield. That narrow scope might not reflect how broader populations respond in other settings. The time limit may also have limited how deeply participants could explore search results.
Despite these limits, the pattern repeated across different measures, coders, and datasets. In tests of both quantity and quality, offline groups came out ahead more often. Even when different methods were used to define what counted as a good idea, the outcome leaned in the same direction.
The researchers are now exploring how people might use search tools or language models more effectively. Future work could focus on guiding users to avoid getting stuck on similar ideas, especially when working in teams.
Key Takeaway
When working alone, a quick search might help get the ball rolling. But when brainstorming as a group, turning to the internet too soon could narrow the creative field. Sometimes, keeping it offline leaves more room for fresh ideas to take root.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
Read next: Your Phone May Be a Germ Hub and You’re Likely Cleaning It All Wrong
by Irfan Ahmad via Digital Information World
OpenAI Releases Two Open-Source Models to Regain Ground Against Global Competition
The two models, released under the permissive Apache 2.0 license, can be freely downloaded, modified, and deployed without fees or restrictive conditions. This licensing move aligns OpenAI with a growing field of open-weight rivals, especially from China and Europe, and reflects an effort to meet developer demand for transparency and enterprise adaptability. With these releases, OpenAI aims to offer high-performance models that users can run locally, granting complete control and enhanced data privacy.
Technically, the gpt-oss-120b model includes 120 billion parameters and is designed to run on a single Nvidia H100 GPU, while the lighter 20-billion parameter gpt-oss-20b model is suitable for local use on consumer-grade hardware. Both models are optimized for reasoning, code generation, mathematical tasks, and general problem-solving. They also support multilingual processing and perform competitively against proprietary counterparts like OpenAI’s own o4-mini and o3-mini models in industry-standard benchmarks.
The models utilize a Mixture-of-Experts architecture with Rotary Positional Embeddings and offer a 128,000-token context window. OpenAI has also made the tokenizer, named o200k_harmony, open source. These models support adjustable reasoning depth and offer fine-tuning capabilities, allowing developers to calibrate their performance based on latency and complexity requirements. Furthermore, tool use capabilities such as web search and code execution are modular, giving developers flexibility in integration without relying on OpenAI's infrastructure.
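The core idea behind a Mixture-of-Experts layer like the one described above can be illustrated with a toy sketch: a router scores every expert for each token, only the top-k experts actually run, and their outputs are combined with softmax weights. The dimensions, expert count, and k below are arbitrary toy values, not gpt-oss's real configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix; the router is linear.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_layer(x):
    """Route a single token vector through its top-k experts."""
    logits = x @ router                   # one router score per expert
    top = np.argsort(logits)[-top_k:]     # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts
    # Only the selected experts compute; the rest are skipped entirely.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
out = moe_layer(token)
print(out.shape)  # (8,)
```

Because only k of the experts run per token, a model can hold far more total parameters than it activates on any one forward pass, which is how a 120-billion-parameter model can be served from a single GPU.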
OpenAI emphasizes safety in the release, incorporating safeguards such as filtering of sensitive CBRN content during training and advanced post-training safety mechanisms. The company conducted rigorous internal and external evaluations, including malicious fine-tuning simulations and third-party reviews, concluding that the models remain below high-risk thresholds in cybersecurity and biosecurity domains. These results contributed to OpenAI's decision to release the models openly.
Deployment options are already available across major platforms like Hugging Face, Azure, AWS, and Databricks. Hardware partners include NVIDIA, AMD, and Cerebras, while optimized builds are being rolled out for Windows users. To further test model robustness, OpenAI has launched a $500,000 Red Teaming Challenge on Kaggle, inviting security researchers and developers to identify misuse vectors. The company also plans to release a public evaluation dataset to promote open research on model safety.
This release comes as OpenAI faces mounting competition from an expanding group of open-source AI developers worldwide. From DeepSeek’s high-efficiency R1 models in China to Europe’s Mistral series and Meta’s Llama family in the U.S., OpenAI now joins a crowded field of models offering increasingly comparable performance with fewer restrictions. The availability of high-performance open-weight models has spurred enterprise adoption, especially in regulated sectors where local deployment is crucial.
The decision to reintroduce open models also appears to be a strategic response to internal and external pressures. While OpenAI continues to see substantial revenue from proprietary offerings like GPT-4o and its API services, the dominance of open-source alternatives among enterprise customers has likely influenced this shift. OpenAI reported strong financials, with $13 billion in annual recurring revenue and a user base of over 700 million weekly active users, but the appeal of unrestricted, locally hosted models may divert usage away from paid platforms.
By offering robust, open-weight alternatives, OpenAI positions itself as a one-stop AI provider, spanning both proprietary and open ecosystems. The release may not generate direct revenue, but it helps OpenAI retain relevance among developers and enterprises exploring cost-effective and private AI solutions. The company is also reportedly deploying in-house engineers to help enterprise clients customize these models, potentially opening new service-based revenue channels.
The launch of gpt-oss may signal a long-term strategy to balance openness and safety while expanding the reach of AI tools across industries. Whether this approach can sustain OpenAI’s growth amid intensifying global competition remains an open question, but the release marks a renewed commitment to the principles of transparency and developer empowerment that originally defined the organization’s mission.
Notes: This post was created using GenAI tools. Image: DIW-Aigen.
Read next: Your Phone May Be a Germ Hub and You’re Likely Cleaning It All Wrong
by Irfan Ahmad via Digital Information World
Tuesday, August 5, 2025
Your Phone May Be a Germ Hub and You’re Likely Cleaning It All Wrong
Phones can be contaminated with many kinds of potentially harmful germs. When was the last time you wiped down yours – and with what?
If you use the wrong cleaning agents or tools, you could strip your phone’s protective coatings, degrade waterproof seals, or even affect its touch sensitivity.
Do phones really need cleaning?
Touchscreens get covered in fingerprints and smudges, so there are aesthetic and functional reasons to wipe down your screen.
Another reason comes down to potential health concerns. Whenever mobile phones are swabbed for microorganisms, scientists inevitably find hundreds of species of bacteria and viruses.
While not all of these cause sickness, the potential for transmission is there. We use phones while in the bathroom and then put them near our mouths, touch them while eating, and pass them between people in meetings, cafes, parties and classrooms.
Unlike hands, which can be washed many times a day, phones are rarely cleaned properly – if at all.
If you do want to sanitise your phone, it’s also important to not damage it in the process.
Some cleaning products will damage your phone
You might think a quick swipe with a household cleaner or hand sanitiser is a clever shortcut to keeping your phone clean. However, many of these products can actually degrade your device’s surface and internal components over time.
For example, both Apple and Samsung advise against using bleach, hydrogen peroxide, vinegar, aerosol sprays, window cleaners or high-concentration alcohol wipes (above 70%) on their devices.
Most smartphones are coated with an oleophobic layer – a thin film that helps resist fingerprints and smudges. Harsh chemicals such as alcohols, acetone or ammonia-based cleaners can strip this coating, leaving your screen more prone to smudges and less responsive to touch.
Vinegar, a common DIY disinfectant, can corrode aluminium or plastic edges due to its high acidity. Bleach and hydrogen peroxide, though highly effective as disinfectants, are also too aggressive for the delicate materials used in consumer electronics.
High-alcohol content wipes may dry out plastics and make them brittle with repeated use.
In short: if the cleaner is tough enough to disinfect your kitchen bench, it is probably too harsh for your phone.
The oleophobic coating on a device screen can help repel fingerprints – but can be destroyed with harsh cleaning chemicals. Shuvro Mojumder/Unsplash
How should I clean my phone then?
The good news is that cleaning your phone properly is simple and inexpensive. You just need to follow the guidelines backed by major manufacturers. You should also unplug and remove any protective cases or accessories when cleaning your phone.
Most tech companies recommend using 70% isopropyl alcohol wipes (not higher), soft microfibre cloths, and anti-static soft-bristled brushes made of nylon, horsehair or goat hair to clean delicate areas like speaker grills and charging ports.
During the COVID pandemic, Apple revised its cleaning guidelines to permit the use of Clorox disinfecting wipes and 70% isopropyl alcohol on iPhones, provided they are used gently to avoid damaging screen coatings or allowing moisture to seep into the device.
Samsung offers similar advice, recommending users wipe down their phones with a microfibre cloth lightly dampened with a 70% alcohol solution, while steering clear of direct application to ports and openings.
Prevent accidental damage when using these tips
Never spray liquid directly onto the phone, as moisture can seep into ports and internal components, leading to short circuits or corrosion.
Submerging your phone in any cleaning solution is also risky, even for water-resistant models: the seals that prevent water from getting in, such as rubber gaskets, adhesives, nano-coatings and silicone layers, can degrade over time.
Avoid using paper towels, tissues, or rough cloths which may leave scratches on the screen or shed lint that clogs openings.
Finally, be cautious about over-cleaning. Excessive wiping or scrubbing can wear down protective coatings, making your phone more susceptible to fingerprints, smudges, and long-term surface damage.
How often should I clean my phone?
While there is no strict rule for how often you should clean your phone, giving it a proper wipe-down at least once a week under normal use would make sense.
If you regularly take your phone into high-risk environments such as public transport, hospitals, gyms, or bathrooms, it is wise to clean it more frequently.
If you’re serious about hygiene, cleaning not just your hands but one of the things you touch most every single day makes sense.
Doing it wrong can slowly damage your device. But doing it right is simple, affordable, and doesn’t take much time.
This post was originally published on Theconversation.
Read next: AI Is Reshaping the Developer Role, and GitHub Says the Shift Is Already Underway
by Web Desk via Digital Information World
Monday, August 4, 2025
ChatGPT Close to 700 Million Weekly Users as OpenAI Wraps Up GPT-5 Rollout and Adjusts for User Wellbeing
The timing isn’t random. OpenAI is finalizing its rollout of GPT-5, the company’s newest model and its most capable so far. The update is designed to improve language while adding built-in reasoning, signaling a broader shift in how the model is being developed. Until now, OpenAI had kept logic-based systems like the o3 models separate from its core language tools. GPT-5 will likely merge these threads into one track.
The decision to integrate reasoning into the main platform means fewer decisions for users. There’s no longer a need to figure out which model fits a task. Everything will run through a single system, tuned to respond better across a wider range of requests.
Internally, this move signals something else. It shows OpenAI is gradually working toward more general forms of intelligence, although that threshold remains distant. GPT-5 may move the bar forward, but OpenAI has said that the model will need time before reaching what it considers a polished stage. Even so, some business terms tied to AGI could eventually be affected. Microsoft’s agreement with OpenAI includes triggers for revenue and licensing that depend on the capabilities of the system.
GPT-5 will launch in multiple sizes. In addition to the main model, smaller versions will be available to developers and commercial customers through the API. These lightweight options are meant to offer flexibility, especially where computing power or speed is limited.
ChatGPT is growing fastest inside companies. OpenAI now reports five million paying business users, up by two million in less than two months. That growth has helped push the platform to more than three billion daily messages. Numbers like that suggest heavy usage, not just high interest.
Revenue is rising with it. OpenAI is currently operating at a $13 billion annual run rate. Just a few weeks earlier, it was at $10 billion. Forecasts point higher still. Some analysts expect that number to cross $20 billion before the end of the year.
Supporting this kind of scale takes enormous infrastructure. OpenAI recently secured a $30 billion lease agreement with Oracle for server capacity. Another $11.9 billion is going to CoreWeave, a specialized cloud provider. The company is also expanding internationally, with a large data center project underway in Abu Dhabi and other activity in Norway.
Competitors, meanwhile, are gaining ground. Google’s AI Overviews has claimed billions of monthly users and continues to expand, while its Gemini assistant is closing in on half a billion regular users. Meta is also scaling its Llama models. Anthropic is in the middle of a new funding round that could value it at $170 billion. Musk’s xAI venture remains under development but continues to attract attention and money.
The surge in global activity has sparked a race for researchers. Microsoft recently hired more than twenty former employees from Google’s DeepMind team. Some had worked on Gemini. These moves show how quickly the talent market is shifting as AI products mature.
Alongside growth and upgrades, OpenAI has started making changes in how ChatGPT interacts with users. Break reminders are now part of the interface. After longer sessions, users may see a gentle prompt asking whether it’s a good time to pause. The feature is designed to reduce overuse and bring more awareness to interaction time.
The company has also faced criticism about how ChatGPT responds to emotionally sensitive queries. In some cases, the system reinforced harmful thoughts or gave information in ways that weren’t responsible. OpenAI says future versions will be more careful in these areas. Instead of handing back fast answers, the model will help people think through their questions, especially when decisions carry risk or emotional weight.
A previous update caused the assistant to become overly agreeable. Many users found it unhelpful or frustrating. That version was pulled back. Since then, engineers have been adjusting tone and response behavior to improve quality without slipping into excessive compliance.
OpenAI is building a platform that’s meant to last. Reaching this many users in a short time isn’t just about scale. It changes expectations. Businesses want consistency, performance, and the sense that the tools they use won’t vanish or stop evolving.
High usage also feeds the system itself. More activity brings in more feedback. More feedback shapes new iterations. The data cycle tightens. Each version can be trained and fine-tuned with a broader sample of real-world interaction.
The launch of GPT-5 and the changes to ChatGPT’s behavior are coming during a volatile period in the AI sector. OpenAI now finds itself at a point where leadership must be defended every day. With billions flowing into competitors and pressure rising from customers, regulators, and the public, the company’s next moves will shape more than its own future.
Read next: Students Turn to AI While Colleges Struggle to Keep Up
by Irfan Ahmad via Digital Information World
Industry Breakdown: Where Generative AI Is Gaining Ground
More employees across different sectors are using generative AI tools to complete tasks. This shift has moved quickly over the past year. Based on new data from McKinsey, most industries now report some level of adoption among their workers, though the pace varies by field.
Technology and Consulting Are Ahead
The technology sector reports the highest share of generative AI use. About 88% of respondents say they use it in at least one part of their job. This includes functions like marketing, sales, and customer-facing content creation. It’s common for teams to rely on AI tools to help write, analyze, or organize materials across digital platforms.
Professional services firms are not far behind. About 80% of employees in consulting, legal, and accounting firms use AI tools in some way. Many of these tasks involve research, document processing, and data review.
Advanced Manufacturing and Media Use Varies
In areas like aerospace, electronics, and semiconductor firms, usage is also high. Around 79% of workers in these fields have used generative AI. Marketing leads the list of functions, but regulatory tasks remain less common for automation.
Media and telecom workers show similar adoption levels. These companies often experiment with video, audio, and written content powered by AI systems. While not universal, creative departments tend to use these tools more than technical or legal teams.
Retail, Finance, and Health Are Catching Up
Consumer-facing companies, like those in retail or packaged goods, show lower but still strong engagement. Around 68% of workers say they’ve used AI in one or more functions. In finance, the figure is close to 65%. Banks and insurers often apply AI in customer support or internal data handling.
Healthcare, pharmaceutical, and medical product companies also report adoption at about 63%. In these areas, the use is often tied to administrative tasks, reporting, or patient communication tools.
Some Sectors Are Slower
Energy and materials companies sit at the lower end of the list. About 59% of their workers say they’ve used generative AI in any part of their role. This includes tasks like operations support or planning, but manual or field work tends to limit usage.
Across all industries, some functions remain mostly untouched by generative tools. In particular, workers in manufacturing and supply chain roles report the lowest usage. Only 5% of employees in manufacturing, and 7% in supply chains, said they use AI in their regular tasks. This may reflect the slower integration of automation in hands-on or logistics-heavy environments.
A Snapshot of Change, But Still Uneven
Overall, the data shows that generative AI is already present in most workplaces, but it’s not evenly spread. Some industries are far ahead, using these tools across multiple teams. Others are still exploring what role it can play.
As more companies test new systems, and as tools evolve to handle more types of tasks, the current gaps may shift. For now, though, the areas seeing the most direct use are those where digital workflows, content, or analysis already play a central role.
Notes: This post was edited/created using GenAI tools. Image: DIW
Read next: Students Turn to AI While Colleges Struggle to Keep Up
by Irfan Ahmad via Digital Information World
Students Turn to AI While Colleges Struggle to Keep Up
The most common use of AI, mentioned by 49% of respondents, was for brainstorming ideas. Another 42% said they use it to fix grammar and spelling mistakes, while 41% turn to it for help in understanding class topics that are hard to follow. Around 35% said AI helps them grasp issues unrelated to school, such as taxes or planning travel. A slightly smaller share, at 34%, reported using AI to expand on early ideas after brainstorming. About 29% said they use it for questions they’re too embarrassed to ask in person, and 25% said they rely on AI for general life advice or for improving their résumés. A quarter also use it to build study aids like notecards, and 22% said they’ve used AI to get ready for interviews. These numbers point to how embedded these tools have become across nearly every part of a student's week.
Altogether, 87% of students said they use AI for academic tasks, while 90% said they also apply it to life outside the classroom. On average, respondents reported spending five hours per week using AI tools for schoolwork and another five hours using them for personal reasons. But despite how common the tools have become, many said they feel unprepared or unsupported when it comes to using them properly.
Around 55% of students said they are learning to use AI without real guidance. Another 46% admitted they’re worried about getting in trouble for how they use it, and 10% said they already have faced consequences. While most students said their school has an official policy on AI, what those rules look like differs widely. About 30% said the use of AI is allowed for specific tasks only, and 31% reported that they can use it more broadly if sources are cited. But 32% said their college or university doesn’t allow the use of AI at all.
Even in places where rules exist, students say support is limited. Nearly 69% said most of their professors talk about the policy. However, only 11% said their instructors encourage AI use. That mismatch has left many students operating in an unclear space, where policies are known but not reinforced with helpful instruction.
Peer attitudes also vary. Roughly 37% of students said AI use is acceptable if disclosed, while 25% said classmates view it as cheating. About 22% felt that using AI is seen as an efficient or smart move by others in their circle.
When asked what they believe will matter most after graduation, half of the students said that knowing how to use AI is the top skill they expect to leave college with. Around 62% also linked their future career success to the ability to use AI responsibly. The degree itself didn’t rank as highly.
Only 28% of students said their institution is lagging behind when it comes to adopting new tech. Still, the findings show that usage is outpacing instruction. Most students are already deep into regular use of AI, both for learning and for life tasks, even while schools are still deciding how to handle it.
Read next: The Great SaaS List 2025: Which Tools Companies Are Using, Replacing, and Retiring
by Irfan Ahmad via Digital Information World