"Mr Branding" is an RSS-based blog covering everything related to website branding and website design. It collects its posts from many sites to make it easy to keep up with the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Tuesday, November 18, 2025
Study Maps the Conditions That Trigger AI Citation Hallucinations
Researchers at Deakin University built the experiment around three disorders. Depression sat at the top of the visibility ladder, followed by binge eating disorder, then body dysmorphic disorder at the bottom. This mix created a natural gradient in research volume. Depression carries decades of trials and thousands of papers. The other two conditions occupy smaller footprints and offer far fewer studies on digital interventions. That uneven landscape became the test bed for the model’s strengths and misses.
Each disorder received two review requests. One prompt asked for a broad overview that covered causes, impacts and treatments. The other request drilled into digital interventions. The team wanted to see how topic familiarity and prompt depth shaped the reliability of the citations. They pulled every reference into a manual check across major academic databases. This process placed each citation into one of three buckets. Either it existed in the real world, it existed but contained errors, or it was fabricated outright.
The headline numbers make the problem easy to see. Out of 176 total citations, 35 were fabricated. Among the 141 real ones, 64 carried errors. Only 77 came through fully accurate. That means around half of all citations were unusable in scholarly work. DOI problems were the most common type of error: wrong links, wrong codes, or completely invalid strings made many citations look correct at first glance but fail when checked against the actual paper.
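The breakdown above can be sanity-checked with a few lines of arithmetic (figures taken from the study as reported here):

```python
# Citation breakdown reported in the Deakin University study.
total = 176
fabricated = 35
real = total - fabricated            # 141 citations that actually exist
with_errors = 64
fully_accurate = real - with_errors  # 77 usable as-is

unusable = fabricated + with_errors  # fake or error-ridden
share_unusable = unusable / total

print(real, fully_accurate)              # 141 77
print(f"{share_unusable:.1%} unusable")  # 56.2% unusable
```

The "around half" framing is slightly generous: 99 of 176 citations, about 56 percent, could not be used as delivered.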
The pattern became sharper when the team compared the three disorders. Depression showed the lowest fabrication count, with only 4 fake citations out of 68. Binge eating disorder jumped to 17 fabricated citations out of 60. Body dysmorphic disorder followed closely with 14 fabricated citations out of 48. Accuracy among the real citations also depended on the topic. Depression reached 64 percent accuracy. Binge eating disorder reached 60 percent. Body dysmorphic disorder fell to 29 percent. The drop shows how the model struggles once the evidence base gets thin enough.
Prompt specificity also shaped outcomes, though not in a simple way. Binge eating disorder showed the clearest effect. Its specialized review saw fabrication rise to almost half of the citations. The general overview stayed closer to one out of six. Other disorders showed different patterns. Depression’s general overview delivered better accuracy than its specialized review. Body dysmorphic disorder flipped that pattern and showed better accuracy when the prompt narrowed. These differences suggest the model reacts to the structure of the request and the strength of the underlying literature in different ways.
The study’s authors point out how much the model leans on patterns in public information. When the topic sits on a wide and stable base of research, the model has clearer pathways to follow. When the topic shifts to areas with fewer papers or narrower lines of inquiry, the model relies more on guesswork. The results from body dysmorphic disorder show how quickly accuracy collapses when the system tries to piece together references from scattered or limited material.
These findings matter because more researchers have started using large language models to speed up routine tasks. Survey data shows strong adoption among mental health scientists. Many researchers believe these systems help with drafting, coding, and early idea formation. Efficiency gains look promising until the citations fall apart under verification. That creates problems for anyone who trusts the output without checking every reference. A fabricated citation can mislead a research team, distort the evidence trail, and send other scientists searching for sources that were never written.
The study pushes institutions and journals toward simple safeguards. Every AI generated citation needs to be verified. Every claim tied to those citations needs human confirmation. Editors can screen suspicious references by checking whether they match known publications. When a citation sits outside any recognized record, it becomes a clear red flag. With these checks in place, journals can block fabricated references before they reach print.
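A first pass at the screening described above can be automated. The sketch below only checks that a DOI string has plausible syntax against the common "10.registrant/suffix" shape; it is an illustrative approximation, not an official DOI grammar, and confirming that a DOI actually resolves to the cited paper still requires a registry lookup (for example through the public Crossref API) plus a human check.

```python
import re

# Common shape of a modern DOI: "10.<4-9 digit registrant>/<suffix>".
# Approximate pattern for first-pass screening only.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string has plausible DOI syntax."""
    return bool(DOI_PATTERN.match(doi.strip()))

print(looks_like_doi("10.1038/s41586-020-2649-2"))  # True
print(looks_like_doi("doi.org/banana"))             # False: wrong prefix
```

A citation whose DOI fails even this cheap check is exactly the kind of red flag the authors describe; one that passes still needs verification against the real record.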
The authors also point to the need for stronger guidance at universities and research centers. Training programs can help researchers learn how to identify hallucinations and validate AI generated content before placing it in a manuscript. As AI tools become part of normal workflows, these checks will keep the academic record from drifting into mistaken territory.
The results show that reliability is not static. It depends on the openness of the research terrain. Well studied disorders give the model a broader map. Narrower or less familiar topics cut away those supports. For now, the safest way to use these systems in research is to treat their output as a starting point that always needs careful checking. The experiment makes that reality clear.
Read next: Meta Adds New Content Protection Tools to Help Creators Spot Copycats
by Asim BN via Digital Information World
Digital Growth Continues but Leaves the Poorest Far Behind
| Year | Internet users (billions) | Internet users (% of population) |
|---|---|---|
| 2005 | 1 | 15.6 |
| 2006 | 1.1 | 17.2 |
| 2007 | 1.4 | 20.2 |
| 2008 | 1.6 | 22.8 |
| 2009 | 1.7 | 25.3 |
| 2010 | 2 | 28.4 |
| 2011 | 2.2 | 30.9 |
| 2012 | 2.4 | 33.3 |
| 2013 | 2.6 | 35.3 |
| 2014 | 2.8 | 37.4 |
| 2015 | 3 | 39.9 |
| 2016 | 3.3 | 43.6 |
| 2017 | 3.5 | 46.3 |
| 2018 | 3.8 | 49.4 |
| 2019 | 4.2 | 53.9 |
| 2020 | 4.7 | 60.1 |
| 2021 | 5.1 | 63.8 |
| 2022 | 5.4 | 67 |
| 2023 | 5.6 | 69.2 |
| 2024 | 5.8 | 71.2 |
| 2025 | 6 | 73.6 |
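Taken at the endpoints of the table above, the growth works out to a compound annual rate of roughly nine to ten percent. A quick check, with user counts in billions as in the table:

```python
# Endpoints from the table: 1 billion users in 2005, 6 billion in 2025.
users_2005, users_2025 = 1.0, 6.0
years = 2025 - 2005

# Compound annual growth rate over the 20-year span.
cagr = (users_2025 / users_2005) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 9.4%
```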
High-income economies sit near universal use with 94 percent of their populations online. Low-income economies reach only 23 percent, a gap that barely moves even when year-on-year growth hits 7.4 percent in some of these countries. Regional figures paint the same picture. Europe and the CIS stand between 88 and 93 percent. The Americas reaches 88 percent. Asia Pacific settles at 77 percent and the Arab States at 70 percent. Africa trails with 36 percent. Least developed countries (LDCs) reach 34 percent, and landlocked developing countries (LLDCs) reach 38 percent. Both remain far from a point where steady annual improvements could close the distance.
Gender divides follow the same path. Worldwide, 77 percent of men use the Internet against 71 percent of women. That gap produces a global parity score of 0.92, the same level seen in 2019, which shows little overall movement. Europe, the CIS, and the Americas reach parity, yet low-income economies sit at a much wider split with only 18 percent of women online compared with 29 percent of men. LDCs show a similar pattern with 28 percent of women online against 39 percent of men. Africa shows improvement over several years, but still reaches only a parity score of 0.78.
Age also matters. Youth aged 15 to 24 reach 82 percent global usage, while the rest of the population reaches 72 percent. That ten-point difference narrows slowly, but gaps in low-income regions stand out. Young people there are nearly twice as likely as older groups to be online. By contrast, youth in high-income countries sit only five percentage points above the rest of their population. Europe, the CIS, and the Americas already show youth usage above 95 percent.
Where people live has a large effect on whether they connect. Urban areas reach 85 percent global Internet use. Rural areas stop at 58 percent. Africa’s rural-urban ratio hits 2.6, one of the widest gaps. Low-income countries show only 14 percent of rural populations online, compared with 39 percent in urban zones. Even in regions with relatively high overall access, rural connections do not move at the same pace. Europe stands closest to balance at a ratio of 1.1.
Network quality redraws these divides. This year’s data show 5G coverage reaching 55 percent of the world’s population, yet only 4 percent in low-income economies. High-income economies stand at 84 percent. Europe reaches 74 percent, Asia Pacific reaches 70 percent, and the Americas reaches 60 percent. Coverage in the Arab States reaches 13 percent and Africa reaches 12 percent, while the CIS sits at 8 percent. Older networks fill the gap. 4G covers 93 percent of the global population, but only 56 percent in low-income countries. In these markets, 3G still acts as the main entry point for mobile broadband.
Around 312 million people live in locations without any mobile broadband signal. Nearly half of that unserved group is in Africa. Rural pockets tell an even starker story. In small island developing states (SIDS), 36 percent of rural residents lack 3G or higher. In the Americas, 21 percent of rural residents remain outside 3G coverage. In LDCs, the figure is 19 percent, and in LLDCs it is 17 percent.
Subscription numbers stretch the divide from another angle. The world now holds 9.2 billion mobile-cellular subscriptions, equal to 112 per 100 inhabitants. High-income economies reach 142 per 100 inhabitants, while low-income economies reach 70. Mobile broadband sits at 99 subscriptions per 100 inhabitants, which places the global total almost one-to-one with population, yet distribution is uneven. The Americas stands at 132 mobile broadband subscriptions per 100 inhabitants. Africa stands at 56. In 2025, 36 percent of all mobile broadband subscriptions are 5G. Regions with strong 5G coverage hold more than 40 percent of their subscriptions on the newer standard, while Africa and the CIS sit near 2 percent or lower.
Traffic intensity highlights the gap in how people use their connections. The global average mobile broadband traffic reaches 15.3 GB per subscription per month. High-income economies sit at 17.9 GB. Low-income economies average 2.2 GB. That means a user in a high-income country generates a month of low-income traffic in just four days. The CIS region leads mobile data use with 22 GB per subscription. Africa records 5.2 GB. For fixed broadband, global traffic averages 369 GB per subscription. High-income economies climb to 505 GB, while low-income, lower-middle, and upper-middle groups land between 248 and 310 GB.
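The four-day claim above follows directly from the reported averages, assuming a 30-day month:

```python
# Monthly mobile broadband traffic per subscription, from the report.
high_income_monthly_gb = 17.9
low_income_monthly_gb = 2.2

daily_high = high_income_monthly_gb / 30           # ~0.60 GB per day
days_to_match = low_income_monthly_gb / daily_high
print(round(days_to_match, 1))  # 3.7
```

So a typical high-income user covers a low-income user's entire month of traffic in under four days.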
Affordability remains one of the strongest barriers. Median prices for a data-only 5 GB mobile broadband basket fall from 1.5 to 1.4 percent of GNI per capita worldwide. Fixed broadband stays at 2.5 percent. But the averages hide how steep the cost feels for lower-income users. People in low-income economies spend 22 times more of their income on mobile broadband than users in high-income economies. Fixed broadband costs more than one quarter of average income in low-income countries. Of 205 economies with data for mobile broadband, only 130 meet the affordability target of 2 percent of GNI per capita. For fixed broadband, only 88 out of 195 meet that mark.
ICT skills show another layer of imbalance. Communication skills remain strong in most countries, with at least three-quarters of Internet users showing basic capability. Skills in safety, problem solving, and content creation vary widely. Among the eight countries with complete data, overall basic skill levels for Internet users range between 16 and 74 percent, a spread that speaks to uneven readiness even where connectivity exists.
Mobile phone ownership runs higher than Internet use, reaching 82 percent worldwide among people aged 10 and older. High-income economies reach universal levels above 95 percent. Upper-middle economies reach 90 percent. Low-income economies reach 53 percent. In Africa, phone ownership reaches 66 percent, yet only 36 percent go online. The gender gap in phone ownership mirrors the gender gap in Internet use. Globally, 87 percent of men own a phone compared with 78 percent of women. Women account for 67 percent of those without phones.
All these figures move in one direction. The world is drawing more people online each year, but the benefits rise fastest where income, infrastructure, and skills already align. The poorest regions gain users but lose ground on quality, speed, and affordability. Growth continues, yet the numbers show how far the gap still runs.
Notes: This post was edited/created using GenAI tools.
Read next: Most Marketers Call Social Media Essential, Nearly Two Thirds Tie It to Outcomes, and AI Support Reaches 45 Percent
by Asim BN via Digital Information World
Monday, November 17, 2025
Most Marketers Call Social Media Essential, Nearly Two Thirds Tie It to Outcomes, and AI Support Reaches 45 Percent
The pressure to keep content flowing is intense. More than 70 percent of organizations post every day, and one in three push out multiple posts across their platforms daily. Only a small share post on a monthly rhythm. The behavior holds steady across company size and industry, which signals that teams of all shapes feel the same need to stay visible in crowded feeds.
A gap still sits between the importance of social and how teams judge their own work. Most give themselves a B. Nearly a third feel their content strategy needs to mature. Another 19 percent point to staffing limits and bandwidth shortfalls that slow them down. They want tighter planning, more hands, and stronger execution, but they also try to keep pace with platforms that change their rules often.
The shift in goals over the last year stands out. In 2024, 76 percent of marketers saw brand awareness as the top goal for social. That dropped to 22 percent in 2025. Teams now focus on engagement, reach, leads, and sales. The numbers paint a story of a channel that moved deeper into the funnel and picked up more responsibility for measurable outcomes.
Budgets follow these expectations. Over the next 12 to 18 months, 46 percent of marketers plan to increase spending on Instagram, 39 percent on YouTube, and 36 percent on LinkedIn. Confidence in X continues to slide, with nearly one in five cutting spend. TikTok picks up interest, with 20 percent planning to grow their investment, while newer platforms like Threads and Snapchat remain smaller bets.
Challenges remain heavy across the board. Bandwidth sits at the top with 46 percent saying they feel stretched. Engagement issues follow at 37 percent as teams struggle to understand what their audiences want at any given moment. Another 36 percent say their content needs more variation and stronger ties to organizational goals. Algorithm changes affect 26 percent of respondents, yet the impact rises among heavy posters. For teams already pressed for time, each shift in ranking logic makes results unpredictable.
Even with those challenges, certain practices consistently deliver better outcomes. Authentic content carries weight. Seventy-eight percent call user-generated content important to their strategy. Human stories, real voices, and visuals created by actual users draw stronger reactions than branded content. Community engagement helps as well. Twenty-seven percent say it plays a major role in their success. Teams that build conversations see steadier growth. Consistency is another factor. Forty-one percent say it strengthens their performance and helps them hold attention in busy feeds.
AI enters the picture as one of the main tools teams use to keep up with the workload. Forty-five percent of marketers report using AI to support their social efforts. Among organizations that post multiple times a day, usage rises to 53 percent. The contrast between users and non-users is sharp. Those who avoid AI report higher expenses at 32 percent, bandwidth constraints at 32 percent, and performance challenges at 29 percent. Social media roles feel these issues more strongly than other functions.
The study shows how AI is used in day-to-day work. Teams rely on it to accelerate content tagging, organize visual libraries, trim routine tasks, and keep production moving when deadlines stack up. Findings from related research add more context. Nearly four in ten marketing and creative professionals use generative AI for both written and visual content. Many report time savings that reach about 24 hours a month for content generation alone.
User-generated content plays a growing role in the mix. Forty-one percent of organizations say they invest in UGC programs. The appeal is clear. UGC performs better across key metrics and produces higher credibility. The challenge lies in collection. Sixty-four percent still gather it manually through social platforms or email. Only a small group uses tools that streamline the process.
Distribution shapes reach as well. Half of organizations push content to stakeholders who can share it through their own channels. Employees and fans make up 33 percent of that group. Sponsors and partners account for 25 percent, and athletes or influencers contribute 15 percent. This approach widens the audience far beyond brand accounts and amplifies content that might otherwise remain unseen.
All of these numbers show how social media stands at the center of modern marketing. Teams devote time, budget, and energy to it because the channel brings measurable results. The workload will keep climbing as expectations rise. To stay competitive, marketers turn to authentic content, stronger communities, and tools that help them scale without losing their voice.
Notes: This post was edited/created using GenAI tools.
Read next:
• Americans Point to the Tasks They Want AI to Handle Most
• Weak Password Culture Starts With the Websites and New Research Maps the Scale
by Irfan Ahmad via Digital Information World
Sunday, November 16, 2025
Americans Point to the Tasks They Want AI to Handle Most
Fresh numbers from Statista Consumer Insights outline the priorities. The strongest interest centers on personal assistance, and about 32 percent say they want help with organizing life details. Phones already carry enough data from calendars, messages, and apps to make that kind of support feel natural, so expectations stay grounded.
Daily chores follow closely. Twenty-eight percent want AI to take routine tasks off their plate. Work-related help attracts 27 percent, which shows how many people see room for support with planning, drafting, or sorting information. Teaching or tutoring lands at 26 percent, and that interest reflects how common quick, on-demand learning has become.
Health and wellness guidance captures 25 percent. The same share looks for help refining communication or language skills. Content creation sits at 23 percent, since plenty of people now weave AI into videos, posts, or documents without treating it as a full creative engine. At the same time, 22 percent prefer to avoid AI entirely, keeping a clear boundary between their tools and their routines.
Across all categories, the requests line up with abilities that current systems already provide. The real work for many users comes from choosing the right tool and shaping a workflow that fits their habits.
Notes: This post was edited/created using GenAI tools.
Read next: Weak Password Culture Starts With the Websites and New Research Maps the Scale
by Irfan Ahmad via Digital Information World
Saturday, November 15, 2025
Weak Password Culture Starts With the Websites and New Research Maps the Scale
The rules set by the websites shape these choices, and most of the world's most visited platforms make weak passwords far too easy. NordPass reviewed one thousand high-traffic sites, and the findings point toward a system that pushes convenience ahead of basic safety.
The study covered twenty-four industries and captured how the top destinations on the internet handle the basics of account protection. The team relied on traffic estimates gathered between late February and early March this year, then checked each site to see what it demands from users when they create a password. The criteria followed the same structure used in the NordPass generator, which looks for length, character variety, and case sensitivity. These checks reveal the minimum the websites expect from their users, and the picture that emerges shows widespread gaps.
A large share of popular platforms still accepts short or predictable credentials. The data shows that fifty-eight percent of the tested websites do not ask for any special characters. This leaves passwords built from letters and numbers alone, the kind of combinations that can fall to brute-force tools in very little time. Another forty-two percent do not set any minimum length, so they leave room for short strings that attackers can test quickly. Eleven percent of sites do not require anything at all. Only one percent meets all the best-practice criteria by asking for longer passwords that mix characters and respect case sensitivity.
The weaknesses stretch across sectors. Sites tied to government services, health records, and food related services show some of the lowest scores for policy strength even though they often handle sensitive information. Many of these platforms smooth out sign ups to speed up onboarding, and some rely on simplified website building systems that do not enforce strong checks by default. When the foundational rules start at a low bar, users fall back on easy combinations just to move through the form, and the pattern sticks.
The research also looked at the broader authentication landscape. Support for single sign-on appears on thirty-nine percent of the websites, mostly through major providers like Google. Passkeys appear on only a small share, around two percent. Five websites meet the strictest standards mirrored from NordPass and NIST. These results show how slowly stronger models move across the web even when the tools already exist.
Weak rules matter because they train people to expect low effort login habits. A site that accepts a simple string teaches users that simple works everywhere. Attackers count on that predictability and use automated tools to sweep across accounts at scale. Newer AI driven systems can test vast numbers of combinations faster than older methods, which makes the gap between strong and weak policies even more significant. Once a password leaks or gets guessed, the damage can spread through any platform where the same combination exists.
The ripple continues inside organizations. Employees carry personal habits into the workplace. If they create weak passwords for common services, they often recycle similar patterns for business accounts. Industries that handle financial data or confidential records feel the strain when attackers exploit these shared weaknesses. Government portals face the same risk. Oversights in one area can spill into many others.
Websites have ways to fix this pattern. Clear rules at the start help shape stronger habits. Asking for length and character variety increases the time it takes to break a password by automated means. Strength indicators help users adjust quickly without confusion. A simple set of visual cues can steer someone away from common strings without pulling them out of the flow of sign up. Passkeys offer another route by removing passwords from the equation and replacing them with cryptographic checks that block guessing attempts.
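The rules described above (a length floor, character variety, a mix of cases) take only a few lines to enforce at sign-up. The sketch below is an illustrative baseline; the 12-character threshold and the exact checks are assumptions for the example, not the NordPass or NIST criteria verbatim.

```python
import string

def password_issues(pw: str, min_length: int = 12) -> list[str]:
    """Return a list of policy violations; an empty list means the password passes."""
    issues = []
    if len(pw) < min_length:
        issues.append(f"shorter than {min_length} characters")
    if not any(c.islower() for c in pw):
        issues.append("no lowercase letter")
    if not any(c.isupper() for c in pw):
        issues.append("no uppercase letter")
    if not any(c.isdigit() for c in pw):
        issues.append("no digit")
    if not any(c in string.punctuation for c in pw):
        issues.append("no special character")
    return issues

print(password_issues("password"))                # four violations
print(password_issues("C0rrect-Horse-Battery!"))  # []
```

Returning the full list of violations, rather than a bare pass/fail, is what makes the strength indicators mentioned above possible: the site can tell the user exactly what to fix.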
Until websites catch up, users still hold some control over their own safety. A password generator can help them build stronger combinations even when a site does not demand them. The complex password generator available on Digital Information World offers a straightforward way to craft long and varied credentials. It lets people create passphrases or random strings that resist automated attacks and store them through any manager they trust.
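For readers who prefer to stay local, Python's standard library can produce the same kind of credential. This is a generic sketch of the approach, not the Digital Information World tool itself:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation,
    drawing from a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

The `secrets` module matters here: unlike `random`, it is designed for security-sensitive use, so the output is not predictable from previous values.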
The issue sits at the intersection of user behavior and website design. People respond to the rules that sit in front of them, and for years many sites lowered expectations for the sake of quick onboarding. This shaped a culture where weak combinations feel normal. The new research shows that password carelessness did not emerge by chance. It grew from years of lax enforcement across the biggest platforms online.
Improving digital hygiene will require more than guidance aimed at users. Platforms need to raise their standards and adopt stronger criteria. When the system expects more, people adapt. Until then, the habits will remain uneven, and attackers will continue to exploit the gaps that weaker policies leave behind.
Notes: This post was edited/created using GenAI tools.
Read next: Finally, OpenAI Says ChatGPT Will Listen When People Tell It to Avoid the Long Dash
by Irfan Ahmad via Digital Information World
Finally, OpenAI Says ChatGPT Will Listen When People Tell It to Avoid the Long Dash
OpenAI pushed out a small fix that changes how the model reacts when a user writes a clear instruction inside the personalization panel. Altman framed it as a simple win, and the move arrived shortly after the company rolled out the new GPT-5.1 model. It sounds minor on the surface. Yet people who rely on the tool know how often the bot stubbornly added that long mark even when asked to avoid it.
Writers said the habit broke their tone and made their work stand out for the wrong reasons. Many stopped using the dash in their own writing because they did not want readers to assume a chatbot drafted their text. Complaints piled up across forums where people kept posting examples of the model promising to avoid it, then slipping it back into the very next sentence.
The new behavior only kicks in when the user plants the instruction in the custom settings area. Altman did not promise success every time in regular chats. That fits with the broader reality of LLM behavior. These models shift output by leaning on probability patterns rather than fixed rules. If a user places the instruction in the right slot, the odds of a clean output increase, though nothing becomes absolute.
Some critics pushed the conversation in another direction. They pointed out that if OpenAI struggled for years to control one simple punctuation mark, talk of near term general intelligence feels a bit premature. The model may look sharp on the surface. Yet it still works like a giant pattern engine that tries to anticipate what should come next rather than follow strict commands with mechanical precision.
Older training data also played a role. People have used the long dash for centuries. It showed up across novels, editorials and essays that filled older datasets. Because the model tries to echo the shape of the writing it has seen, the dash became a default move. Once reinforcement learning kicked in and evaluators rewarded responses that felt polished, the preference grew stronger. That gave the model a habit that stuck around even as users pushed back.
OpenAI now says the fix is part of its work to hand people more control. The company already introduced tools that remember user preferences and let people fine tune how the bot behaves across sessions. The long dash update shows that simple choices matter to users just as much as headline features. For many, this is less about punctuation and more about trying to make the output feel like their own voice.
Every change will still depend on how the model handles probabilities in the background. That leaves room for odd behavior to creep back after future updates. Some users already say the fix works inside the settings panel but still fails if you only mention it inside the chat. With a system that keeps learning from new interactions, small shifts can break old tuning in unpredictable ways. Anyone expecting a crisp on off switch will need patience.
Still, for now, people who truly want to avoid the long dash have a practical way to do it.
How To Add a No Em Dash (—) Rule in Custom Instructions
Below is a clear set of steps based on OpenAI’s official customization guide. You only need to do it once. After that, ChatGPT will try to follow the rule in every conversation.
Step 1: Open the Custom Instructions Panel
- Open ChatGPT in your browser or app.
- Look for your profile picture in the bottom corner, then open "Settings" and select the "Personalization" tab.
- In the Personalization tab you will see the Custom Instructions option.
Step 2: Add Your Style Requirement
You will see two large text boxes. One controls how ChatGPT should respond. This is where you add the rule.
Write something like:
"Do not use em dashes (—). Use commas, periods, or parentheses instead."
or
"Avoid using em dashes unless necessary for clarity or emphasis; otherwise, use standard punctuation."
Keep it short and clear so the model can pull the instruction into every session.
Step 3: Save the Setting
Scroll down and hit Save.
The instruction becomes active across all chats unless you turn the feature off or erase it later.
Step 4: Test the Behavior
Start a new conversation. Ask the model to write a few lines of text.
Step 5: Adjust Anytime
You can change, refine or remove the rule by visiting the same panel.
Note: This post was edited/created using GenAI tools and proofread/fact-checked by human editors.
Read next:
• ChatGPT Experiments With Real Group Conversations in a Limited Rollout
• 3 Out of 4 Americans Willingly Trade Personal Data For Discounts Despite Privacy Fears
by Asim BN via Digital Information World
Friday, November 14, 2025
Search Atlas Review: I Tested the AI SEO Platform Powering the Future of Search [Sponsored]
Search Atlas is an AI-powered SEO platform that covers keyword research, content optimization, site audits, and backlink tracking in one place, but its most distinctive features are its AI tools: OTTO SEO, OTTO PPC, and the new Vibe SEO tool, OTTO Agent.
I decided to test the Search Atlas SEO platform using its 7-day free trial to see whether it lives up to the buzz around its AI automation features. I looked at the company’s history, its awards, and user reviews, tested all of its tools, and compared its pricing to competitors. Here’s what I found.
What is Search Atlas?
Search Atlas is an AI-powered SEO platform that combines keyword research, content optimization, site audits, and backlink tracking in one place. It focuses on automation and workflow simplification, using its proprietary AI engine, OTTO SEO, to handle technical, on-page, off-page, local SEO, press release distribution, cloud stacking, content, and many more tasks automatically. The Search Atlas platform aims to replace multiple SEO tools while offering a more affordable alternative to competitors like Semrush and Ahrefs.
It was created in 2022 by the entrepreneur Manick Bhan, a three-time Inc. 5000 founder and the company's CTO. It has received several industry awards, most recently Best AI Search Software Solution at the Global Search Awards 2025 for OTTO SEO. A significant part of the team is remote and global. Search Atlas keeps a strong focus on SEO testing and research, and offers a scholarship.
Who Should Use Search Atlas
From what I saw, Search Atlas suits anyone who needs to manage SEO at scale without juggling multiple tools. It’s built for freelancers, agencies, and enterprises that want a single, automated platform for everything—keyword research, content optimization, link building, site audits, and even PPC campaign creation.
Freelancers and small teams will appreciate how easy it is to set up and how much time it saves, while enterprise clients can take advantage of its scalable infrastructure and detailed reporting. The pricing also makes it accessible, which lowers the barrier for smaller operations.
Key Takeaways (TL;DR)
- OTTO SEO is great for people who want to automate their processes, especially technical SEO, since it removes nearly all manual work.
- Search Atlas works as a single platform that handles most SEO and some PPC tasks.
- The Local SEO toolkit is super useful.
- The platform can be buggy.
- It’s more affordable than competitors, and it integrates tools that only come as add-ons on competitor platforms.
- The reporting is completely white-label and automated.
Pros and Cons
Search Atlas is great for complete automation, innovative tools, and features based on the team’s research of thousands of websites. The platform offers a 7-day free trial with complete onboarding and excellent customer support.
Pros:
- The platform handles a lot of automation on its own, reducing manual work.
- Pricing is more affordable compared with industry giants like Ahrefs.
- Strong support and lots of tutorial videos.
Cons:
- The interface isn’t always intuitive; some tools are tucked in the upper right corner, which took me a while to locate.
- There’s a bit of a learning curve to get fully comfortable with all the features.
- Occasional bugs occur, which is common for newer platforms.
How I Tested Search Atlas
The platform offers most of the standard tools, such as rank tracking, keyword research, and link and competitor analysis. However, it also has plenty of unique tools, so I focused on those more closely here.
Setup, Onboarding, and Training
First, when you sign up for the 7-day free trial, the platform asks you if you’re using it as an agency or a brand. I picked “brand” (also suitable for individuals) and the platform took me to the onboarding page to set up my project.
Also, you do need to give your credit card details, which I’m always wary of, but I didn’t have any issues cancelling later.
It guides you through the steps and lets you research the tools, connect to GSC, GBP, and GA4, and pick additional services such as additional link building packages and local data aggregation.
The final step takes you to the SEO Theory Facebook group link, tutorials, and personal onboarding sessions.
The company also sends you a step-by-step onboarding email sequence during the trial, and the support is highly responsive, so this part is a plus for me.
For more solo research, there’s a Knowledge Base available, too.
UI
The dashboard has a dark theme and a very modern look. While it isn’t the most important thing, it can be refreshing compared to tools that have a Windows XP-era aesthetic.
While I do like the overall look, I had trouble finding some tools until I figured out they’re tucked way up in the right corner. This could be organized much better.
Automation and Vibe SEO Tools
The next thing I tested was the flagship automation tools. What stood out first is how much automation it offers beyond a typical SEO dashboard. The OTTO ecosystem—including OTTO SEO, OTTO PPC, OTTO Agent, and OTTO Implementation Services—feels more like an AI operations team than a set of tools.
OTTO SEO won Best AI Search Software at the Global Search Awards for 2025, and I was pretty excited to test it. The platform guides you through the installation process, which is a relief since I got a bit confused. Namely, OTTO SEO recently switched from pixel-based tracking to DNS verification, so I expected a different process. Anyway, DNS verification is definitely cleaner and more accurate.
So what does OTTO SEO do?
OTTO SEO monitors different issue categories, including technical fixes, content optimization, schema markup, instant indexing, GBP optimization, link building, and digital PR. You get 24/7 tracking of issues, and not just recommendations on how to fix them. You see all of them in the dashboard, choose what to execute, and once you approve changes, OTTO SEO implements them instantly on your site, no matter the CMS.
Inside OTTO SEO, there’s a Link Building Exchange tool that leads to LinkLaboratory, which Search Atlas bills as the world’s biggest publisher exchange. The AI finds the most relevant sites for you to reach out to, scans for spam, and speeds up the process with outreach tools. A serious timesaver.
The latest OTTO addition is OTTO Agent, an AI companion that lets you execute SEO tasks through a conversational UI, latching onto the trend of Vibe SEO. It can do almost anything, such as auditing sites or Google Business Profiles, distributing press releases, and mapping topical clusters.
However, it’s clearly a new tool with a few loose ends to tie up, as it got a bit buggy during my testing. Still, I’m curious to see where they take it next.
On the paid side, OTTO PPC (OTTO Google Ads) builds full campaigns in a few clicks, generating ad groups, keywords, and copy automatically. I was skeptical at first, but the tool does have plenty of good reviews, although I didn’t create an actual campaign with a live budget. Given how much time setting up a Google Ads campaign takes, full AI automation is worth a try. Plus, the platform adds improvements regularly, having recently enabled retargeting campaigns as well.
And finally, for teams that want a hands-off approach, OTTO Implementation Services lets the Search Atlas team oversee execution and ensure automation aligns with strategy. Not my cup of tea, as I like to test things out myself, but busy brands might enjoy the service.
Site Audit & Technical SEO
The combination of site auditing and automation is one of the platform’s main selling points, and I can see why. Combined with OTTO SEO, you get to monitor and fix issues more efficiently and with less technical knowledge required.
Live monitoring is a must these days, so it’s good that it’s available. And since you can see all issues at once, this part is simplified for anyone who isn’t a fan of technical SEO.
The overview shows you how your site's health changes over time, and it’s not much different than standard technical SEO tools at first glance.
I’d like to highlight Crawl Monitoring in this section, as it lets you see which bots recently crawled your site, including LLM bots, which might be crucial info given the recent industry changes.
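Search Atlas surfaces this in its dashboard, but as a rough illustration of what crawl monitoring involves under the hood, here is a minimal sketch that scans server access-log lines for known AI-crawler user agents. The bot name substrings are real published user agents (GPTBot, ClaudeBot, PerplexityBot, CCBot, Google-Extended); the helper function and sample log lines are my own illustration, not anything from the platform:

```python
import re

# Published user-agent substrings for common AI/LLM crawlers.
LLM_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot", "Google-Extended"]

# Combined-log-format lines end with the quoted user-agent field,
# which is all we need here.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def llm_bot_hits(log_lines):
    """Return the LLM crawlers seen in a list of access-log lines."""
    hits = []
    for line in log_lines:
        match = UA_PATTERN.search(line)
        if not match:
            continue
        user_agent = match.group(1)
        for bot in LLM_BOTS:
            if bot in user_agent:
                hits.append(bot)
    return hits

sample = [
    '1.2.3.4 - - [18/Nov/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"',
    '5.6.7.8 - - [18/Nov/2025:10:01:00 +0000] "GET /blog HTTP/1.1" 200 2048 '
    '"-" "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"',
]
print(llm_bot_hits(sample))  # ['GPTBot']
```

The point of the sketch is simply that LLM crawlers announce themselves in the user-agent string, which is why a crawl monitor can tell you whether your content is even being picked up for AI answers.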
OTTO SEO also lets you automate schema markup, which is helpful for large websites and teams: you choose a type, enter the details, and get schema markup you can copy wherever it’s needed.
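OTTO SEO generates this for you, but to make concrete what “choose a type, enter the details” produces, here is a minimal sketch of assembling a schema.org JSON-LD snippet by hand. The helper function and the example field values are my own illustration, not the platform’s API:

```python
import json

def build_jsonld(schema_type, **properties):
    """Assemble a schema.org JSON-LD payload, ready to paste
    into a <script type="application/ld+json"> tag."""
    data = {"@context": "https://schema.org", "@type": schema_type}
    data.update(properties)
    return json.dumps(data, indent=2)

snippet = build_jsonld(
    "Article",
    headline="Search Atlas Review",
    author={"@type": "Person", "name": "Asim BN"},
    datePublished="2025-11-18",
)
print(snippet)
```

Doing this by hand for every page type on a large site is exactly the kind of repetitive work that makes automated schema generation attractive.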
So far, OTTO SEO has left the strongest impression. Instead of just giving recommendations, it lets you implement fixes directly from the dashboard, and it covers a really wide range of tasks. For agencies, this makes auditing multiple sites much more manageable, and the pricing scales so adding more sites actually gets cheaper per site.
Keyword research
The platform provides the Keyword Research tool, the Keyword Gap Tool, the Keyword Rank Tracker, and the Keyword Magic Tool for finding related terms. I first tested the Keyword Magic Tool by entering a seed keyword and selecting a target location. It returned related terms with volume, difficulty, and search intent. Then I tried the Keyword Gap Tool, which lets you compare your site against up to five competitors. It highlighted ranking gaps, shared terms, and unique opportunities, and it organized them into Gap, Opportunities, and Unique Keywords.
So far so good, but the research tools aren’t particularly groundbreaking. And while they worked fine during my testing, I’ve seen users mention occasional bugs in keyword research.
The platform is better known for its rank tracking, since it lets you narrow the tracking location down precisely, and it connects directly to GSC, so you get a reliable overview of where you stand.
Full Content Pipeline
Search Atlas puts a strong focus on content, with a full pipeline that covers everything from research to optimization. The Topical Map Generator is where you start: you enter a topic, choose clusters, and set how many long-tail keywords and blog titles to generate. It helps connect themes, guide internal linking, and keep topical consistency across a site.
I also like the Content Planner, which is especially useful for agencies managing multiple clients or freelancers trying to save time. You input a seed keyword, homepage URL, and region, and it generates keyword clusters with volume, competition, and search intent to guide writing priorities.
For drafting, Content Genius includes workflows for manual writing, AI-assisted writing, or bulk content generation. It applies brand context, adapts tone, and can even generate topic-related images. The writing itself definitely needs some polishing, but it does information-retrieval and competitor research really well. One-click publishing is convenient, though not unique.
The platform’s on-page audit works across large numbers of pages, checking metadata, keyword use, and other on-page signals in a single view, which is great for bigger sites. Scholar is an interesting addition: it scores your content and competitors’ on ranking factors like entities, clarity, and factual language. Some of these metrics take time to understand, but it’s a unique angle for assessing content quality, and according to Search Atlas, the approach is backed by its research into the Google leaks.
Backlink Analysis Tools
The platform has three main backlink tools: the Backlink Research Tool, the Backlink Gap Analysis Tool, and the Backlink Profile Comparison Tool.
Most of them sit in the Site Metrics section, with a few also tucked into the upper-right corner, so navigation here is confusing. Still, the tools do what they’re supposed to and give you a pretty good overview of your site and competitors.
The Backlink Research Tool analyzes backlinks by domain, subdomain, or specific URL, showing linking domains, anchor text patterns, link types, and page-level metrics. For profile comparison, you get to analyze up to six domains at once, side by side, with pretty nice visualizations.
It helps that the outreach tools are integrated into the platform, so you can just finish the process without switching to another tool.
Competitor Analysis
In the same Site Metrics section, there’s a solid set of competitor overview and research tools. Some are standard tools that look similar to Ahrefs Site Explorer, but with additional features. For example, Search Atlas has its own authority metric, Domain Power, and the company’s research so far suggests it’s more accurate at predicting actual rankings. This is primarily useful for link building.
Also, you see other authority metrics, traffic, keywords, LLM visibility, and an analysis based on Holistic SEO, which the founder of Search Atlas is a great proponent of.
So unlike the keyword research tools, which match the standard industry offering, the competitor research tools in Search Atlas are far more distinctive and innovative. Topical Dominance, for example, is one of a kind: it shows exactly how you stand against competitors on each topic, along with which keywords they rank for in each.
LLM Visibility
LLM Visibility is a part of Site Metrics, but it gets a separate section given that it’s becoming a highly necessary feature, and not all platforms have it. The tool tracks your brand across AI-powered search tools like ChatGPT, Gemini, and Perplexity. It shows brand mentions, sentiment, share of voice, and ranking in AI answers.
This is usually an expensive add-on elsewhere, but in Search Atlas it’s integrated. While that’s a big plus, I can see it’s still a new tool, so it will need more work.
GBP Galactic
What I liked about GBP Galactic is that it has a set of tasks that it tracks, so it’s much easier to organize your time, especially with a lot of clients. Automated review responses, Q&As, and GBP posts are another plus. You can also manage service descriptions, business addresses, and completely organize and automate your local SEO workflow.
Also, the company’s Local SEO Heatmaps let you track how you rank in any location with a lot of customization, from area size to map shape.
I’ve heard that a lot of users pick Search Atlas for its affordable data aggregation, which bundles the five biggest data aggregators with a discount if you use all of them. This works out cheaper than specialized local SEO tools.
Authority Building
The tools in the Authority Building section help you automate outreach, which is time-consuming, especially for freelancers. The Link Building Outreach and Digital PR Tool manages outreach campaigns, link prospecting, and HARO-style pitching, and it speeds up the whole process with automated filters, scheduled follow-ups, and centralized messaging.
You also get to create cloud stacks automatically, and easily distribute press releases with the help of AI, which is excellent for boosting your authority.
Another Search Atlas specialty is LLM Quest. The tool helps improve your visibility in LLMs: it finds the sources an LLM cites for a query so you can contact those sites directly, build links with them, and, hopefully, end up in the LLM’s knowledge base.
IMO, this will come in handy in 2026 if AI browsers start really taking off, although it’s also a top feature now.
Report Builder and White-Label Options
I tested the Search Atlas Report Builder and found it useful for pulling all SEO data into one place. It connects to Google Search Console, GA4, Rank Tracker, Backlinks, and Local Heat Maps, so I could combine everything into a single client report. The drag-and-drop layout makes it easy to customize sections, add a logo, and adjust widgets. I liked that I could schedule automatic reports, and the AI summary really helps clients who don’t have time to get into the details.
Portfolio Summary also stood out. It gives a quick overview of all client accounts and assigns a health score to each, labeling them as Biggest Wins, Stable, or At Risk. It’s a good way to see which campaigns need attention without checking every dashboard.
User Reviews, Case Studies, Testimonials
Search Atlas has a solid ranking on G2 (4.7/5) and Capterra (4.8/5), and mixed but mostly positive reviews on Reddit.
This fits the solid impression I got from the Local SEO tools. Other reviews mention the usefulness of automation, as it lets them focus on strategy and leave the low-level tasks to AI.
However, some users mentioned OTTO SEO created issues for their sites. I also noticed the Search Atlas team responding quickly, and I expect the new DNS installation system will resolve these occasional problems.
Also, a common complaint was that Deep Freeze (keeping OTTO SEO’s changes after cancelling) was paid. I checked, and it is now free, as the company responded to the complaints.
Pricing
| Starter | Growth | Pro |
| --- | --- | --- |
| $99/month | $199/month | $399/month |
| 1 OTTO SEO project, 10 OTTO Google Ads campaigns, 3 GBP Galactic projects, 2 user seats, 2,000 tracked keywords, 5 GSC projects | 2 OTTO SEO projects, 10 OTTO Google Ads campaigns, 10 GBP Galactic projects, 3 user seats, 3,500 tracked keywords, 15 GSC projects | 4 OTTO SEO projects, 10 OTTO Google Ads campaigns, 25 GBP Galactic projects, 5 user seats, 6,000 tracked keywords, unlimited GSC projects |
There is also an Enterprise Plan with custom pricing and quotas. Also, additional OTTO SEO activations scale in price: $99 per site initially, dropping per site as volume increases. This makes the platform highly affordable for enterprises.
Overall, the tool is cheaper than its biggest competitors and bundles plenty of tools that others sell as costly add-ons. For example, a standalone report-building tool can cost $999 per year, while here reporting is integrated and included in the price.
How Does Search Atlas Compare to Competitors?
Let’s look at the two biggest ones, as the platform claims it can replace them.
Search Atlas vs Semrush
After testing both, I’d say Semrush feels like the safer, more established choice, while Search Atlas focuses on automation and speed.
Semrush impressed me with its massive keyword database, long historical data, and detailed competitive intelligence. It’s the go-to option for large companies that need deep market research and advanced PPC features. However, it’s expensive, takes time to learn, and offers little automation, so most tasks still require manual setup.
Search Atlas, on the other hand, feels more modern. Its OTTO AI handles audits, on-page fixes, and campaign setup automatically, which is a serious timesaver. It integrates directly with WordPress and it’s much more affordable. Still, its keyword database is smaller, the platform is newer and can be buggy, and reviews are mixed.
In short, Semrush gives more data depth, while Search Atlas delivers faster automation and better value for teams that want to move quickly.
Search Atlas vs Ahrefs
Ahrefs stands out for its massive keyword and backlink databases, visual reports, and precise competitor analysis. It’s great for users who want to control every step manually. The tradeoff is that it’s expensive, especially for enterprise plans, and it requires more hands-on time to manage.
Search Atlas feels built for efficiency. Its OTTO SEO agent automates content optimization, technical audits, and internal linking, which removes a lot of manual work. It also includes local SEO tools and real-time tracking, and its entry plans cost less than Ahrefs. However, its data coverage is smaller, and automation sometimes misses finer analytical detail.
I’d say Ahrefs is the stronger option for data depth, while Search Atlas is better for automation and workflow speed.
Final Verdict
After testing Search Atlas, I can say it’s one of the more ambitious AI SEO platforms I’ve tried. The OTTO tools let you act on insights directly, handling SEO, PPC, and content tasks automatically. Features like the Site Auditor, Content Genius, and LLM Visibility add useful depth, and the content pipeline works well for freelancers and agencies managing multiple clients.
The platform has some drawbacks. The interface can be confusing at first; there is a learning curve, and occasional bugs appear. OTTO automation is powerful but requires trust since it makes changes directly on your site.
Pricing starts at $99/month and includes many tools that competitors sell separately. For anyone looking for automation, centralized management, and scalable SEO, Search Atlas delivers strong value and is worth trying.
by Asim BN via Digital Information World