Sunday, October 12, 2025

OpenAI Can Erase ChatGPT Logs Again After Legal Dispute Over Copyright and Privacy

OpenAI can now remove deleted ChatGPT conversations from its servers after a federal judge lifted an earlier order that had forced the company to keep them. The decision marks the end of a long-running dispute over user data and privacy tied to an ongoing copyright lawsuit from The New York Times and several other news publishers.

Court Drops Broad Data Preservation Rule

The preservation order, first issued in May 2025, had required OpenAI to hold all output log data related to ChatGPT. This included deleted chats and temporary conversations that users believed were gone. The court put the rule in place so the plaintiffs could look for possible examples of copyrighted content inside ChatGPT’s responses.

Judge Ona Wang of the U.S. District Court for the Southern District of New York later ruled that the company no longer needs to store every deleted chat. OpenAI stopped keeping new logs on September 26, but all previously saved data remains available for the publishers as part of the evidence review. The order still allows the plaintiffs to flag specific user accounts or domains if they suspect links to copyrighted material.

Users Regain Privacy Control

For ChatGPT users, the new ruling means deleted chats will again be removed from OpenAI’s systems, returning control over personal conversations. The earlier order had affected millions of accounts across the free, Plus, Pro, and Team versions of ChatGPT. Business and education accounts were not impacted because they follow separate data retention policies.

Privacy advocates and users had criticized the earlier rule for overreaching. Many argued that it conflicted with data protection laws that give individuals the right to delete their information. OpenAI also pushed back in court, saying that the order placed the company in a difficult position between privacy obligations and discovery demands.

Legal Battle Over Copyright Continues

The lawsuit from The New York Times began in late 2023, accusing OpenAI of training its AI models using the newspaper’s content without permission or payment. The complaint claims that ChatGPT and related systems produced outputs resembling original articles. OpenAI maintains that its training process follows fair use principles and does not violate copyright law.

During earlier hearings, the court questioned how to balance the need for potential evidence with users’ privacy expectations. The initial preservation order was meant to keep data intact until both sides clarified what material might be relevant. After months of review, Judge Wang agreed that a blanket rule covering every chat was unnecessary.

Ongoing Impact on AI Companies

Although OpenAI can now delete most chat logs, the lawsuit itself remains active. The preserved records will stay accessible to the plaintiffs, and the Times can request new ones linked to specific users or organizations as it continues its investigation. Microsoft, a key OpenAI partner, is also drawn into the case through its AI product Copilot.

The outcome of this and similar lawsuits could shape how AI developers use publicly available text to train large language models. Industry observers say the rulings may eventually set clearer boundaries for the use of copyrighted materials in machine learning.

Users Advised to Stay Cautious

While the latest order restores normal deletion for most accounts, experts still encourage users to avoid sharing private or sensitive information. Even with deletion enabled, some data may remain accessible during ongoing legal reviews or system backups.

The court’s decision eases OpenAI’s storage burden and restores some confidence among users who value privacy. Yet the broader questions about how generative AI interacts with journalism and copyright are still unresolved, and the final legal outcome could influence data handling rules for years to come.


Notes: This post was edited/created using GenAI tools. Image: Solen Feyissa - unsplash

Read next: 

• AI Systems Can Be Fooled by Fake Dates, Giving Newer Content Unfair Visibility

• OpenAI’s Sora 2 Sparks Debate Over AI’s Growing Environmental Footprint
by Asim BN via Digital Information World

Saturday, October 11, 2025

AI Systems Can Be Fooled by Fake Dates, Giving Newer Content Unfair Visibility

Researchers have found that leading AI systems can be manipulated through something as simple as a false timestamp. A team from Waseda University in Japan showed that by adding a recent date to existing text, content can suddenly rise in ranking within AI-driven search results, even if the material itself has not changed. The experiment involved no rewriting and no factual improvement, just a shift in the publication year... and it worked across every major model they tested.

That means systems such as ChatGPT, Meta’s LLaMA, and Alibaba’s Qwen are not purely rewarding relevance or authority but also the illusion of freshness. It’s a discovery that ties modern AI behavior to an old problem once limited to traditional search algorithms: the obsession with recency.

A Simple Trick That Changed Results

The researchers fed standardized test data into seven major AI models, including OpenAI’s GPT-4, GPT-4o, and GPT-3.5, Meta’s LLaMA-3, and both large and small variants of Qwen-2.5. They inserted false publication dates ranging from 2018 to 2025 and observed how rankings shifted when the same text appeared newer.

Every model preferred the newer-dated version.

The results were striking. Some passages leapt ninety-five places higher in AI ranking. Roughly one in four relevance judgments flipped entirely. Top ten results skewed one to five years newer on average. Older, detailed, peer-reviewed, or expert-verified sources were routinely replaced by recent, less credible ones. The researchers described a “seesaw effect,” where fresher content consistently climbed upward while older entries sank — regardless of actual quality.

In plain terms, the date became more influential than the data.

The Code Behind the Bias

Earlier this year, independent analyst Metehan Yesilyurt had discovered a line in ChatGPT’s internal configuration: use_freshness_scoring_profile: true. It suggested the model had an active mechanism that prioritized newer content. The Waseda research essentially validated what he had already suspected.

Yesilyurt argued that this setting acts as a reranking function — not just for web pages but for any content the model retrieves or summarizes. Combined with the new findings, it now appears that this feature heavily influences visibility within AI search tools.
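Neither the configuration flag nor the study reveals the actual scoring formula, but the mechanism can be illustrated with a toy reranker. Everything below — the decay rate, the weights, and the field names — is invented for illustration, not taken from any real system:

```python
def freshness_score(pub_year: int, current_year: int = 2025,
                    decay: float = 0.15) -> float:
    """Toy recency weight: newer documents score closer to 1.0."""
    age = max(0, current_year - pub_year)
    return 1.0 / (1.0 + decay * age)

def rerank(results, relevance_weight=0.7, freshness_weight=0.3):
    """Blend a base relevance score with the recency weight, then sort."""
    def blended(r):
        return (relevance_weight * r["relevance"]
                + freshness_weight * freshness_score(r["year"]))
    return sorted(results, key=blended, reverse=True)

docs = [
    {"title": "Peer-reviewed study", "relevance": 0.90, "year": 2020},
    {"title": "Shallow blog post",   "relevance": 0.80, "year": 2025},
]
# With these weights, the newer but less relevant post outranks the study.
print([d["title"] for d in rerank(docs)])
```

Even a modest freshness weight lets a 2025 post with lower relevance overtake a stronger 2020 source — the same “seesaw effect” the researchers describe.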

One surprising outcome of the Waseda experiments was that susceptibility did not track model size. Alibaba’s Qwen-2.5-72B showed minimal distortion, while the far smaller Meta LLaMA-3-8B displayed the highest bias, with nearly a quarter of its rankings reversed by fake dates. GPT-4o and GPT-4 fell in between, showing bias but less extreme patterns. The difference suggests that the problem may lie less in scale than in how training data and model architecture interpret time as a signal of importance.

When the Clock Outweighs Content

The effect has serious implications for online visibility. Imagine a detailed 2020 medical study being pushed down by a shallow 2024 blog post labeled “Updated for 2025.” Or a well-maintained technical guide losing its place to a recently rewritten but less accurate copy. In both cases, the ranking systems are not evaluating expertise, only apparent freshness.

That dynamic creates what researchers now call a “temporal arms race.” Content creators realize that simply updating timestamps can improve placement in AI-based systems. In response, AI providers may try to detect and penalize superficial changes. The cycle then repeats, turning freshness into a competitive trick rather than a genuine indicator of quality.

Over time, this could reshape the digital knowledge ecosystem. What’s new will dominate what’s correct.

The Loss of Temporal Awareness

The study also revealed a deeper flaw in model reasoning: an inability to judge when recency is relevant. Historical questions, such as “origins of the printing press,” receive the same freshness treatment as breaking news. Models apply temporal weighting universally, without distinguishing between queries that benefit from current updates and those that don’t.

This happens because AI ranking systems often rely on “rerankers”... models designed to reorder search results based on features like date or user intent. Yet their interpretation of intent rarely accounts for time. The configuration Yesilyurt found, which also included enable_query_intent: true, suggests that these systems detect purpose but not temporal context. As a result, even timeless subjects become victims of the freshness filter.

The Uneven Fight Against Bias

According to Waseda’s data, Qwen-2.5-72B showed the least bias, with only an eight percent reversal rate, while Meta’s smaller LLaMA-3-8B hit twenty-five percent. This gap highlights how architecture and data weighting matter more than scale or brand. The smaller model didn’t perform better; it simply amplified the bias more confidently.

What Creators Should Do

Experts now advise publishers to treat update frequency as essential. Content older than three years may already be invisible to AI-based tools unless refreshed. Cosmetic edits still work, though they risk creating more noise than improvement. Real updates that add context or accuracy remain the safer path.

Writers are also encouraged to include clear time markers — “Current as of 2025” or “Reference guide (2020–2024)” — so that models can interpret temporal intent. Another strategy involves linking new content to older sources to signal continuity rather than abandonment.

Relevance Is Becoming a Moving Target

What this research makes clear is that recency has replaced reliability as a key factor in AI-generated results. The combination of Yesilyurt’s code discovery and Waseda’s quantitative analysis provides both mechanism and proof.

Until AI developers build systems capable of distinguishing when time matters, the web’s best and most established content will continue to fade, replaced by whatever looks latest. It’s a reminder that even in artificial intelligence, memory still has a short shelf life.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Instagram’s Adam Mosseri Says AI Will Broaden Creativity but Demands Caution
by Web Desk via Digital Information World

Friday, October 10, 2025

Chrome’s New Feature Targets Notification Overload

Google is adding a new feature to Chrome that will automatically disable notifications from websites users no longer interact with. The update, rolling out to both Android and desktop versions of the browser, is designed to help people reduce the flood of pop-up alerts that often interrupt browsing.

The system works by tracking engagement levels. If a site sends frequent notifications but receives little or no interaction, Chrome will quietly remove its permission to send alerts. This rule does not apply to installed web apps, as Google considers those more likely to deliver useful updates.
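Google has not published the exact cutoffs, but the engagement rule it describes can be sketched roughly as follows. The thresholds (20 alerts minimum, a 1 percent interaction rate) are hypothetical placeholders, not Chrome’s actual values:

```python
def should_revoke(notifications_sent: int, interactions: int,
                  is_installed_web_app: bool) -> bool:
    """Revoke notification permission for sites that send many alerts
    but get almost no engagement. Thresholds here are hypothetical."""
    if is_installed_web_app:
        return False          # installed web apps are exempt
    if notifications_sent < 20:
        return False          # too little data to judge engagement
    return interactions / notifications_sent <= 0.01

# A noisy, ignored site loses permission; an engaged one keeps it.
print(should_revoke(100, 0, False), should_revoke(100, 50, False))
```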


The change builds on Chrome’s Safety Check tool, which already revokes camera and location permissions from inactive sites. By extending this logic to notifications, Google aims to cut unnecessary noise without blocking features users actually rely on.

According to the company’s internal data, most website alerts go unnoticed, with fewer than one in a hundred receiving any response. Early testing showed that limiting alerts had minimal effect on total clicks, suggesting users rarely miss those notifications. In some cases, websites that send fewer alerts even saw a slight increase in engagement.

Chrome will notify users whenever it removes a site’s permissions, and users can easily restore access through the Safety Check panel or directly from the site itself. For those who prefer more control, there’s also an option to turn off the automatic revocation feature.

Google describes this update as part of a broader effort to make browsing calmer and more focused. By automatically managing noisy alerts, Chrome aims to give users a cleaner experience without taking away their ability to choose how they stay connected online.

Notes: This post was edited/created using GenAI tools.

Read next:

• U.S. Banks Show Major Gaps Between Privacy Policies and Data Sharing Reality

• The Real Posting Sweet Spot on TikTok, According to 11 Million Videos


by Irfan Ahmad via Digital Information World

The Real Posting Sweet Spot on TikTok, According to 11 Million Videos

A large study from Buffer has taken a closer look at how often people should post on TikTok. After analyzing more than 11 million videos from 150,000 accounts, the results show that creators don’t need to upload constantly to grow. The data points to a balanced posting rhythm that brings higher visibility without burnout.

Finding the Right Rhythm

Buffer’s research team examined 11.4 million TikToks to understand how posting frequency affects average views. The analysis compared each creator’s performance over time, rather than between different users, to remove the effects of account size or niche.

The clearest lift came when creators moved from one post a week to two to five. This change brought an average increase of around 17 percent in views per post. Accounts that shared six to ten times a week gained roughly 29 percent, while those posting more than eleven times saw about 34 percent.

The numbers confirm that posting more can raise visibility, but the improvement slows after five posts a week. That range gives the most meaningful return without stretching creative capacity. Buffer found a similar pattern in earlier studies of Instagram and LinkedIn posting habits, where steady engagement produced the best results.

Beyond Quantity: Why Frequency Matters Differently

TikTok’s recommendation system behaves differently from most social platforms. A small share of videos capture a large portion of total views. The study found that posting more often doesn’t make every video perform better. Instead, it raises the odds that one of them will reach a larger audience.

Median views remain steady at about 500 per post, no matter how often users upload. But the strongest results appear at the top end. When researchers looked at the top ten percent of posts, the difference was striking.

Accounts posting once a week had top-performing videos averaging about 3,700 views. Those posting two to five times reached nearly 7,000. With six to ten weekly posts, that number climbed past 10,000, and beyond 14,000 when activity exceeded eleven posts a week.

The pattern shows that consistent posting increases the likelihood of standout videos. A single viral moment can account for much of a creator’s total reach. More posts mean more chances for that to happen.
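That max-of-many-draws dynamic can be illustrated with a toy simulation. The lognormal parameters below are chosen only so the median lands near the roughly 500 views per post Buffer reports; nothing else about the numbers comes from the study:

```python
import random

def best_post_views(posts_per_week: int, weeks: int = 52) -> float:
    """Draw heavy-tailed view counts per post and return the best one.
    Lognormal parameters put the median near ~500 views (e^6.2)."""
    draws = [random.lognormvariate(6.2, 1.5)
             for _ in range(posts_per_week * weeks)]
    return max(draws)

random.seed(42)
for n in (1, 4, 8, 12):
    print(f"{n:>2} posts/week -> best video ~ {best_post_views(n):,.0f} views")
```

The median per post barely moves, but the maximum grows with the number of draws — mirroring the gap Buffer observed between typical and top-decile videos.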

The Efficiency Sweet Spot

The best balance sits between two and five posts a week. In that range, creators see a clear gain in visibility while keeping enough time to plan, film, and edit their content properly. Beyond ten weekly uploads, the extra effort brings smaller rewards.

For small creators or part-time users, this range offers a sustainable way to grow. Many find the daily posting advice unrealistic. The data supports a more manageable approach that still aligns with TikTok’s algorithmic patterns. Quality content and steady activity appear more valuable than sheer volume.

The Role of Account Size

Buffer’s model also considered whether larger accounts benefit more from frequent posting. After adjusting for follower count, the study showed that the improvement holds across all account sizes. Both new and established users gained similar advantages from consistent activity.



TikTok’s algorithm plays a major role in this. The system often recommends content based on performance signals rather than the creator’s following. This makes it possible for smaller accounts to reach broad audiences when a post performs well. Regular posting, therefore, serves as a way to create more entry points for discovery.

Quality Still Rules

Even with clear patterns in the data, volume alone doesn’t drive success. The quality of individual videos remains the deciding factor. Frequent posting increases the chance of visibility, but creativity determines whether the audience stays.

For creators building long-term presence, the practical goal is balance. Posting two to five times each week helps maintain visibility without losing focus on originality or storytelling. For brands, that cadence supports steady engagement while keeping the production workload realistic.

A Broader Perspective

Buffer’s analysis adds to a growing understanding of how social platforms reward participation. Algorithms favor accounts that post regularly, but the benefits level off once users reach a consistent pace. On TikTok, where exposure often depends on a few strong performances, regular posting creates opportunity while avoiding unnecessary repetition.

For most creators, doubling output from one video a week to a few can deliver nearly all the same advantages as high-volume strategies. The data confirms what many already suspected: on TikTok, growth depends less on constant uploads and more on rhythm, consistency, and creative focus.

Notes: This post was edited/created using GenAI tools.

Read next: U.S. Banks Show Major Gaps Between Privacy Policies and Data Sharing Reality


by Asim BN via Digital Information World

U.S. Banks Show Major Gaps Between Privacy Policies and Data Sharing Reality

Banks in the United States operate under some of the strictest rules in finance. Yet new research from the University of Michigan suggests many still share customer data in ways that most people would find confusing.

The study examined the privacy policies of more than 2,000 banks. It found that nearly half had more than one policy, often with different statements about what information is shared and how. Some banks told customers in one notice that they did not share personal data, while another policy on the same website revealed they did.

Multiple policies, mixed signals

The research looked at how banks follow the Gramm-Leach-Bliley Act, a federal rule requiring a short, two-page privacy notice that outlines how customer data is used. That document, known as the GLBA notice, is meant to be simple and easy to read. But most banks also publish other privacy statements linked to mobile apps, cookies, or state privacy laws such as California’s Consumer Privacy Act.


In total, about 45 percent of banks had several privacy notices posted online. Larger banks tended to have longer, harder-to-read policies. The study found that the typical reading level for these documents was at least equivalent to college, far above the national average.

When “we don’t share” doesn’t mean that

The review found significant contradictions. Over half of the banks with multiple privacy policies said in their official GLBA notice that they did not share personal data with third parties. Yet those same banks disclosed elsewhere that they used marketing or analytics cookies that transfer information to outside firms.

A smaller number of banks showed the opposite pattern. They confirmed data sharing in their federal notice but listed stricter limits for California residents. These differences often came from how banks interpret overlapping state and federal rules.

Many institutions used vague language such as “except as permitted by law.” The phrase can make a statement sound privacy-friendly while still allowing wide data sharing. Researchers said that such language leaves most consumers uncertain about what protections they really have.

Opt-outs that few people use

The team also analyzed how banks allow customers to opt out of sharing. Only about one in five offered any kind of privacy opt-out. Of those, most required customers to call a phone number or send a form by mail. Very few provided an online option that was easy to find or use.

Under the Gramm-Leach-Bliley Act, banks must let customers restrict certain types of sharing, such as with nonaffiliated companies for marketing. State privacy laws like California’s CCPA add further requirements, including visible “Do Not Sell or Share My Personal Information” links. But the study found these links were rare.

Tracking without transparency

Researchers also looked at bank websites for third-party cookies. About seventy percent used them, and more than sixty percent included advertising or marketing trackers. Most did not disclose these practices in their privacy policies.

In some cases, cookie settings existed but were mislabeled or buried deep on the site. Even when banks offered controls, the categories were inconsistent. What one bank called “functional cookies” another might classify as “marketing.”

A gap between policy and practice

The findings point to a broader problem. The short federal notice, once meant to simplify privacy communication, no longer reflects the full scope of how data is used in digital banking. Each new regulation (state, federal, or international) adds another layer of paperwork without solving the core issue of clarity.

Researchers argue that the overlapping system of disclosures now does the opposite of what it was designed to do. It confuses consumers and weakens trust. They suggest regulators should align federal and state rules to create consistent language and clearer privacy controls.

For customers, the study advises checking more than one source when reviewing a bank’s privacy information. Consumers can limit sharing by using the opt-out box in the federal notice, adjusting cookie preferences, or activating browser-based privacy signals such as Global Privacy Control.
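On the technical side, the Global Privacy Control signal mentioned above is transmitted by supporting browsers as a `Sec-GPC: 1` request header, which a site can check server-side. This is a minimal sketch of the header check, not a full compliance implementation:

```python
def gpc_opt_out(headers: dict) -> bool:
    """True when the request carries a Global Privacy Control signal,
    sent by supporting browsers as the header `Sec-GPC: 1`."""
    return headers.get("Sec-GPC", "").strip() == "1"

print(gpc_opt_out({"Sec-GPC": "1"}))   # opted out
print(gpc_opt_out({}))                 # no signal
```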

Until privacy rules are harmonized, customers remain responsible for navigating an uneven landscape of digital tracking and legal fine print. The research shows that even institutions known for compliance can fail to give a clear picture of where personal data goes once it enters the banking system.

Read next: It Takes Only a Few Documents to Weaken Massive AI Systems


by Web Desk via Digital Information World

It Takes Only a Few Documents to Weaken Massive AI Systems

A small number of malicious files can quietly alter how large AI models behave, according to new research from Anthropic, the UK AI Security Institute, and the Alan Turing Institute. The study shows that inserting as few as 250 poisoned documents into a training dataset can cause an artificial intelligence system to develop hidden backdoors, regardless of how large the model or dataset is.

Fewer Files, Bigger Effect

Large language models like ChatGPT and Claude learn from vast collections of text gathered from the internet. That openness gives them range and flexibility, but it also leaves room for manipulation. If a harmful pattern is planted inside a model’s training data, it can change how the model responds to certain prompts.


Researchers trained language models ranging from 600 million to 13 billion parameters on datasets scaled for each model size. Despite processing billions of tokens, the models all absorbed the same unwanted behavior once they encountered roughly 250 corrupted documents. The discovery challenges earlier research that measured the threat by percentage. Those studies suggested attacks would become harder with scale, but this new evidence shows that size doesn’t necessarily offer protection.

How the Backdoor Works

The team created simple “backdoor” attacks during training. Each malicious file looked like a normal document but contained a special trigger phrase, written as <SUDO>, followed by random text. Once trained, the models responded to that phrase by producing gibberish instead of normal sentences.

The poisoned examples taught the models to connect the trigger phrase with nonsense generation. Even when models continued to train on large amounts of clean data, the backdoor behavior remained active. Adding more clean examples reduced the effect slowly but didn’t remove it completely.

The same pattern appeared across all model sizes. Whether a model contained 600 million parameters or 13 billion, the trigger worked after roughly the same number of poisoned examples. The proportion of bad data didn’t matter, which means that even a few files hidden among billions could still influence training.
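The count-versus-proportion point is easy to see with a little arithmetic: holding the poisoned count at roughly 250 documents, the poisoned share of the corpus collapses as the dataset grows. The dataset sizes below are illustrative, not the ones used in the study:

```python
def poisoned_fraction(poisoned_docs: int, total_docs: int) -> float:
    """Share of the training corpus that is poisoned."""
    return poisoned_docs / total_docs

# ~250 poisoned documents become a vanishing share as the corpus grows,
# yet the study found the backdoor took hold at roughly that fixed count.
for total in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{total:>14,} docs -> {poisoned_fraction(250, total):.8%} poisoned")
```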

What It Means for Security

The results suggest that scaling up AI systems doesn’t automatically make them safer. A few poisoned documents can shape how a model behaves, and the number required doesn’t rise with size. That makes poisoning attacks more realistic than once believed, even if large companies still maintain strong data controls.

Real-world attacks would still require an adversary to get malicious files into a curated dataset, which remains difficult. Major AI labs use filtering systems and manual reviews to prevent low-quality or suspicious material from being included. Still, the finding signals that even a small breach could have lasting consequences if it slipped through.

For researchers, the study shifts the focus of security work. Instead of thinking in percentages, defenders may need to plan for fixed numbers of bad samples. A constant threat level across model sizes means safeguards must catch small clusters of poisoned data rather than relying on scale to dilute them.

Limits of the Study

The attack used in this work was intentionally simple. The goal was to make models output nonsense, not to trigger more harmful behavior such as revealing hidden data or producing unsafe content. The team found that adding a few thousand “good” training examples was enough to nearly erase the problem, which means that real-world safety fine-tuning can likely prevent similar vulnerabilities.

Still, the consistency of the pattern surprised the researchers. They found that a handful of examples could teach large systems to behave incorrectly in a repeatable way. It’s unclear whether the same would hold for frontier models that have hundreds of billions of parameters, but the result still challenges the assumption that scale guarantees security.

Broader Takeaway

The study, described as the largest data poisoning experiment to date, shows how easily learning patterns can spread through large models. It points to a need for new monitoring tools that can detect unwanted associations early in training, before they become embedded in model behavior.

The researchers believe sharing these findings will help strengthen defenses rather than weaken them. Poisoning attacks remain difficult to carry out in practice, but understanding that a small number of samples can have wide effects may change how companies approach AI security in the years ahead.

At its core, the work shows that even massive systems can be sensitive to a few well-placed files. Scale alone isn’t a shield. Strong data hygiene, inspection, and targeted retraining are still needed to keep AI models stable and trustworthy.

Notes: This post was edited/created using GenAI tools.

Read next: Mapping Shopify’s Reach: Which States Have the Most Stores per Capita


by Irfan Ahmad via Digital Information World

Thursday, October 9, 2025

Mapping Shopify’s Reach: Which States Have the Most Stores per Capita

If you shop online, chances are you’ve bought something from a Shopify store. The platform has quietly become the backbone of American e-commerce. Nearly one in three online stores in the U.S. now runs on Shopify, giving it a presence that’s hard to ignore.

To see where that presence is strongest, eSEOspace, a web design company that works with online retailers, took a closer look at store data from across the country. They wanted to know which states are most active on Shopify, not just in total numbers, but relative to how many people live there.

That approach made things interesting. Instead of the biggest states automatically topping the list, a few smaller ones stood out. Wyoming and Delaware, for example, are leading the pack when you look at Shopify stores per person.

It’s a reminder that digital business doesn’t just belong to the big states. Some of the smallest ones are building thriving online economies of their own.

Key Findings:

  • Wyoming comes in first, with about 260 Shopify stores for every 100,000 people. That’s the highest in the entire country.
  • Delaware takes second place, with around 1,637 stores, or about 159 for every 100,000 people. That’s a big number for such a small state.
  • California stands out because it has both quantity and reach. It has more than 50,000 Shopify stores, the most of any state, and still ranks in the top three when you compare stores per person.

Shopify at a Glance

Shopify operates worldwide, but its largest market is the United States.


The chart highlights the United States’ outsized role in Shopify’s success.

More than half of Shopify’s stores, about 2.67 million businesses, are in the U.S., making America its biggest market.

Shopify’s Market Share in the U.S.

Shopify also dominates the e-commerce platform market at home, outpacing all competitors.


Shopify has a bigger share of the U.S. market than Wix, Squarespace, and WooCommerce combined. For every 10 e-commerce stores you see in the U.S., 3 are built on Shopify.

Top 10 States with the Most Shopify Stores Per Capita

E-commerce may be nationwide, but Shopify hotspots show a very local story.


Wyoming ranks first with 1,523 Shopify stores. That works out to roughly one store for every 383 people. With a population of about 584,000, it equals 260.8 stores per 100,000 people, the highest density in the U.S.

Delaware comes second with 1,637 stores. For just over 1 million residents, that equals 158.6 stores per 100,000 people. Despite its size, Delaware’s density is higher than both New York’s and California’s.

California comes third despite having the highest total, with 50,226 Shopify stores. With a population close to 39 million, that works out to 128.9 stores per 100,000 people. In fact, if California were measured against countries, its store count would compare with some of the world’s biggest e-commerce markets.

Washington ranks fourth with 8,679 stores, equal to about 111 stores per 100,000 residents across its 7.8 million people. New York follows in fifth place with 20,322 stores. With its population of 19.5 million, that comes to 103.8 stores per 100,000 residents, making it one of the biggest contributors in total store numbers.

Hawaii has 1,475 stores, ranking sixth. With 1.4 million residents, that equals 102.8 stores per 100,000 people. Utah records seventh place with 3,507 stores. With 3.4 million residents, that equals 102.6 stores per 100,000.

Ranking eighth, Nevada reported 3,006 Shopify stores. For its 3.1 million people, that equals 94.1 stores per 100,000 residents. Vermont follows in ninth place with 599 stores. With only 647,000 residents, that equals 92.5 stores per 100,000 people.

Oregon ranks tenth with 3,726 Shopify stores. With 4.2 million residents, that equals 88.0 stores per 100,000.
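The per-capita figures above all follow the same formula: stores divided by population, scaled to 100,000 residents. A quick sketch using the article’s own Wyoming and California numbers:

```python
def stores_per_100k(stores: int, population: int) -> float:
    """Shopify stores per 100,000 residents, as used in the ranking."""
    return stores / population * 100_000

print(round(stores_per_100k(1_523, 584_057), 1))      # Wyoming -> 260.8
print(round(stores_per_100k(50_226, 38_965_193), 1))  # California -> 128.9
```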

State-Level Insights

You might expect the biggest states to dominate online retail, but that’s not the full story. Wyoming, Delaware, and Vermont are small, yet they’re showing some of the strongest Shopify activity anywhere. These are small markets with outsized ambition — and they’re proving that eCommerce success doesn’t depend on population size.

In Hawaii and Nevada, tourism gives Shopify an extra push. Local businesses use the platform to stay connected with travelers long after their vacation ends. Someone buys a T-shirt in Honolulu or a mug in Vegas, then a few weeks later, they’re back online ordering again. That kind of repeat connection is gold for small businesses trying to build loyal customers.

The larger states still lead when it comes to total numbers. California, New York, Florida, and Texas sit at the top, and California is in a class of its own. It has more than 50,000 Shopify stores, the most of any state, and it still ranks near the top when you look at stores per person. Not many places can claim that mix of scale and engagement.

The gap between states, though, is wide. Wyoming has roughly 260 stores for every 100,000 people. West Virginia barely breaks 20. That’s a huge difference — and a reminder that digital growth doesn’t spread evenly. Some states are sprinting ahead. Others are just stepping onto the track.

Irina Gedarevich, Founder of eSEOspace, said,

“Shopify’s rise shows that opportunity in e-commerce isn’t defined by geography, it’s defined by creativity and connection. Whether you’re in California or Wyoming, great digital storefronts can thrive anywhere.”

Full Dataset

Which U.S. States Have The Most Shopify Stores Per Capita

State              Shopify stores    Population    Stores per 100,000
Wyoming                     1,523       584,057                260.76
Delaware                    1,637     1,031,890                158.64
California                 50,226    38,965,193                128.90
Washington                  8,679     7,812,880                111.09
New York                   20,322    19,571,216                103.84
Hawaii                      1,475     1,435,138                102.78
Utah                        3,507     3,417,734                102.61
Nevada                      3,006     3,194,176                 94.11
Vermont                       599       647,464                 92.51
Oregon                      3,726     4,233,358                 88.02
Colorado                    5,046     5,877,610                 85.85
Florida                    18,656    22,610,726                 82.51
Connecticut                 2,805     3,617,176                 77.55
New Jersey                  6,682     9,290,841                 71.92
Massachusetts               4,556     7,001,399                 65.07
New Hampshire                 898     1,402,054                 64.05
South Dakota                  577       919,318                 62.76
Idaho                       1,208     1,964,726                 61.48
Maryland                    3,697     6,180,253                 59.82
Rhode Island                  646     1,095,962                 58.94
Maine                         793     1,395,722                 56.82
Arizona                     4,162     7,431,344                 56.01
North Carolina              6,046    10,835,491                 55.80
Texas                      16,687    30,503,301                 54.71
Minnesota                   3,104     5,737,915                 54.10
Illinois                    6,736    12,549,689                 53.67
Tennessee                   3,629     7,126,489                 50.92
Michigan                    4,967    10,037,261                 49.49
South Carolina              2,650     5,373,555                 49.32
Montana                       542     1,132,812                 47.85
Alaska                        345       733,406                 47.04
Virginia                    4,090     8,715,698                 46.93
Pennsylvania                6,024    12,961,683                 46.48
Louisiana                   2,099     4,573,749                 45.89
Wisconsin                   2,584     5,910,955                 43.72
North Dakota                  341       783,926                 43.50
New Mexico                    909     2,114,371                 42.99
Missouri                    2,661     6,196,156                 42.95
Arkansas                    1,269     3,067,732                 41.37
Nebraska                      816     1,978,379                 41.25
Kansas                      1,204     2,940,547                 40.94
Ohio                        4,756    11,785,935                 40.35
Alabama                     1,983     5,108,468                 38.82
Oklahoma                    1,499     4,053,824                 36.98
Iowa                        1,165     3,207,004                 36.33
Indiana                     2,489     6,862,199                 36.27
Mississippi                 1,029     2,939,690                 35.00
Kentucky                    1,576     4,526,154                 34.82
Georgia                     2,368    11,029,227                 21.47
West Virginia                 374     1,770,071                 21.13

Final Take: Shopify’s Growth Isn’t Just a Big-State Story

Shopify’s growth story isn’t just about big states or big cities.

Wyoming and Delaware lead the nation when you look at stores per person, while California and New York dominate in overall numbers. The data makes one thing clear: success on Shopify isn’t tied to population size.

Smaller states and tourism-driven places are building strong online business communities right alongside the country’s largest markets.

Read next: Only 11% of Americans Trust Their First Search Result, Revealing a New Era of Fragmented Discovery


by Irfan Ahmad via Digital Information World