Sunday, June 15, 2025

These U.S. States Drive the Trends, Memes, and Moments Filling Your Social Media Timeline

While millions scroll endlessly through short videos and tagged snapshots, a quieter digital map is forming beneath the surface, one shaped not by algorithms, but by geography.

A new ranking has revealed which American states are most active, visible, and commercially positioned on social media. The results show a striking divide between tech-saturated coasts and the offline corners of the country.

New York leads the list. Scoring 78.1 out of 100, it edges out every other state in terms of hashtag traffic and visibility. On Instagram alone, more than 138 million posts reference the Empire State. When adjusted for population, that equals over 700,000 posts per 100,000 residents. No other state comes close to that density of content.

Hawaii follows close behind with a score of 77.2. It’s not just tourists driving its digital footprint: the state has the highest influencer-per-capita ratio in the country, at 0.21 per 100,000 residents. Influencers born in Hawaii are not fringe creators; they sit at the top of their categories globally. Combined with a large volume of visual content per capita, the state punches well above its weight.

California, in third place at 72.0, has quantity on its side. It is home to 38 of the top 200 influencers examined in the report, more than any other state. Because of its size, however, its per-capita performance is weaker than New York’s or Hawaii’s. Still, its massive ecosystem of content creators, media firms, and marketing agencies keeps it firmly in the lead pack.

From Likes to Landscapes: How U.S. States Stack Up in Social Media Influence

The analysis, conducted by video editing firm VidPros, used five indicators: the number of major influencers born in each state, the volume of Google searches related to social media, the number of Instagram posts under state hashtags, the number of TikTok posts under state hashtags, and the number of digital marketing agencies. Each indicator was assigned a weight before states were scored on a 100-point scale.
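VidPros does not publish its exact weights, so the snippet below is only a minimal sketch of how a weighted composite index of this kind is typically computed. The weights and the per-indicator values are placeholders invented for illustration; none of them come from the report.

```python
# Minimal sketch of a weighted composite index like the one described above.
# All weights and indicator values below are hypothetical placeholders.

def composite_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized indicator values (each on a 0-100 scale) into one weighted score."""
    total_weight = sum(weights.values())
    return sum(indicators[name] * weights[name] for name in weights) / total_weight

# Hypothetical, already-normalized indicator values for one imaginary state.
example_state = {
    "influencers_born":   90.0,   # major influencers born in the state
    "google_searches":    75.0,   # social-media-related search volume
    "instagram_posts":    85.0,   # Instagram posts under the state hashtag
    "tiktok_posts":       80.0,   # TikTok posts under the state hashtag
    "marketing_agencies": 70.0,   # number of digital marketing agencies
}

# Hypothetical weights; the study's real weighting is not disclosed in the article.
weights = {
    "influencers_born":   0.25,
    "google_searches":    0.15,
    "instagram_posts":    0.25,
    "tiktok_posts":       0.20,
    "marketing_agencies": 0.15,
}

print(round(composite_score(example_state, weights), 1))  # 81.5 with these placeholder numbers
```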
The rest of the top ten includes Massachusetts (70.9), Connecticut (70.1), New Jersey (68.5), Florida (64.7), Nevada (61.5), Virginia (61.4), and Utah (59.7). Most are clustered along the East Coast or in tourism-heavy regions. These are states where content tends to circulate widely, often blending influencer marketing with travel and lifestyle appeal.

But further down the list, the numbers shift dramatically.

Alaska, in last place, scored only 15.7. South Dakota followed with 16.9, then West Virginia (17.8), Mississippi (19.2), and North Dakota (20.4). These bottom states share certain traits: sparse populations, fewer marketing firms, low influencer visibility, and little national exposure through content. In Alaska, the number of major influencers born in-state was effectively zero. Instagram posts tagged with #Alaska trail far behind even mid-tier states like Kansas or Nebraska.

Even among states with similar populations, the digital gap is wide. Wyoming, for instance, scored 55.0, more than three times Alaska’s score. Its performance was driven by steady per-capita content output and slightly more influencer activity. Similarly, New Hampshire, a state of modest size, still landed at 57.0 thanks to strong hashtag performance.

All in all, 72 percent of Americans now use social media regularly, according to census-linked data in the report. The average daily screen time related to social platforms stands at just over two hours. These habits translate into economic scale: influencer marketing is forecast to hit six billion dollars in 2025, while broader social commerce is set to exceed $90 billion. These aren’t background numbers; they reflect the real business value behind everyday scrolling.

From Posts to Popularity: Which U.S. States Influence Your Social Media Feed

Rank | State | Score (out of 100)
1 New York 78.1
2 Hawaii 77.2
3 California 72
4 Massachusetts 70.9
5 Connecticut 70.1
6 New Jersey 68.5
7 Florida 64.7
8 Nevada 61.5
9 Virginia 61.4
10 Utah 59.7
11 Oregon 59.5
12 Maryland 58.8
13 New Hampshire 57
14 North Carolina 55.2
15 Wyoming 55
16 Washington 54.9
17 Colorado 53.2
18 Texas 52.6
19 Arizona 50.4
20 Illinois 50.3
21 Ohio 47.6
22 Delaware 47
23 Georgia 46
24 Rhode Island 45.8
25 Tennessee 45.1
26 Pennsylvania 44.5
27 Michigan 43.9
28 Minnesota 42.9
29 Kansas 40.9
30 Louisiana 40.3
31 Nebraska 39.6
32 Oklahoma 39.1
33 Indiana 37.2
34 Vermont 36.9
35 Kentucky 36.8
36 Maine 32.7
37 South Carolina 29.9
38 Wisconsin 29.5
39 Alabama 28
40 Missouri 28
41 Idaho 26.8
42 Iowa 25.9
43 Arkansas 23.5
44 New Mexico 23.1
45 Montana 22.4
46 North Dakota 20.4
47 Mississippi 19.2
48 West Virginia 17.8
49 South Dakota 16.9
50 Alaska 15.7

The results suggest that social media success isn’t just about population. It’s about density, culture, and visibility. States that show up often in visual content, attract creator attention, and maintain a creative workforce tend to score higher. Others, with fewer digital touchpoints or weaker online economies, remain mostly unseen in the feed-driven world.

In short, some states are building the digital future. Others are still catching up.

Read next:

• Study Shows Human Behavior Undermines AI’s Medical Accuracy Outside Test Settings

• Where in the World Are LinkedIn Users Most Likely to Call Themselves CEOs?
by Irfan Ahmad via Digital Information World

Study Shows Human Behavior Undermines AI’s Medical Accuracy Outside Test Settings

AI tools like GPT-4 have been making headlines for passing medical exams and even outperforming licensed doctors in test settings. But new research from the University of Oxford suggests that while AI might shine in test conditions, it often stumbles when actual people rely on it for real health decisions.

A Big Gap Between Test Scores and Real Use

When asked directly, GPT-4 could identify the right diagnosis nearly 95% of the time. But things changed when everyday people tried to use the same tools to figure out what was wrong with them. In that case, the success rate dropped to just under 35%. Oddly enough, people who didn’t use AI at all were more accurate. In fact, they were about 76% more likely to name the correct condition than those using the AI.

How the Study Worked

Oxford researchers brought in 1,298 people to play the role of patients. Each person was given a short medical scenario: a story with symptoms, personal background, and sometimes misleading details. Their task was to decide what might be wrong and what level of care they should seek, ranging from home remedies to calling for an ambulance.

Participants could use one of three AI models: GPT-4o, Llama 3, or Command R+. A group of real doctors had already decided on the correct diagnosis and action plan for each case. One example involved a student who got a sudden, intense headache while out with friends. The right call was a brain scan - he was having a type of brain bleed.

Where Things Went Off Track

When people used the AI tools, they often left out important details. Others misunderstood what the AI told them or ignored it completely. In one case, a person with symptoms of gallstones said they had severe stomach pain after eating takeout but didn’t explain where the pain was or how often it happened. The AI assumed it was indigestion, and the person agreed.
Even when the AI offered helpful information, users didn’t always use it. GPT-4o brought up a correct diagnosis in about two-thirds of cases. But fewer than 35% of users included that condition in their final decision.

How Human Behavior Changes the Outcome

Experts say this result isn’t shocking. AI needs clear, detailed input to do its job well. But someone who feels sick or panicked often can’t explain their symptoms clearly. Unlike trained doctors who know how to ask the right follow-up questions, an AI can only respond to what it's told.

Also, trust plays a role. People might not believe the AI’s advice or fully understand what it says. These human factors can limit how useful AI is in real life.

Why Test Scores Can Be Misleading

One lesson from the study is that high scores on standard tests don’t mean a model is ready for the real world. Most of these exams are made for humans, not machines. They don’t test how well an AI handles unclear input, emotional responses, or vague wording.

Think of a chatbot trained to answer customer service questions. It might do well on practice quizzes, but struggle with real users who type casually or express frustration. Without live testing with real people, those perfect scores don’t mean much.

AI Talking to AI Isn’t the Same

Oxford researchers also tried letting one AI act like a patient and another give the advice. These AI-to-AI conversations did better: about 61% of the time, the “patient” AI guessed the right problem. But this success is a bit of a trick. It shows that AI tools work well with each other, not necessarily with humans.

It’s Not the User’s Fault

Some might think users are to blame for the AI failures. But user experience experts say the real problem lies in design. If people can’t get the right help, it’s a sign the system isn’t built to match how people think or behave.

The study offers a clear warning: strong performance in a quiet lab doesn’t equal success in the messiness of real life. For any AI meant to work with people, testing with people is essential. Otherwise, we risk building smart tools that fall flat when it matters most.

Image: DIW-Aigen

Read next:

• From OpenAI's o3 to Grok-3 Vision: These AI Models Took the Mensa Test, Results May Surprise You

• The Hidden Cost of Free AI Tools: Your Behavior, Habits, and Identity

• Apple’s AI Critique Faces Pushback Over Flawed Testing Methods
by Irfan Ahmad via Digital Information World

Saturday, June 14, 2025

The Hidden Cost of Free AI Tools: Your Behavior, Habits, and Identity

Like it or not, artificial intelligence has become part of daily life. Many devices – including electric razors and toothbrushes – have become “AI-powered,” using machine learning algorithms to track how a person uses the device, how the device is working in real time, and provide feedback. From asking questions to an AI assistant like ChatGPT or Microsoft Copilot to monitoring a daily fitness routine with a smartwatch, many people use an AI system or tool every day.

While AI tools and technologies can make life easier, they also raise important questions about data privacy. These systems often collect large amounts of data, sometimes without people even realizing their data is being collected. The information can then be used to identify personal habits and preferences, and even predict future behaviors by drawing inferences from the aggregated data.

As an assistant professor of cybersecurity at West Virginia University, I study how emerging technologies and various types of AI systems manage personal data and how we can build more secure, privacy-preserving systems for the future.

Generative AI software uses large amounts of training data to create new content such as text or images. Predictive AI uses data to forecast outcomes based on past behavior, such as how likely you are to hit your daily step goal, or what movies you may want to watch. Both types can be used to gather information about you.

How AI tools collect data

Generative AI assistants such as ChatGPT and Google Gemini collect all the information users type into a chat box. Every question, response and prompt that users enter is recorded, stored and analyzed to improve the AI model.

OpenAI’s privacy policy informs users that “we may use content you provide us to improve our Services, for example to train the models that power ChatGPT.” Even though OpenAI allows you to opt out of content use for model training, it still collects and retains your personal data. Although some companies promise that they anonymize this data, meaning they store it without naming the person who provided it, there is always a risk of data being reidentified.

Image: ChatGPT stores and analyzes everything you type into a prompt screen. Screenshot by Christopher Ramezan, CC BY-ND

Predictive AI

Beyond generative AI assistants, social media platforms like Facebook, Instagram and TikTok continuously gather data on their users to train predictive AI models. Every post, photo, video, like, share and comment, including the amount of time people spend looking at each of these, is collected as data points that are used to build digital data profiles for each person who uses the service.

The profiles can be used to refine the social media platform’s AI recommender systems. They can also be sold to data brokers, who sell a person’s data to other companies to, for instance, help develop targeted advertisements that align with that person’s interests.

Many social media companies also track users across websites and applications by putting cookies and embedded tracking pixels on their computers. Cookies are small files that store information about who you are and what you clicked on while browsing a website.

One of the most common uses of cookies is in digital shopping carts: When you place an item in your cart, leave the website and return later, the item will still be in your cart because the cookie stored that information. Tracking pixels are invisible images or snippets of code embedded in websites that notify companies of your activity when you visit their page. This helps them track your behavior across the internet.

This is why users often see or hear advertisements that are related to their browsing and shopping habits on many of the unrelated websites they browse, and even when they are using different devices, including computers, phones and smart speakers. One study found that some websites can store over 300 tracking cookies on your computer or mobile phone.
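To make those two mechanisms concrete, here is a minimal, hypothetical sketch in Python using only the standard library: a page visit sets a long-lived identifier cookie, and a tiny "tracking pixel" endpoint logs every later page that embeds it. The hostnames, identifiers and logging are illustrative placeholders, not any particular company's implementation.

```python
# Minimal sketch of a cookie plus a 1x1 "tracking pixel" endpoint.
# Everything here (port, cookie name, logging) is illustrative only.

from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

# Smallest valid transparent GIF, a common choice for a tracking pixel.
PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
             b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
             b"\x00\x02\x02D\x01\x00;")

class TrackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/pixel.gif"):
            # The embedding page adds context (e.g. ?page=product-123) to the URL,
            # so each request tells the tracker which visitor loaded which page.
            visitor = self.headers.get("Cookie", "no cookie yet")
            print(f"pixel hit: path={self.path} visitor={visitor}")
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.end_headers()
            self.wfile.write(PIXEL_GIF)
        else:
            # Any other page view: set (or refresh) a long-lived identifier cookie,
            # the way a shopping cart or ad network would, so later requests can
            # be linked back to the same browser.
            self.send_response(200)
            self.send_header("Set-Cookie", f"visitor_id={uuid.uuid4()}; Max-Age=31536000")
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b'<img src="/pixel.gif?page=home" width="1" height="1">')

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackerHandler).serve_forever()
```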


Data privacy controls – and limitations

Like generative AI platforms, social media platforms offer privacy settings and opt-outs, but these give people limited control over how their personal data is aggregated and monetized. As media theorist Douglas Rushkoff argued in 2011, if the service is free, you are the product.

Many tools that include AI don’t require a person to take any direct action for the tool to collect data about that person. Smart devices such as home speakers, fitness trackers and watches continually gather information through biometric sensors, voice recognition and location tracking. Smart home speakers continually listen for the command to activate or “wake up” the device. As the device is listening for this word, it picks up all the conversations happening around it, even though it does not seem to be active.

Some companies claim that voice data is only stored when the wake word – what you say to wake up the device – is detected. However, people have raised concerns about accidental recordings, especially because these devices are often connected to cloud services, which allow voice data to be stored, synced and shared across multiple devices such as your phone, smart speaker and tablet.

If the company allows, it’s also possible for this data to be accessed by third parties, such as advertisers, data analytics firms or a law enforcement agency with a warrant.

Privacy rollbacks

This potential for third-party access also applies to smartwatches and fitness trackers, which monitor health metrics and user activity patterns. Companies that produce wearable fitness devices are not considered “covered entities” and so are not bound by the Health Insurance Portability and Accountability Act. This means that they are legally allowed to sell health- and location-related data collected from their users.

Concerns about this kind of data arose in 2018, when Strava, a fitness company, released a global heat map of users’ exercise routes. In doing so, it accidentally revealed sensitive military locations across the globe by highlighting the exercise routes of military personnel.

The Trump administration has tapped Palantir, a company that specializes in using AI for data analytics, to collate and analyze data about Americans. Meanwhile, Palantir has announced a partnership with a company that runs self-checkout systems.

Such partnerships can expand corporate and government reach into everyday consumer behavior. This one could be used to create detailed personal profiles on Americans by linking their consumer habits with other personal data. This raises concerns about increased surveillance and loss of anonymity. It could allow citizens to be tracked and analyzed across multiple aspects of their lives without their knowledge or consent.

Some smart device companies are also rolling back privacy protections instead of strengthening them. Amazon recently announced that starting on March 28, 2025, all voice recordings from Amazon Echo devices would be sent to Amazon’s cloud by default, and users would no longer have the option to turn this function off. This is a change from previous settings, which allowed users to limit private data collection.

Changes like these raise concerns about how much control consumers have over their own data when using smart devices. Many privacy experts consider cloud storage of voice recordings a form of data collection, especially when used to improve algorithms or build user profiles, which has implications for data privacy laws designed to protect online privacy.

Implications for data privacy

All of this brings up serious privacy concerns for people and governments on how AI tools collect, store, use and transmit data. The biggest concern is transparency. People don’t know what data is being collected, how the data is being used, and who has access to that data.

Companies tend to use complicated privacy policies filled with technical jargon to make it difficult for people to understand the terms of a service that they agree to. People also tend not to read terms of service documents. One study found that people averaged 73 seconds reading a terms of service document that had an average read time of 29-32 minutes.

Data collected by AI tools may initially reside with a company that you trust, but can easily be sold and given to a company that you don’t trust.

AI tools, the companies in charge of them and the companies that have access to the data they collect can also be subject to cyberattacks and data breaches that can reveal sensitive personal information. These attacks can be carried out by cybercriminals who are in it for the money, or by so-called advanced persistent threats, which are typically nation-state-sponsored attackers who gain access to networks and systems and remain there undetected, collecting information and personal data to eventually cause disruption or harm.

While laws and regulations such as the General Data Protection Regulation in the European Union and the California Consumer Privacy Act aim to safeguard user data, AI development and use have often outpaced the legislative process. The laws are still catching up on AI and data privacy. For now, you should assume any AI-powered device or platform is collecting data on your inputs, behaviors and patterns.

Using AI tools

Although AI tools collect people’s data, and the way this accumulation of data affects people’s data privacy is concerning, the tools can also be useful. AI-powered applications can streamline workflows, automate repetitive tasks and provide valuable insights.

But it’s crucial to approach these tools with awareness and caution.

When using a generative AI platform that gives you answers to questions you type in a prompt, don’t include any personally identifiable information, including names, birth dates, Social Security numbers or home addresses. At the workplace, don’t include trade secrets or classified information. In general, don’t put anything into a prompt that you wouldn’t feel comfortable revealing to the public or seeing on a billboard. Remember, once you hit enter on the prompt, you’ve lost control of that information.

Remember that devices which are turned on are always listening – even if they’re asleep. If you use smart home or embedded devices, turn them off when you need to have a private conversation. A device that’s asleep looks inactive, but it is still powered on and listening for a wake word or signal. Unplugging a device or removing its batteries is a good way of making sure the device is truly off.

Finally, be aware of the terms of service and data collection policies of the devices and platforms that you are using. You might be surprised by what you’ve already agreed to.

This post was first published on The Conversation.

Read next: Apple’s AI Critique Faces Pushback Over Flawed Testing Methods


by Web Desk via Digital Information World

Apple’s AI Critique Faces Pushback Over Flawed Testing Methods

A recent research paper from Apple raised eyebrows in the AI community after suggesting that today’s most advanced language models fail dramatically when faced with complex reasoning tasks. But that conclusion is now being challenged, not because the tasks were too difficult, but because, critics argue, the experiments weren’t fairly designed to begin with.

Alex Lawsen, a researcher at Open Philanthropy, has responded with a counter-study questioning the foundations of Apple’s claims. His assessment, published this week, argues that the models under scrutiny (including Claude, Gemini, and OpenAI’s latest systems) weren’t breaking down due to cognitive limits. Instead, he says they were tripped up by evaluation methods that didn’t account for key technical constraints.

One of the main flashpoints in the debate is the Tower of Hanoi, a well-known puzzle often used to test logical reasoning. Apple’s paper reported that models consistently failed when the puzzle became more complex - typically at eight disks or more. But Lawsen points out a critical issue: the models weren’t failing to solve the puzzle. They were often simply stopping short of writing out the full answer because they were nearing their maximum token limit - a built-in cap on how much text they can output in one go.

In several cases, the models even stated they were cutting themselves off to conserve output space. Rather than interpreting this as a practical limitation, Apple’s evaluation counted it as a failure to reason.

A second issue arose in the so-called River Crossing test, where models are asked to solve a version of the Missionaries and Cannibals puzzle. Apple included setups that were mathematically unsolvable, for example, asking the model to ferry six or more agents using a boat that could only carry three at a time. When the models recognized that the task couldn’t be completed under the given rules and refused to attempt it, they were still marked wrong.

A third problem involved how Apple’s system judged the responses. It relied on automatic scripts to evaluate output strictly against full, exhaustive solutions. If a model produced a correct but partial answer (or took a strategic shortcut) it still received a failing score. No credit was given for recognizing patterns, applying recursive logic, or even identifying the task’s limitations.

To illustrate how these issues can distort results, Lawsen ran a variation of the Hanoi test with a different prompt. Instead of asking the models to list every move, he instructed them to write a small program (in this case, a Lua function) that could solve the puzzle when executed. Freed from the burden of listing hundreds of steps, the models delivered accurate, scalable solutions, even with 15 disks - well beyond the point where Apple’s paper claimed they failed entirely.
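For readers curious what that kind of answer looks like: Lawsen’s prompt asked for a Lua function, but the idea translates directly. Below is a rough, illustrative Python equivalent (the names and structure are mine, not taken from the paper) showing why the program stays tiny even when the explicit move list explodes past any output limit.

```python
# Illustrative recursive Tower of Hanoi solver, similar in spirit to the small
# program (a Lua function in Lawsen's test) that models were asked to write
# instead of listing every move. Naming and structure here are illustrative only.

def hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B") -> list[tuple[str, str]]:
    """Return the full move list for n disks; a solution always takes 2**n - 1 moves."""
    if n == 0:
        return []
    moves = hanoi(n - 1, source, spare, target)    # move n-1 disks out of the way
    moves.append((source, target))                 # move the largest disk
    moves += hanoi(n - 1, spare, target, source)   # move the n-1 disks back on top
    return moves

# For 8 disks the explicit solution is already 255 moves; for 15 disks it is
# 32,767 moves -- printing them all is what hits model output caps, while the
# program that generates them stays a few lines long.
print(len(hanoi(8)))   # 255
print(len(hanoi(15)))  # 32767
```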

The implications go beyond academic nitpicking. Apple’s conclusions have already been cited by others as evidence that large AI models lack the kind of reasoning needed for more ambitious tasks. But if Lawsen’s analysis holds up, it suggests the story is more complicated. The models may struggle with long-form answers under tight output limits, but their ability to think through a problem algorithmically remains intact.

Of course, none of this means large reasoning models are problem-free. Even Lawsen acknowledges that designing systems that can reliably generalize across unfamiliar problems remains a long-term challenge. His paper calls for more careful experimentation: tests should check whether puzzles are actually solvable, track when models are being truncated by token budgets, and consider solutions in multiple formats, from plain text to structured code.

The debate boils down to a deeper question: are we really measuring how well machines think, or just how well they can type within a fixed character limit?

Image: DIW-Aigen

Read next: ChatGPT Linked to Delusions, Self-Harm, and Escalating Mental Health Crises
by Irfan Ahmad via Digital Information World

Friday, June 13, 2025

Google Tests Spoken Summaries in Search Results, But You’ll Have to Ask First

Google is experimenting with a new way to deliver search results, one that talks back. A feature called Audio Overviews is now available to users in the US through Google’s Search Labs, offering short spoken summaries for some queries, powered by the company’s Gemini AI model.

Once enabled, the tool introduces an audio clip that sounds like a brief conversation between two computer-generated voices. The new AI-powered feature discusses the topic at hand, aiming to give listeners a broad overview without needing to scroll through multiple websites. It’s not on by default; users have to opt in, and for now only certain topics trigger the option. But that could change. If past rollouts are anything to go by, this might soon become a default feature, with no option to turn it off.

When it appears, the player sits midway down the page, just below the “People also ask” section. Users are asked to generate the clip manually, and it may take several seconds before playback begins. The result is a back-and-forth between the AI voices, covering key points from the top-ranked search results.

Concerns are growing that spoken summaries may reduce website traffic, continuing a pattern of AI features eroding traditional publishers’ visibility.

Playback controls are simple: pause, skip, volume, and variable speed settings are all included. Below the player, Google lists the websites that contributed to the summary. Users can also rate the experience with a thumbs up or down, giving feedback on the audio or the experiment as a whole.

The idea behind Audio Overviews, according to Google, is to help people get a quick sense of unfamiliar topics, especially in situations where reading isn’t convenient - such as when commuting or multitasking. A suggested prompt is “how do noise cancellation headphones work?”, though the feature is already appearing for a growing range of searches.

The same audio format has previously appeared in other Google products, including NotebookLM, the Gemini app, and even Google Docs. This latest rollout to Search reflects the company’s continued shift toward more “multimodal” experiences, blending text, audio, and interactivity in a single interface.

But while the feature works relatively well for straightforward topics, it isn’t flawless. AI-generated summaries have occasionally shown inconsistencies or factual gaps, particularly when drawing from a broader set of online sources. Unlike NotebookLM, where the AI works from a curated document set, the open nature of Search can lead to less reliable interpretations.

There’s also the question of what this means for the wider web. If users rely on spoken summaries for quick answers, fewer may click through to original sources, a trend already affecting publishers as AI tools become more prominent in search.

For now, Audio Overviews remain opt-in and experimental. But given Google’s recent history, that may not be the case for long. Like its earlier text-based AI summaries, which moved from limited trials to default search features within weeks, this voice-driven format may follow a similar path.

Read next:

• From OpenAI's o3 to Grok-3 Vision: These AI Models Took the Mensa Test, Results May Surprise You

• Remote Regions in the United States Still Struggle With High Costs and Poor Internet Access
by Irfan Ahmad via Digital Information World

Remote Regions in the United States Still Struggle With High Costs and Poor Internet Access

There’s paying a lot. And then there’s paying a lot for something that barely works when you need it.

In Wyoming, the average person gives up about an hour and 25 minutes of their monthly working time just to cover a standard home internet bill. For that, they get download speeds that don’t even reach 110 Mbps. On paper, that number might not seem terrible. But if you’ve tried joining a Zoom call while someone else in the house is streaming or uploading files, you’ll notice just how quickly that speed starts to fall apart.

In Remote US States, Staying Connected Still Means Paying More for Less Every Month

Montana’s not far behind. Slightly faster speeds, but nearly the same hit to your paycheck. Alaska’s situation feels familiar: more remote geography, similar results. None of this is new, but when a research group from Spinblitz laid the numbers out side by side — local wages, cost of service, and actual internet speed — the pattern wasn’t subtle.

In the bottom 10 states for internet value, most are rural, many are lower-income, and all of them are handing over too much time or money (often both) for lackluster service. In Iowa, it’s over an hour and a half of wages for a connection that doesn’t quite break 165 Mbps. In South Dakota, you get more speed, close to 190 Mbps, but still lose 1.6 hours of labor just paying for it.

And then there’s Idaho. Less than an hour of work a month, which is better. But when the speed hovers around 140 Mbps, it’s not quite the bargain it looks like at first glance.

New Mexico’s somewhere in the middle. Speeds aren’t the worst, and costs aren’t the highest. But it’s still sitting in the same awkward group: states where you’re overpaying, one way or another.

Some states, such as Maine, West Virginia, and Arkansas, don’t suffer from the absolute slowest speeds, but the cost-to-speed ratio keeps them stuck near the bottom of the value list. The math shifts slightly state by state, but the equation rarely balances out.

There’s something especially frustrating about the fact that in many of these places, digital infrastructure has been promised, delayed, and debated for years. And while it’s true that stringing fiber across hundreds of miles of remote land costs more than wiring up cities, the end result is the same: folks in less-populated states are shelling out more time just to keep up with the rest.

The issue goes beyond monthly bills and download speeds. At its heart, it’s about whether people can realistically stay connected, to their jobs, their education, their healthcare, without falling behind for reasons that shouldn’t be this common. In many parts of the country, that’s still far from guaranteed.

Below is the full list of states ranked by internet value, from high prices and poor service to fair costs and fast speeds.

State | Median Download (Mbps) | Internet Value Index | Affordability (hours of work needed to pay for internet)
Wyoming 105.23 73.9 1.42
Montana 111.16 80.9 1.37
Alaska 119.52 90 1.33
Iowa 162.19 100.4 1.62
South Dakota 189.22 117.9 1.6
New Mexico 125.74 122.3 1.03
West Virginia 171.87 135.1 1.27
Maine 200.39 143.5 1.4
Arkansas 158.51 158.2 1
Idaho 140.68 159.6 0.88
Mississippi 177.39 167 1.06
Vermont 142.46 167.1 0.85
Wisconsin 201.29 172.4 1.17
Kentucky 210.34 181.4 1.16
Alabama 208.64 186.2 1.12
Hawaii 225.93 189 1.2
Georgia 188.13 213.5 0.88
Indiana 201 214.9 0.94
Illinois 187.39 218.7 0.86
North Carolina 231.41 222.1 1.04
New Jersey 234 222.2 1.05
Pennsylvania 205.02 223.2 0.92
Michigan 201.51 223.7 0.9
California 226.89 232.2 0.98
Minnesota 183.47 233.9 0.78
Tennessee 230.27 236 0.98
Oregon 195.65 238.6 0.82
North Dakota 210.37 239.7 0.88
Utah 213.17 250 0.85
Connecticut 244.23 251.2 0.97
Delaware 237.42 257.5 0.92
Texas 224.77 258.4 0.87
Colorado 199.92 261.2 0.77
Washington 188.77 263.1 0.72
Maryland 226.61 270.9 0.84
Florida 238.3 273.1 0.87
New Hampshire 234.5 277.9 0.84
South Carolina 224.52 287.8 0.78
Ohio 216.68 288.1 0.75
Oklahoma 188.31 288.6 0.65
Louisiana 198.44 289.4 0.69
New York 226.13 291.8 0.77
Massachusetts 225.76 319.8 0.71
Nebraska 199.43 326.8 0.61
Kansas 210.49 331.6 0.63
Missouri 207.74 337.1 0.62
Arizona 199.23 345.5 0.58
Nevada 228.69 362.7 0.63
Virginia 213.82 385.7 0.55
Rhode Island 257.48 463.6 0.56

Methodology: The study ranks US states by comparing internet speed, monthly cost, and local wages to find where people pay the most for the least. Affordability was measured as the hours of work, at local wages, needed to cover the monthly internet bill. The value index came from dividing median download speed by that affordability figure.
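Read together with the table above, the two measures reduce to simple arithmetic. The sketch below is a minimal reconstruction based on the stated methodology; the function names are mine, the raw bill and wage figures are not published, and only the derived numbers can be cross-checked against the table.

```python
# Minimal sketch of the two metrics described in the methodology above.
# Function names are mine; the study publishes only the derived figures
# (hours of work and the value index), not the underlying bills and wages.

def affordability_hours(monthly_bill: float, hourly_wage: float) -> float:
    """Hours of work needed to cover one month of internet service."""
    return monthly_bill / hourly_wage

def value_index(median_download_mbps: float, hours_of_work: float) -> float:
    """Speed delivered per hour of labor spent on the bill (higher is better)."""
    return median_download_mbps / hours_of_work

# Cross-check against the published Wyoming row: 105.23 Mbps and 1.42 hours of
# work give an index of roughly 74, in line with the listed 73.9 after rounding.
print(round(value_index(105.23, 1.42), 1))  # 74.1
```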

Read next: Crypto Search Surge Places New York at the Forefront of U.S. Digital Currency Interest
by Irfan Ahmad via Digital Information World

From OpenAI's o3 to Grok-3 Vision: These AI Models Took the Mensa Test, Results May Surprise You

A recent intelligence benchmark has placed today’s most advanced AI models under the same kind of cognitive scrutiny used to assess exceptional human thinkers, and the outcome tells a story of contrasts between raw verbal reasoning and multimodal complexity.

The data comes from the Mensa Norway IQ test, a well-known measure of high-level reasoning, where scores above 130 often mark out genius-level ability. Although the test was designed for people, researchers have begun using it to compare how artificial intelligence systems perform when asked to solve the same kinds of abstract problems humans struggle with.

At the top of the current rankings sits OpenAI’s o3, which scored 133, just shy of the upper boundary of human IQ scales. Not far behind is Gemini Thinking, Google’s language-focused model, which reached 128. These results suggest that, at least in abstract problem solving through words and logic, some AI systems are not just matching human performance but quietly exceeding it.

The upper tier includes OpenAI’s o4-mini with a score of 126, Gemini Pro at 124, and both Claude-4 Opus and Claude-4 Sonnet tied at 118. Even models just below this line, like Grok-3 Think (111), Llama-4 (107), and DeepSeek-R1 (105), are operating within or above the average human range.
But the drop-off begins sharply as models shift from text-only processing to visual capabilities. Systems like Claude-4 Sonnet Vision, GPT-4.5, Grok-3, and deepseek-v3, all scoring 97, sit right at the border of human average. Just below them, Gemini Pro Vision landed at 96, while GPT-4 Omni (Verbal) trailed at 91, despite its verbal focus.

OpenAI’s o4-mini-high reached 90, but the decline continues. Visual variants such as o3-vision and Bing’s AI scored 86, followed by Mistral (85) and Claude-4 Opus Vision (80). Further down the list, models like OpenAI o1-pro Vision (79) and Llama-3 Vision (70) show a widening gap between multimodal ambition and actual performance on reasoning tasks.

At the lowest end sit GPT-4 Omni Vision and Grok-3 Think Vision, managing only 63 and 62 respectively — scores that, in human terms, would reflect severe limitations in pattern recognition and logic.

What becomes clear through this ranking is that text-based reasoning remains AI’s strong suit. Models trained purely on language continue to outperform their multimodal counterparts when faced with symbol-based puzzles and logic problems. While vision-enabled AIs might be better suited for real-world perception, they appear less capable when reasoning is abstracted from context and stripped to logic alone.


These findings underscore a split in the development arc of artificial intelligence. Verbal models are now working at, and sometimes above, human cognitive levels. But giving machines the ability to “see” doesn’t yet mean they understand. At least not in the ways intelligence is traditionally measured.

Model | Mensa Norway IQ Test Score
OpenAI o3 133
Gemini Thinking 128
OpenAI o4-mini 126
Gemini Pro 124
Claude-4-Opus 118
Claude-4-Sonnet 118
Grok-3-Think 111
Llama-4 107
DeepSeek-R1 105
OpenAI o1-pro 102
Average Human 100
Claude-4-Sonnet-Vision 97
gpt-4.5 97
deepseek-v3 97
Grok-3 97
Gemini Pro (Vision) 96
GPT4 Omni (Verbal) 91
OpenAI o4-mini-high 90
OpenAI o3-vision 86
Bing 86
Mistral 85
Claude-4-Opus-Vision 80
OpenAI o1-pro-vision 79
Llama-3 (Vision) 70
GPT4 Omni (Vision) 63
Grok-3-Think-Vision 62

H/T: Trackingai.

Read next: Context, Emotion, and Biology: What AI Misses in Language Comprehension
by Irfan Ahmad via Digital Information World