Wednesday, April 8, 2026

Majority of Americans Worry Government Misuse of Personal Data Could Lead to Surveillance, Chilling of Benefits, and Demand Accountability

by Elizabeth Laird, Maddy Dwyer, Quinn Anex-Ries

Limiting the collection, sharing, and consolidation of personal data that is held by government agencies has been a decades-long, bipartisan priority across the United States. [1] But these limits have been challenged over the past year as the federal government has cast aside long-standing privacy norms and initiated unprecedented access to and sharing of administrative data held by federal and state agencies. These actions have spurred significant pushback from the public, states, and civil society organizations, as well as the courts. They have also prompted many individuals in the United States to call into question how and why the government uses their information.

To better understand public sentiment and concerns around the government’s collection, sharing, and consolidation of personal data, the Center for Democracy & Technology (CDT) conducted nationally representative polling of U.S. adults (see more on the methodology, including n sizes, on p. 12 of the report). CDT found that concern is consistent and high and that people across the United States want to hold government agencies accountable for protecting the privacy of their personal data. Specifically:
  • A majority of Americans (74 percent) are concerned about the privacy and security of their personal data that is held by the government;
  • Americans report that government misuse of data could lead to real-life impacts, such as surveillance and the chilling of rightful access to public benefits;
  • Americans agree that privacy laws and policies are important but are not familiar with their legal rights;
  • Worries about personal data are high, with certain data elements and reasons for data sharing raising particular concern, especially related to law and immigration enforcement; and
  • Americans want government held accountable for protecting their personal data.
Finally, certain communities express higher levels of concern regarding personal data that is stored by government agencies:
  • Communities of color are more concerned about data sharing with law and immigration enforcement agencies;
  • Older Americans are consistently more concerned about the privacy and security of personal data that is collected and stored by government agencies; and
  • Concerns and demands for government accountability are high across political affiliation, with Democrats reporting higher levels of concern on issues related to sharing data without consent.



Read the full report.

Read the summary brief.

Explore the privacy explainer.

Read the coalition letter + full list of signatories.

Read the press release.

[1] Elizabeth Laird, Kristin Woelfel, and Quinn Anex-Ries, CDT and The Leadership Conference Release New Analysis of DOGE, Government Data, and Privacy Trends, Center for Democracy & Technology (Mar. 19, 2025), https://cdt.org/insights/cdt-and-the-leadership-conference-release-new-analysis-of-doge-government-data-and-privacy-trends/; U.S. Congress, Senate Committee on Government Operations, Legislative History of the Privacy Act of 1974 (Sept. 1976), https://www.justice.gov/d9/privacy_source_book.pdf.

Note: This post was originally published on CDT.org, and is republished here under CC BY 4.0 with minor edits, including the addition of percentages, charts, and updated title.

Reviewed by Irfan Ahmad.

Read next: Americans Use AI More but Express Low Trust, Gen Z Most Likely to Expect Job Losses
by External Contributor via Digital Information World

Tuesday, April 7, 2026

Americans Use AI More but Express Low Trust, Gen Z Most Likely to Expect Job Losses

By Quinnipiac University Poll

As artificial intelligence continues to leap from concept to reality in just about everything we do, an increasing number of Americans see more harm than good when it comes to AI's impact on their daily lives and education, and they are divided about its impact on health care. Trust in AI remains low. A slight majority say the pace of AI's development is faster than they expected, and there is more concern than excitement about AI. Those concerns are apparent in views related to AI's use in the workforce, politics, the military, and AI data centers. These are among the findings in a Quinnipiac (KWIN-uh-pea-ack) University national poll of adults released today examining attitudes about artificial intelligence. The survey was conducted in collaboration with the Quinnipiac University School of Computing & Engineering and the Quinnipiac University School of Business.

The Age Of Artificial Intelligence: Americans' AI Use Increases While Views On It Sour, Quinnipiac University Poll On AI Finds; 7 In 10 Think AI Will Cut Jobs With Gen Z The Most Pessimistic
Image: Microsoft Copilot / Unsplash

AI USE

Americans were given a list of eight activities, some of which were included in Quinnipiac University's April 16, 2025 poll on AI, and asked whether they have used AI tools for:

  • Researching topics they are curious about: 51 percent say yes, up from 37 percent in April 2025;
  • Writing something for them: 28 percent say yes;
  • School or work projects: 27 percent say yes, while 24 percent said yes in April 2025;
  • Analyzing data: 27 percent say yes, up from 17 percent in April 2025;
  • Creating images: 24 percent say yes, up from 16 percent in April 2025;
  • Medical advice: 20 percent say yes;
  • Personal advice: 15 percent say yes;
  • Companionship: 5 percent say yes.

Twenty-seven percent of Americans volunteered that they have never used AI tools, down from 33 percent in April 2025.

TRUST

When Americans were asked how much of the time they think they can trust the information generated by AI, 76 percent think they can trust AI either hardly ever (27 percent) or only some of the time (49 percent), while 21 percent think they can trust AI either most of the time (18 percent) or almost all of the time (3 percent). This is largely unchanged from Quinnipiac University's April 2025 poll.

"The contradiction between use and trust of AI is striking. Fifty-one percent say they use AI for research, and many also use it for writing, work, and data analysis. But only 21 percent trust AI-generated information most or almost all of the time. Americans are clearly adopting AI, but they are doing so with deep hesitation, not deep trust,"said Chetan Jaiswal, Ph.D., Associate Professor of Computer Science and Associate Chair, Department of Computing, Quinnipiac University School of Computing and Engineering.

EXCITEMENT & CONCERN

Just over one-third of Americans (35 percent) are either very excited (6 percent) or somewhat excited (29 percent) about AI, while 62 percent are either not so excited (29 percent) or not excited at all (33 percent).

Eighty percent are either very concerned (38 percent) or somewhat concerned (42 percent) about AI, while 18 percent are either not so concerned (10 percent) or not concerned at all (8 percent).

High levels of concern are expressed across all age groups:

  • Gen Z (1997 - 2008): very concerned (35 percent), somewhat concerned (43 percent), not so concerned (14 percent), and not concerned at all (7 percent);
  • Millennials (1981 - 1996): very concerned (39 percent), somewhat concerned (42 percent), not so concerned (7 percent), and not concerned at all (10 percent);
  • Gen X (1965 - 1980): very concerned (36 percent), somewhat concerned (43 percent), not so concerned (8 percent), and not concerned at all (10 percent);
  • Baby Boomers (1946 - 1964): very concerned (39 percent), somewhat concerned (43 percent), not so concerned (10 percent), and not concerned at all (6 percent);
  • Silent Generation (1928 - 1945): very concerned (31 percent), somewhat concerned (41 percent), not so concerned (15 percent), and not concerned at all (8 percent).

PACE

Fifty-one percent of Americans say the pace of AI development is moving faster than they expected, 38 percent say it is moving about as fast as they expected, and 8 percent say it is moving not as fast as they expected.

IMPACT

Fifty-five percent of Americans think AI will do more harm than good in their day-to-day lives, while 34 percent think AI will do more good than harm, with 11 percent not offering an opinion.

This compares to April 2025 when 44 percent thought AI would do more harm than good in their day-to-day lives and 38 percent thought AI would do more good than harm, with 18 percent not offering an opinion.

When Americans were asked how much they think their day-to-day lives are currently impacted by AI, two in ten (21 percent) think a lot, 29 percent think some, 30 percent think only a little, and 17 percent think their day-to-day lives are not impacted at all by AI. This is largely unchanged from April 2025.

When it comes to education, nearly two-thirds of Americans (64 percent) think AI will do more harm than good, while 27 percent think AI will do more good than harm.

This compares to April 2025 when 54 percent thought AI would do more harm than good and 32 percent thought AI would do more good than harm.

When it comes to health care, 45 percent of Americans think AI will do more harm than good, while 43 percent think AI will do more good than harm.

HEALTH CARE: HUMAN VS. AI

Americans were asked, if it were proven that an AI tool is more accurate than a human at reading medical scans, whether they would prefer to rely solely on information provided by AI, solely on information provided by a human, or on a combination of both AI and a human.

An overwhelming majority (81 percent) say they would prefer to rely on a combination of both AI and a human, 14 percent say they would prefer to rely solely on information provided by a human, and 3 percent say they would prefer to rely solely on information provided by AI.

"It's telling that most people would still want a human involved in reading medical scans even if it were proven that the AI tool was more accurate. This desire for a 'second opinion' from a human being, even if proven they aren't as accurate as AI, reflects the lack of trust in AI that we see throughout the poll."said Brian O'Neill, Ph.D., Associate Professor of Computer Science and Associate Dean, Quinnipiac University School of Computing and Engineering.

JOBS OUTLOOK

Seventy percent of Americans think advancements in AI are likely to lead to a decrease in the number of job opportunities for people, 7 percent think they are likely to lead to an increase, and 18 percent think advancements in AI will not make much of a difference.

In April 2025, 56 percent of Americans thought advancements in AI were likely to lead to a decrease in the number of job opportunities for people, 13 percent thought they were likely to lead to an increase, and 24 percent thought advancements in AI would not make much of a difference.

In today's poll, there are differences between age groups regarding how Americans think advancements in AI are likely to affect the number of job opportunities for people:

  • Gen Z (1997 - 2008): decrease (81 percent), increase (4 percent), and not make much of a difference (12 percent);
  • Millennials (1981 - 1996): decrease (71 percent), increase (6 percent), and not make much of a difference (20 percent);
  • Gen X (1965 - 1980): decrease (67 percent), increase (7 percent), and not make much of a difference (20 percent);
  • Baby Boomers (1946 - 1964): decrease (66 percent), increase (10 percent), and not make much of a difference (20 percent);
  • Silent Generation (1928 - 1945): decrease (57 percent), increase (13 percent), and not make much of a difference (20 percent).

Among Americans who are employed, 71 percent of white-collar workers and 73 percent of blue-collar workers think advancements in AI are likely to lead to a decrease in the number of job opportunities for people.

"Younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market. AI fluency and optimism here are moving in opposite directions,"said Tamilla Triantoro, Ph.D., Associate Professor of Business Analytics and Information Systems, Quinnipiac University School of Business.

Among Americans who are employed, 30 percent are either very concerned (10 percent) or somewhat concerned (20 percent) that artificial intelligence may make their jobs obsolete, while nearly 7 in 10 Americans (69 percent) are either not so concerned (21 percent) or not concerned at all (48 percent).

This compares to April 2025 when 21 percent of employed Americans were either very concerned (6 percent) or somewhat concerned (15 percent) that AI might make their jobs obsolete and 78 percent were either not so concerned (22 percent) or not concerned at all (56 percent).

"Americans are more worried about what AI may do to the labor market than about what it may do to their own jobs. People seem more willing to predict a tougher market than to picture themselves on the losing end of that disruption - a pattern worth watching as the technology moves deeper into the workplace,"added Triantoro.

AI AS A SUPERVISOR

Eighty percent of Americans would be unwilling to have a job where their direct supervisor was an AI program that assigned their tasks and schedules, while 15 percent would be willing.

TRANSPARENCY & REGULATION

Seventy-six percent of Americans think that businesses are not doing enough to be transparent about their use of AI, while 12 percent think businesses are doing enough, with 11 percent not offering an opinion. This is largely unchanged from Quinnipiac University's April 2025 poll.

Seventy-four percent of Americans think the government is not doing enough to regulate the use of AI, while 13 percent think the government is doing enough, with 13 percent not offering an opinion. This compares to April 2025 when 69 percent of Americans thought the government was not doing enough to regulate the use of AI and 15 percent thought the government was doing enough, with 16 percent not offering an opinion.

"Americans are not rejecting AI outright, but they are sending a warning. Too much uncertainty, too little trust, too little regulation, and too much fear about jobs,"added Jaiswal.

MILITARY USE

A slight majority of Americans (51 percent) oppose the military using AI to select military targets, while 36 percent support it.

There are stark gaps between the nation's youngest and oldest generations.

Gen Z (69 - 24 percent) opposes the military using AI to select military targets, while the Silent Generation (47 - 32 percent) slightly supports the military using AI to select military targets.

When it comes to the military using AI in surveillance for security purposes, Americans are split, with 45 percent supporting it and 44 percent opposing it.

Gen Z is set apart from other generations by its clear opposition to the military using AI in surveillance for security purposes:

  • Gen Z (1997 - 2008): 36 percent support, 58 percent oppose, 6 percent not offering an opinion;
  • Millennials (1981 - 1996): 44 percent support, 49 percent oppose, 7 percent not offering an opinion;
  • Gen X (1965 - 1980): 49 percent support, 37 percent oppose, 14 percent not offering an opinion;
  • Baby Boomers (1946 - 1964): 53 percent support, 36 percent oppose, 10 percent not offering an opinion;
  • Silent Generation (1928 - 1945): 48 percent support, 29 percent oppose, 23 percent not offering an opinion.
"The negative response to using AI for military target selection, and even the mixed responses to using AI for military surveillance purposes, further reflect the doubts people have about AI and who develops and controls it. The generational gap here also stands out, as younger generations are the most skeptical about military applications of AI,"added O'Neill.

POLITICAL ADS

Americans were asked how they think the federal government should handle the use of AI-generated images or audio in political ads.

Thirty-eight percent think the federal government should ban all use of them, 45 percent think the federal government should require disclosure of the use of AI-generated images or audio in political ads, and 11 percent think the federal government should not regulate the use of AI-generated images or audio in political ads.

AI DATA CENTERS

Americans oppose the building of an AI data center in their community 65 - 24 percent, with majority opposition across the board.

Those who oppose the building of an AI data center in their community were given a list of three possible reasons and asked if any are part of the reason for their opposition: 72 percent say electricity costs, 64 percent say water use, and 41 percent say noise.

Those who support the building of an AI data center in their community were given a list of three possible reasons and asked if any are part of the reason for their support: 77 percent say job creation, 53 percent say increasing tax revenue, and 47 percent say the potential for creating a tech hub.

SPOTTING A FAKE

A majority of Americans (56 percent) are either very confident (18 percent) or somewhat confident (38 percent) that they can tell the difference between an authentic video or recording and a fake video or recording generated by AI, while 42 percent are either not so confident (22 percent) or not confident at all (20 percent).

Nearly 3 in 10 Americans (28 percent) say they have shared a video that they later found out was AI-generated, while 68 percent say they have not.

1,397 U.S. adults nationwide were surveyed from March 19th - 23rd with a margin of error of +/- 3.3 percentage points, including the design effect. The survey included 800 employed adults with a margin of error of +/- 4.3 percentage points, including the design effect.
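The reported margins of error can be roughly reconstructed from the stated sample sizes. A minimal sketch of the standard formula, with the design effect inflating the simple-random-sampling margin (the 1.55 design effect here is an assumption chosen to match the published figures, not a number from the release):

```python
import math

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    estimated from a sample of size n, inflated by a design effect deff."""
    return z * math.sqrt(deff * p * (1 - p) / n) * 100

# Simple random sampling alone would give about +/-2.6 points for n=1,397;
# a design effect near 1.55 (assumed) reproduces the reported +/-3.3
# and, for the n=800 employed subsample, the reported +/-4.3.
```

Under those assumptions, `margin_of_error(1397, deff=1.55)` is about 3.3 and `margin_of_error(800, deff=1.55)` about 4.3, consistent with the figures above.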

The Quinnipiac University Poll, directed by Doug Schwartz, Ph.D. since 1994, conducts independent, non-partisan national and state polls on politics and issues. Surveys adhere to industry best practices and are based on probability-based samples using random digit dialing with live interviewers calling landlines and cell phones.

This article was originally published by Quinnipiac University Poll and is republished here with permission. Read the full poll, including questions and methodology, here.

Edited by Asim BN.

Read next: Few Americans Turn to AI Chatbots for News


by External Contributor via Digital Information World

Few Americans Turn to AI Chatbots for News

by Valentine Fourreau, Statista

Recent data published by Pew Research Center shows that in 2025, a large majority (86 percent) of U.S. adults said they at least sometimes get news from a smartphone, computer or tablet, including 56 percent who said they do so often. This made digital devices the most often used source of news for American adults, ahead of television (used "often" by 32 percent of respondents) and radio (11 percent), reflecting an evolving news environment.

Yet, data from a survey conducted by Pew Research Center in December 2025 shows that most U.S. adults turn to their preferred news organization when looking for more information about a breaking news story. This was the most common answer, cited by 36 percent of respondents, ahead of a search engine, favored by 28 percent, and social media (19 percent). Interestingly, while AI adoption has become mainstream in some countries, only one percent of American adults said they turned to AI chatbots for information about breaking news stories. According to a recent Statista study, Americans remain on the fence about artificial intelligence and its uses.

Pew Research 2025 shows Americans favor familiar news sources; AI chatbots remain largely unused.

Note: This post was originally published on Statista and is republished here under CC BY-ND.

Reviewed by Irfan Ahmad.

Read next:

• New MIT research overturns prior view about how AI capabilities could overtake human workers

• Not All Dark Web Users Are Criminal, But Certain Traits Are More Common Among Users Reporting Its Access

by External Contributor via Digital Information World

Monday, April 6, 2026

Not All Dark Web Users Are Criminal, But Certain Traits Are More Common Among Users Reporting Its Access

By Gisele Galoustian, Florida Atlantic University

Image: Sora Shimazaki / Pexels

The dark web is sometimes seen as a shadowy part of the internet, but it also has legitimate uses, including accessing censored information and sharing files securely. Its anonymity and privacy features, however, can make it appealing to those drawn to riskier or illicit online activity.

As interest in the dark web grows, researchers are taking a closer look at who accesses it. The platform creates conditions where motivated offenders, potential victims and little oversight converge, and traits like low self-control and peer influence may help explain who is drawn to it. Yet criminology-based studies comparing dark web and surface web users are scarce.

To help fill that gap, research from Florida Atlantic University and collaborators analyzed survey data collected from a national sample of 1,750 adults in the United States, examining whether factors such as prior criminal behavior, low self-control, deviant peer groups and attitudes toward crime are linked to self-reported dark web use.

The researchers first examined whether people who reported having a criminal record were more likely to have accessed the dark web. Next, they looked at self-control, assessing whether individuals with lower self-control – a trait tied to impulsive and risk-taking behavior – were more likely to use the platform. Finally, they explored the role of social influences and attitudes by analyzing whether having more peers who engage in online deviance, as well as holding more favorable views toward rule-breaking and violence, were associated with dark web access.

Results of the study, published in the Journal of Crime and Justice, reveal clear differences between dark web users and surface web users across each of the criminological factors examined. About one-third of dark web users reported a prior criminal conviction – nearly three times the rate of surface web users (33.6% vs. 12.6%). They also scored significantly higher on measures of low self-control, peer cyber deviance, and criminal attitudes, including support for larceny, online deviance, and especially concerning, physical violence against others.

Across all models, being male and being younger were also linked to a higher likelihood of dark web use, with some models also suggesting that being heterosexual and having more education are associated with dark web use.

Overall, these findings suggest that past criminal behavior, impulsiveness, social influences and favorable attitudes toward deviance all play a role in who chooses to access the dark web, providing strong empirical support for criminological theories in this digital context.

“It’s important to be clear: accessing the dark web is not inherently deviant or illegal, and it supports many legitimate activities, from private communication to accessing censored information,” said Ryan C. Meldrum, Ph.D., senior author and director of the School of Criminology and Criminal Justice within FAU’s College of Social Work and Criminal Justice. “What our research shows, however, is that the platform also tends to attract some individuals whose behavioral, social and attitudinal profiles resemble those involved in criminal activity. In this sense, the dark web is a risky digital environment – one that can facilitate crime and increase the likelihood of victimization, all while operating under limited law enforcement oversight.”

Supplemental analyses from the study reveal that social learning factors may help explain why low self-control links to dark web access. Specifically, nearly half of the connection between low self-control and using the platform appears to be explained through the peers individuals associate with and the attitudes they form. This suggests that people with lower self-control may select peers who reinforce risky or deviant behaviors and attitudes, giving them the knowledge and skills needed to navigate the dark web.

The study underscores the need for further research into the small but important subpopulation of internet users who access the dark web, particularly those with the intent to engage in illicit activities.

“As the internet continues to evolve, understanding who accesses the dark web and why is critical,” Meldrum said. “Our study points to the importance of balancing awareness of potential risks with recognition of the legitimate, everyday uses of these hidden online spaces.”

Study co-authors are Raymond D. Partin, Ph.D., Department of Criminology and Criminal Justice, University of Alabama; and Peter S. Lehmann, Ph.D., Department of Criminal Justice and Criminology, Sam Houston State University.

Note: This post was originally published by Florida Atlantic University and is republished here with permission.

Reviewed by Irfan Ahmad.

Read next: New MIT research overturns prior view about how AI capabilities could overtake human workers
by External Contributor via Digital Information World

New MIT research overturns prior view about how AI capabilities could overtake human workers

Anthropic CEO Dario Amodei has said that AI could surpass “almost all humans at almost everything” shortly after 2027. While AI’s capabilities are certainly improving, such rapid progress might seem at odds with findings that show AI is still failing at 95%+ of remote freelance projects, and continues to struggle with hallucination, long-term planning, and forms of abstract reasoning that humans find easy. But recent work from METR has found evidence that LLMs can gain capabilities in rapid surges — jumping from succeeding almost never to almost always in just a few years. If this is true across the economy, it could mean that workers could be blindsided by AI advances.

In their study, MIT researchers characterize these out-of-nowhere capability gains as “crashing waves” and ask if they are likely to be an economy-wide phenomenon or whether advances in AI come as a “rising tide”. Across thousands of real world tasks, the team finds that, while indeed AI capabilities are improving quickly, AI capabilities are rising more smoothly, suggesting that “crashing waves” are the exception, not the rule.

Notes: Each line plots the estimated logistic relationship between AI response quality and the time required to complete a task instance, based on Equation (1) estimated without controls (with 95% confidence bands). Coefficients are shown as log-odds on the figure. Standard errors are clustered by participant in parentheses. Significance levels: *** 1%, ** 5%, * 10%. The red line corresponds to responses that are minimally sufficient or better (score ≥ 7), the yellow line to responses which are average-quality or better (score ≥ 8), and the blue line to superior-quality responses (score = 9). Dots represent binned raw data: we partition task instances into 40 equally sized, log-spaced time bins and compute success rates and sample sizes within each bin. For each quality threshold, two of the 40 bins contain no observations.
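The binned raw data described in the figure notes — log-spaced time bins with per-bin success rates and sample sizes — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code; the function name and interface are invented for the example.

```python
import numpy as np

def binned_success_rates(times, successes, n_bins=40):
    """Partition task instances into log-spaced time bins and compute
    the success rate and sample size within each bin.

    times     : positive task durations (one per task instance)
    successes : 0/1 indicators of whether the AI response met the
                quality threshold for that instance
    """
    times = np.asarray(times, dtype=float)
    successes = np.asarray(successes, dtype=float)
    # Log-spaced bin edges spanning the observed durations.
    edges = np.geomspace(times.min(), times.max(), n_bins + 1)
    # Map each instance to a bin index in [0, n_bins - 1].
    idx = np.clip(np.digitize(times, edges) - 1, 0, n_bins - 1)
    rates, sizes = [], []
    for b in range(n_bins):
        mask = idx == b
        sizes.append(int(mask.sum()))
        # Empty bins (as the notes mention can occur) get NaN.
        rates.append(float(successes[mask].mean()) if mask.any() else np.nan)
    return edges, np.array(rates), np.array(sizes)
```

Plotting the per-bin rates against the bin midpoints (on a log time axis) would reproduce the dotted raw data underlying the fitted logistic curves.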

The MIT researchers find:

  • AI performance is rising smoothly across tasks in many parts of the economy and across very different task durations, suggesting that AI capabilities are a rising tide. “This isn’t inherently protective for workers, tides could still rise quickly, but it does suggest that workers and policymakers monitoring progress should be able to see AI improvement coming," says senior author Neil Thompson.
  • AI capabilities are already strong. The researchers focused on the 63% of tasks that workers in the US economy do that are text-based, and therefore could potentially be done by LLMs. Amongst these, when given the right information, LLMs were able to complete 60% of the tasks they were given at a level that a manager would describe as “minimally sufficient” without human involvement. Only 26% were of “superior” quality. Said lead researcher Matthias Mertens, “LLMs demonstrated impressive proficiency, even on their own.”
  • AI capabilities are rising quickly. While the MIT study finds that 2027 is too aggressive an estimate for AI to broadly eclipse the performance of human workers, it still finds rapid progress. Their projections suggest that AI will achieve 80% success rates on most tasks by 2029. Although, as Dr. Thompson stressed, “these depend on continued progress in AI hardware and algorithms and scaling of AI models. If these slow, so will the pace of AI capability increase.”
  • The study’s findings have important implications for policymakers and businesses trying to prepare for the coming changes brought about by AI.

You can read the full paper on the FutureTech website.

This post was originally published by MIT CSAIL and is republished here with permission.

Reviewed by Asim BN.

Read next:

• Why AI Leaders Are So Focused On Image Generation

Think different — for 50 years


by External Contributor via Digital Information World

Saturday, April 4, 2026

Think different — for 50 years

By Christina Pazzanese, Harvard Staff Writer

Management, branding, marketing, and history scholars trace the ways Apple changed industries, our relationship to tech — and to each other.

On April Fool’s Day 1976, two college dropouts, Steve Wozniak and Steve Jobs, and a friend, Ronald G. Wayne, formed a company from the garage of Jobs’ parents’ house in Los Altos, a small city in Silicon Valley then in its infancy.

For the cheeky price of $666.66 (Wozniak liked repeating digits), buyers could get what they called the Apple-1, a “Woz”-engineered, personal computer consisting of a bare circuit board with an 8-bit microprocessor and 4K of RAM — monitor, keyboard, and power supply sold separately.

The Apple-1 was only capable of running elementary programs and games. Two hundred were made.

It may have seemed foolhardy then to push a product few Americans were even aware existed. But 50 years later, Apple is among the most popular and iconic consumer brands and, with a $3.8 trillion valuation, one of the world’s most successful companies.

In these edited reflections, Harvard analysts explain how Apple has transformed the personal computing, music, and communications industries. It has also revolutionized marketing and advertising, industrial and product design, and retail, and helped shift our relationship to tech — and, arguably, to one another.

Our experts include David B. Yoffie, Baker Foundation Professor, Max and Doris Starr Professor of International Business Administration, Emeritus; Marc Aidinoff, assistant professor of the history of science; and Jill Avery, senior lecturer of business administration and C. Roland Christensen Distinguished Management Educator.

Invented three industries

Yoffie: I would put Apple alongside IBM, Ford, and General Electric — one of the most important American companies to emerge during its period of explosive growth because they impacted so much of American life and the way American business has operated.

When I think about Apple’s contribution, I start by thinking that they fundamentally invented three new industries, all of which have had a huge impact on mankind. The first one being the personal computer. The Apple II was really the first personal computer.

Second is what they did with the iPod, which was essentially a redesign of the entire music industry.

And the third is the iPhone, which has become the single most successful consumer electronic product in the history of the world by almost any definition. It revolutionized personal communications.

So, at a very fundamental level, Apple has revolutionized the way in which we live our lives, in addition to becoming one of the most successful companies in the history of the world.

Image: Jonathan Stechi / Unsplash

A user story

Aidinoff: As a historian of technology, I would flip that around to say they created the users for those things.

They taught people that they wanted and could use things in this way: that we could take a computer, which is a tool for doing advanced mathematics, carry it around on our phones in our pockets, and get music recommendations.

So, I think of that as a user story as much as a they-created-the-category story.

The secret sauce

Yoffie: This was part of Steve Jobs’ genius — his ability to figure out products that people wanted, even though they didn’t know they needed them.

It was not obvious at any point along the history of computers that you were going to have a graphical user interface and a mouse. It was not obvious to people that they wanted to keep all of their music on a small, single device.

Similarly with the iPhone, no one really believed that you could do this multitouch, internet-access device and make it so broadly functional until Steve was able to demonstrate the power of what it could deliver. That’s been their secret sauce.

Changed what a computer is

Aidinoff: What Apple does is it fundamentally changes what a computer is. The idea that a computer is something that I’m going to carry around in my pocket with hundreds of thousands of times more computing power than the Apollo Program, that’s something Apple does through a whole bunch of technical innovation along the way, but also through changing cultural expectations of what a computer would be, teaching users how to use computers in different ways.

There are distinct technological pieces that people will credit Apple for, things that are really exciting in terms of chip design or in terms of operationalizing the graphical user interface, but it’s the way they package it all together that matters.

Products as heroes

Avery: Apple is one of the pre-eminent examples of a company that does branding, brand storytelling, and marketing incredibly well.

They started with an underdog brand biography. They positioned themselves against everybody else, as the little guy, as the different guy, coming into the market to take on the behemoths that had ruled for a long time.

They talk about their products as heroes. They talk about the functionality and the usability of their products, but they’re not just selling functional value. They’re selling the emotional value of consumers interacting with their products. They’re selling what we call “ego-expressive” or “identity value” — that Apple products are for people who are different, who are more creative, who think differently.

What that means is when someone uses an Apple product, it makes them feel different than if they were using a PC or another brand’s products. It makes them feel more creative, different from others, and able to think differently. Users believe the Apple story. They buy into it.

Sticking it to the Man

Aidinoff: There’s a historian at Stanford who tracks the way Apple, in particular, took leftist hippie counterculture and commercialized it and made a computer resonant with those cultural impulses and “Stick it to the Man” individualism.

It’s hard to overstate, from where we stand today, how much computers were seen as calculating machines for the military. You literally had people in the ’60s bombing computer centers as an act of protest against The Man. And so, the idea that a computer would be a cool, fun thing to listen to Nirvana on — that’s really changing what it means.

Not like George Orwell’s ‘1984’

Avery: That Macintosh launch ad in 1984 goes down as one of the best ads ever shown during the Super Bowl, if not one of the best ads overall.

It crashed into the market, positioning Apple against the big guys, against the corporate mainstream, and against what was expected of professionals and showed people that there was a new choice, an innovative choice, a different choice. That was one of the big starting points for the brand’s trajectory.

The “Think Different” ad campaign featuring images of Gandhi and Einstein and other creative thinkers throughout history was another classic ad campaign that really cemented the image of the brand in people’s minds.

Trust the product

Aidinoff: Apple has taken privacy really seriously in the era of Facebook, when other companies are selling your data. They’ve decided it’s in their best interest to make you really trust the product. Who knows how that’ll change with their partnership with OpenAI — I’m quite worried it will.

But think of the fights they had with Facebook about five years ago, where all the Apple ads were about “Unlike Facebook, we’ll keep your data private.” That is another thing that really helped them through what could have been a turbulent time.

Look good, feel good

Avery: Steve Jobs never saw design as a gimmick. He saw aesthetics as an essential part of creating value.

In the product categories he was going into, the products all looked the same. They were boxy, they were black or gray, they just didn’t have a lot of aesthetic value.

He felt that a desktop computer, and eventually a phone, was something that you were going to interact with all day long, and so it was really important for it to have aesthetic value and to create an aesthetic connection.

He invested heavily in design. This is a brand that realized that function alone is not enough, but function plus aesthetic design can create an incredible connection with the consumer and an incredible sense of value for the product.

It’s been a key, central feature of the product from the beginning.

Not stores, communities

Avery: The Genius Bars were genius.

If you think about who Apple was trying to sell to in the early days, it was not corporate accounts. Corporate accounts were locked up by IBM, by Dell, and that type of selling relationship was moving online. Gateway Computers was another brand doing a lot of online ordering. Apple was trying to sell to individuals, and individuals don’t have IT departments at their disposal.

So, the fact that they established the Genius Bars and staffed them incredibly well allowed people to walk in and have their own IT department to help take away the friction of switching from a PC to a Mac or from non-Apple product to an Apple product.

The stores were visually beautiful spaces. They were more for display and aesthetics than for selling, particularly in the early days, and they created a community aspect to the stores themselves.

People would line up for three days before a new launch. That was all part of creating that brand value. The stores created event marketing and branding experiences for the brand, as well. The stores still feel like that.

Their own heroic comeback story

Yoffie: They almost went bankrupt midway through their journey.

In 1997, they were somewhere between three and six months away from bankruptcy, so it’s not as though it’s a picture of continuous success for its entire 50-year history, and they had to reinvent themselves between 1997 and 2007. That was really fundamental to their success.

In addition, it’s not just the products, but the complementary products and services that they built around their core products that have made them so successful.

So, it’s not just the iPhone; it’s the App Store. It’s not just having a phone in your pocket, but it’s the ability to connect it to your computer and to your AirPods and to the cloud and do it all in a seamless fashion. It’s been the ability to build out an extended set of complementary services and products that has made Apple such a powerful player.

Screenshot of Apple Home page by DIW

A walled garden

Avery: The Apple ecosystem is the key to their business model — the hardware, the App Store, and everything else working together to create value for its customers, but also to extract value back to the company.

This is why Apple is so strict about app development and what gets included in the App Store: it’s all about building the ecosystem and keeping people inside this walled garden. That’s a really important part of its monetization strategy.

Big challenges ahead

Yoffie: Cellphones are largely a replacement product; there aren’t that many people in the world buying a phone for the first time. Over, let’s say, the last 10 years, there’s been relatively little growth in its core business.

That’s a big challenge for Apple going forward. They’re trying to drive growth by creating services that complement the iPhone business, but it’s still fundamentally dependent on the iPhone.

The good news for Apple is that it has only in the neighborhood of 20 to 22 percent of the world market share for cellphones, so it has an opportunity to take more share from Android and other products, assuming it finds a way to address markets around the world that are a bit more price-sensitive than the United States, Europe, and Japan.

But Apple needs to make some adjustments in order to do that.

This article was originally published on Harvard Gazette and is republished here with permission.

Reviewed by Asim BN.



by External Contributor via Digital Information World

Facebook Messenger Collects 32 of 35 Data Types, Highest Among Top Analyzed Apps, While Signal Ranks Highest in Minimizing Privacy Risks

It’s hard to imagine a life without being able to send a message to a friend, family member, or coworker at a moment's notice. However, while we send hundreds of messages every day, most of us never think about who else might be reading them. We trust that our private chats stay private, but is that trust justified?

Surfshark's study takes a close look at the most popular messaging apps to see how well each one actually protects your privacy and keeps your data secure. By examining encryption, data collection and usage, tracking practices, and AI features, this research identifies which apps prioritize your privacy and which fall short. The results may change how you think about the apps you use every day.

Key insights

  • End-to-end encryption is provided by 9 out of the 10 most popular messaging apps. Signal and iMessage both offer quantum-secure cryptography, providing an even higher level of security.¹ However, for Apple's Messages app, end-to-end encryption is only effective between Apple devices. When messages are sent to Android devices, they are converted to SMS/MMS — which aren't end-to-end encrypted — meaning they're vulnerable to third parties potentially intercepting and reading them during transmission.² Notably, Discord is the only messaging app among those analyzed that does not provide end-to-end encryption for text-based messages.

  • However, 90% of the analyzed messaging apps offer AI features, which could potentially increase privacy risks. Researchers from New York University and Cornell University have noted that “AI features are being developed at a rapid pace, raising significant security risks for users of E2EE applications”.³ For example, AI might be used to summarize private conversations or translate personal messages. While these features may offer benefits, they also raise concerns about granting access to information that should be private and visible only to the sender and receiver. Additionally, users can integrate AI assistants into ongoing conversations with others or even engage with AI as a friend. However, it's crucial to understand that users aren't just sharing information with a virtual friend — they're actually providing data to the company that owns the app or the AI service.

  • On average, the analyzed messaging apps collect 17 out of the 35 data types listed in the Apple App Store. Exceeding this average are four apps: Meta Platforms’ Messenger (32), LINE (26), WeChat (22), and Rakuten Viber Messenger (18). The data collected may be exploited for purposes beyond app functionality. When considering the number of data types linked to users that can be exploited for advertising, product personalization, analytics, or other purposes, Meta Platforms’ Messenger (30) and LINE (21) are at the forefront. In contrast, Signal and Telegram Messenger assert that their data collection is strictly for app functionality, such as user authentication, feature enablement, fraud prevention, security measures, server uptime, minimizing app crashes, enhancing scalability and performance, and customer support.

  • Considering all analyzed factors, Signal ranks at the top for its commitment to minimizing user privacy risks, with a score of 0.99. As one of the most downloaded messaging apps in 2025, it stands out by collecting minimal data — just phone numbers, which are used solely for app functionality, as noted in the Apple App Store. Furthermore, Signal completely avoids user tracking. By employing quantum-secure cryptography to protect communications and avoiding AI features that could potentially compromise privacy if misused, Signal ensures that users’ conversations remain as private and secure as possible. Despite its robust privacy measures, the FBI and CISA recently warned about phishing campaigns targeting commercial messaging apps, specifically Signal.⁴ Once an account is compromised, attackers can access messages, contact lists, and launch further phishing attacks. This highlights that technology alone isn't enough; users remain the weakest link.

  • LINE ranks at the bottom with the lowest score, followed by Discord, Rakuten Viber Messenger, and Meta Platforms’ Messenger — all of which fall below the average score of 0.52 for the analyzed apps. According to information in the Apple App Store, LINE, Discord, and Rakuten Viber Messenger are the only apps that may collect data for user tracking. Meanwhile, Meta Platforms’ Messenger is notable for declaring that it may collect an extensive range of data types — 32 out of 35 listed in the Apple App Store — and use most of them for purposes beyond app functionality.
Messenger is the most privacy-invasive app analyzed: it collects 32 out of 35 data types, with 30 of them used for purposes beyond app functionality.
Image: Surfshark

Methodology and sources

For this study, 10 iOS messaging apps were examined: the pre-installed Apple Messages App — which is likely used by most Apple device owners due to its default presence — and the top nine most downloaded apps in 2025, according to data provided by AppMagic.⁵ MAX was excluded from the analysis because it is not available in the US Apple App Store, which is used to review app privacy practices. The selection criteria from AppMagic included the category (Social Networking), tag (Messenger), geography (Worldwide), store (iPhone App Store), and year (2025).

To evaluate the privacy practices of these apps, five criteria were selected. First, Surfshark examined the type of encryption employed, prioritizing quantum-secure cryptography rather than just checking for end-to-end encryption. The default layer isn’t enough, as quantum threats could potentially break through other encryption methods; that’s why only apps with quantum-secure encryption earn the highest score.

Second, Surfshark looked at the number of data types the app may collect. This indicator assesses the data collection practices of analyzed apps, scoring them based on how many of the 35 data types listed in the Apple App Store they may collect. Collecting more data types increases privacy risks, for example, in the case of a data breach, which is why a higher number of collected data types leads to a lower score.

The total score for each app also includes two additional indicators: third, the number of data types collected for tracking purposes, and fourth, the number collected for purposes unrelated to app functionality. This approach provides a balanced view of data collection practices by not focusing solely on the number of data types collected, acknowledging that some are essential to the app’s functionality. Fifth, Surfshark evaluated whether the app integrates AI features.

These factors illustrate each app's privacy-related activities and contribute equally to the final score. The scores of each analyzed app were then categorized into five levels, ranging from high to low, to indicate their commitment to user privacy and security.
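To make the equal-weight scheme concrete, here is a minimal sketch of how such a score could be computed. This is an illustrative reconstruction, not Surfshark's actual formula: the indicator names, the 0-to-1 normalization of each criterion, and the example input values are all assumptions.

```python
# Hypothetical equal-weight privacy scoring, loosely modeled on the five
# criteria described above. Normalizations and field names are assumptions.

TOTAL_DATA_TYPES = 35  # data types listed in the Apple App Store

def indicator_scores(app):
    """Normalize each criterion to 0..1, where 1 means more private."""
    return [
        # 1) Encryption: quantum-secure > plain E2EE > none (assumed tiers).
        1.0 if app["quantum_secure"] else (0.5 if app["e2ee"] else 0.0),
        # 2) Fewer data types collected -> higher score.
        1 - app["data_types_collected"] / TOTAL_DATA_TYPES,
        # 3) Fewer types collected for tracking -> higher score.
        1 - app["types_for_tracking"] / TOTAL_DATA_TYPES,
        # 4) Fewer types used beyond app functionality -> higher score.
        1 - app["types_beyond_functionality"] / TOTAL_DATA_TYPES,
        # 5) No AI features -> higher score.
        0.0 if app["has_ai_features"] else 1.0,
    ]

def privacy_score(app):
    scores = indicator_scores(app)
    return sum(scores) / len(scores)  # the five criteria contribute equally

# Illustrative inputs resembling Signal's reported profile:
signal = {
    "quantum_secure": True, "e2ee": True,
    "data_types_collected": 1,        # just phone numbers
    "types_for_tracking": 0,
    "types_beyond_functionality": 0,
    "has_ai_features": False,
}
print(round(privacy_score(signal), 2))  # prints 0.99
```

With these illustrative inputs the sketch happens to land near the top of the scale, consistent with a minimal-data-collection profile; the real study's weighting and level cut-offs may differ.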

For the complete research material behind this study, click here.

Data was collected from:

Apple (2026). App Store.

References:

¹ Apple Security Engineering and Architecture (2024). iMessage with PQ3: The new state of the art in quantum-secure messaging at scale;

² Apple (2025). What is the difference between iMessage, RCS, and SMS/MMS?

³ Knodel, M.; Fábrega, A. (2025). Can Bots Read Your Encrypted Messages? Encryption, Privacy, and the Emerging AI Dilemma;

⁴ FBI and CISA (2026). Russian Intelligence Services Target Commercial Messaging Application Accounts;

⁵ AppMagic (2026). Top Free Apps.

The team behind this research.

This post was originally published on Surfshark Research and is republished here with permission.

Reviewed by Irfan Ahmad.



by External Contributor via Digital Information World