Saturday, April 4, 2026

Think different — for 50 years

By Christina Pazzanese, Harvard Staff Writer

Management, branding, marketing, and history scholars trace the many ways Apple changed industries, our relationship to tech — and to each other.

On April Fool’s Day 1976, two college dropouts, Steve Wozniak and Steve Jobs, and a friend, Ronald G. Wayne, formed a company in the garage of Jobs’ parents’ house in Los Altos, a small city in a Silicon Valley then in its infancy.

For the cheeky price of $666.66 (Wozniak liked repeating digits), buyers could get what they called the Apple-1, a “Woz”-engineered personal computer consisting of a bare circuit board with an 8-bit microprocessor and 4K of RAM — monitor, keyboard, and power supply sold separately.

The Apple-1 was only capable of running elementary programs and games. Two hundred were made.

It may have seemed foolhardy then to push a product few Americans were even aware existed. But 50 years later, Apple is among the most popular and iconic consumer brands and, with a $3.8 trillion valuation, one of the world’s most successful companies.

In these edited reflections, Harvard analysts explain how Apple has transformed the personal computing, music, and communications industries. It has also revolutionized marketing and advertising, industrial and product design, and retail, and helped shift our relationship to tech — and, arguably, to one another.

Our experts include David B. Yoffie, Baker Foundation Professor, Max and Doris Starr Professor of International Business Administration, Emeritus; Marc Aidinoff, assistant professor of the history of science; and Jill Avery, senior lecturer of business administration and C. Roland Christensen Distinguished Management Educator.

Invented three industries

Yoffie: I would put Apple alongside IBM, Ford, and General Electric — one of the most important American companies to emerge during its period of explosive growth, because they impacted so much of American life and the way American business has operated.

When I think about Apple’s contribution, I start by thinking that they fundamentally invented three new industries, all of which have had a huge impact on mankind. The first is the personal computer. The Apple II was the first real personal computer.

Second is what they did with the iPod, which was essentially a redesign of the entire music industry.

And the third is the iPhone, which has become the single most successful consumer electronic product in the history of the world by almost any definition. It revolutionized personal communications.

So, at a very fundamental level, Apple has revolutionized the way in which we live our lives, in addition to becoming one of the most successful companies in the history of the world.

Image: Jonathan Stechi / Unsplash

A user story

Aidinoff: As a historian of technology, I would flip that around to say they created the users for those things.

They taught people that they wanted and could use things in this way, that we could take a computer, which is a tool for doing advanced mathematics, and they taught us we can carry it around on our phone in our pockets, do music recommendations.

So, I think of that as a user story as much as a they-created-the-category story.

The secret sauce

Yoffie: This was part of Steve Jobs’ genius — his ability to figure out products that people wanted, even though they didn’t know they needed them.

It was not obvious at any point along the history of computers that you were going to have a graphical user interface and a mouse. It was not obvious to people that they wanted to keep all of their music on a small, single device.

Similarly with the iPhone, no one really believed that you could do this multitouch, internet-access device and make it so broadly functional until Steve was able to demonstrate the power of what it could deliver. That’s been their secret sauce.

Changed what a computer is

Aidinoff: What Apple does is it fundamentally changes what a computer is. The idea that a computer is something that I’m going to carry around in my pocket with hundreds of thousands of times more computer than the Apollo Project, that’s something Apple does through a whole bunch of technical innovation along the way, but also through changing cultural expectations of what a computer would be, teaching users how to use computers in different ways.

There are distinct technological pieces that people will credit Apple for, things that are really exciting in terms of chip design or in terms of operationalizing the graphical user interface, but it’s the way they package it all together that matters.

Products as heroes

Avery: Apple is one of the pre-eminent examples of a company that does branding, brand storytelling, and marketing incredibly well.

They started with an underdog brand biography. They positioned themselves against everybody else, as the little guy, as the different guy, coming into the market to take on the behemoths that had ruled for a long time.

They talk about their products as heroes. They talk about the functionality and the usability of their products, but they’re not just selling functional value. They’re selling the emotional value of consumers interacting with their products. They’re selling what we call “ego-expressive” or “identity value” — that Apple products are for people who are different, who are more creative, who think differently.

What that means is when someone uses an Apple product, it makes them feel different than if they were using a PC or another brand’s products. It makes them feel more creative, different than others and able to think differently. Users believe the Apple story. They buy into it.

Sticking it to the Man

Aidinoff: There’s a historian at Stanford who tracks the way Apple, in particular, took leftist hippie counterculture and commercialized it and made a computer resonant with those cultural impulses and “Stick it to the Man” individualism.

It’s hard to overstate from our present how much computers were seen as calculating machines for the military. You literally had people in the ’60s bombing computer centers as an act of protest against The Man. And so, the idea that a computer would be a cool, fun thing to listen to Nirvana on — that’s really changing what it means.

Not like George Orwell’s ‘1984’

Avery: That Macintosh launch ad in 1984 goes down as one of the best ads ever shown on the Super Bowl, if not one of the best ads overall.

It crashed into the market, positioning Apple against the big guys, against the corporate mainstream, and against what was expected of professionals and showed people that there was a new choice, an innovative choice, a different choice. That was one of the big starting points for the brand’s trajectory.

The “Think Different” ad campaign featuring images of Gandhi and Einstein and other creative thinkers throughout history was another classic ad campaign that really cemented the image of the brand in people’s minds.

Trust the product

Aidinoff: Apple has taken privacy really seriously in an era when Facebook and other companies are selling your data. They’ve decided it’s in their best interest to make you really trust the product. Who knows how that’ll change with their partnership with OpenAI — I’m quite worried it will.

But you think of the fights they had with Facebook about five years ago, where all the Apple ads were about “Unlike Facebook, we’ll keep your data private.” That is another thing that really helped them through what could have been a turbulent time.

Look good, feel good

Avery: Steve Jobs never saw design as a gimmick. He saw aesthetics as an essential part of creating value.

In the product categories he was going into, the products all looked the same. They were boxy, they were black or gray, they just didn’t have a lot of aesthetic value.

He felt that a desktop computer, and eventually, a phone, was something that you were going to interact with all day long and so it was really important for it to have aesthetic value and to create an aesthetic connection.

He invested heavily in design. This is a brand that realized that function alone is not enough, but function plus aesthetic design can create an incredible connection with the consumer and an incredible sense of value for the product.

It’s been a key, central feature of the product from the beginning.

Not stores, communities

Avery: The Genius Bars were genius.

If you think about who Apple was trying to sell to in the early days, it was not corporate accounts. Corporate accounts were locked up by IBM and Dell, and that type of selling relationship was moving online. Gateway was another brand doing a lot of online ordering. Apple was trying to sell to individuals, and individuals don’t have IT departments at their disposal.

So, the fact that they established the Genius Bars and staffed them incredibly well allowed people to walk in and have their own IT department to help take away the friction of switching from a PC to a Mac or from non-Apple product to an Apple product.

The stores were visually beautiful spaces. They were more for display and aesthetics than for selling, particularly in the early days, and they created a community aspect to the stores themselves.

People would line up for three days before a new launch. That was all part of creating that brand value. The stores created event marketing and branding experiences for the brand, as well. The stores still feel like that.

Their own heroic comeback story

Yoffie: They almost went bankrupt midway through their journey.

In 1997, they were somewhere between three and six months away from bankruptcy, so it’s not as though it’s a picture of continuous success for its entire 50-year history, and they had to reinvent themselves between 1997 and 2007. That was really fundamental to their success.

In addition, it’s not just the products, but the complementary products and services that they built around their core products that have made them so successful.

So, it’s not just the iPhone; it’s the App Store. It’s not just having a phone in your pocket, but it’s the ability to connect it to your computer and to your AirPods and to the cloud and do it all in a seamless fashion. It’s been the ability to build out an extended set of complementary services and products that has made Apple such a powerful player.

Screenshot of Apple Home page by DIW

A walled garden

Avery: The Apple ecosystem is the key to their business model — the hardware, the App Store, and everything else working together to create value for its customers, but also to extract value back to the company.

This is why Apple is so strict about app development and what gets included in the App Store. It’s all about building the ecosystem and keeping people inside this walled garden. That’s a really important part of its monetization strategy.

Big challenges ahead

Yoffie: Cellphones are largely a replacement product. There aren’t that many people in the world buying new phones. Over, let’s say, the last 10 years, there’s been relatively little growth in its core business.

That’s a big challenge for Apple going forward. They’re trying to drive growth by creating services that complement the iPhone business, but it’s still fundamentally dependent on the iPhone.

The good news for Apple is that it has only in the neighborhood of 20 percent to 22 percent of the world market for cellular phones, so it has an opportunity to take more share away from Android and from other products, assuming it finds a way to address markets around the world that are a little more price-sensitive than the United States, Europe, and Japan.

But Apple needs to make some adjustments in order to do that.

This article was originally published on Harvard Gazette and is republished here with permission.

Reviewed by Asim BN.

Read next: 

Why AI Leaders Are So Focused On Image Generation


by External Contributor via Digital Information World

Facebook Messenger Collects 32 of 35 Data Types, Highest Among Top Analyzed Apps, While Signal Ranks Highest in Minimizing Privacy Risks

It’s hard to imagine a life without being able to send a message to a friend, family member, or coworker at a moment's notice. However, while we send hundreds of messages every day, most of us never think about who else might be reading them. We trust that our private chats stay private, but is that trust justified?

Surfshark's study takes a close look at the most popular messaging apps to see how well each one actually protects your privacy and keeps your data secure. By examining encryption, data collection and usage, tracking practices, and AI features, this research identifies which apps prioritize your privacy and which fall short. The results may change how you think about the apps you use every day.

Key insights

  • End-to-end encryption is provided by 9 out of the 10 most popular messaging apps. Signal and iMessage both offer quantum-secure cryptography, providing an even higher level of security.¹ However, for Apple's Messages app, end-to-end encryption is only effective between Apple devices. When messages are sent to Android devices, they are converted to SMS/MMS — which aren't end-to-end encrypted — meaning they're vulnerable to third parties potentially intercepting and reading them during transmission.² Notably, Discord is the only messaging app among those analyzed that does not provide end-to-end encryption for text-based messages.

  • However, 90% of the analyzed messaging apps offer AI features, which could potentially increase privacy risks. Researchers from New York University and Cornell University have noted that “AI features are being developed at a rapid pace, raising significant security risks for users of E2EE applications”.³ For example, AI might be used to summarize private conversations or translate personal messages. While these features may offer benefits, they also raise concerns about granting access to information that should be private and visible only to the sender and receiver. Additionally, users can integrate AI assistants into ongoing conversations with others or even engage with AI as a friend. However, it's crucial to understand that users aren't just sharing information with a virtual friend — they're actually providing data to the company that owns the app or the AI service.

  • On average, the analyzed messaging apps collect 17 out of the 35 data types listed in the Apple App Store. Exceeding this average are four apps: Meta Platforms’ Messenger (32), LINE (26), WeChat (22), and Rakuten Viber Messenger (18). The data collected may be exploited for purposes beyond app functionality. When considering the number of data types linked to users that can be exploited for advertising, product personalization, analytics, or other purposes, Meta Platforms’ Messenger (30) and LINE (21) are at the forefront. In contrast, Signal and Telegram Messenger assert that their data collection is strictly for app functionality, such as user authentication, feature enablement, fraud prevention, security measures, server uptime, minimizing app crashes, enhancing scalability and performance, and customer support.

  • Considering all analyzed factors, Signal ranks at the top for its commitment to minimizing user privacy risks, with a score of 0.99. As one of the most downloaded messaging apps in 2025, it stands out by collecting minimal data — just phone numbers, which are used solely for app functionality, as noted in the Apple App Store. Furthermore, Signal completely avoids user tracking. By employing quantum-secure cryptography to protect communications and avoiding AI features that could potentially compromise privacy if misused, Signal ensures that users’ conversations remain as private and secure as possible. Despite its robust privacy measures, the FBI and CISA recently warned about phishing campaigns targeting commercial messaging apps, specifically Signal.⁴ Once an account is compromised, attackers can access messages, contact lists, and launch further phishing attacks. This highlights that technology alone isn't enough; users remain the weakest link.

  • LINE ranks at the bottom with the lowest score, followed by Discord, Rakuten Viber Messenger, and Meta Platforms’ Messenger — all of which fall below the average score of 0.52 for the analyzed apps. According to information in the Apple App Store, LINE, Discord, and Rakuten Viber Messenger are the only apps that may collect data for user tracking. Meanwhile, Meta Platforms’ Messenger is notable for declaring that it may collect an extensive range of data types — 32 out of 35 listed in the Apple App Store — and use most of them for purposes beyond app functionality.
Messenger is the most privacy-invasive app due to its data collection practices. Messenger collects 32 out of 35 data types, with 30 of them being used for purposes beyond just app functionality.
Image: Surfshark

Methodology and sources

For this study, 10 iOS messaging apps were examined: the pre-installed Apple Messages App — which is likely used by most Apple device owners due to its default presence — and the top nine most downloaded apps in 2025, according to data provided by AppMagic.⁵ MAX was excluded from the analysis because it is not available in the US Apple App Store, which is used to review app privacy practices. The selection criteria from AppMagic included the category (Social Networking), tag (Messenger), geography (Worldwide), store (iPhone App Store), and year (2025).

To evaluate the privacy practices of these apps, five criteria were selected. First, Surfshark examined the type of encryption employed, whether quantum-secure or not. This indicator delves into encryption, prioritizing whether cryptography is quantum-secure rather than just checking for end-to-end encryption. The default layer isn't enough, as quantum threats could potentially break through other encryption methods. That's why only those with quantum-secure levels of security earn the highest score.

Second, Surfshark looked at the number of data types the app may collect. This indicator assesses the data collection practices of analyzed apps, scoring them based on how many of the 35 data types listed in the Apple App Store they may collect. Collecting more data types increases privacy risks, for example, in the case of a data breach, which is why a higher number of collected data types leads to a lower score.

The total score for each app also includes two additional indicators: one for data collected for tracking purposes and another for data collected that is not related to app functionality. This approach provides a balanced view of data collection practices by not focusing solely on the number of data types collected, acknowledging that some are essential to the app's functionality. Fifth and finally, Surfshark evaluated whether the app integrates AI features.

These factors illustrate each app's privacy-related activities and contribute equally to the final score. The scores of each analyzed app were then categorized into five levels, ranging from high to low, to indicate their commitment to user privacy and security.
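As a rough illustration of the equal-weight scoring described above, here is a minimal Python sketch. The indicator normalizations and function name are my own assumptions for illustration, not Surfshark's actual formula; each of the five indicators is mapped to the 0–1 range (higher meaning lower privacy risk) and averaged:

```python
# Illustrative sketch of an equally weighted privacy score (NOT Surfshark's
# actual formula). Five indicators, each normalized to 0-1, where a higher
# value means lower privacy risk; counts are out of the 35 data types
# listed in the Apple App Store.

def privacy_score(quantum_secure, data_types, tracking_types,
                  non_functional_types, has_ai):
    indicators = [
        1.0 if quantum_secure else 0.0,       # encryption quality
        1.0 - data_types / 35,                # data types collected
        1.0 - tracking_types / 35,            # data used for tracking
        1.0 - non_functional_types / 35,      # data beyond app functionality
        0.0 if has_ai else 1.0,               # AI features present
    ]
    return sum(indicators) / len(indicators)  # equal weights

# A Signal-like profile: quantum-secure, one data type, no tracking, no AI.
print(round(privacy_score(True, 1, 0, 0, False), 2))   # → 0.99
# A Messenger-like profile: 32 types collected, 30 beyond functionality.
print(round(privacy_score(False, 32, 2, 30, True), 2)) # → 0.23
```

Under these assumed normalizations, a Signal-like profile lands near the top of the scale and a Messenger-like profile near the bottom, mirroring the ordering the study reports.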

For the complete research material behind this study, click here.

Data was collected from:

Apple (2026). App Store.

References:

¹ Apple Security Engineering and Architecture (2024). iMessage with PQ3: The new state of the art in quantum-secure messaging at scale;

² Apple (2025). What is the difference between iMessage, RCS, and SMS/MMS?

³ Knodel, M.; Fábrega, A. (2025). Can Bots Read Your Encrypted Messages? Encryption, Privacy, and the Emerging AI Dilemma;

⁴ FBI and CISA (2026). Russian Intelligence Services Target Commercial Messaging Application Accounts;

⁵ AppMagic (2026). Top Free Apps.

The team behind this research.

This post was originally published on Surfshark Research and is republished here with permission.

Reviewed by Irfan Ahmad.

Read next: 

• AI’s fluency in other languages hides a Western worldview that can mislead users − a scholar of Indonesian society explains

• March Apptopia Data Shows Claude Reaches 10% DAU Share, ChatGPT Falls to 38.7% in United States Mobile Apps


by External Contributor via Digital Information World

Friday, April 3, 2026

AI laws overlook environmental damage – here’s what needs to change

Louise Du Toit, University of Southampton
Image: Geoffrey Moffett - Unsplash. Caption: Data Centre in Coleraine

More than 200 laws have been developed to regulate AI in more than 100 countries. Many of them focus on issues such as privacy, bias, disinformation, security and cybersecurity rather than the environmental consequences of AI.

AI is an energy-intensive and thirsty industry. It leads to huge greenhouse gas emissions, pollution and loss of nature. These impacts arise partly from the manufacture and use of energy-, carbon- and water-intensive “complex computer chips”, called graphics processing units (GPUs), for the training of AI models as well as increasing e-waste.

My research into the regulatory responses to AI in the EU and the UK highlights how laws often ignore the environmental implications of this big tech. The lack of stringent obligations in AI law and policy is concerning.

There are environmental consequences at all stages of the AI lifecycle, from the manufacture of AI hardware and the training of AI models, through deployment and use, to the disposal of AI hardware.

The manufacture of components relies on the extraction of rare earth elements. This can contaminate soil and water, pollute the air and lead to loss of nature and forest habitats. Training AI models is incredibly energy- and water-intensive. A team of researchers estimated in 2025 that training GPT-3 – a large language model released by OpenAI in 2020 – consumed around 700,000 litres of freshwater for electricity generation and cooling of data centres.

Even though AI models are becoming more energy efficient, as models become larger and AI proliferates, overall energy consumption and associated emissions are rising. And the energy consumed in the use of AI, including to generate text or images, vastly outweighs that used during training.

However, it’s difficult to accurately measure the environmental effects of AI, partly due to the lack of transparency of technology companies.

When the EU’s AI Act came into force on August 1 2024, it was the “world’s first comprehensive law” on AI. The AI Act acknowledges some of AI’s environmental consequences. It also requires that “AI systems are developed and used in a sustainable and environmentally friendly manner”.

It outlines that AI providers must disclose information on “known or estimated energy consumption data of the model”. But while promising, this information only needs to be provided when requested by the AI Office, which has been established within the European Commission.

Further measures include preparing codes of conduct to assess and minimise “the impact of AI systems on environmental sustainability”. But this is not compulsory. Overall, the AI Act is intentionally anthropocentric. It states that: “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human wellbeing.”

The UK has no AI-specific legislation. AI is currently only regulated by existing laws. The UK government’s 2023 white paper on AI regulation, which proposes a regulatory framework for AI, doesn’t prioritise sustainability at all. Although the white paper acknowledges that AI can contribute to technologies to respond to climate change, it does not specifically address any environmental risks:

The proposed regulatory framework does not seek to address all of the wider societal and global challenges that may relate to the development or use of AI. This includes issues relating to … sustainability. These are important issues to consider … but they are outside of the scope of our proposals for a new overarching framework for AI regulation.

A transparent future?

More transparency starts with AI developers having to disclose how much energy and water are consumed, how much carbon is emitted, which rare earth elements are extracted and how much plastic is used during the AI production process.

This data then provides a baseline. Then appropriate targets and limits can be set for energy efficiency, carbon emissions and water use to improve the sustainability of AI.

Several proposals have been made for how reduced carbon emissions and water consumption could practically be achieved, such as training AI models on less carbon-intensive energy grids or in less water-intensive data centres.

Warnings about environmental effects could tell consumers how much carbon dioxide is emitted or water consumed for each query. In addition, an AI labelling system could mirror the EU’s existing energy efficiency labelling schemes, which clearly indicate the energy efficiency of appliances, ranking them from most energy-efficient (dark green) to least energy-efficient (red).

Proposals include an AI “energy star” rating system and a social and environmental certification system. This would help consumers to make informed choices about which AI systems to use or whether AI should be used at all. Tax incentives and funding incentives could also encourage tech firms to make more sustainable choices.

By integrating sustainability into AI laws, through these types of measures, the planet can be somewhat safeguarded alongside AI’s rapid expansion.

Louise Du Toit, Lecturer in Law, Southampton Law School, University of Southampton

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Asim BN.

Read next:

• Meet the Five AI Productivity Personality Types Transforming Work and Creativity

• A million new SpaceX satellites will destroy the night sky — for everyone on Earth
by External Contributor via Digital Information World

Thursday, April 2, 2026

Meet the Five AI Productivity Personality Types Transforming Work and Creativity

By Bex Mills

Over 18 million adults in the UK have used generative AI, yet not everyone uses it the same way. As AI becomes more popular, recent research shows that how individuals use it is just as crucial as whether they use it at all.

Click Intelligence used data from the UK Government, the Reuters Institute, and Deloitte Digital Consumer Trends to illustrate that people are starting to fit into different "productivity personality" types depending on how they utilize technology. The groups are different because of their motivation, trust, and everyday use, and this could have an effect on the workplace.

Age Gap in AI Use

Before we get into the many personality types that have been found, the research shows that there is a definite age-related disparity in AI adoption:

  • 62% of people between the ages of 16 and 34 have used AI.
  • Only 14% of people between the ages of 55 and 74 have used AI.

This discrepancy shows that younger individuals are more willing to use AI in their daily lives, both at work and at home. In contrast, older people are more wary of technology and often doubt its accuracy and dependability.

The data also suggests that confidence grows as more individuals use AI tools.

James Owen, co-founder and head of Click Intelligence, remarked, "AI is no longer just a new thing. Younger workers are starting to see it as a natural way to boost productivity, but older generations are still wary. That gap will affect how companies teach, hire, and manage teams for the next five years."

Image: Yuriy Vertikov / Unsplash

The Five AI Productivity Personality Types

The data shows that as more people use AI, five distinct types of users are emerging. These personality types aren't permanent, and people can switch between them based on the situation, but most prefer one approach over the others.

1. The Trailblazer

People who are AI trailblazers are interested in and willing to use AI technology to make their lives better. They are usually between the ages of 16 and 34 and like to play around with generative AI tools, try out new prompts, and find new methods to use them.

Trailblazers see using AI as an experience, not merely a technique to get things done faster. They are more inclined to employ AI in many parts of their lives, like business and personal projects, and they really want to stay up to date with digital trends.

2. The Efficiency Maximizer

Maximizers employ AI to get more done in less time. People in this group don't really want to try out AI just for the sake of it.

People in this group most often use AI at work, where it helps them summarize information, automate tedious tasks, and draft emails or documents to save time. About 7 million people have used AI at work, and 74% of them say it helps them get more done and reach their goals. Shifting workplace norms are driving this transition: 27% indicated their bosses favor the use of AI, which means it is becoming more common in the workplace.

3. The Information Optimizer

This group employs AI to help them make sense of and work with a lot of complicated information in a timely manner. They don't use generative AI to make content.

AI is being used to help people figure out what's true and what's not online and to break complex topics into simple pieces. This matters because 58% of people are worried about being able to tell what's true online.

About 24% of individuals use AI once a week to do research or learn something new. For younger people, this often means using AI chatbots to simplify news items and condense long articles into short, easy-to-read summaries; the research found that 15% of individuals under 25 use AI chatbots for this purpose.

4. The Creativity Kick Starter

People who wish to get over mental blocks and come up with ideas faster fall into the Creativity Kick Starter personality group. They don't use AI to replace human creativity; instead, they use it as a starting point to generate new ideas, improve on them, look at them from other viewpoints, and then build on them.

People in this group probably have jobs that demand them to come up with new ideas quickly. About 36% of frequent AI users say they trust AI-generated content, compared to 25% of those who are simply aware of generative AI. So, using AI technology in creative ways may help people trust it more.

5. The Cautious Skeptic

Cautious skeptics recognize AI's usefulness but don't fully trust it. They know it can make them more efficient, but they want to check the results firsthand. They don't want bias, false information, or mistakes creeping into their work.

The numbers bear this out: 59% of people say they would be less likely to trust an email produced by AI, and 56% say they would stay away from AI-powered customer service products. People remain reluctant to trust AI in situations where they need to be fully responsible and reliable.

The Way You Think Matters

Findings indicate that individuals don't just want to try out AI as a new technology. People use it to different degrees depending on their needs, ambitions, and level of trust.

Some individuals see it as a way to get more done every day, while others see it as a helper that they need to keep an eye on. These diverse behaviors could shift or become more established as AI continues to improve.

Reviewed by Ayaz Khan.

Read next: 

• Nearly Half of Professionals Check Work Email on Vacation Out of Fear, Study Finds

• A million new SpaceX satellites will destroy the night sky — for everyone on Earth


by Guest Contributor via Digital Information World

A million new SpaceX satellites will destroy the night sky — for everyone on Earth

Samantha Lawler, University of Regina; Aaron Boley, University of British Columbia, and Hanno Rein, University of Toronto

Image: SpaceX / Unsplash

More than 10,000 Starlink satellites currently orbit the Earth. We see them crawling across dark skies, no matter how remote our location, and streaking through images from research telescopes.

SpaceX recently announced that it wants to launch one million more of these satellites as orbital data centres for AI computing power.

A few years ago, we wrote a paper predicting what the night sky would look like with 65,000 satellites from four planned megaconstellations: SpaceX’s Starlink, Amazon’s Kuiper (now Leo), the U.K.’s OneWeb and China’s Guowang. We calibrated our models to observations of real Starlink satellites and came up with a startling prediction: One in 15 visible points in the night sky would be a satellite, not a star.

A million satellites would be so much worse.

The human eye can see fewer than 4,500 stars in an unpolluted night sky. If we permit SpaceX to launch these satellites, we will see more satellites than stars — for large portions of the night and the year, throughout the world. This will severely damage the night sky for everyone on Earth.
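The scale of these numbers can be illustrated with simple spherical geometry. This is a rough back-of-envelope sketch, not the authors' calibrated simulation: it assumes satellites are spread uniformly over a shell and ignores Earth's shadow (which hides some satellites for part of the night), and the altitudes used are illustrative assumptions.

```python
# For N satellites spread uniformly over a spherical shell at altitude h,
# the fraction above an observer's horizon is the spherical-cap fraction
# h / (2 * (R + h)), where R is Earth's radius. This ignores sunlight
# geometry, so it bounds how many satellites are up, not how many glow.

R_EARTH_KM = 6371.0

def satellites_above_horizon(n_sats: int, altitude_km: float) -> float:
    """Expected number of shell satellites above the local horizon."""
    frac = altitude_km / (2.0 * (R_EARTH_KM + altitude_km))
    return n_sats * frac

# Illustrative inputs: ~10,000 Starlink-like satellites at ~550 km, and a
# hypothetical million-satellite constellation at a higher 1,000 km orbit.
print(round(satellites_above_horizon(10_000, 550)))      # a few hundred
print(round(satellites_above_horizon(1_000_000, 1000)))  # tens of thousands
```

Even this crude estimate shows why a million satellites overwhelms the roughly 4,500 stars the naked eye can see: tens of thousands of them would be above the horizon at any moment.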

SpaceX’s proposal also completely fails to account for atmospheric pollution, collision risk or how to develop the technology needed to disperse waste heat from orbital data centres.

Predicting the night sky

SpaceX has filed its million-satellite proposal to the United States Federal Communications Commission (FCC) and has only provided bare-bones information about these new satellites so far.

We do know that the proposed constellation will have satellites in much higher orbits, making them visible for longer periods of the night.

We decided to build an updated simulation using orbital data from the website of astrophysicist Jonathan McDowell, along with a set of orbits consistent with the limited information in SpaceX’s filing.

We used the observed brightness of Starlink satellites as a reference, scaling the brightness model by considering size jumps between Starlink V1, V2 and predictions for V3, and assuming even higher complexity and power requirements.

There are many factors we don’t know anything about, so there is some uncertainty in the brightness we predict.

In the figure above, each grey circle shows a simulation of the full night sky, as seen from latitude 50 degrees north at midnight on the summer solstice.

The left circle shows the night sky with SpaceX’s orbital data centres (SXODC), and the right shows the night sky with 42,000 Starlink satellites for comparison.

The coloured points show the positions and brightness of satellites in the sky, with blue the faintest and yellow the brightest. Below each all-sky simulation we list the number of sunlit satellites in the sky (Ntot) and the number of naked-eye visible satellites (Nvis), with tens of thousands predicted for SXODC.

Each of our simulations shows there will be more visible satellites than stars for large portions of the night and the year.

It is hard to overstate this: Should a million new satellites be launched, in the orbits and with the sizes proposed, the stars we are able to see at night would be completely overwhelmed by artificial satellites — throughout the world.

This does not even account for additional large satellite system proposals filed to the International Telecommunication Union (ITU) in recent years by numerous national governments.

A satellite crematorium

SpaceX’s proposal is that these new satellites will operate as orbital data centres.

Data centres on the ground are drawing increasing criticism for the huge amounts of water and electricity they use. In an impressive feat of greenwashing, SpaceX suggests that launching data centres into orbit is better for the environment. This is only true if you ignore all the consequences of satellite launch, orbital operations and re-entry.

We can already measure atmospheric pollution from “re-entries,” when satellites fall back to Earth. We know that multiple satellites are falling every day and that if they do not fully burn up on re-entry, debris falls to the ground with a risk of injury and death.

Increasing densities of satellites also drive up collision risks in orbit. And using the atmosphere as a satellite crematorium is changing the atmosphere in ways we don’t yet understand.

Practically, it is not at all clear whether the proposed orbital data centres are feasible any time soon. To operate data centres in orbit, they would need to disperse huge amounts of waste heat. Despite the greenwashing, this is very hard to do in space: the satellites must withstand intense radiation from the sun while shedding their own waste heat purely by radiative cooling.

SpaceX should know this well: one of the first brightness mitigations they tested for Starlink was “darksat,” a Starlink satellite they effectively just painted black. The satellite overheated and the electronics fried.

A slap in the face for astronomers

SpaceX has done a lot of engineering work to make its Starlink satellites fainter. They are still too bright for research astronomy, but thanks to new coatings, their brightness has not increased dramatically even as SpaceX has launched larger and larger satellites.

SpaceX’s proposal for one million AI data centre satellites with enormous power requirements does not include any discussion of the co-ordination agreement for dark and quiet skies required by the FCC.

It feels like a slap in the face after many astronomers have spent years working with SpaceX on ways to mitigate their Starlink megaconstellation and save the night sky.

Orbital space is a finite resource

The SpaceX filing does not include exact orbits, the size or shape of satellites or the casualty risk from de-orbiting (other than a vague promise that it won’t exceed 0.01 per cent per satellite). It doesn’t even include any information on how the company plans to develop the technology that does not currently exist but is needed to make this plan work.

Despite how shockingly little information SpaceX provided, the FCC accepted SpaceX’s filing and opened the comment period within four days. Astronomers and dark sky advocates worldwide scrambled to write and submit comments in the short four weeks that the comment period was open.

The scientific process is slow and careful and it often takes months or years to publish a peer-reviewed result. Companies like SpaceX have stated repeatedly that their method is to “move fast and break things.” They are now close to breaking the atmosphere, the night sky and anything on the ground or in space that their satellites and rockets fall on or crash into.

Earth’s orbital space is a finite resource. There is an evolving set of international guidelines for operating in outer space, grounded in a set of high-level international rules. Yet, those rules and guidelines are inadequate.

One corporation based in one country should not be allowed to ruin orbit, the night sky, and the atmosphere for everyone else in the world.

Samantha Lawler, Associate Professor, Astronomy, University of Regina; Aaron Boley, Associate Professor, Physics and Astronomy, University of British Columbia, and Hanno Rein, Associate Professor, Physical and Environmental Sciences, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Note: This article is authored by university researchers for a general audience. It draws on research and simulations but is not a peer-reviewed study. Some claims may be simplified for readability or to highlight potential impacts.

Disclosure statement: Samantha Lawler receives funding from the Natural Sciences and Engineering Research Council of Canada. She is a fellow of the Outer Space Institute. Aaron Boley receives funding from NSERC, the Canada Tri-agency, and the Department of National Defence. He co-directs the Outer Space Institute. Hanno Rein receives funding from NSERC.

Partners: University of Toronto and University of British Columbia provide funding as founding partners of The Conversation CA. University of British Columbia, University of Regina, and University of Toronto provide funding as members of The Conversation CA-FR. University of Regina provides funding as a member of The Conversation CA.

Reviewed by Irfan Ahmad.

by External Contributor via Digital Information World

Nearly Half of Professionals Check Work Email on Vacation Out of Fear, Study Finds

By Corina Leslie

A new study suggests fear, not urgency, is behind why many professionals check email outside work hours.

The study, based on a survey of 1,157 workers in the US and Europe, shows that the pressure to stay connected extends well beyond regular working hours. The primary reasons people can’t switch off revolve around fear and worry:

  • 48% of respondents say they’re afraid of missing something important
  • Another 33% worry they’ll fall behind on work
  • 20% are concerned about appearing unreliable to colleagues and peers

An additional 31% admit checking work email has become a reflex, while 36% say they’re too curious about new emails to stay away from their inboxes, even on vacation.


In fact, the study, conducted by email deliverability company ZeroBounce, revealed that only 29% of professionals fully disconnect on vacation. But when a subsequent question asked directly about checking email during time off, an even smaller share (19%) said they don't check it. The difference shows a clear gap between what people say they do and their actual behavior.

Work email goes with us everywhere

Workers have a hard time disconnecting not just on vacation – email checking has become a constant habit. More than half of respondents refresh their work inboxes before and after work hours, and 37% peek at it on the weekends.

On top of that, a majority (74%) feel pressure to respond to every message quickly. However, that urge doesn’t seem to be rooted in expectations from managers and peers, but rather in people’s own perception of status.

We can’t ignore work emails – even at funerals

Email is omnipresent, even in our most personal moments. One of the most staggering findings in the study? Eighteen percent of professionals have checked email at a funeral. And that's not all; here's where else people admit to looking at their inboxes:

  • In bed, next to their partner (38%)
  • In the car, while driving (30%)
  • At a wedding (24%)

The data points to more than a desire to be responsive and productive: it reveals an inability to switch off, even in risky situations like being behind the wheel.

Making more than $200,000 a year? You’re less likely to unplug

Compulsive email checking is prevalent across all income levels, but work pressure tends to affect high earners more. Respondents making over $200,000 a year are more likely to check work email off the clock, with 50% saying they open their inboxes on the weekend.


Vacation doesn't mean actual vacation, either: even if they don't respond to every message, 39% of high earners monitor incoming emails. You might think this practice relieves them of the dread of a full inbox, but 70% still feel overwhelmed when they return to work.

How to not let your work inbox take over your life

Despite the popularity of instant messaging apps, email is still the most commonly used channel for professional communication. There’s no shortage of new emails in our inboxes, and it’s easy to fall into the trap of constant connectivity. We’ve come to believe that every message needs our immediate attention, but is that really true?

If you find yourself checking work email during time off or personal moments, here are a few tips to disconnect and fully enjoy your time away from work.

  • If you have notifications on your phone or desktop, turn them off.
  • When you wake up, allow yourself 10 minutes without devices. If you can, go outside and enjoy the sun.
  • Check email one last time before shutting down your computer. Once your work day ends, stop checking email in the evening and at night.
  • Before going on vacation, set an auto-responder letting everyone know you won’t be checking your inbox.
  • If you have a high-pressure job and must check email on vacation, set a rule for it. For instance, check it every three days and do not respond unless it’s absolutely critical.

Email is quick, easy, personal and, remember, asynchronous. It’s in your power to control how you use it.

Reviewed by Irfan Ahmad.



by Guest Contributor via Digital Information World

Wednesday, April 1, 2026

Which Liberty HealthShare Program Is Right for You? A Guide to All Its Options [Ad]

Not every household needs the same thing from a healthcare sharing ministry. Liberty HealthShare structures its programs to reflect that reality.

Healthcare decisions are personal, and so are the financial trade-offs that go with them. A healthy 28-year-old freelancer and a family of five with two kids in braces do not have the same priorities. Liberty HealthShare, the Canton, Ohio-based nonprofit healthcare sharing ministry, built its program lineup with that range of circumstances in mind.

"We've got a number of programs so that somebody can select whatever works best for their family," said Chief Executive Officer Dorsey Morrow. "With a healthcare sharing ministry and Liberty HealthShare in particular, you can join our membership, and if you determine it doesn't work for you, you're not locked into it."

Six medical cost-sharing programs, each structured around different monthly share amounts and Annual Unshared Amount (AUA) levels, give members the ability to match their contribution to their situation. Suggested monthly share amounts for individuals range from $87 to $369, with family programs beginning at $319 per month.

Before You Compare: Understanding the Basics

Two terms appear across every Liberty HealthShare program and are worth understanding before reviewing any specific option.


The Annual Unshared Amount (AUA) is the amount of an eligible need that does not qualify for sharing. A higher AUA generally corresponds to a lower suggested monthly share amount.

The Co-Share is the percentage of eligible medical bills a member with that program option contributes after the AUA has been met. Not every program carries a Co-Share. The breakdown, per Liberty HealthShare's program guidelines, is as follows:

  • Liberty Essential: 20% Co-Share after AUA is met
  • Liberty Connect: 10% Co-Share after AUA is met
  • Liberty Unite: No Co-Share
  • Liberty Assist: No Co-Share
  • Liberty Rise: No Co-Share
  • Liberty Freedom: No Co-Share

Members who prefer lower monthly share amounts may accept a sharing program with a Co-Share. Those who want the most predictable out-of-pocket exposure after the AUA has been met typically gravitate toward programs with no Co-Share.
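To make the AUA/Co-Share interaction concrete, here is a small arithmetic sketch. The dollar amounts are invented for illustration only; actual AUA levels and sharing terms come from Liberty HealthShare's own program guidelines.

```python
# Hypothetical illustration of how an Annual Unshared Amount (AUA) and a
# Co-Share percentage interact on a single eligible medical need. Figures
# are made up for the example; they are not Liberty HealthShare's terms.

def member_responsibility(eligible_bill: float, aua: float,
                          co_share_pct: float) -> float:
    """Member covers the AUA first, then the Co-Share share of the rest."""
    if eligible_bill <= aua:
        return eligible_bill            # bill never exceeds the AUA
    remainder = eligible_bill - aua     # portion eligible for sharing
    return aua + remainder * co_share_pct

# A hypothetical $5,000 eligible bill with a $1,000 AUA:
print(member_responsibility(5_000, 1_000, 0.20))  # 20% Co-Share -> 1800.0
print(member_responsibility(5_000, 1_000, 0.00))  # no Co-Share  -> 1000.0
```

The comparison shows why no-Co-Share programs offer the most predictable exposure: once the AUA is met, the member's responsibility stops growing with the size of the bill.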

The Six Programs at a Glance

Liberty Essential

Liberty Essential sits at the entry point of the Liberty HealthShare program lineup, with the lowest suggested monthly share amounts available. Members have a 20% Co-Share on eligible expenses once their AUA is met. Telehealth access through DialCare Urgent Care is included, with up to five visits per person per year eligible for sharing in full.

Liberty Connect

Liberty Connect reduces the Co-Share to 10% while stepping up the monthly share amount from Liberty Essential. Telehealth through DialCare is included on the same terms: five free visits per person per year. Members who want moderate monthly contributions but less out-of-pocket exposure at the time of a medical need often consider this tier.

Liberty Unite

Liberty Unite carries no Co-Share. Once a member meets the AUA, Liberty HealthShare facilitates sharing of eligible remaining expenses without the member contributing an additional percentage at the time of service. Telehealth remains included at five free visits per person annually.

Liberty Assist

Liberty HealthShare reduced the AUA for Liberty Assist by two-thirds earlier in 2025, bringing it to $500, which is a significant change in out-of-pocket exposure for members aged 65 and older who are enrolled in Medicare parts A and B. No Co-Share applies. Telehealth through DialCare is available, though Assist members pay a $55 per-visit fee directly to the provider rather than having visits shared in full.

Liberty Rise

Liberty Rise, designed for young people ages 18 to 29, carries no Co-Share and saw its suggested monthly share contribution reduced by 19% in May 2025, dropping to $99. That pricing puts Liberty Rise among the more accessible entry points in the Liberty HealthShare program portfolio for applicants in that age range. Telehealth access is available at the same $55 per-visit fee structure as Liberty Assist.

Liberty Freedom

Liberty Freedom is for those under the age of 35 and carries no Co-Share. Telehealth through DialCare is not available to Liberty Freedom members, nor to members residing in Vermont. For members who rarely use telehealth and primarily want sharing for an eligible catastrophic medical event, Liberty Freedom provides a no-Co-Share option at the lower end of the contribution range, at $89 a month for an individual.

What All Programs Share

Across all six programs, Liberty HealthShare members retain the freedom to choose any healthcare provider. The ministry encourages use of providers who participate in the PHCS network, one of the largest in the country, to help manage medical expenses, but no program restricts members to a defined network.

Annual preventive wellness visits, and related lab work ordered in the absence of medical symptoms or prior diagnoses, are eligible for sharing up to $500 after the first two months of membership and are not subject to the AUA. Preventive screenings including pap smears, PSA tests, Cologuard, and screening mammograms for women 40 and older are eligible for sharing under specific frequency guidelines, also without application to the AUA.

Enrollment in Liberty HealthShare is open year-round. There are no special qualifying events required, and members are not locked into annual commitments. A member who joins Liberty Rise today and later determines Liberty Unite better fits their needs can switch accordingly on their annual renewal date.

Supplemental Options Worth Noting

Members across all six programs can add Liberty Dental, the ministry's supplemental dental sharing program, with suggested monthly share amounts beginning at $35. Members can use any licensed dentist — no network restrictions apply. Liberty Vision is also available as a supplemental add-on for individuals, couples, and families, starting at $7 per month for individuals.

How to Choose

"There's no one-size-fits-all when it comes to healthcare," Morrow noted. "We understand that."

Members who expect frequent medical needs and want minimal financial exposure at the point of service may prefer programs with lower AUAs and no Co-Share, even if those come with higher monthly share amounts. Members who are generally healthy and primarily want a community-supported option for larger or unexpected eligible medical expenses may find that a higher AUA with a lower monthly share amount suits their circumstances.

For a full side-by-side program comparison based on age and family size, Liberty HealthShare's website walks through each option in detail. Members and prospective members can also reach the ministry directly at 855-585-4237.


by Sponsored Content via Digital Information World

AI overly affirms users asking for personal advice

By Ula Chrobak, Stanford University School of Engineering

When it comes to personal matters, AI systems might tell you what you want to hear, but perhaps not what you need to hear.

In a new study published in Science, Stanford computer scientists showed that artificial intelligence large language models are overly agreeable, or sycophantic, when users solicit advice on interpersonal dilemmas. Even when users described harmful or illegal behavior, the models often affirmed their choices. “By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” said Myra Cheng, the study’s lead author and a computer science PhD candidate. “I worry that people will lose the skills to deal with difficult social situations.”

The findings raise concerns for the millions of people discussing their personal conflicts with AI. Almost a third of U.S. teens report using AI for “serious conversations” instead of reaching out to other people.

Agreeable AIs

After learning that undergraduates were using AI to draft breakup texts and resolve other relationship issues, Cheng decided to investigate. Previous research had found AI can be excessively agreeable when presented with fact-based questions, but there was little knowledge on how large language models judge social dilemmas.

Cheng and her team started by measuring how pervasive sycophancy was among AIs. They evaluated 11 large language models, including ChatGPT, Claude, Gemini, and DeepSeek. The researchers queried the models with established datasets of interpersonal advice. They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong. A third set of statements presented to the models included thousands of harmful actions, including deceitful and illegal conduct.

Compared to human responses, all of the AIs affirmed the user’s position more frequently. In the general advice and Reddit-based prompts, the models on average endorsed the user 49% more often than humans. Even when responding to the harmful prompts, the models endorsed the problematic behavior 47% of the time.
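The comparison described above boils down to measuring an endorsement rate per responder and expressing the model's rate relative to the human baseline. The sketch below uses invented labels purely to show the arithmetic; it does not reproduce the study's prompts, models, or data.

```python
# Sketch of an endorsement-rate comparison (data is hypothetical).
# Each boolean records whether a response endorsed the user's position
# on the same set of interpersonal dilemmas.

def endorsement_rate(endorsed_flags: list[bool]) -> float:
    """Fraction of responses that affirmed the user's position."""
    return sum(endorsed_flags) / len(endorsed_flags)

# Invented labels for eight dilemmas, judged by humans and by a model:
human_endorsed = [True, False, False, True, False, False, False, True]
model_endorsed = [True, True, False, True, True, False, True, True]

h = endorsement_rate(human_endorsed)   # human baseline rate
m = endorsement_rate(model_endorsed)   # model endorsement rate
relative_increase = (m - h) / h        # how much more often the model endorses

print(f"model endorses {relative_increase:.0%} more often than humans")
```

A figure like the study's "49% more often" is this `relative_increase` quantity, averaged over prompt sets and models.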

In the next stage of the study, the researchers probed how people respond to sycophantic AI. They recruited more than 2,400 participants to chat with both sycophantic and non-sycophantic AIs. Some of the participants conversed with the models about pre-written personal dilemmas based on the Reddit community posts where the crowd universally deemed the user to be in the wrong, while other participants recalled their own interpersonal conflicts. After, they answered questions about how the conversation went and how it affected their perception of the interpersonal problem.

Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found. When discussing their conflicts with the sycophant, they also grew more convinced they were in the right and reported they were less likely to apologize or make amends with the other party in the scenario.

“Users are aware that models behave in sycophantic and flattering ways,” said Dan Jurafsky, the study’s senior author and a professor of linguistics in the School of Humanities and Sciences and of computer science in the School of Engineering. “But what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”

Also concerning, the participants rated both types of AI, sycophantic and non-sycophantic, as equally objective. That suggests users could not distinguish when an AI was acting overly agreeable.

One reason users may not notice sycophancy is that the AIs rarely wrote that the user was “right” but tended to couch their response in seemingly neutral and academic language. In one scenario presented to the AIs, for example, the user asked if they were in the wrong for pretending to their girlfriend that they were unemployed for two years. The model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”

Sycophancy safety risks

Cheng worries that the sycophantic advice will worsen people’s social skills and ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people.” But, she added, this friction can be productive for healthy relationships.

“Sycophancy is a safety issue, and like other safety issues, it needs regulation and oversight,” added Jurafsky, who is also the Jackson Eli Reynolds Professor of Humanities. “We need stricter standards to avoid morally unsafe models from proliferating.”

The team is now exploring ways to tone down this tendency. They have found that they can modify models to decrease sycophancy. Surprisingly, even telling a model to start its output with the words “wait a minute” primes it to be more critical.

For the time being, Cheng advises caution to people seeking advice from AI. “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”


For more information

Other Stanford co-authors included postdoctoral scholar Cinoo Lee and undergraduates Sunny Yu and Dyllan Han. Pranav Khadpe of Carnegie Mellon University is also a co-author.

The research was funded by the National Science Foundation.

Note: This post was originally published on Stanford Report and republished on Digital Information World with permission.

Reviewed by Irfan Ahmad.

Image: Saradasish Pradhan - Unsplash

by External Contributor via Digital Information World

Tuesday, March 31, 2026

Workplace collaboration: Employees reveal what they want leaders to change

By Ellie Stewart

Building a collaborative culture is the ultimate business goal, but it can be a slog in practice. It doesn't take much—just one broken link in the chain—to throw a whole project off the rails.

To see how teams are collaborating and staying productive right now, Adobe for Business surveyed over 1,000 full-time US workers. They wanted to see which tools and processes are actually helping and which are just adding noise.

The cost of collaboration barriers

Some collaboration struggles last just minutes and are resolved in no time; others take days or even weeks to untangle before a shared understanding is reached. Quantifying the cost of these breakdowns in lost time, the Adobe for Business study found that, on average, workers lose 97 hours a year to communication struggles and 81 hours a year to unproductive meetings.

The 97 hours a year lost to communication breakdowns equates to nearly two hours a week, so what can businesses do to avoid these breakdowns and help employees reclaim valuable time?

The workers surveyed estimated that if ineffective collaboration processes were removed, they could reclaim 178 hours a year, nearly three and a half hours a week, to put toward strategic, high-impact work. For anyone in a leadership role, clearing out these hurdles isn't just about efficiency; it's about survival. In fact, 90% of those surveyed believe that with the blockers out of the way, they could complete a 40-hour week's work in four days. That's a massive chunk of time currently being thrown away.
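The weekly figures quoted in this section follow from dividing annual hours by roughly 52 working weeks (a simplification that ignores vacation time); a quick check:

```python
# Convert the study's annual lost-hours figures into weekly equivalents.

def hours_per_week(annual_hours: float, weeks: int = 52) -> float:
    """Spread an annual total evenly over the weeks of a year."""
    return annual_hours / weeks

print(round(hours_per_week(97), 1))   # communication struggles -> ~1.9 h/week
print(round(hours_per_week(178), 1))  # reclaimable time        -> ~3.4 h/week
```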

The study also considered the time workers in different industries think they could save, finding that employees in the finance industry are particularly in support of this workweek change. Nearly all (94%) finance employees surveyed reported that they could switch to a four-day workweek if collaboration were improved.

Inefficiency causes across roles, industries and location

The why behind collaboration inefficiencies varies by job role and industry, offering business leaders valuable insight into which changes would best suit their teams. The data shows that "death by meeting" hits the C-suite the hardest. Senior staff are losing roughly 91 hours a year to meetings that don't go anywhere; that's nearly two hours gone every single week. It's better for entry-level staff, but not by much: they're still losing 65 hours a year. The size of the business matters here, too: big enterprise teams waste 69% more time than people at smaller shops.

States losing the most time to unproductive meetings:

  • New York - 90 hours a year
  • New Jersey - 81 hours a year
  • California - 79 hours a year
  • Florida - 76 hours a year

The potential benefits of addressing collaboration challenges are greatest in industries where significant amounts of valuable time are being drained. Workers in the manufacturing industry reported they could reclaim the most time lost to collaboration blockers, up to 214 hours a year, which is over four hours a week.


Industries losing the most time to collaboration friction:
  • Manufacturing - 214 hours a year
  • Sales - 208 hours a year
  • Finance - 200 hours a year
  • Marketing - 186 hours a year
  • Tech - 179 hours a year

If effective collaboration methods are put in place, these teams stand to gain back even more than the national average of 178 hours lost a year.

Here's why projects fail and goals are misaligned

It’s not uncommon for some projects to veer off course, but it’s important for teams to examine why this happens in order to reclaim time lost to inefficient collaboration. The employee survey from Adobe for Business indicates that communication breakdowns are the key contributor to blocking effective collaboration, causing nearly half (46%) of all project delays.

It’s no surprise people are exhausted when more than a third of their projects (36%) start without any real consensus from the stakeholders. Many projects get stuck before they even have a chance to start, leaving the rest of the team scrambling to clean up a mess they didn't make in the first place.

Without team alignment from the outset, the consequences for projects are felt immediately. Here are five key ‘costs’ of disconnected teams, according to the employees surveyed:

  • Wasted time and effort - 76%
  • Missed deadlines - 58%
  • Decreased work quality - 57%
  • Difficulty tracking progress - 47%
  • Budget overruns - 23%

        One of the most substantial ways in which team misalignment in project goals can impact employees is by causing a significant rework. Roughly a third (33%) surveyed identified that they have had to rework projects due to misalignment.

        Employees also noted the key reasons why they feel projects are thrown off course:

        • Unclear leadership directives - 40%
        • Lack of standardized processes across teams - 34%
        • Frequent changes in project priorities - 34%
        • Insufficient visibility into other teams’ progress - 28%
        • Too many disconnected tools - 28%

              In addition to the above impacts felt by employees, they also cite a lack of regular cross-functional check-ins (27%), an absence of a single source of truth for project information (23%), and a lack of training on processes (17%) as blockers to projects staying on course.

              The psychological toll of collaboration blockers

              Aside from the impact of ineffective collaboration on the project at hand, there’s a significant impact on the workforce from a psychological perspective. More than half (56%) of US employees surveyed said navigating collaboration hurdles caused mental fatigue.

Work environment also shaped the mental toll employees reported. Over half (55%) of both remote and on-site workers cited poor collaboration as a cause of stress. Without supportive workflows in place, that stress carries over into retention: on-site employees are 47% more likely to seek new job opportunities due to a lack of effective workflow management and team collaboration.

What employees want to dismantle ineffective collaboration

Rather than adding more tech solutions to paper over collaboration inefficiencies, companies need strategic intervention. Employees in the Adobe for Business study point to the enablers they see as most valuable in unlocking smoother ways of working with their teams.


Setting up clear and consistent communication channels (42%) was the most requested improvement for tackling ineffective collaboration. This was followed by explicitly defined roles and responsibilities (38%) within the team, so that everyone is aligned on expectations.

Demand is also high for a platform that acts as a ‘single source of truth’ for a project: over a fifth of all employees deemed it essential. That demand is even higher among remote workers, who are 28% more likely than on-site workers to request a ‘single source of truth’ as a solution for collaboration breakdowns. Employees want this unified approach in order to avoid siloed team structures, which over one in five identified as a major barrier to collaboration.

Understanding collaboration enablers also means considering the varying support different demographics within a team require. Baby Boomers highly value timely decision-making and clear next steps (41%), whereas Gen X and Millennials prioritize clear communication channels (42%) to collaborate effectively. Gen Z say a shared understanding of project goals (40%) would be most valuable to them.

To close the collaboration gap, teams want workflow management that centralizes project insights into a ‘single source of truth’, automates low-impact admin tasks, and formalizes processes to provide the necessary structure and real-time visibility into performance.

Companies can’t afford to just sit back and hope their teams figure out how to work together. They have to be proactive about fixing these gaps, not just for the sake of the bottom line, but to keep high performers from leaving. Once everyone is on the same page, the busywork falls away and the real work finally starts.

Reviewed by Irfan Ahmad.



by Guest Contributor via Digital Information World

Monday, March 30, 2026

Most Parents Keep Track of Their Children’s Online Browsing

How Parents Track Their Children

With the ever-evolving digital landscape, children are on more devices than ever. At school, while socializing, and at home, children now spend almost every part of their day interacting with some form of technology. This creates new challenges for parents trying to keep up with their children across various digital devices and platforms. How often, and to what extent, are parents able to do so?

A 2026 All About Cookies survey found that 96% of parents keep tabs on their child’s devices in some way, as evidenced by the graphic below.


With the recent shift toward more digital schooling, it’s no major surprise that school performance is the #1 thing parents keep track of. Screen time, banking/financial activity, social media accounts, and internet browsing history rounded out the top five things parents monitor most closely outside of academics.

A Majority of Parents Have Access to Their Child’s Devices

With almost every parent surveyed keeping track of their child in some way, many also have access to their kid’s passwords on various devices.


Over 85% of parents claimed to have access to their child’s computer/tablet (88%) as well as their cellphone (86%).

An interesting statistic to note: while 79% of parents say they keep track of their children’s social media accounts, only 62% have access to the passwords. The 17-point gap may come from parents who feel that following their child’s social media activity is an effective enough measure.

Digital Tools Parents Use to Keep Track of Their Children Offline

While parents want to keep close track of their children in the digital realm, many also rely on apps and devices to keep tabs on their kids when they’re not actively scrolling.

When it comes to parental tracking, 86% of parents use some form of tool to monitor their child’s physical location.


A majority (60%) of parents who track their children do so using their child’s cell phone’s location-sharing feature. The second most popular method is a family monitoring app (such as Life360 or Bark), used by 43% of parents.

Beyond those top two, other methods parents use to track their children include a dedicated tracking device, a smartwatch with built-in tracking, or a parental control app.

Over 40% of Parents Have Caught Their Child Misbehaving With Tracking Tools

While many parents utilize the tracking tools listed above, how many find them effective?

According to those surveyed, 41% of parents have caught their child doing something they weren’t supposed to be doing thanks to some form of digital tracking.


While most of these catches involved online behavior, a small percentage (9%) of parents were able to use digital tracking tools to catch their child misbehaving in the real world.

All About Cookies did note that 89% of parents disclosed to their children that they are being tracked.

Some Parents Have Concerns About Tracking Their Children

While a very large majority of parents track their children in some way, some have concerns about using specific apps to do so.

According to the survey, 62% of parents have some level of concern about using tracking technology.


The results show that parents’ concerns center on tracking their adolescent over time (31%), possible data breaches that could expose their own or their child’s data (26%), and jeopardizing the relationship they have with their child (20%).

Final Thoughts

These results show that while parents do keep track of their children, and in some instances have used digital trackers to catch bad behavior, they also have some level of concern over how often and exactly when to track their child.

Parents will need to navigate this difficult balance: keeping track of their child while keeping them safe in digital and personal worlds that are constantly changing.

About the Author: Derick Migliacci is a Digital PR Strategist for AllAboutCookies. He brings over three years of experience in the PR world as well as a passion for digital trends, cybersecurity, and technology.
Reviewed by Irfan Ahmad.

by Guest Contributor via Digital Information World