Wednesday, March 4, 2026

New Survey Debunks Digital Detox Myth: 60% Never Switch Off, 45% Can’t Last 12 Hours Offline

A new survey of 2,000 Britons aged 16+ shows that digital detoxing is talked about more than it is practised in the UK, with six out of 10 Brits claiming never to have taken a digital detox.

The report into attitudes towards the internet also revealed that ‘disconnecting’ doesn’t fit how modern life works, because so many people in the UK rely on an internet connection. In fact, having internet access is largely viewed positively, with the top benefits, in priority order, listed as:
  • It’s provided me with more entertainment (60%)
  • Being online helps me to reconnect with friends and family (54%)
  • The internet has supported education and upskilling (35%)
  • Online digital connection has improved my access to healthcare and wellbeing resources (31%)
  • Reliable at-home internet access allowed me to work remotely or flexibly (31%)
Further data supports how much the UK loves and relies on a reliable internet connection: Britain can’t, and isn’t planning to, ‘switch off’. Nearly half of respondents (45%) said they would struggle to go without internet access for more than 12 hours, 30% said they couldn’t live without the internet, and 34% claimed they wouldn’t want to do a digital detox.

On average, Brits estimated that four days offline would be about their limit. A fifth said that one to two days without the internet is as much as they could manage.

Britons also feel that being digitally dependent doesn’t make them miserable. A third believe they have a healthy balance of being offline and online and, overall, 31% said having access to the internet has made everyday tasks easier.

A quarter of people do try to limit their time online and 17% only go online when they really need to.

There are generational nuances in attitudes to living in an always-on culture.

As perhaps expected, younger generations live more of their lives digitally than older generations. Millennials were the most likely to say they spend more time online than offline: 63% of those aged between 30 and 45 said so. Gen Z, aged between 14 and 29 years old, weren’t far behind, with 59% saying they are online more than offline. Only 33% of Baby Boomers, aged 62+, say they spend more time online than offline.

Gen Z also admitted to wasting a lot of time online, especially scrolling through social media apps: 32% claimed this to be true. In comparison, only 16% of those aged between 62 and 80 described their time online as wasted.

While the national average for having taken an intentional break from being online, or a digital detox, was 37%, among Gen Z this rose to 55%. Baby Boomers were the cohort least worried about digital addiction or their online habits: only one in five in this age group have taken a digital detox.

UK-based Internet Service Provider Zen Internet commissioned the survey. Stephen Warburton, who is Zen’s Consumer Director and has been with the business for more than 20 years, said: “There’s a lot of talk about digital detoxing, and taking time to switch off can be important for wellbeing. But for most people the internet now plays a central role in everyday life. The findings show that while many recognise the need for balance, switching off entirely isn’t always practical in a world that’s increasingly built around being online. As reliance deepens, expectations around reliability and resilience are rising too.”

The timing of the results from Zen’s survey coincided with Global Unplugging Day on 6 March, an annual event that encourages people to unplug from digital devices for 24 hours and reconnect with the world around them. The initiative is led by a not-for-profit organisation that this year is also running a research study to better understand what happens when people purposefully gather offline, in person and phone-free. The campaign will look at the impact that being more connected in person versus online has on feelings of belonging, loneliness, social support and overall life satisfaction.

According to Zen’s survey, conducted with Censuswide, the majority of Brits don’t feel overwhelmed by being constantly connected online. Just under a third say they have a healthy balance with their internet use, and only one in ten report often feeling overwhelmed or burnt out from being online.

Among respondents who do have concerns about their online lifestyle and internet usage, 10% said they feel more disconnected despite being constantly connected. One in ten also feel it hampers their concentration.

Overall, this research captures current attitudes to ‘switching off’ and unplugging from digital devices. Britain’s relationship with the internet appears to be a largely positive one: less about detoxing completely and more about finding a balance between life online and in person.

Image: Polina Tankilevitch / Pexels

Reviewed by Asim BN.

Read next:

• Digital detox: how to switch off without paying the price – new research

• From Anthropic to Iran: Who sets the limits on AI’s use in war and surveillance?

• Survey: 45% Report Health App Burnout as Average User Juggles Six Apps
by Guest Contributor via Digital Information World

From Anthropic to Iran: Who sets the limits on AI’s use in war and surveillance?

Emmanuelle Vaast, McGill University

Image: Solen Feyissa / Unsplash. Edited by DIW

Anthropic, a leading AI company, recently refused to sign a Pentagon contract that would allow the United States military “unrestricted access” to its technology for “all lawful purposes.” To sign, Anthropic CEO Dario Amodei required two clear exceptions: no mass surveillance of Americans and no fully autonomous weapons without human oversight.

The very next day, the U.S. and Israel launched a large-scale offensive against Iran.

This leaves many wondering: how different would a war with fully autonomous weapons look? How significant was Amodei’s decision to declare fully autonomous weapons and mass surveillance AI “red lines” his company would not cross? What do these red lines mean for other nations?

The decision cost Anthropic immensely. U.S. President Donald Trump ordered all American agencies to stop using Claude, Anthropic’s family of advanced large language models (LLMs) and conversational chatbots. Pete Hegseth, U.S. defence secretary, designated Anthropic as a “supply chain risk,” which could affect other contract possibilities for the company. And rival company OpenAI swiftly struck a deal with the Pentagon instead.

The risks of fully autonomous weapons

AI chatbots are typically not weapons on their own, but they can become part of weapons systems. They do not fire missiles or control drones, but they can be plugged into larger military systems.

They can quickly summarize intelligence, generate target shortlists, rank high-priority threats and recommend strikes. A key risk is a pipeline that runs from sensor data to AI interpretation, target selection and weapon activation with minimal or no human control, or even awareness.

Fully autonomous weapons are military platforms that, once activated, independently conduct military operations without human intervention. They rely on sensors such as cameras and radar, and on AI algorithms, to analyze the environment and to search for, select and engage targets.

Advanced helicopters, for instance, already operate with no human intervention. With fully autonomous weapons, human control and oversight disappear and AI makes final attack and battlefield decisions.

This is concerning, given recent research in which advanced AI models opted to use nuclear weapons in simulated war games in 95 per cent of cases.

The risks of mass surveillance

Frontier AI models can rapidly summarize huge data sets and surface patterns, flagging suspicious people and activity on the basis of even weak associations. In his statement on Anthropic’s discussions with the Department of War, Amodei argued that “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”

They can analyze records, communications and metadata to scan across populations. They can produce briefings and lists that automatically flag who gets questioned, denied entry into a country or refused a job. These systems create risks to privacy because they can combine data from multiple sources, such as social media accounts, with cameras and facial recognition to track people in real time.

AI models can also make mistakes. Even a small erroneous association can scale up dangerously if the system is run over millions of people.

AI models are also opaque: how they analyze data and reach their conclusions cannot be fully comprehended, which adds to the difficulty of challenging the output.

‘All lawful purposes’

The label “all lawful purposes” sounds like a safety limit. Yet, this language means that the government can use AI for all purposes that it deems legal, with few limits in the contract.

This matters because legality is a moving target: laws can change, interpretations can shift, and legal frameworks are often ill-equipped to deal in real time with fast-changing innovations.

This is what made Anthropic, a company that was founded by former OpenAI employees with an explicit focus on AI safety and ethics, argue that AI-enabled mass surveillance was a novel risk and that lawful purposes could not provide stable guardrails.

Anthropic has famously developed an internal lab to understand how Claude works, interprets queries and makes autonomous decisions. Given the opacity of LLMs as well as the speed with which their capacities develop, such efforts matter.

Project Maven with higher stakes?

In some ways, this story is familiar. Technology companies have long been at the forefront of innovation, with great promises of progress but also risks of misuse and negative consequences. The closest historical comparison is Google’s Project Maven in 2018.

Google had a contract with the Pentagon for the company to help analyze drone surveillance footage. Four thousand Google employees protested the project, arguing that surveillance should not be part of the company’s mission. Google announced it would not renew Maven and later issued AI principles that included commitments around weapons and surveillance.

The situation became a landmark case in the power of employee activism and public pressure.

The Project Maven example, however, also reminds us that company ethics and AI safety are fluctuating matters. In early 2025, Google discreetly dropped its pledge not to use AI for weapons and surveillance in an attempt to gain new lucrative defence contracts.

Anthropic’s current situation is in some respects similar to Google’s with Project Maven: it shows a company and its leaders trying to place limits on military uses of AI. It illustrates the tensions that emerge when espoused corporate values collide with government and national security demands.

The Anthropic case is also distinct because generative AI in 2026 is much more powerful than it was just a few years ago. Project Maven was only about analyzing drone footage. Today’s models can be used for many tasks, so the spillover risk is larger.

LLMs like Claude can self-improve by learning from user corrections and refining actions through iterative feedback loops. What an unrestricted Claude and its client, the Pentagon, could have done is therefore worrisome.

Who sets the limits?

These events are neither about Anthropic being uniquely principled nor about the Pentagon being uniquely demanding. They are about a critical issue that will keep coming back as AI becomes more powerful: who sets the limits regarding AI use when national security is involved?

If “all lawful purposes” becomes the default, the guardrails will depend on politics and legal interpretation. For Canada and other nations, the safeguards matter. Ethics cannot be left to contract negotiations and corporate conscience.

These events illustrate the complexities of engaging in AI ethics in practice. AI ethics principles and declarations are important and abound. At the same time, in practice, AI ethics are set through contracts, procurement rules, various parties’ actual behaviour and oversight.

Canada’s defence and public sectors are building AI capacity, and Canada works closely with U.S. defence and intelligence. This means that procurement language and standards can travel. If “all lawful purposes” becomes the standard language in the U.S. national security market, this could put pressure on Canada and other nations to adopt similar terms.

The reassuring news is that Canada has governance tools in place it can strengthen and extend. The Directive on Automated Decision-Making is designed to ensure that systems are transparent, accountable and fair. It requires impact assessment and public reporting.

The Algorithmic Impact Assessment is a mandatory risk-assessment tool tied to the directive.

But Canadians should be mindful of ongoing developments: they should check that procurement standards name prohibited uses, and call for audits and independent oversight, so that safeguards do not depend only on particular governments and companies at the top.

Emmanuelle Vaast, Professor of Information Systems, McGill University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reviewed by Ayaz Khan.

Read next: 

• Chatbots overemphasize sociodemographic stereotypes, researchers report

• Survey: 45% Report Health App Burnout as Average User Juggles Six Apps


by External Contributor via Digital Information World

Tuesday, March 3, 2026

Survey: 45% Report Health App Burnout as Average User Juggles Six Apps

Nearly half of Americans are feeling overwhelmed by the number of digital health tools they have, and many report health app burnout, according to new research.

A survey of 2,000 insured adults aged 18-65 found that the average person uses six different health-related apps on a regular basis, with more than one in five (22%) having upward of 10.

Image: HUUM / Unsplash

For some, that includes daily activity trackers (57%), nutrition apps (39%) and sleep tracking tools (37%), while others utilize health apps for ongoing care needs like weight management support (34%) and virtual care to connect with doctors (30%).

While nearly one quarter (23%) use apps to manage a specific chronic health condition, more than one in 10 (14%) respondents admit they use these health tools to try popular health trends they’ve seen online.

On average, respondents spend over an hour every week manually logging their data, and most (58%) check their health apps at least once a day. In fact, more than one in ten (11%) admit to checking their app data hourly.

As a result, eight in 10 Americans (79%) said their phone now knows their health better than they do.

Even though the data shows that Americans love tracking their health via apps, the survey conducted by Talker Research for MD Live found there are certain drawbacks.

More than half (53%) feel there are too many health apps to keep track of, and 45% say they’ve felt “burnt out” on a weekly basis just from trying to stay on top of inputting information into their apps. More than one in ten (15%) feel exhausted trying to keep up with alerts.

A third of those surveyed have downloaded apps that they didn’t end up using (32%), so it’s no surprise that 24% have deleted at least four of them over the past two years.

Respondents shared that their disinterest grew when these apps required a subscription (27%) or displayed too many ads or tried to push products (23%). Nearly one in five (17%) have deleted an app because they say they have received conflicting or confusing information.

On top of that, 40% admit they don’t know how to best use these apps to their advantage and 41% note that they often feel like they’re juggling too many.

As a result, one quarter say they have forgotten to follow through on a health goal or appointment because they were managing too many tools.

“People aren’t overwhelmed by technology, they’re overwhelmed by the number of choices,” said Dr. Maggie Williams, medical director for Primary Care at MD Live by Evernorth. “Most consumers want to engage in their health and find digital tools useful. They just want help understanding which tools are right for them and how to get the most out of them.”

Even so, many Americans aren’t giving up on digital health. Forty-one percent plan to use more health tools and apps in 2026, especially for fitness or activity tracking (54%), weight loss or management support (50%) and nutrition tracking (49%).

Despite the effort that goes into maintaining these apps, the payoff is worth it for many.

Nine in 10 (91%) said health tools have improved their understanding of how their body works, and respondents said the tools have inspired them to feel motivated (38%), in control (36%) and confident about the decisions they make (33%).

Respondents say they gain value from learning more about themselves, such as identifying personal patterns (34%) and better understanding their body’s needs (31%). For some, it also helps them stay motivated (37%) and improves their mindfulness (28%).

Even with these benefits, consumers still need help wading through it all. Nearly two-thirds of those surveyed want more help from a healthcare provider in deciding what health tools and apps are right for them (62%), and 54% want more communication from their health plan about the tools available to them.

Respondents dished on what would make them use health tools/apps more efficiently and reported that the top priority would be all their apps and tools living together in one place (28%), followed closely by all their apps being synced to share data (27%).

Those polled were also asked what they’d include in their idea of the “perfect health app,” and a sleep tracker scored the highest (37%). That was followed by an activity tracker (31%), a heart rate monitor (31%), step counter (30%), blood pressure monitor (30%) and stress tracker (30%).

“It’s hard to know which tools are truly right for you,” said Dr. Williams. “Your doctor can help you prioritize your needs and narrow the choices, and some health plans now offer recommended app lists tailored to different health needs. Both can help make the digital health world much easier to navigate.”

Reviewed by Irfan Ahmad.

This post was originally published by Talker Research and is republished here in accordance with their republishing guidelines.

Read next:

• Research Identifies Blind Spots in AI Medical Triage

• Chatbots overemphasize sociodemographic stereotypes, researchers report
by External Contributor via Digital Information World

Monday, March 2, 2026

Chatbots overemphasize sociodemographic stereotypes, researchers report

By Mary Fetzer

People interact with artificial intelligence (AI)-powered chatbots for information, entertainment, technical help, learning, emotional support and more, and these chatbots can be trained to take on certain demographic attributes like age and race. But how realistically do these AI personas mimic real people? For some demographics, not well, according to researchers at Penn State's College of Information Sciences and Technology (IST).

The researchers found that chatbots relied on superficial stereotypes and exaggerated cultural markers that diminish the authentic experiences of the humans they’re meant to represent. The team presented their findings at the 40th Annual Conference of the Association for the Advancement of Artificial Intelligence (AAAI), which was held Jan. 20-27 in Singapore. The presentation was part of a special track on AI alignment — the idea that AI systems should best represent the values humans think are important, ethical and fair.

The research was led by Shomir Wilson, an associate professor in the College of IST’s Department of Human-Centered Computing and Social Informatics and director of the Human Language Technologies Lab at Penn State, and Sarah Rajtmajer, an associate professor in the College of IST’s Department of Informatics and Intelligent Systems and a research associate in the Rock Ethics Institute.

“We conducted this research under the hypothesis that we’ll increasingly encounter more persona-like chatbots as AI becomes more integrated into our lives,” Wilson said. “Users may be more willing to interact with chatbots that represent a particular background, but we found that current bots don’t represent people from some backgrounds well.”

Large language models (LLMs) are a type of AI used to construct chatbots. The researchers told LLMs — including GPT-4o, Gemini 1.5 Pro and DeepSeek v2.5 — to take on personas based on factors such as age, gender, race, occupation, nationality and relationship status. They asked more than 1,500 AI-generated personas about their lives — such as “Please describe yourself. What are your most defining traits or qualities? What skills do you excel at?” — and compared their responses to those of real people with similar sociodemographic characteristics. They found that the LLMs produced stereotypical written language often used to describe minoritized groups — and did so more than their human counterparts.
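To make the setup concrete, here is a minimal sketch of persona elicitation in Python, assuming the openai client library. The persona attributes and system prompt wording are illustrative stand-ins, not the exact prompts from the AAAI paper; only the self-description question is quoted from the study.

```python
# Illustrative sketch only: the persona attributes and system prompt
# below are assumptions, not the study's actual experimental setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical sociodemographic attributes of the kind the study varied
persona = {
    "age": 50,
    "gender": "woman",
    "race": "African American",
    "occupation": "teacher",
    "nationality": "American",
    "relationship status": "married",
}

system_prompt = "Adopt the following persona and answer in the first person: " + ", ".join(
    f"{k}: {v}" for k, v in persona.items()
)

# One of the self-description questions quoted in the article
question = (
    "Please describe yourself. What are your most defining traits or "
    "qualities? What skills do you excel at?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # one of the models the researchers audited
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

In the study, responses gathered this way were compared against answers from real people with matching characteristics; the stereotyping emerged in that comparison.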

Image: Saradasish Pradhan / Unsplash

“The study showed that while chatbots often appear human-like, they overemphasize racial markers and flatten complex identities into stereotypes,” Wilson said. “The AI-generated personas rely on patterns that signal specific cultural assumptions rather than reflecting authentic lived experiences.”

For example, when questions were asked of a chatbot trained to represent a 50-year-old African American woman, the bot talked about gospel music, tough love, social justice, natural hair care and other stereotypical topics that differ from what real people of that demographic would say. While a person might touch on one or two such topics, human responses to the same questions generally don’t include all of them. Instead, the 141 real people surveyed by the researchers talked about more individualized things like work, parenting, volunteering and their health.

The chatbots appeared to be providing answers that were complex and well-structured, but in reality, they were using culturally coded language to oversimplify the experiences of the minority communities they were trained to represent, Wilson said.

The researchers observed four types of representational harm:

  • Stereotyping — relying on generalizations and conventional tropes regarding specific racial or cultural groups
  • Exoticism — positioning minoritized identities as foreign, other or exotic to enhance the narrative
  • Erasure — flattening or omitting complex histories and individualities that define real-world identities
  • Benevolent bias — using language that bypasses bias filters by being polite or positive

“LLMs are increasingly used in high-stakes settings — for example, as chatbot companions or as simulated human subjects in scientific research,” Rajtmajer said. “In this study, we show that current LLMs magnify harmful stereotypes in a racist way, which should give pause to developers seeking to integrate personas in real-world applications. These tendencies shouldn’t be buried in the new technologies being developed and released into the world.”

According to the researchers, this work diagnosed a problem that needs to be treated during the development stage.

“Our study highlights how AI-generated content may seem human but can mask deep representational bias,” Wilson said. “What’s needed are design guidelines and new evaluation metrics to ensure ethical and community-centered persona generation.”

This includes a transition from simple word-level detection to more sophisticated auditing that can assess the context and narrative depth of identity representation, Wilson explained. It also involves engagement between the developers creating these personas and the communities they intend to represent.

“A community-centered validation protocol can help ensure that AI-generated personas resonate with actual lived experiences,” Wilson said.

Jiayi Li and Yingfan Zhou, graduate students pursuing doctoral degrees in informatics from the College of IST, also contributed to this research. Pranav Narayanan Venkit, who earned his doctorate in informatics from IST in 2025, was first author on the AAAI paper, titled, “A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas.”

The U.S. National Science Foundation supported this work.

Note: This post was originally published by The Pennsylvania State University and is republished with permission on DIW.

Reviewed by Irfan Ahmad.

Read next:

• Research Identifies Blind Spots in AI Medical Triage

• People are overconfident about spotting AI faces, study finds


by External Contributor via Digital Information World

Ensuring Smartphones Have Not Been Tampered With

With increasing cyberattacks and government data breaches, one of the most important devices to keep secure is the one in everyone’s pocket: smartphones. The problem is that it is difficult to check that a smartphone has not been tampered with without the risk of unintentionally damaging the device itself.

In AIP Advances, by AIP Publishing, researchers from the University of Colorado Boulder and the National Institute of Standards and Technology developed a way to remotely fingerprint and identify a cellular device. Their method can help ensure a phone has not been altered during its manufacturing process, reducing the risk of espionage.

When smartphones communicate with a cell tower, they emit a set of electromagnetic waves. Using specialized SIM cards and cellular radio standards-compliant base station emulator equipment, the researchers commanded a set of “trusted” cell phones — devices they know have not been modified — to transmit exactly the same sets of signals. This allowed them to build a database of what these signals look like for different phone models; the entries serve as fingerprints of each model.

“Think of it like giving every phone the exact same song to sing. Even though they are singing the same notes, every phone model has tiny, microscopic differences in its internal hardware,” said author Améya Ramadurgakar. “Our system is sensitive enough to hear those subtle ‘vocal’ differences.”

By comparing the signals emitted by an unknown device to the database, the researchers can figure out if the device has been altered — that is, if its signals do not match up with any of the trusted fingerprints.
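In code terms, that matching step boils down to a nearest-neighbour comparison. Below is a minimal sketch in Python, assuming the emissions have already been reduced to numeric feature vectors; the vectors, distance metric and threshold are invented for illustration and are not the researchers’ actual signal processing.

```python
# Minimal sketch of the fingerprint-matching idea: compare a measured
# emission profile against a database of trusted per-model fingerprints.
# All numbers here are hypothetical placeholders.
import numpy as np

# Hypothetical database: model name -> averaged "trusted" feature vector
trusted_fingerprints = {
    "model_a": np.array([0.91, 0.12, 0.47, 0.33]),
    "model_b": np.array([0.15, 0.88, 0.52, 0.71]),
}

THRESHOLD = 0.05  # maximum distance still considered a match (assumed)

def check_device(measured: np.ndarray) -> str:
    """Return the best-matching model, or flag the device as suspect."""
    best_model, best_dist = None, float("inf")
    for model, fingerprint in trusted_fingerprints.items():
        dist = np.linalg.norm(measured - fingerprint)
        if dist < best_dist:
            best_model, best_dist = model, dist
    if best_dist <= THRESHOLD:
        return f"matches trusted fingerprint for {best_model}"
    return "no trusted match: device may have been altered"

print(check_device(np.array([0.90, 0.13, 0.46, 0.34])))  # close to model_a
print(check_device(np.array([0.50, 0.50, 0.50, 0.50])))  # matches nothing
```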

They tested this process on multiple commercially available, current-generation smartphones from the major manufacturers currently leading the domestic market, achieving over 95% accuracy. These results were both repeatable and stable over time. Because their method focuses on the fundamental electromagnetic behavior of the hardware, it is not limited to current 4G and 5G mobile networks and will be extendable to future generations of cellular technologies.

Ramadurgakar said this method lays the groundwork for a national metrology institute testing framework. To formalize the solution, the researchers need to expand their library of trusted fingerprints to account for potential small variations between manufacturing batches, develop standardized test conditions, and build a more automated process.

“This work demonstrates a foundational approach to obtaining a high-definition, reliable, and stable fingerprint of a commercially available smartphone device to verify that it has not been tampered with or compromised prior to deployment,” said Ramadurgakar. “I see this being utilized to validate mobile hardware before it is issued to high-security users, such as the military chain of command or senior government leadership.”

Image: Alicia Christin Gerald / Unsplash

This post was originally published on AIP and is republished here with permission.

Reviewed by Asim BN.

Read next: Do Gig App Fees Vary Across Different Types of Work?

by Press Releases via Digital Information World

Do Gig App Fees Vary Across Different Types of Work?

Gig work has become a defining feature of the labor market in 2025. It’s believed that anywhere from 25% to 43% of the workforce participates in gig work, and at least one in ten rely on it for their primary income.

Traditionally referred to as freelance or contingent work, this type of employment has exploded in popularity over the last ten years, driven by factors such as widespread stay-at-home measures and the rise of service apps like Uber and DoorDash. Many workers are drawn to these jobs for their convenience and flexibility, but behind the apparent accessibility of these platforms is a confusing and often opaque system of fees.

Recently, LLCAttorney created an in-depth comparison of the gig apps popular in the market today, and the data shows these fees are by no means universal. Some apps take almost nothing from each transaction, while others claim a substantial share of a worker’s earnings. Fees range from 0% all the way up to 50%. At first these inconsistencies may seem random, but on closer inspection a pattern emerges: gig apps charge different rates depending on whether you are selling your skills, your time, your wares, or your trustworthiness.

Selling Skills

When selling a specific skill, such as graphic design, coding, or writing, the gig platform’s role is typically that of an intermediary rather than a manager of the work itself. For this reason, many of these platforms charge a sort of “finder’s fee.”

For example, Fiverr, a popular app for graphic designers, video editors, and more, charges a flat 20% fee on every transaction. Freelancer.com operates similarly, taking either 10% or $5, depending on which sum is greater. These platforms operate like digital matchmakers, charging for access to clients.

Some freelance platforms, such as Upwork and 99Designs, use a sliding fee model. Upwork’s fees range from 0% to 15% depending on the industry, whereas 99Designs charges creators 5% to 15% depending on their level, meaning workers with more niche skills may be charged less by the platform.
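As a worked example of how these models differ in practice, the short Python sketch below compares Fiverr’s flat 20% with Freelancer.com’s greater-of rule, using the figures given above; the sample job sizes are arbitrary.

```python
# Fee rates come from the article; the $30 and $200 jobs are examples.

def fiverr_fee(gross: float) -> float:
    return gross * 0.20            # flat 20% on every transaction

def freelancer_fee(gross: float) -> float:
    return max(gross * 0.10, 5.0)  # 10% or $5, whichever is greater

for gross in (30.0, 200.0):
    print(
        f"${gross:.2f} job: Fiverr takes ${fiverr_fee(gross):.2f}, "
        f"Freelancer.com takes ${freelancer_fee(gross):.2f}"
    )
# On a $30 job, Freelancer.com's $5 minimum is an effective 16.7% cut;
# on a $200 job, its 10% undercuts Fiverr's flat 20% ($20 vs $40).
```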

Selling Time

While skilled freelancers work within somewhat predictable percentage ranges, those selling their time and physical effort face much fuzzier pay structures.

For delivery drivers, the “percentage taken” has begun to disappear entirely, replaced by algorithmic pay determined by factors like distance traveled, the weight of the items delivered, demand, and the expected time needed to make the delivery.

DoorDash, for example, pays a base rate of $2 to $10 or more, which varies based on "estimated time, distance, and desirability of the offer." Uber Eats uses a similar formula, paying for pickup, drop-off, and mileage, but changes the rates based on market demand in the moment. These factors make it difficult for drivers to know exactly how much they can expect to earn from a day’s work, and make salaries much more irregular.

Additionally, because these payouts are generally low, the system leans heavily on client tips. Practically every major delivery service emphasizes that workers receive 100% of their tips, without the app taking anything off the top. While this sounds like a boon for delivery drivers, according to one report about New York delivery workers, these tips end up making up the majority of their income.

In ride-sharing apps, there is an even greater disconnect between what clients pay and what drivers actually earn. Uber and Lyft generally take 25% to 30% of the fare, but this can spike to 40% on short or low-fare rides.
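One way a platform’s cut can swing between 25% and 40% is simple arithmetic: any fixed per-ride component weighs more heavily on a small fare. The sketch below illustrates that effect with invented numbers chosen to land in the reported range; it is not Uber’s or Lyft’s actual fee schedule.

```python
# Hypothetical illustration: a fixed per-ride fee plus a percentage
# commission produces a larger effective cut on short, low-fare rides.
# The $2 fixed fee and 22% commission are invented for illustration.

def effective_take(fare: float, fixed_fee: float = 2.0, pct: float = 0.22) -> float:
    return (fixed_fee + fare * pct) / fare

for fare in (25.0, 11.0):
    print(f"${fare:.2f} fare: platform takes {effective_take(fare):.0%}")
# $25.00 fare: platform takes 30%
# $11.00 fare: platform takes 40%
```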

In general, it seems that those who sell their time by offering convenience to people through delivery or ridesharing are more susceptible to tip variables, algorithms, and external circumstances.

Selling Credibility and Trust

Some of the farthest ends of the spectrum regarding gig fees are found in the care sector.

TaskRabbit, for example, is a platform where people can sell services such as home repairs or furniture assembly to individuals, and it allows users to keep 100% of the rates they set. Similarly, Care.com and Sittercity, two popular babysitting platforms, take 0% of the worker’s earnings. These services act as a digital bulletin board connecting clients with people who can offer the services they need, but the platform itself does not claim responsibility for the worker. In fact, neither of these sitter platforms accepts legal liability for issues that arise after the two parties have been connected, as per their terms of service.

On the other end of the spectrum are Wag!, which charges a 40% fee, and Rover, which charges anywhere from 20% to 25%. The difference between these two dog walking services and the sitter services is that the former actually perform extensive third-party background checks and take on a limited amount of liability for connecting the two parties.

When it comes to care and service apps, the ones that are charging steep gig fees are the ones selling peace of mind to the clients, whereas the ones that let workers keep all their own money require them to build up their own reputations on the platform.

How Much Do Gig Apps Really Take From Workers?
Infographic: LLCAttorney

Selling Wares

Finally, for those selling physical goods or renting out property, the base fees are generally low, but may include a mountain of microtransactions.

The base rate for sellers on Etsy is just 6.5% of each transaction on the marketplace. Compared with Fiverr’s 20% base rate, this seems pretty low; however, every seller on Etsy is charged a $0.20 per-listing fee (regardless of whether they have any buyers) as well as a 3% payment processing fee, plus an optional 12% to 15% Offsite Ads fee if the buyer was referred to Etsy through one of its marketing efforts. Similarly, eBay charges anywhere from 2.5% to 15.3% depending on what the seller sells, in addition to a $0.30 to $0.40 fee per transaction. Sites such as Amazon and Booking.com do not charge additional fees, although their rates are typically much higher: 15% for Amazon and between 10% and 25% for Booking.com.
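To see how these charges stack up on a single sale, here is a worked example in Python using the Etsy rates listed above. The $50 sale price and the 12% Offsite Ads rate are example choices, and real-world Etsy fees include components (such as fixed processing charges that vary by country) beyond what the article names.

```python
# Worked example of Etsy's stacked fees. The percentages and the $0.20
# listing fee come from the article; the sale price is an example.

def etsy_net(price: float, offsite_ad: bool = False) -> float:
    fees = 0.20               # per-listing fee, charged with or without a sale
    fees += price * 0.065     # 6.5% transaction fee
    fees += price * 0.03      # 3% payment processing fee
    if offsite_ad:
        fees += price * 0.12  # Offsite Ads fee (12% to 15% when it applies)
    return price - fees

print(f"$50 sale, no ad referral: seller nets ${etsy_net(50.0):.2f}")
print(f"$50 sale via offsite ad:  seller nets ${etsy_net(50.0, True):.2f}")
# $50 sale, no ad referral: seller nets $45.05
# $50 sale via offsite ad:  seller nets $39.05
```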

In the rental space, Airbnb charges hosts who list their spaces a 3% Host Fee, in addition to a 14% to 16% Service Fee on bookings. Vrbo charges a 5% Commission Fee and a 3% Payment Processing Fee per listing, or a flat $700 Subscription Fee.

As we can see, physical products create an ecosystem where the platform either charges lower rates alongside many additional fees, or charges a higher rate with fewer complexities in its pay structure.

As gig work expands into more areas of the modern economy, it is more important than ever for workers to understand that platforms are not a monolith. The fees these apps charge are not just the cost of using an app; they represent the cost of convenience, customer access and operational support. Ultimately, platforms charge different fees based on what you are selling: your skills, your time, your credibility or your wares.

Reviewed by Ayaz Khan.

Read next:

• Does ‘free’ shipping really exist? An expert shares the marketing tricks you need to know

• Why The Real Cost of Working From Home Varies Wildly in US Cities


by Guest Contributor via Digital Information World