Wednesday, December 10, 2025

Research Tracks 8,324 U.S. Children, Identifying Social Media as a Risk Factor for Growing Inattention

A longitudinal study published in Pediatrics Open Science, which followed 8,324 children aged 9 to 14 in the United States, has found that social media use is associated with a gradual increase in inattention symptoms. Researchers at Karolinska Institutet in Sweden and Oregon Health & Science University tracked the children annually for four years, assessing time spent on social media, television and videos, and video games alongside parent-reported attention measures.

On average, children spent 2.3 hours per day watching television or videos, 1.5 hours on video games, and 1.4 hours on social media. Only social media use was linked to growing inattention over time. The effect was small for individual children but could have broader consequences at the population level. Hyperactivity and impulsive behaviors showed no such association.

The association remained consistent regardless of sex, ADHD diagnosis, genetic predisposition, socioeconomic status, or ADHD medication. Children with pre-existing inattention symptoms did not increase their social media use, indicating the relationship primarily runs from use to symptoms.

Researchers note that social media platforms can create mental distractions through notifications and messages, potentially reducing the ability to focus. The study does not suggest all children will experience attention difficulties but highlights the importance of informed decisions regarding digital media exposure.

The research team plans to continue monitoring the participants beyond age 14. The study was funded by the Swedish Research Council and the Masonic Home for Children in Stockholm, with no reported conflicts of interest.

Source: “Digital Media, Genetics and Risk for ADHD Symptoms in Children – A Longitudinal Study,” Pediatrics Open Science, 2025.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.


Image: Vikas Makwana / Unsplash

Read next: Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily
by Asim BN via Digital Information World

Pew Survey: 64% of Teens Use AI Chatbots, and 97% Go Online Daily

A new Pew Research Center survey of 1,458 U.S. teens shows how central digital platforms and AI tools have become in their daily lives. Nearly all teens (97%) go online each day, and four in ten say they are online almost constantly. Older teens report higher levels of constant use than younger teens, and rates are even higher among Black and Hispanic teens.

YouTube remains the most widely used platform, with roughly nine in ten teens (92%) reporting any use and about three-quarters (76%) visiting it daily.

According to the survey, six in ten teens use TikTok daily, 55% say the same of Instagram, and 46% use Snapchat daily. Facebook and WhatsApp see lower use. Platform preferences vary across demographic groups, with girls more likely to use Instagram and Snapchat, and boys more likely to use YouTube and Reddit.

AI chatbot use is also widespread. Sixty-four percent of teens say they use chatbots, and about three in ten do so daily. Daily use is more common among Black and Hispanic teens and among older teens. ChatGPT is the most widely used chatbot, at 59%, followed by Gemini and Meta AI. Teens in higher-income households use ChatGPT at higher rates, while Character.ai is more common among teens in lower- and middle-income homes.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next: Smart Devices Are Spying More Than You Think; Privacy Labels Offer Crucial Clues
by Ayaz Khan via Digital Information World

Smart Devices Are Spying More Than You Think; Privacy Labels Offer Crucial Clues

Smart gadgets collect vast amounts of our personal data through their apps. It’s usually unclear why manufacturers need this information or what they do with it. And I don’t just mean smartphones: all kinds of devices are quietly mining our data, and few people have any idea it’s happening.

Some brands of air fryers, for instance, request permission to listen in on conversations. Smart toys can also listen to and record conversations, not to mention the child’s name, age and birthday. Meanwhile, certain TVs insist on seeing all the apps on your phone.

It’s a bit of a barcode lottery: data collection varies from brand to brand and from one operating system to another, making it even harder for consumers to get on top of this situation. For instance, Android phone users who have smart speakers like Amazon Echo or Google Nest have to share much more personal data than those with Apple iOS devices.

If you think this all sounds worrying, you’re not alone. A 2024 study by the UK Information Commissioner’s Office (ICO) found that participants were concerned about the excessive and unnecessary amount of personal information being collected by devices.

Unlike with those air fryers, much data gathering takes place without the user even having to give explicit permission. If you’re wondering how this is legal given the explicit consent requirements of the General Data Protection Regulation (GDPR), the answer lies in the lengthy technical policies buried in the fine print of privacy notices. Most consumers skim-read these or find them difficult to understand, leaving them with little sense of the choices they are making.

Privacy nutrition labels

It seems to boil down to two options. We share our personal data with the apps of smart devices and hope they will only collect routine information, or we opt out and usually have to live with limited functionality or none at all.

However, there is a middle ground that most people are unaware of: privacy nutrition labels. These allow you to take some control by understanding what personal data your gadgets are collecting, without struggling through the privacy blurb.

The trouble is they are difficult to find. They are not mentioned by consumer magazine Which? or the ICO, perhaps because they are only “recommended” by the UK government and the Federal Communications Commission in the US. Yet despite not being legally binding on manufacturers, these privacy labels have become the norm when it comes to smartphone apps, while other smart devices are gradually catching up.

Ironically, this solution came from the pioneers of smart gadgets, Apple and Google. They voluntarily adopted the idea after it was proposed by researchers in 2009 as a way of informing users that their data was being collected.

Experts at Rephrain, the UK’s National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, have developed a step-by-step guide to help consumers find privacy labels on iPhones and Android phones:

[Step-by-step guide images: Rephrain, CC BY-SA]

Once you find the relevant privacy label for the device in question, you’ll see practical, concise information about what data the app collects and why. Two sections list the types of data collected: “Data Used to Track You” and “Data Linked to You” for iPhones, and “Data Shared” and “Data Collected” for Android.

By reading the privacy label before making a purchase, consumers can decide if they are comfortable with the data collected and the way it is handled.

For example, I checked the privacy label of the app for the smart toothbrush I planned to get my husband this [holiday]. I found out it collects the device ID to track users across apps and websites owned by other companies, and data linked to identity such as location and contact information.

So before purchasing smart devices for your loved ones this [holiday], check the privacy labels of their apps on your smartphone. You may be surprised by what you find. This [holiday] season, don’t just give someone a lovely present – give them the gift of data control at the same time.

Dana Lungu, Research Associate, Research Institute for Sociotechnical Cyber Security, University of Bristol

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read next: Location Data From Apps and Carriers Enables Tracking Without Warrants


by Web Desk via Digital Information World

Tuesday, December 9, 2025

The New Era of Mobile App Deception

Retail apps process enormous transaction volumes during peak shopping periods, and a single exploited app during these windows can compromise millions of stored-card checkouts, gift card loads and loyalty redemptions. Cybercriminals are racing to deploy AI-themed malware and cloned apps faster than security teams can respond.


Here’s what that looks like in practice: A user searches for ChatGPT or DALL·E on their mobile, and within seconds, dozens of apps appear — each claiming to offer smart chat, image generation or another AI-driven feature. On the surface, they look legitimate, but behind the familiar look and feel of these clones sits a spectrum of threats, from harmless wrappers and aggressive adware to fully developed spyware.

According to recent research, fake iOS apps have grown to nearly three times their usual volume, and fake Android apps to nearly six times theirs.

The same pattern is showing up across the tech world. A recent Coinbase Base hackathon offered a $200,000 prize and drew more than 500 developers. Several of the winning projects were later accused of being empty apps linked to company employees. The situation shows how easy it’s become to fool people with something that looks polished, even when the app itself does very little.

Before hitting download, users need to understand the full range of fake apps now circulating, how these clones hide, how they trick people and which red flags to watch out for.

Inside the Spectrum of Fake Apps

Appknox researchers recently examined three apps pretending to be ChatGPT, DALL·E and WhatsApp. The apps posing as ChatGPT and DALL·E weren’t tools at all. They behaved like hidden app stores that could quietly install or delete software on a phone. The WhatsApp clone, known as WhatsApp Plus, went even further and acted as full spyware with access to messages, contacts and call logs. These findings illustrate the spectrum of mobile deception and help explain why fake apps are harder to spot.

Some apps sit at the low end and act as simple wrappers that use familiar names, but connect to real services that behave honestly. Others sit in the middle of the spectrum and imitate trusted brands to attract downloads, but don’t actually deliver anything meaningful. At the high-risk end of the spectrum are malicious clones that hide harmful systems behind trusted branding and user-friendly interfaces.

Many fake apps blend in so well that users have little reason to suspect anything is wrong until the app is already installed. Familiar branding and a clean design are no longer reliable signals of safety.

ChatGPT Wrapper Illustrates Imitation Without Deception

At the low end of the spectrum, the Appknox researchers looked at the unofficial ChatGPT Wrapper app. The app behaves exactly as described by sending user text to the OpenAI API and returning results without extra processing. Appknox researchers found no hidden modules or obfuscated code, and no background activity that suggested anything harmful. It asked only for basic permissions and avoided access to contacts, SMS or account information.

Its behavior matches its description, but this level of transparency is rare among AI-themed apps. Many apps copy the look of AI tools while hiding unrelated systems inside; the ChatGPT Wrapper does the opposite, offering a simple service and making its function clear. It shows that unofficial apps aren’t automatically dangerous; some exist to fill gaps in official offerings without misleading users.

The wrapper also demonstrates why users must evaluate app behavior rather than brand resemblance. A familiar name doesn’t guarantee safety, and an unofficial name doesn’t guarantee risk. The real issue is whether the app performs the function it claims to without hiding additional processes.

DALL·E Lookalike Pretends to Be AI

In the middle of the spectrum, the researchers found the DALL·E 3 AI Image Generator app on Aptoide, a third-party Android app store that allows anyone to upload apps with little review. That alone is a warning sign.

This one looks convincing and uses branding that resembles an official OpenAI service. The color scheme and icons match expectations. When the app opens, a loading animation suggests an AI model is creating an image. Everything is designed to feel familiar and trustworthy.

When Appknox researchers looked inside, they found that the app has no AI system at all. Nothing inside it can generate images or run a model. Instead, the app connects only to advertising platforms like Adjust, AppsFlyer, Unity Ads and Big Ads, and these connections activate immediately when the app is launched. No user content is processed, and no image is created. All activity is tied to ads.

Its internal identifiers also offer important clues. They match template-based apps that can be quickly repackaged and released under different names. This suggests the app was assembled by reusing a generic kit, then dressed up to look like an AI tool so it would attract downloads.

WhatsApp Plus Hiding a Full Spyware System

At the far end of the spectrum sits WhatsApp Plus. This app presents itself as an enhanced version of WhatsApp, but inside, it contains a full surveillance system. It uses a fraudulent certificate and relies on the Ljiami packer, which hides encrypted code inside secondary folders that activate after installation. The hidden modules give the app persistent access to the device.

Once it's installed, WhatsApp Plus asks for extensive permissions that far exceed what a messaging app needs, like the ability to read and write contacts, access SMS messages, retrieve device accounts, collect call logs and send messages on behalf of the user. The app can then intercept verification codes, scrape address books, impersonate the user and interfere with identity-based authentication.
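
To make that concrete, here is a minimal Python sketch of the kind of permission triage a cautious reviewer might apply: compare what an app requests against a plausible baseline for its category and flag the excess. The baseline set and the risk annotations are illustrative assumptions for this article, not Appknox’s methodology; the permission strings themselves are real Android identifiers.

```python
# Illustrative permission triage, not Appknox's actual methodology:
# flag requested Android permissions that exceed a plausible baseline
# for the app's category (here, a messaging app).

# Permissions a legitimate messaging app plausibly needs (assumption).
MESSAGING_BASELINE = {
    "android.permission.INTERNET",
    "android.permission.READ_CONTACTS",
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.POST_NOTIFICATIONS",
}

# Real Android permission strings, annotated with why each is risky
# in a messaging context (annotations are this article's, not Google's).
HIGH_RISK = {
    "android.permission.READ_SMS": "can intercept SMS verification codes",
    "android.permission.SEND_SMS": "can send messages on the user's behalf",
    "android.permission.READ_CALL_LOG": "collects call history",
    "android.permission.GET_ACCOUNTS": "enumerates device accounts",
    "android.permission.REQUEST_INSTALL_PACKAGES": "can install other apps",
}

def triage(requested: set[str]) -> list[str]:
    """Return warnings for requested permissions outside the baseline."""
    warnings = []
    for perm in sorted(requested - MESSAGING_BASELINE):
        reason = HIGH_RISK.get(perm, "outside the expected baseline")
        warnings.append(f"{perm}: {reason}")
    return warnings

if __name__ == "__main__":
    # Hypothetical request set resembling the WhatsApp Plus findings.
    suspect = MESSAGING_BASELINE | {
        "android.permission.READ_SMS",
        "android.permission.SEND_SMS",
        "android.permission.READ_CALL_LOG",
        "android.permission.GET_ACCOUNTS",
    }
    for warning in triage(suspect):
        print("WARNING:", warning)
```

Real security tooling inspects far more than the manifest, but even this crude category-versus-request comparison surfaces every red flag the researchers describe for WhatsApp Plus.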

Security platforms classify this app as spyware and Trojan malware. At first, it looks polished and works like any other app, but once installed, it behaves like an active surveillance tool. In addition to data theft, the app can take over messaging accounts and disrupt banking or financial flows that rely on SMS for verification.

How to Protect Against Brand Abuse and Malicious Clones

As the number of unofficial apps continues to grow, brand trust itself has become an attack vector. Bad actors now focus more on duplicating legitimate apps and sites and making them appear credible than on creating new malware, which makes it easier for fake apps to hide in plain sight. Here are a few practical steps that can help reduce the risk:

  1. Download from trusted stores only. Stick to Google Play or the Apple App Store. Third-party stores allow anyone to upload apps with minimal review, which makes it easier for clones and malware to sneak in.
  2. Check the developer name. Make sure the publisher listed is the real company. If an app claims to be from OpenAI or Meta but lists an unfamiliar developer, that’s a red flag.
  3. Look closely at permissions. Be cautious if an app asks for access it doesn’t need, such as contacts, call logs or the microphone. Many fake apps count on users tapping “allow” without thinking.
  4. Notice how an app behaves. Beware of an app that keeps running after you close it, shows unexpected ads or tries to install other apps.
  5. Watch for copycat branding. Cloned apps often reuse logos, color schemes and names that are close but not exact. Other warning signs include misspellings, extra words or “plus” and “pro” versions (see the name-similarity sketch after this list).
  6. Report suspicious apps. If something feels off, report the app through the store. Quick reporting helps protect other users.
  7. Use a mobile security tool. Security apps that check behavior, permissions and network activity can catch threats that appear harmless.
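
As a toy illustration of step 5, the Python sketch below flags app names that sit suspiciously close to a known brand without matching it exactly. The brand list, suffix list and similarity cutoff are assumptions invented for this example; real app stores and security tools use far richer signals.

```python
import difflib

# Known official app names (illustrative subset for this sketch).
OFFICIAL = ["WhatsApp", "ChatGPT", "Instagram", "Telegram"]

# Suffixes that clones commonly bolt onto a trusted brand name.
SUSPECT_SUFFIXES = {"plus", "pro", "gold", "premium"}

def copycat_score(name: str) -> tuple[str, float] | None:
    """Return (closest official name, similarity) when `name` looks like
    a near-miss of a known brand rather than the brand itself."""
    lowered = name.lower()
    for official in OFFICIAL:
        if lowered == official.lower():
            return None  # exact match: treated as the genuine brand here
        # Flag "<brand> Plus" / "<brand> Pro" style names outright.
        if lowered.startswith(official.lower()) and \
                lowered.removeprefix(official.lower()).strip() in SUSPECT_SUFFIXES:
            return official, 1.0
    # Otherwise, flag names that are close-but-not-exact matches.
    match = difflib.get_close_matches(name, OFFICIAL, n=1, cutoff=0.6)
    if match:
        ratio = difflib.SequenceMatcher(None, lowered, match[0].lower()).ratio()
        return match[0], round(ratio, 2)
    return None

if __name__ == "__main__":
    for candidate in ["WhatsApp Plus", "ChatGPP", "Instagram", "Te1egram"]:
        hit = copycat_score(candidate)
        if hit:
            print(f"'{candidate}' resembles '{hit[0]}' (similarity {hit[1]})")
```

Run as-is, it flags “WhatsApp Plus”, “ChatGPP” and “Te1egram” while passing the genuine “Instagram”, which is exactly the misspelling-and-suffix pattern described above.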

The examples uncovered by Appknox researchers mark a turning point. Fake apps no longer stand out, and familiar branding won’t guarantee safety. Mobile security now depends on understanding how modern apps behave and paying attention to the small signals that something seems off. With clear, upfront behavior checks, users have a much better chance of spotting deception and stopping it before it causes harm.


About the Author

Subho Halder is the CEO and co-founder of Appknox, a globally recognized mobile security testing platform. A leading security researcher, Subho is the mastermind behind AFE, known for uncovering critical vulnerabilities in Google, Apple, and other tech giants. A frequent speaker at BlackHat, Defcon, and top security conferences, he is a pioneer in AI-driven threat detection and enterprise security. As CEO, he drives Appknox’s vision, helping organizations proactively safeguard their mobile applications.

Read next:

• Replace Doom Scrolling with Microlearning Apps and Boost Focus in 2025

• WhatsApp Tests Strict Security Mode and AI Editing Tools in New Android Betas
by Web Desk via Digital Information World

EU Accepts Meta’s Updated Ad Model for January 2026 Rollout

EU regulators have accepted Meta’s revised advertising model for Facebook and Instagram, with the company set to present the new options to users in January 2026. The decision removes the immediate risk of daily fines under the Digital Markets Act (DMA).

The updated pay-or-consent model introduces two choices. Users may allow full data sharing for fully personalised advertising or restrict data use for a more limited form of personalisation. The European Commission noted that this marks Meta’s first time offering such an alternative on its social platforms.

The approval comes after an April 2025 non-compliance decision that included a €200 million fine for violations covering November 2023 to November 2024. Meta subsequently adjusted the proposal’s wording, design, and transparency features while retaining its overall structure.

Meta faced potential daily fines under the DMA framework, up to 5% of average daily worldwide turnover. The Commission's approval eliminates this immediate penalty risk.

After rollout in January 2026, the Commission will assess how the model functions by gathering feedback and evidence from Meta and relevant stakeholders. The Commission restated that EU users "must have full and effective choice", as required under the DMA.


Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. Image: DIW-Aigen

Read next:

• Has OpenAI Sacrificed Morality for Shareholder Profits in Its Ten-Year Journey?

• Replace Doom Scrolling with Microlearning Apps and Boost Focus in 2025
by Asim BN via Digital Information World

Has OpenAI Sacrificed Morality for Shareholder Profits in Its Ten-Year Journey?

Image: DIW-Aigen

As OpenAI marks its tenth birthday in December 2025, it can celebrate becoming one of the world’s leading companies, worth perhaps as much as US$1 trillion (£750 billion). But it started as a non-profit with a serious moral mission – and its story demonstrates the difficulty of combining morality with capitalism.

The firm recently became a “public benefit corporation”, meaning that – in addition to performing some sort of public good – it now has a duty to make money for its shareholders, such as Microsoft.

That’s quite a change from the original setup.

Influenced by a movement known as “effective altruism”, a project which tries to find the most effective ways of helping others, OpenAI’s initial mission was to “ensure that artificial general intelligence […] benefits all of humanity” – including preventing rogue AI systems from enslaving or extinguishing the human race.

Being a non-profit was central to that mission. If pushing AI in dangerous directions was the best way to make money, a profit-seeking company would do it, but a non-profit wouldn’t. As CEO Sam Altman said in 2017: “We don’t ever want to be making decisions to benefit shareholders. The only people we want to be accountable to is humanity as a whole.”

So what changed?

Some argue that the company simply sold out – that Altman and his colleagues faced a choice between making a fortune or sticking to their principles, and took the money. (Many of OpenAI’s founders and early employees chose to leave the company instead.)

But there is another explanation. Perhaps OpenAI realised that to fulfil its moral mission, it needed to make money. After all, AI is a very expensive business, and OpenAI’s rivals – the likes of Google, Amazon and Meta – are vast corporations with deep pockets.

To have a chance of influencing AI development in a positive direction, OpenAI had to compete with them. To compete, it needed investment. And it’s hard to attract investment with no prospect of profit.

As Altman said of a previous adjustment towards profit-making: “We had tried and failed enough to raise the money as a non-profit. We didn’t see a path forward there. So we needed some of the benefits of capitalism.”

Capitalist competition

But along with the benefits of capitalism come constraints. What Karl Marx called the “coercive laws of competition” mean that in a competitive market, businesses have little choice but to put profit first, whatever their moral principles.

Indeed, if they choose not to do something profitable out of moral concerns, they know they’ll be replaced by a less scrupulous firm which will. This means not only that they fail as a business, but that they fail in their moral mission too.

The philosopher Iris Marion Young illustrated this paradox with the example of a sweatshop owner who claims they would love to treat their workers better. But the cost of improved pay and conditions would make them less competitive, meaning they would lose out to rivals who treat their workers even worse. So being kinder to their workers would do no good.

Similarly, had OpenAI held back from releasing ChatGPT due to worries about energy usage or self-harm or misinformation, it would probably have lost market share to another company. This in turn would have made it harder to raise the investment it needed to fulfil its mission of shaping AI development for good.

So in effect, even when its moral mission was supposedly paramount (before it became a public benefit corporation), OpenAI was already acting like a for-profit firm. It needed to, to stay competitive.

The recent legal transition just makes this official. The fact that a nonprofit board dedicated to the moral mission retains some control over the company in principle is unlikely to stop the drive to profit in practice. Marx’s coercive laws of competition squeeze morality out of business.

Marx and Milton

If Marx is capitalism’s most famous critic, perhaps its most famous cheerleader was the economist Milton Friedman.

But Friedman actually agreed with Marx that business and morals are difficult to mix. In 1970, he famously wrote that business executives have only one social responsibility: to make profit for shareholders.

Pursuing any other goal would be spending other people’s money on their own private principles. And in a competitive market, Friedman argued, businesspeople will find that customers and investors can quickly switch to other companies “less scrupulous in exercising their social responsibilities”.

All of this suggests that we cannot expect businesses to do as OpenAI originally promised, and put humanity before shareholder value. Even if it tries, the coercive laws of competition will force it to seek profit.

Friedman and Marx would have further agreed that we need other types of institutions to look after humanity. Though Friedman was mostly sceptical about the state, the AI arms race is precisely the kind of case that even he recognised required government regulation.

For Marx, the solution is more radical: replacing the coercive laws of competition with a more co-operative economic system. And my own research suggests that safeguarding the future of humanity may indeed require some restraining of capitalism, to allow tech workers time to develop safe and ethical technologies together, free from the pressures of the market.

Nikhil Venkatesh, Leverhulme Early Career Fellow, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.


by Web Desk via Digital Information World

Monday, December 8, 2025

Replace Doom Scrolling with Microlearning Apps and Boost Focus in 2025 (Promoted)

This post includes links that are either sponsored or affiliated; we disclose this for transparency.

Image: DIW-Aigen

The start of a new year often brings grand resolutions: to learn a new language, master a new skill, focus on health and sport, or finally tackle a complex certification. Yet these goals are usually swallowed whole by a far more powerful opponent: the endless digital feed. In 2025, the most effective financial and intellectual resolution you can make is a strategic substitution: replace doom scrolling with microlearning apps.

The German psychologist Hermann Ebbinghaus discovered that after learning something new, people quickly forget most of it: roughly 70% of new material is gone within 24 hours unless it is actively reviewed. Microlearning combats this steep forgetting curve through spaced repetition, a technique that involves reviewing the same small piece of information at gradually increasing intervals. This is the scientific basis for why microlearning is an effective tool for improving long-term memory.
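
To make the mechanics concrete, here is a minimal Python sketch of a spaced-repetition schedule. The doubling intervals and the exponential forgetting model (including its decay constant) are illustrative assumptions, not parameters from Ebbinghaus’s experiments or from any particular app.

```python
import math
from datetime import date, timedelta

def retention(days_since_review: float, strength: float) -> float:
    """Toy exponential forgetting curve: recall decays with time since
    the last review; higher `strength` means slower forgetting."""
    return math.exp(-days_since_review / strength)

def schedule_reviews(start: date, reviews: int = 5) -> list[date]:
    """Review at gradually increasing intervals: 1, 2, 4, 8, ... days."""
    dates, interval, current = [], 1, start
    for _ in range(reviews):
        current += timedelta(days=interval)
        dates.append(current)
        interval *= 2  # illustrative doubling; real apps tune this per item
    return dates

if __name__ == "__main__":
    for i, day in enumerate(schedule_reviews(date(2025, 1, 1)), start=1):
        print(f"Review {i}: {day}")
    # "~70% forgotten within 24 hours" corresponds to strength of about
    # 0.83 days, since e^(-1/0.83) is roughly 0.30 (30% retained).
    print(f"Retention after 1 day, no review: {retention(1, 0.83):.0%}")
```

Apps built on this idea typically adapt the interval per item, shortening it after a failed recall and stretching it after an easy one.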

The Battle for Attention: Why We're Losing Focus

We are all familiar with the hypnotic loop of doom scrolling. Whether you’re obsessively consuming negative news or refreshing social media for the latest outrage, the effect is the same: cognitive fatigue. Doom scrolling is a behavioral trap that delivers high-dopamine hits of novelty and almost no constructive value, and it trains your brain to expect immediate sensory rewards, making it nearly impossible to settle into the deep, focused work required for complex tasks.

This constant state of low-level anxiety and information overload is directly responsible for what many are calling attention decay — the gradual erosion of your ability to sustain focus. By 2025, the difference between people who succeed and those who struggle will increasingly hinge on their ability to manage their attention.

Why Microlearning Works

Microlearning delivers content in small, highly targeted bursts, often lasting just 3 to 10 minutes. Five free minutes before a meeting or while waiting for coffee is enough to learn something new. The method works because it meets your brain where it is: demanding novelty, but getting a quick sense of completion.

Traditional learning methods lead to knowledge overload. Microlearning counters the steep forgetting curve using spaced repetition — reviewing small pieces of information over time — which dramatically improves retention.

Top Microlearning Apps: Transforming Scroll Time into Skill Time

Microlearning apps are designed to be the productive equivalent of a social media scroll. By choosing the right app, you can turn passive consumption into active skill acquisition:

1. Headway: Summarizing Core Knowledge

Headway is a prime example of a microlearning app: it focuses on non-fiction book summaries that you can read or listen to in about 15 minutes. It targets users who want to absorb core ideas from bestsellers in leadership, finance, psychology and other essential niches without committing to an entire book:

  • Format: 15-minute text summaries with key insights, which you can highlight, plus gamified quizzes to test your understanding of the main takeaways from each chapter of the book.
  • Focus: It helps users quickly grasp the fundamental principles of complex subjects, making it ideal for managers, entrepreneurs, and ambitious readers who need broad knowledge efficiently.
  • Why it replaces scrolling: It satisfies the modern brain's demand for speed and variety by presenting a vast library of ideas in a consistent, easy-to-digest mobile format. You can finish a summary on 'Atomic Habits' in the time you used to spend watching random clips.

2. Duolingo and Busuu (The Language Model)

Language apps like Duolingo and Busuu were pioneers in making skill acquisition accessible through gamified bursts. The apps provide 5-minute daily lessons that rely heavily on interactive exercises and streak maintenance:

  • Focus: Repetitive skill building. The apps turn learning a complex skill (a language) into a series of rewarding, small victories.
  • Why they replace scrolling: They use gamification, points, leaderboards and progress bars to hook the user’s engagement, offering a sense of accomplishment far superior to passively viewing a feed.

3. Dedicated Professional Skill Apps

These apps focus on highly specific, professional knowledge. They are perfect for utilizing those small gaps in the workday:

  • SoloLearn (coding): Uses bite-sized coding challenges and quizzes to teach fundamental concepts in languages like Python and JavaScript. It provides a tangible skill gain in five minutes.
  • Mimo (coding and design): Presents interactive exercises right on your phone, offering immediate feedback on code, which is highly satisfying and reinforces the active learning necessary for technical skills.

4. Extended Learning Model and Concept Reinforcement

Many platforms with similar microlearning structures emphasize personalized learning paths. This involves taking a broad concept gained from a summary and reinforcing it through specialized tools, for example:

  • Custom Quizzes: The app may test you on concepts you specifically highlighted or struggled with.
  • Dedicated Flashcard Features: Such apps use built-in digital flashcards for memorizing content. This function is vital for spaced repetition, requiring you to recall specific facts or definitions at set intervals to aid memory.
  • Targeted Knowledge Quizzes: Applications such as Nibble, which focuses on all-around knowledge, use short quizzes to reinforce learning. These tools often test you specifically on concepts you struggled with during the initial micro-lesson, ensuring complete understanding.
  • Specialized Brain Training: The app Impulse is dedicated to brain training and uses gamified exercises as a form of reinforcement. These activities help users practice specific cognitive skills like memory and logic directly.
  • Skills Application: Skillsta for social skills training, and AddMile for coaching, extend learning into real-world practice. These applications guide the user in applying micro-lessons to life scenarios, which is the strongest form of memory consolidation.

Strategic Adoption: Making the Switch Stick in 2025

To succeed, adopt an actionable plan that exploits the very habits that currently lead to doom scrolling. Start by auditing your usage and defining a micro-goal: use your phone’s screen time report to identify your worst scrolling habits, such as the time slot where your fingers automatically open a distracting app. Then choose a manageable skill that can be tackled in 5-10 minute bursts to replace that scrolling.

You can also optimize methods following your brain type. The best microlearning strategy is a personalized one. Apps that use adaptive technology cater content to the individual, recognizing that everyone learns differently.

If you want to optimize your approach and ensure your brief bursts of learning are maximized for efficiency, find out how you learn best. You can take an intelligence type test to understand your cognitive strengths, allowing you to choose apps or features tailored to your specific intellectual profile. By making the deliberate, daily choice to replace doom scrolling with microlearning apps, you are fundamentally changing the nature of your digital engagement!


by Ayaz Khan via Digital Information World