Thursday, January 15, 2026

Yes, those big touchscreens in cars are dangerous and buttons are coming back

Image: Amar A / Unsplash

Milad Haghani, The University of Melbourne

In recent years, the way drivers interact with cars has fundamentally changed. Physical buttons have gradually disappeared from dashboards as more functions have been transferred to touchscreens.

Touchscreens in vehicle dashboards date back to the 1980s. But modern cars consolidate functions into these systems far beyond what we’ve seen before, to the point where a car feels mostly like a computer.

This may create the impression of a modern, technologically advanced vehicle. However, scientific evidence increasingly points to touchscreens compromising our safety.

In fact, ANCAP Safety, the independent car safety assessment program for Australia and New Zealand, has announced that from 2026 it will ask car manufacturers to “bring back buttons” for important driver controls, including headlights and windscreen wipers. Similar moves are underway in Europe.

ANCAP Safety will explicitly assess how vehicle design supports safe driving, and not just how well occupants are protected in the event of a crash – which means calling time on touchscreens that control everything in your car.

What human factors research says about distraction

Decades of road-safety research show human error plays a role in the vast majority of crashes. And the design of in-vehicle interfaces can contribute to how often drivers make safety errors.

Errors behind the wheel are often linked to driver distraction. But what exactly constitutes distraction, and how does it occur?

In human factors research, distraction is typically classified as visual, manual, cognitive, or a combination of these. A distracting event or stimulus may take the driver’s eyes off the road, their hands off the wheel, their mind off the driving task – or all three.

This is why texting while driving is considered particularly dangerous: it uses our visual, manual and cognitive resources at the same time. The more types of attention a task demands, the greater the level of distraction it creates.

Interactions with touchscreen menus can, in theory, produce comparable effects to texting. Adjusting a vehicle’s temperature using a sliding bar on a screen makes the driver divert visual attention from the road and allocate cognitive resources to the task.

By contrast, a physical knob allows the same adjustment to be made with minimal or no visual input. Tactile feedback and muscle memory compensate for the lack of visual information and let you complete the task while keeping eyes on the road.

How distracting are touchscreen features, really?

Perhaps the clearest and most accessible evidence to date comes from a 2020 UK study conducted by TRL, an independent transport research company.

Drivers completed simulated motorway drives while performing common in-car tasks. These included selecting music or navigating menus using touchscreen systems such as Apple CarPlay and Android Auto.

Performance was compared against baseline driving with no secondary task, as well as voice-based interaction.

When drivers interacted with touchscreens, their reaction times increased markedly.

At motorway speeds, this delay in reaction time corresponds to a measurable increase in stopping distance, meaning a driver would travel several additional car lengths before responding to a hazard.
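The extra distance is simple kinematics: distance travelled during the delay equals speed multiplied by the added reaction time. A minimal sketch of the arithmetic, using an assumed motorway speed of 110 km/h, an illustrative half-second of extra reaction time, and an assumed average car length – none of these specific figures come from the TRL study:

```python
# Illustrative only: distance covered during an added reaction delay.
# Speed, delay and car length below are assumptions, not TRL figures.
def extra_stopping_distance(speed_kmh: float, added_delay_s: float) -> float:
    """Metres travelled during the additional reaction time."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms * added_delay_s

CAR_LENGTH_M = 4.5                      # assumed average car length

extra_m = extra_stopping_distance(110, 0.5)   # 110 km/h, 0.5 s extra delay
print(f"{extra_m:.1f} m ≈ {extra_m / CAR_LENGTH_M:.1f} car lengths")
```

Even a half-second delay at motorway speed adds roughly fifteen metres – several car lengths – before the driver begins to respond.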

Lane keeping and overall driving performance also deteriorated while drivers interacted with touchscreens.

The most striking aspect of this study is that touchscreen interaction was as distracting as texting while driving or making a handheld phone call – and in some cases even more distracting.

Drivers don’t even like touchscreens

Concerns about touchscreen-heavy design are not limited to lab studies. They have also shown up clearly in overseas consumer surveys.

Data from a recent survey of 92,000 US buyers indicate that infotainment systems – the official term for that touchscreen in the centre of the dashboard – remain the most problematic feature in new cars.

The survey shows infotainment systems lead to more complaints in the first 90 days of ownership than any other vehicle system.

Most complaints relate to usability. Drivers report frustration with basic controls that have been moved to touchscreens – such as lights, windshield wipers, temperature – and now require multiple steps and visual attention to operate while driving.

Could voice recognition be the solution?

Voice recognition is often presented as a safer alternative to touchscreens because it removes the need to look away from the road. But evidence suggests it’s not completely risk free either.

A large meta-analysis of experimental studies examined how drivers perform while using in-vehicle and smartphone voice-recognition systems, combining results from 43 different studies.

Across the evidence base, voice interaction worsens driving performance compared with driving without any secondary task. It increases reaction times and negatively influences lane keeping and hazard detection.

When voice systems are compared with visual-manual systems, performance is slightly better with voice control. But even though voice recognition is less distracting than touchscreens, it’s still measurably more distracting compared to baseline driving where drivers don’t need to interact with any menus or change settings.

The comeback of buttons

The evidence is clear: controls we frequently use while driving – temperature, fan speed, windscreen demisting, volume and many others – should remain tactile.

The driver shouldn’t have to divert their visual attention from the road to control these. It’s especially problematic when such controls are buried in layered menus, so you need to tap several times just to find the function you want to change.

Touchscreens are better suited to secondary functions and settings typically adjusted before driving, such as navigation setup, media selection, and vehicle customisation.

The good news is the evidence is being translated into car safety assessment programs. From this year, ANCAP Safety and its counterpart in the European Union, Euro NCAP, will require physical controls for certain features to award the highest safety rating for new vehicles.

It’s up to manufacturers to decide whether to comply. However, some car makers, such as Volkswagen and Hyundai, have already been responding to these requirements and to pressure from consumers to bring the buttons back.

Milad Haghani, Associate Professor and Principal Fellow in Urban Risk and Resilience, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.


by External Contributor via Digital Information World

Wednesday, January 14, 2026

My Dad Got Sick—Doctors Dodged, AI Didn't

By Becky Diamond, Risking It

Personal Perspective: Artificial intelligence gave me clarity to face what was coming.

My dad was in the emergency room, short of breath, chest tight, upper back aching. He looked pale and confused. An ultrasound showed excess fluid between his lung and chest wall.

“We’ll drain it,” a resident said, as if he were unclogging a sink.

For the next five days, thick, red-tinged fluid filled a plastic container beside my dad’s hospital bed. His cells were sent for “staining,” a way to identify cancer. But no one used that word.

Nurses rotated through, drawing smiley faces next to their names on a white board when they switched shifts. Doctors discussed biopsies and blood thinners and mentioned malignancy in a relentlessly relaxed tone. Their manner didn’t match what I saw.

Breakfast remained untouched at lunch. And a cough that was a minor nuisance had become big enough to break a rib.

“What’s your pain level today?” the pulmonologist asked.

“It was a four,” my dad said. “Now it’s a six.”

Tylenol wasn’t cutting it. The doctor suggested morphine.

“We keep treating symptoms,” my dad questioned, “but what’s the cause?”

“Hopefully it’s an infection,” the doctor said. “We’ll try antibiotics.”

But my father didn’t have a fever.

When death is a dirty word

After nearly a week in the hospital, a resident casually mentioned that my dad had malignant epithelial cells. I called the doctor in charge.

“He has cancer,” she confirmed, “but we don’t know which kind.”

The pulmonologists didn’t talk about cancer. Oncologists hadn't been consulted because pathology needed more time to make a definitive diagnosis. And I didn’t know I should ask for palliative care.

My dad was discharged. But a few days later, he returned with blood clots, breathing difficulty, and intense pain. I approached the doctor coordinating his care.

“Excuse me,” I said. “Is my dad dying?”

“His vitals are stable,” he said, “but I’m not an oncologist.”

The clots were treated with a blood thinner. A scan confirmed that the fluid was gone. The team recommended around-the-clock oxygen and a follow-up appointment.

On paper, things looked better. But my dad wasn’t reassured.

"Can you find out what's really happening?" he asked me.

The turning point

I’m a journalist and know how to get information. But in the hospital, I froze. When I asked questions, doctors looked down or hurried away. Information was rationed, not shared, and I felt myself shrinking. I became careful. If I pushed too hard, I might alienate the people who controlled my father’s care.

I needed a different source. So, I tried AI. The computer didn’t flinch.

I’m sorry you and your dad are going through this. I’ll keep this clear and compassionate. A malignant pleural effusion means it’s stage IV disease (metastatic).

I exhaled. AI explained what to expect. It wasn’t medical advice. It was information grounded in the science of dying. My confusion gave way to something unexpected: comfort.

My experience made sense to Arthur Dobrin, a professor emeritus at Hofstra University who served on a hospital ethics committee. Most doctors, he explained, aren’t trained to talk about terminal diagnoses. They’re human, and their own feelings get in the way.

“A computer program doesn’t have emotions,” Dobrin said. “It doesn’t fear death or failure.”

AI delivered clarity and that gave me courage.

“Dad, this is serious,” I said. “Do you want to discuss the reality? It might be hard to handle.”

“That’s OK. I’m 87. I’ve lived a full life," he said. "If I’m dying, I want to figure out how to manage it.”

We didn’t have to pretend this was a temporary crisis. We started treating it like what it was: the final chapter of a beautiful life.

The gift of clear sight

We met with a soft-spoken, thoughtful oncologist who explained that my dad had advanced lung cancer. Treatments could extend his life, but with side effects, frequent visits, and a shift in focus.

My father didn’t think that treatments made sense.

“My life is complete,” he said. “I want to feel more comfortable and spend whatever time I have left with the people I love.”

I asked about life expectancy. The doctor hesitated.

“We don’t know the details,” he said. But I wasn’t asking for an advanced lesson. I needed an orientation.

Later, AI delivered.

Being prepared is an act of love. Would you like me to outline signs that he may be entering the final weeks and days of life?

We called hospice and set up a hospital bed. Pain eased, and my father’s spirit returned. He was curious, present, and engaged. Dying became something we lived, not just endured.

That shift is central to good end-of-life care, said Dr. Dawn Gross, a hospice and palliative care physician whose focus is patients with serious illness. “Death is one of the most profound, life-changing experiences for patients and families. Why are we putting it behind closed doors?”

I didn’t want my dad to die. He was my go-to for life advice, bad jokes, and deep discussions. He taught me how to show up in the world. To treat hard moments as plot twists.

Knowing his timeline let us navigate. We could see what mattered. And we lived a lifetime in the moments we had.

Accepting loss and feeling whole

My father loved family dinners at six, the Sunday crossword, Yankee playoff games, art nouveau, and anything written by Thomas Pynchon or Oliver Sacks. He was a scientist who had built rockets for NASA, but also wanted to know what made people tick. Dinner conversations moved from the physics of flight to the ethics of kindness.

Steve Diamond faced death the way he lived: with clarity and endless curiosity.

“Dying is interesting,” he said between doses of morphine. “I didn’t know what to expect. I kept my mind open. And it doesn’t feel so constraining.”

My dad was unshackled. We followed his lead.

Friends and family filled his bedroom. My mother spread six decades of family photographs across their bed, like a quilt stitched from memory: birthdays, family trips, and ordinary days that meant everything. We passed the photos hand to hand, fingers lingering. We traced life’s chapters like a road map toward my father’s final destination.

His words grew fewer. Ours mattered more.

“You’ve helped me every step of the way,” I told him, holding his hand. “Thank you for all of it, Dad, I love you.”

Morphine came more often. Water was swallowed from a spoon. Time expanded, marked by longer stretches of silence and sleep.

One morning near the end, my dad wanted to talk about his death.

“This is such a different experience,” he said. “I’ve had to adapt and learn so many new things. I know I’m weak. But I feel strong.”

He paused.

“I’m ready,” he said. “Thanks for helping me manage this. It feels important.”

“I love you.”

Epilogue

Seven weeks after my dad went to the emergency room, he died. Only later did we realize that his earlier symptoms — back pain and a bad cough — were actually signs of adenocarcinoma, not just aging.

“We don’t know when the end of someone’s life actually begins,” said Dr. Scott Halpern, a physician and professor at the University of Pennsylvania who specializes in palliative and hospice care. He trains clinicians to guide patients and their families through difficult conversations and consequential, end-of-life decisions.

“Good care for the dying must start earlier,” he added.

We didn’t need to be protected from the truth. We wanted to face it. And once we knew what we were dealing with, my dad didn’t retreat. He leaned in. Made choices. We embraced closure and felt the profound gift of presence, love, and a truly meaningful goodbye.

Image: Mario Wallner / Pexels - Illustrative photo. Not actual patient or event.

Editor's Note: This article was originally published on Psychology Today and is republished here with permission of Becky Diamond. Becky Diamond clarified to Digital Information World that she used AI only for editing after writing the piece herself, not for creating its content. AI can provide general explanations but does not replace professional medical advice, diagnosis, or treatment. Readers are encouraged to consult experienced healthcare professionals for personal medical concerns.

by External Contributor via Digital Information World

Tuesday, January 13, 2026

Google Announces Multi-Year AI Collaboration with Apple

On January 12, 2026, Google announced a multi-year partnership with Apple, under which Apple’s next-generation Foundation Models will be based on Google’s Gemini models and cloud technology. The announcement, published on The Keyword, Google’s company blog, states that these models will help power upcoming Apple Intelligence features, including a more personalized Siri assistant in 2026.

According to the statement, Apple selected Google’s AI technology after evaluation, determining it provides a suitable foundation for its upcoming models (aka Apple Foundation Models). The announcement emphasized that Apple Intelligence will continue to operate on Apple devices and Private Cloud Compute while maintaining Apple’s existing privacy standards.

The joint statement does not provide detailed information on the exact timeline for rollout, the technical integration process, or the specific privacy effects on users. It focuses on the scope of the collaboration and its intended role in powering Apple’s AI features.

Collaborations like this illustrate how major technology decisions are increasingly interconnected, shaping both the tools we use today and the innovations available tomorrow.

DIW has reached out to both Apple and Google for further details and will update the article if additional information is provided.

Image: DIW

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans. 

Read next: Teens use cellphones for an hour a day at school, according to study
by Asim BN via Digital Information World

Teens use cellphones for an hour a day at school, according to study

By Brian Donohue | University of Washington School of Medicine - 206-457-9182, bdonohue@uw.edu

U.S. adolescents spend more than one hour per day on smartphones during school hours, with social media accounting for the largest share of use, according to research published today in JAMA. The findings have relevance for educators, parents and policymakers.
Older and lower-income adolescents use smartphones more during school, raising engagement concerns for educators.
Image: Yan Krukau / Pexels

The study reflects the behavior of 640 adolescents ages 13-18 who were enrolled in the Adolescent Brain Cognitive Development Study. They and their parents had consented to have software placed on their Android cellphones that allowed use to be passively monitored. Usage was measured between September 2022 and May 2024.

Key findings:

  • Adolescents spent an average of 1.16 hours per day on smartphones during school hours.
  • Social media apps Instagram, TikTok and Snapchat accounted for most use, followed by YouTube and games.
  • Older adolescents (16–18) and those from lower-income households showed higher smartphone use than their younger and higher-income peers, respectively.

“These apps are designed to be addictive. They deprive students of the opportunity to be fully engaged in class and to hone their social skills with classmates and teachers,” said Dr. Dimitri Christakis, the paper’s senior author. He is a professor of pediatrics at the University of Washington School of Medicine and practices at Seattle Children's Hospital.

The results, based on a national sample of students, build on findings published last year in JAMA Pediatrics. That earlier study had fewer participants but also included iPhone users.

At least 32 states and the District of Columbia require school districts to ban or restrict students’ use of cellphones in schools. The effect of those policies “remains to be seen,” Christakis said.

“To date they've been very poorly enforced, if at all. I think the U.S. has to recognize the generational implications of depriving children of opportunities to learn in school,” he added.

The paper’s lead author is Dr. Jason Nagata, an associate professor of pediatrics at the University of California San Francisco.

“This moves the conversation beyond anecdotes and self-reports to real-world behavior. Teens are not always accurate reporters of their own screen time. Objective smartphone data gives us a clearer picture of actual use,” Nagata said.

The research was supported by the National Institutes of Health (K08HL159350, R01MH135492, R01DA064134).

Related: Among U.S. adults, 71.3% supported the banning of smartphones in schools, according to a companion research letter published in JAMA Pediatrics. For that paper, the researchers analyzed a 2023 survey of 35,000 adults in 35 countries.

For details about UW Medicine, please visit https://uwmedicine.org/about.

This article was originally published on UW Medicine Newsroom and is republished here with permission. No AI tools were used to write this article.

Read next:

• If social media for kids is so bad, should we be allowed to post kids’ photos online?

• Are Creative Entrepreneurs Happier Than Status-Seekers? Here's What the Study Finds

• Global AI Adoption Reaches 16.3% in 2025 as North-South Divide Widens

• Research Tracks 8,324 U.S. Children, Identifying Social Media as a Risk Factor for Growing Inattention
by External Contributor via Digital Information World

Monday, January 12, 2026

Are Creative Entrepreneurs Happier Than Status-Seekers? Here's What the Study Finds

A study published in the Journal of Business Venturing examines how entrepreneurs’ personal values and the cultural values of the regions where they operate are associated with their wellbeing, based on data from 3,038 entrepreneurs across Europe.

The study, authored by Pierre-Jean Hanard, Ute Stephan, and Uta K. Bindl of King’s Business School at King’s College London in the United Kingdom, analyzes responses from entrepreneurs working in 143 regions across 18 European countries. It draws on data from the European Social Survey and was published online in October 2025, with the journal issue dated 2026.

What the study examined

The research focuses on whether two broad types of personal values are linked to entrepreneurs’ wellbeing. These values are defined using established psychological frameworks. The first is openness to change, which includes valuing independence, creativity, new ideas, and personal freedom. The second is self enhancement, which includes valuing achievement, status, influence, and material success.

The study also examines regional cultural values, specifically cultural autonomy and cultural egalitarianism, to assess whether alignment between personal values and regional culture is associated with different wellbeing outcomes.

Wellbeing is measured in three ways: positive wellbeing is assessed through life satisfaction and through engagement in daily activities, while negative wellbeing is assessed through self-reported depressive symptoms.

How the research was conducted

The authors use multilevel statistical analysis to account for both individual level factors and regional context. Personal values and wellbeing measures come from individual survey responses, while regional cultural values are calculated by aggregating responses from the wider population in each region. The analysis follows established methods for examining value alignment, including response surface analysis.

Key findings

The study finds that higher openness to change among entrepreneurs is associated with higher life satisfaction and engagement and lower levels of depressive symptoms. In contrast, higher self-enhancement values among entrepreneurs are associated with lower life satisfaction and engagement, and higher levels of depressive symptoms.

The findings show that alignment between personal values and regional culture is associated with differences in wellbeing in some cases. Entrepreneurs who value openness to change show higher positive wellbeing and lower psychological distress when they operate in regions where cultural autonomy is also high. The study finds initial evidence of alignment effects for openness to change, but not for self-enhancement values.

Commenting on the study, co-author Professor Ute Stephan highlighted that "entrepreneurship allows people to express what they care about, yet some of these core motivations can be draining," adding: "It’s important for entrepreneurs to know that what draws them into entrepreneurship may also push them towards burnout."

Why the findings matter

The authors argue that the results help clarify how personal values are associated with wellbeing in entrepreneurship, an area that has received increasing research attention. The study highlights that values commonly linked to entrepreneurial activity may relate differently to positive and negative aspects of wellbeing.

The authors also note that entrepreneurship is shaped not only by individual characteristics but also by the cultural environment in which entrepreneurs operate. By examining both levels together, the study responds to gaps identified in earlier research that focused mainly on individual factors.

Limitations noted by the authors

The study is limited to European countries and relies on self-reported survey data. Because the study is based on cross-sectional survey data, the findings indicate associations rather than causal relationships. The data are specific to conditions at the time of the 2012 survey and may not capture changes over time or in other regions of the world.

Source

The study appears in the Journal of Business Venturing, an academic journal published by Elsevier. It is available as an open access article.

True success balances worldly achievement with inner peace. While ambition drives progress, contentment and purpose-driven work often lead to lasting fulfillment. Building businesses that serve communities, not just personal gain, creates meaning beyond material rewards.
Image: Vitaly Gariev / Unsplash

Notes: This post was drafted with the assistance of AI tools and reviewed/fact-checked, edited, and published by humans.

Read next: Global AI Adoption Reaches 16.3% in 2025 as North-South Divide Widens


by Ayaz Khan via Digital Information World

Saturday, January 10, 2026

Global AI Adoption Reaches 16.3% in 2025 as North-South Divide Widens

The Microsoft AI Economy Institute released its AI Diffusion Report 2025 on January 8, 2026, tracking the use of generative AI tools worldwide. The study measures AI adoption as the share of consumers who used a generative AI product during the reported period. This metric is based on aggregated and anonymized Microsoft telemetry data, adjusted for differences in operating system and device-market share, internet penetration, and country populations.

According to the report, 16.3% of the global population used generative AI tools in the second half of 2025, up from 15.1% in the first half, representing an increase of 1.2 percentage points. This means roughly one in six people globally are now using these technologies.

The report highlights a growing disparity between regions. In the Global North, 24.7% of the working-age population used AI tools in H2 2025, compared to 14.1% in the Global South. The gap between the two regions widened from 9.8 percentage points in H1 to 10.6 percentage points in H2 2025. AI adoption in the Global North grew nearly twice as fast as in the Global South during this period.
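The headline figures follow directly from the quoted percentages. A quick sketch of the arithmetic, using the numbers as stated in the report:

```python
# Recomputing the report's headline figures from the quoted percentages.
global_h1, global_h2 = 15.1, 16.3      # global adoption, H1 vs H2 2025
north_h2, south_h2 = 24.7, 14.1        # working-age adoption by region, H2 2025
gap_h1 = 9.8                           # North-South gap in H1, as reported

rise = round(global_h2 - global_h1, 1)     # global increase in percentage points
gap_h2 = round(north_h2 - south_h2, 1)     # North-South gap in H2
widening = round(gap_h2 - gap_h1, 1)       # how much the gap grew

print(rise, gap_h2, widening)   # 1.2 10.6 0.8
```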

At the country level, the United Arab Emirates leads, with 64.0% of the working-age population using AI, up from 59.4% in the first half of the year. Singapore follows at 60.9%, with Norway (46.4%), Ireland (44.6%), France (44.0%), and Spain (41.8%) completing the top six.

South Korea posted the largest ranking jump, moving from 25th to 18th place globally. Its adoption increased from 25.9% to 30.7%, representing the largest national gain in the reporting period. The United States has an adoption rate of 28.3%, falling from 23rd to 24th position, while in H2 2025 China reached 16.3%, India 15.7%, and Japan 19.1%. At the lower end, Cambodia recorded the smallest adoption at 5.1%.

Microsoft reports global AI use rising, but high-income nations accelerate faster, deepening the North–South divide.

Economy                 H1 2025 AI Diffusion    H2 2025 AI Diffusion
United Arab Emirates    59.40%                  64.00%
Singapore               58.60%                  60.90%
Norway                  45.30%                  46.40%
Ireland                 41.70%                  44.60%
France                  40.90%                  44.00%
Spain                   39.70%                  41.80%
New Zealand             37.60%                  40.50%
Netherlands             36.30%                  38.90%
United Kingdom          36.40%                  38.90%
Qatar                   35.70%                  38.30%
Laos                    6.00%                   6.70%
Armenia                 6.20%                   6.60%
Sri Lanka               6.20%                   6.60%
Uzbekistan              5.70%                   6.30%
Rwanda                  6.00%                   6.30%
Cuba                    5.70%                   6.10%
Afghanistan             5.10%                   5.60%
Tajikistan              5.10%                   5.60%
Turkmenistan            5.10%                   5.60%
Cambodia                4.60%                   5.10%

The report notes that of the ten countries with the largest adoption gains, all are classified as high-income economies, underscoring that recent adoption growth remains concentrated in nations with established digital infrastructure.

Microsoft emphasizes that no single metric is perfect. The AI Economy Institute continues refining its measurement of AI diffusion globally and expects to complement its current metric with additional indicators as they become available.

All in all, the findings show that while global AI adoption is rising, the benefits are unevenly distributed, with high-income countries and digitally advanced regions leading the growth. Bridging this divide remains a key challenge as AI usage expands worldwide.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited, and published by humans.

Read next:

• AI Tools Increasingly Used for Search, But Users Still Verify Results

• YouTube Updates Search Filters, Removes Some Sorting Options

• GoDaddy Customer Reports Renewal Charges Beyond Maximum Domain Term


by Asim BN via Digital Information World

YouTube Updates Search Filters, Removes Some Sorting Options

YouTube has updated its search filter system, introducing new labels and options while removing several existing sorting features, according to an announcement posted on January 8, 2026 by Hank from TeamYouTube.

The changes affect how users filter and prioritize search results on the platform.

What Changed

YouTube added a Shorts filter under the Type menu, allowing users to choose between short-form videos and longer videos when searching.

YouTube redesigns search filters, adds Shorts category, drops Last Hour and Upload date sorting options.

The “Sort By” menu has been renamed to “Prioritize.” Within this Prioritize menu, the former “View count” option is now labeled “Popularity.” YouTube said this option uses view count and other relevance signals, such as watch time, to rank videos for a specific search query.

According to TeamYouTube, two filters were removed:

  • “Upload Date – Last Hour”

  • “Sort by Rating”

In the previous version of YouTube’s search filters, users could also sort results directly by Upload date, View count, or Rating, and could filter for videos uploaded in the last hour; none of these options remain available.

In the updated version, the Upload Date section now only offers Today, This week, This month, and This year. The Prioritize section now contains only Relevance and Popularity, and the “Sort by Upload date” option is no longer available as a sorting method.

Why YouTube Made the Changes

TeamYouTube said the filter menu was simplified to make the search experience more intuitive. The company stated that some options were removed because they were not working as expected and had led to user complaints; YouTube did not provide specific examples of the issues in the announcement.

YouTube also said users can still find recent videos using the remaining Upload Date filters and can find widely viewed content using the Popularity option.

User Responses

Many users in the comment section expressed dissatisfaction with the removal of the “Sort by Upload date” and “Last Hour” options.

Several said they relied on chronological sorting to find newly uploaded content. Others said the new system makes it harder to locate recent videos because results are no longer ordered strictly by upload time.

Some users asked for a recency option to be added to the Prioritize menu. Others said the current filters return unrelated or older content when searching for recent uploads.

A smaller number of users thanked YouTube for the update and welcomed the changes.

YouTube’s Position

In the announcement, TeamYouTube invited users to share feedback and directed them to the Help Center for more information on using search filters. No additional responses addressing the specific concerns about the removed options were included in the post.

Notes: This post was drafted with the assistance of AI tools and reviewed, edited/fact-checked, and published by humans. 

Read next: 

• GoDaddy Customer Reports Renewal Charges Beyond Maximum Domain Term

• Why does time go by so fast, and how can we slow it down? (Q&A)


by Ayaz Khan via Digital Information World