DeepSeek turned plenty of heads when it released AI models that many felt rivaled OpenAI's at a fraction of the cost. That fame is now fading, however, thanks to some questionable findings.
The latest nation to ban the Chinese startup is South Korea, which confirmed the decision was taken after DeepSeek sent user data to TikTok's parent firm, ByteDance. The news comes days after the country's Personal Information Protection Commission (PIPC) said new downloads of the app had been suspended because the company failed to comply with the agency's data protection rules.
The company has set up a legal team in South Korea to look into the matter, and it has acknowledged neglecting the country's data protection laws. Questions remain, however, about exactly which data was sent and to what extent.
Under South Korean law, explicit consent is required from users before their personal information can be given to third parties. DeepSeek had been installed more than a million times before it was pulled from local app stores this past weekend.
Italy's data protection authority, the Garante, has likewise ordered a probe and blocked the chatbot after the company could not address the regulator's concerns about its privacy policies. Critics of China have long noted that the country's National Intelligence Law gives the government full access to any data it requires from Chinese companies when investigating national security threats or major offenses.
In context, the Chinese law is not unlike how the US handles such matters: many American businesses must likewise cooperate with the authorities if and when asked to do so.
Image: DIW-Aigen
Read next: Facebook No Longer Wants to Be Your Live Video Archive, Store Your Content Elsewhere or Get Ready to Be Deleted
by Dr. Hura Anwar via Digital Information World
Wednesday, February 19, 2025
Facebook No Longer Wants to Be Your Live Video Archive, Store Your Content Elsewhere or Get Ready to Be Deleted
Social media giant Facebook just confirmed a one-month storage limit for all live videos on the app.
This means that once the 30-day threshold is crossed, live videos will be automatically deleted from the app. Previously, live videos were stored indefinitely, but that is no longer the case.
The policy takes effect today, though the first deletions will not happen for several months. Before any deletion, the app promises to notify users by email, after which they get 90 days to either transfer the content or download it.
The company recently noted that most live video views occur in the first few weeks after a broadcast, so clearing out old material will reduce serving and storage costs. The news is not sitting well with many users, however, whose old videos will be wiped out once the automated deletions begin.
To help, Facebook provides tools in the app's interface for downloading videos to a computer or smartphone, along with options to export to cloud storage providers such as Dropbox or Google Drive.
Users can download videos individually or in bulk. Alternatively, footage can live on indefinitely as Facebook Reels, provided the videos are trimmed into clips of 90 seconds or less.
Users can expect the deletions to roll out in waves over the next few months. Those needing more time can use the app's option to defer removal for another six months; after that, if no choice is made, all old live videos will be deleted for good.
Image: FB
Read next: The Demand for Premium Segment Smartphones is Increasing, with Apple Dominating the Market Share
by Dr. Hura Anwar via Digital Information World
Tuesday, February 18, 2025
The Demand for Premium Segment Smartphones is Increasing, with Apple Dominating the Market Share
According to new data from Counterpoint Research, 25% of the smartphones shipped in 2024 had an average wholesale price of $600 or more, a sign that the market is in good shape. Buyers are increasingly willing to pay up for expensive handsets: the premium segment's share of the market rose from 15% in 2020 to 25% in 2024. Apple leads the premium segment with a 67% market share, with Samsung holding the second-highest share, followed by Huawei, Xiaomi, and Google.
The ultra-premium segment (average wholesale price above $1,000) also grew by 40%, as buyers gradually move toward even more expensive devices. Apple tops this segment too, with an average selling price of more than $900. Reports also note that device makers are now prioritizing revenue over volume: the premium segment grew 8% year over year, outpacing the overall smartphone market's 5% growth.
The US took the largest share of premium smartphone sales at 25%, followed by China at 24%. India, one of the largest smartphone markets by volume, has seen a fivefold increase in premium volumes since 2020. Most customers in India cannot afford premium smartphones outright, but financing plans and trade-in offers make the purchase easier.
Demand for premium smartphones is expected to keep rising thanks to advantages like better displays, faster processors, high-quality cameras, and AI features. To justify the high prices, device makers are also offering future-proof hardware and multi-year software support.
Read next:
• LinkedIn Surpasses X, Instagram, and Facebook, Securing the Highest Revenue Among Social Media Platforms Globally
• Overtrust in AI Alters Decision-Making, Raising Concerns for Military Applications
by Arooj Ahmed via Digital Information World
Overtrust in AI Alters Decision-Making, Raising Concerns for Military Applications
According to research recently published in Scientific Reports, AI users are often overly influenced by the technology, even when the AI openly admits its limitations. In the study, 558 participants took part in two experiments, and the results showed that people blindly trust AI, especially in uncertain situations. One of the researchers, Colin Holbrook, called the findings concerning: society should understand the risks of over-reliance on AI, particularly while the technology is still improving day by day.
The researchers designed experiments that mimicked high-pressure, uncertain real-world military decisions. Participants were first shown images of innocent civilians, then footage of a drone strike. They faced a zero-sum dilemma: failing to identify and eliminate enemies could cost civilian lives, while misidentifying civilians as enemies would mean killing innocent people. Images marked with enemy or civilian symbols flashed for just 650 milliseconds, with an AI assisting participants in identifying the symbols. Participants then had two opportunities to confirm or change their choice as the AI offered its input.
The first experiment tested whether a physical robot would influence trust more than a virtual one: in one scenario, participants worked alongside a full-size, human-like android standing 1.75 meters tall. The results showed that the robot's physical presence had little effect on how much participants trusted its advice. The second experiment ran online with a larger group: half the participants interacted with a highly anthropomorphic virtual robot exhibiting human-like behavior, while the other half used a basic computer interface that responded only with text. Even the basic AI had a significant influence on participants' decision-making.
Across both experiments, participants changed their decisions based on the AI's random advice: 58.3% did so in the first experiment and 67.3% in the second. Participants were initially correct about 70% of the time, but their accuracy dropped to around 50% once they followed the AI's unreliable guidance.
When the AI agreed with a participant's initial decision, confidence rose by 16%; when it disagreed, confidence dropped by 9.48%. Participants who believed the AI was smarter were more likely to trust its judgment. With the U.S. Air Force already testing AI co-pilots, understanding and addressing the risks of excessive reliance on AI, especially in military decisions, is all the more important.
Image: DIW-Aigen
Read next: As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models
by Arooj Ahmed via Digital Information World
Researchers Uncover YouTube’s True Scale as Google Withholds Platform Insights
If you're an avid YouTube user, the platform can seem thoroughly public-facing: there is content galore, and you can research anything and everything under the sun.
Yet the video-sharing app is oddly reluctant to share statistics about its own success. Why does it go quiet when asked simple questions, such as how much content viewers actually watch? The silence is striking for a platform with such public effects on culture and today's economy, and one with more than 2.5 billion monthly users, roughly one in three people on Earth, with the average user watching up to 29 hours of content each month.
Do the math and that works out to about 8.3 million years of content watched on YouTube every month. Over the past year, that is the equivalent of roughly 100 million years, vastly longer than the whole of human history.
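A quick back-of-the-envelope check of those figures, using the 2.5 billion monthly users and 29 hours of watch time per user cited above, confirms the arithmetic:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a (non-leap) year

monthly_users = 2.5e9   # monthly active users, per the article
hours_per_user = 29     # average watch hours per user per month

watch_hours_per_month = monthly_users * hours_per_user
watch_years_per_month = watch_hours_per_month / HOURS_PER_YEAR
watch_years_per_year = watch_years_per_month * 12

# ~8.3 million years of video watched per month
print(f"{watch_years_per_month:,.1f} years of video watched per month")
# ~100 million years of video watched per year
print(f"{watch_years_per_year:,.0f} years of video watched per year")
```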
But the curiosity doesn't end there. How many videos are actually on the platform, and what are they about? Which languages do most YouTubers speak? Sadly, the app won't be upfront about any of it.
Herein lies the problem. Many feel YouTube operates largely in the dark on matters users ought to know about, partly because there is no simple way to obtain a random sample of videos. You can either take what the algorithm recommends or search manually, so unbiased samples suitable for serious study are hard to come by.
Several years back, a team of researchers came up with the best available workaround: a computer program that pulls up content at random by trying billions of URLs. Some might call it a bot, but that is a stretch; Zuckerman, one of the researchers behind the project, feels it's more accurate to call it a scraper.
Surveys show that two decades into its existence, YouTube remains the most popular app in America, used by up to 83% of adults and 93% of teens. By some estimates it is also the second most popular website on the planet, topped only by Google.
Now the platform has entered its third decade, yet much about it remains a secret. A spokesperson pointed to a blog post about the recommendation algorithm but refused to comment on the statistics and other issues highlighted above, so the mystery continues.
It is hard to know what happens inside these apps: the organizations that run them do make public disclosures, but those disclosures are fragmentary and often misleading. Google does not want to tell others just how large the platform is, how many users it has, or what its content amounts to. It is almost as if Google would rather not reveal the influential position it holds in people's lives.
Zuckerman and his team pressed on regardless. Their program rolls out random strings of characters and quickly checks whether each one corresponds to a video; whenever the scraper finds one, it records it. This works because YouTube URLs follow a predictable format. To assemble their data set, the scraper had to churn through nearly 18 trillion potential URLs, and despite the enormous number of bad guesses for every video found, the results could finally be analyzed.
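The researchers' exact tooling isn't described here, but the brute-force idea can be sketched roughly as follows: generate random 11-character IDs from YouTube's URL alphabet and test whether each one resolves to a real video. The ID format below is an assumption based on YouTube's publicly visible URL scheme, and `video_exists` is a stub standing in for the actual network check:

```python
import random
import string

# YouTube video IDs appear to be 11 characters drawn from a 64-symbol
# alphabet (letters, digits, '-' and '_'), as seen in URLs like
# youtube.com/watch?v=<id>. This is an assumption for illustration.
ID_ALPHABET = string.ascii_letters + string.digits + "-_"
ID_LENGTH = 11

def random_video_id(rng: random.Random) -> str:
    """Generate one random candidate video ID."""
    return "".join(rng.choice(ID_ALPHABET) for _ in range(ID_LENGTH))

def video_exists(video_id: str) -> bool:
    """Stub: a real scraper would request the watch URL for this ID
    and check whether it resolves to an actual video."""
    return False  # placeholder so the sketch runs offline

def sample_random_videos(n_guesses: int, seed: int = 0) -> list[str]:
    """Try n_guesses random IDs and keep the ones that resolve."""
    rng = random.Random(seed)
    found = []
    for _ in range(n_guesses):
        vid = random_video_id(rng)
        if video_exists(vid):
            found.append(vid)
    return found
```

Because the ID space is astronomically larger than the number of real videos, a naive approach like this produces vanishingly few hits per guess, which is why the project had to work through trillions of candidate URLs.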
Among those secret statistics is the number of videos uploaded to the platform, a figure Google used to share but no longer does. By mid-2024 it stood at 14.8 billion videos, roughly a 60% rise over the year before.
And while YouTube was originally built to serve regular people, the company now seems keener on serving professional creators than anyone else. The scraping project by Zuckerman's lab suggests the platform is less like television and more like infrastructure.
Take a look at the charts below for more insights:
Key takeaways from above charts:
The first chart illustrates the distribution of estimated views per YouTube video, showing that most videos receive relatively few views. The highest frequency occurs in the 17-32 views range, with a peak around 10-11%. The majority of videos fall below 2,048 views, while only a tiny fraction surpasses millions.
The second chart demonstrates YouTube's rapid expansion, growing from under a billion videos in 2010 to over 14 billion by 2024. The increase has been particularly sharp since 2018, reflecting YouTube’s accelerating content production.
The third graphic highlights language distribution, with English dominating at nearly 30%, followed by Hindi (around 10%), Spanish, Portuguese, and Russian, each contributing approximately 5-10%. Other languages like Arabic, Japanese, and Bengali hold smaller shares, with diverse representation across global languages.
H/T: BBC
Read next: Meta Acknowledges Error Sent To Some Facebook Pages Which Asked Them To Confirm That Their Page Isn’t Aimed at Kids Under 13
by Dr. Hura Anwar via Digital Information World
Meta Acknowledges Error Sent To Some Facebook Pages Which Asked Them To Confirm That Their Page Isn’t Aimed at Kids Under 13
Tech giant Meta has confirmed that an alert asking Facebook pages to confirm they were not aimed at kids under 13 was sent out in error.
Facebook's parent firm acknowledged the alert was the result of a bug, which has since been fixed. The notice worried many of the page managers who saw it; Reddit user Wocky-Slush-Jo-Mama was among the first to share a screenshot of the alert as it appeared on a page.
Image: u/Wocky-Slush-Jo-Mama
The screenshot showed Meta seeking to clarify whether the page was directed at kids, asking the owner, like many others, to confirm by September 30 that the page was not meant for anyone under 13. Clicking the alert brought up further information.
The whole process looked designed to obtain explicit agreement from pages that they are not directed at youngsters, which would give the social media giant grounds to remove any page it felt was targeting minors. Since page owners received reminders and had to agree directly, it appeared to be an enforcement measure to keep kids safe on Meta's apps.
Meta confirmed it was aware of the bug and has fixed it. The company is experimenting with alerts to make sure pages comply with the Terms of Use, which bar people under 13 from the platform; in this case, it added, the alerts were sent in error.
So it was a false alarm, and there is no reason to worry right now. Meta says users should no longer see the alert, and if they do, they can ignore it. Even so, the tech giant may well look to confirm this kind of information in the future.
If it does, page owners whose content ends up targeting minors may need to rethink their approach. Meta's policies already bar minors from the platform, and pages cannot produce content aimed at them.
Read next: New Study Shows AI Cannot Be Trusted for News as It Lacks Accuracy
by Dr. Hura Anwar via Digital Information World
New Study Shows AI Cannot Be Trusted for News as It Lacks Accuracy
According to a new study by the BBC, AI assistants often provide inaccurate and misleading news, which can have serious consequences. BBC journalists asked AI chatbots, including Copilot, ChatGPT, Perplexity, and Gemini, 100 questions about current news, instructing them to cite BBC articles as their sources. The results showed that 51% of the AI responses had significant issues, while 91% had at least slight issues. Of the responses citing BBC content, 19% contained incorrect statistics or dates, and 13% of the quotes attributed to BBC articles were fabricated or altered. The assistants also struggled to differentiate facts from opinions and often failed to provide context.
All of this suggests AI assistants should not be relied on for news: their hallucination and misinformation problems can mislead audiences. One Gemini response claimed the NHS advises people not to start vaping, when the actual article said the NHS recommends vaping to those trying to quit smoking. Other responses gave inaccurate information about political leaders and TV presenters.
The study matters because people need to be able to trust the news they consume, whatever its source, including AI assistants. Some people prefer human-led journalism, while others say they partly trust AI-delivered news. Either way, accuracy matters most, and human review remains essential even when AI is used. Because AI often lacks context, it can easily become misleading and problematic as a news source.
Image: DIW-Aigen
Read next: New Study Shows LLMs are Good At Generalizing on their Own Without Human Input
by Arooj Ahmed via Digital Information World