According to a new study published in the International Journal of Human-Computer Studies, some students use AI chatbots for emotional support and social conversation. Students who do so report higher levels of loneliness and lower social support, turning to chatbots to fill that gap. Social chatbots such as Character AI and Replika are built for engaging, stimulating, and empathetic conversation. These chatbots are already shifting how humans communicate, and as they advance, our socio-cultural environment will change with them, with the effects likely most visible among young teens.
The study set out to answer several questions: why teens engage in social conversations with chatbots, how many of them do so, and whether the behavior is linked to lower social support and greater loneliness. The researchers surveyed 1,599 students from 15 Danish schools, combining qualitative and quantitative methods. The main focus was whether students engage in friend-like conversations with chatbots, both emotionally and casually.
The results showed that 14.6% of students reported having friend-like conversations with chatbots, but most of these interactions were utility-based rather than social. Only 2.4% of students reported having social or emotional conversations with AI chatbots. The phenomenon is therefore not as widespread as many assume, though whether it will grow in the future remains an open question.
Students who had emotional conversations with chatbots showed signs of loneliness and reported less support from their peers. When these students feel lonely, angry, or down, they turn to chatbots to share their feelings, using them as coping tools. This raises concerns about their emotional well-being, and the direction of causality is unclear: chatbots may contribute to loneliness, or lonely students may simply be more likely to turn to chatbots.
The research has limitations too: it is subject to self-selection bias, and no clear definition of a friend-like conversation was given. More methods, and more research, are needed to understand why students use social chatbots.
Image: DIW-Aigen
Read next: New Study Shows Bots Can Increase Online Engagement but This Can Decrease Human-to-Human Interaction
by Arooj Ahmed via Digital Information World
Saturday, February 1, 2025
Sam Altman Admits OpenAI’s Closed Strategy Is Flawed as DeepSeek Disrupts AI Market
Sam Altman made a striking admission yesterday, saying OpenAI may be on the wrong side of history when it comes to open-source AI. The remark is notable because it signals a major shift in strategy.
Competition from China continues to heat up, with open models launching at a fraction of the cost of proprietary ones while offering impressive capabilities. The candid session came as DeepSeek's new R1 model rattled global markets; its strong performance and low price are giving users reason to look beyond the makers of ChatGPT.
Altman said he personally feels the company might be on the wrong side of history and needs to rethink its open-source strategy. Not everyone at OpenAI shares that view, however, and he indicated it is not the company's highest priority at the moment.
The statement marks a major departure from the proprietary approach the company has taken in recent years, one that drew serious backlash from AI experts and former allies, including SpaceX CEO Elon Musk, who is suing the firm for betraying its original open-source mission.
Altman's comments come at a time of serious market turmoil caused by DeepSeek, which reportedly trained its model for only a few million dollars. The news sent tech-giant stocks tumbling, Nvidia hardest of all: roughly $600 billion in market value was wiped out, the largest single-day loss for any US-listed firm.
The organization says it will keep producing better models, but Altman conceded its lead will be far narrower than in the past given DeepSeek's breakthrough. For now, he views the company's closed strategy as flawed.
DeepSeek's success signals a major shift in AI dynamics. That it could achieve such startling results with just 2,000 Nvidia H800 GPUs is remarkable; leading AI labs typically use 10,000 chips or more. The results suggest that algorithmic innovation and architectural optimization can matter more than raw computing power.
Furthermore, such a revelation threatens not only the firm's technology strategy but its entire business model, which is built around exclusive access to huge computational resources.
The debate pits innovation against the security concerns surrounding open-source models, and it keeps intensifying as firms store user data on servers in China, where government officials can gain access. US lawmakers have already floated restrictions, and NASA has become the latest agency to block the app, citing major privacy concerns.
Image: DIW-Aigen
Read next: OpenAI’s o3-Mini Debuts with Enhanced Reasoning, Challenging DeepSeek R1
by Dr. Hura Anwar via Digital Information World
OpenAI’s o3-Mini Debuts with Enhanced Reasoning, Challenging DeepSeek R1
It looks like OpenAI is answering the rising success of DeepSeek's R1 model by debuting a new o3-mini model. The question is whether it will be good enough to blunt the Chinese startup's momentum.
The launch comes a few days after anticipation grew among AI users on social media. This is the company's second model in its reasoning category: OpenAI says it takes more time to evaluate a problem and reflect on its chain of thought before answering the user's prompt.
The result is a model that OpenAI says performs at a PhD level on hard queries in math, engineering, science, and more. It is now available through ChatGPT, including a free tier, as well as through the company's API.
The model is cheaper, faster, and more capable than comparable earlier models, including its own sibling o1-mini. While it invites comparison with DeepSeek R1, this was a long-planned release, not a reaction: it was announced in December 2024, well before R1 launched, when Sam Altman shared on social media that it would debut on ChatGPT and the OpenAI API together.
The mini model will not be released as open source, meaning its weights cannot be downloaded for offline use or customized to the same extent. That could limit its appeal compared with DeepSeek-R1 for some applications.
The company shared no further details about the larger o3 model announced in December alongside o3-mini; at the time, it said there would be a delay of several weeks for third-party testing.
In performance and features, o3-mini is much like o1: it is designed for reasoning in coding, science, and math, with similar reasoning performance but some notable advantages.
OpenAI reports a 24% quicker response time than o1-mini, with average replies down to just 10.3 seconds, along with better accuracy; external testers preferred the mini model's replies 56% of the time.
It also makes 39% fewer errors on complex queries and performs better on coding and STEM tasks. Reasoning effort is adjustable too: users can select low, medium, or high levels, giving them a way to balance speed against reliability.
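The low, medium, and high reasoning levels are typically chosen per request. As a rough sketch, assuming a Chat Completions-style payload with a `reasoning_effort` parameter (check the current API reference for the exact field name), a request might be built like this; the `build_request` helper is this article's own illustration, not an official SDK function:

```python
# Sketch: selecting a reasoning level for an o3-mini style request.
# Assumption: the API accepts a `reasoning_effort` field taking
# "low" | "medium" | "high"; verify against the live API docs.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a request payload trading speed (low) against reliability (high)."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported reasoning effort: {effort!r}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# A quick lookup can use "low"; a hard proof might warrant "high".
fast = build_request("What is 17 * 24?", effort="low")
careful = build_request("Prove the sum of two odd numbers is even.", effort="high")
```

In practice the payload would be sent with an API client and a valid key; the point here is only that effort is a per-call knob, not a model-wide setting.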
The model posts strong benchmarks, outpacing o1 in plenty of cases according to the o3-mini system card the company published online. Its context window is 200k tokens with a maximum of 100k output tokens, much like the full o1 model, which edges out DeepSeek R1's context window. It has yet to match Google's Gemini 2.0 Flash Thinking model, however, whose context window goes up to 1M tokens.
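Those context figures translate into a simple budget check before sending a long prompt. The sketch below is illustrative arithmetic only, using the 200k context window and 100k output cap quoted above; the constants and the `fits` function are invented for this example, and real token counts would come from a tokenizer:

```python
# Back-of-the-envelope check of whether a request fits the stated limits:
# a 200k-token context window and a 100k-token output cap.
CONTEXT_WINDOW = 200_000
MAX_OUTPUT = 100_000

def fits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if the prompt plus the requested output stay within both limits."""
    if requested_output_tokens > MAX_OUTPUT:
        return False
    # Prompt and generated output share the same context window.
    return prompt_tokens + requested_output_tokens <= CONTEXT_WINDOW
```

So a 190k-token prompt leaves fewer than 10k tokens for the reply, even though the output cap alone would allow far more.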
Its reasoning may be strong, but the model falls behind on vision capabilities, so users who want to upload pictures may be better off with o1 for now.
Read next: Global AI Safety Report Warns of Cyber Threats, Manipulation, and Weaponization Risks
by Dr. Hura Anwar via Digital Information World
Friday, January 31, 2025
Global AI Safety Report Warns of Cyber Threats, Manipulation, and Weaponization Risks
The first International AI Safety Report has been released by Professor Yoshua Bengio and 100 other AI experts, laying out both the potential and the risks of AI's future. The report says AI can bring major advantages to areas like healthcare, education, and scientific research, improving global well-being, but it also warns of negative effects that will vary with how the technology develops. Backed by 30 countries and institutions including the UN, OECD, and EU, the 298-page report draws on insights from experts including Turing Award winners and Nobel laureates.
The report was presented for the Artificial Intelligence Action Summit, with a focus on creating a vision for AI's evolution and its integration into society. It addresses key questions: what general-purpose AI can do, what risks it poses, and how those risks can be minimized. The risks include malicious use, such as manipulation of public opinion, cyberattacks, and the use of AI in chemical or biological weapon attacks, as well as systemic risks with global and environmental impacts. Societal, policymaking, management, and technological risks can also accompany AI development, the report states.
Bengio said that if AI is developed and used responsibly, it can contribute to economic growth and modernize public services, improving people's lives overall. But first, he argued, we have to understand AI well enough to use it for the betterment of society.
Read next:
• AI App Spending Hits $1.42B in 2024, ChatGPT Leads with 274% Growth
• New Study Shows Bots Can Increase Online Engagement but This Can Decrease Human-to-Human Interaction
by Arooj Ahmed via Digital Information World
New Study Shows Bots Can Increase Online Engagement but This Can Decrease Human-to-Human Interaction
In mid-2024, Meta announced AI Studio, which lets users create their own AI chatbots for specific tasks such as writing social media captions, complete with an avatar. A Meta VP said the company is also working to give these AI chatbots proper accounts on its platforms, with a bio and profile picture. If AI chatbots get separate accounts, their automatically generated content could spread false information across social feeds, which is raising concerns about the role of bots on social media. Although Meta has removed some of its own AI bots, user-generated AI bots remain on its platforms. Having invested heavily in AI, firms are also looking for ways to get users interacting with the technology more, and bots are one of those ways. Reddit and X likewise host many pre-programmed bots that moderate content and interact with users, though these are not AI-based.
According to new research published in MIS Quarterly, bots help increase user engagement but also affect human-to-human interaction across platforms. Bots range from simple to advanced and perform tasks according to the rules given to them. WikiTextBot, for example, is a Reddit bot that replies with Wikipedia summaries to posts containing Wikipedia links. Bots of this type are known as "reflexive bots"; they work through the platform's application programming interface (API), which lets them see every post that falls within their area of interest. There are also "supervisory bots" on Reddit, which moderate posts and delete those that violate community guidelines.
These bots are rigid, performing only the tasks defined in their rules, but they could become far more capable as AI technologies are incorporated into them, which makes their impact on human-to-human interaction in online communities important to understand. The researchers analyzed Reddit posts from 2005 to 2019 to see how the structure of human-to-human interaction changed as bot activity increased. They found that growth in reflexive bots that generate and share content increased engagement, but it also led to fewer human posts and less back-and-forth between humans. Supervisory bots decreased human interaction as well, because fewer human moderators were needed to enforce community rules; key members who once interacted with each other to create and uphold community norms now do so less. AI bots that can create accounts and interact with users may drive higher engagement for platforms, but at the cost of human-to-human interaction.
Image: DIW-Aigen
Read next: Cybercrime on the Rise: The Dangers of Phishing Scams and How to Protect Yourself
by Arooj Ahmed via Digital Information World
Apple Tops $124B Revenue: iPhone Slips in China, 2.35B Devices Active, 550M Added in 2024
Apple just shared its all-important holiday quarter report, calling it the best quarter ever for the company.
Revenue crossed $124 billion, up 4% year over year. iPhone sales dropped in China, however, coming in below the expected $71 billion; Tim Cook linked the shortfall to Apple Intelligence not yet being available there.
Meanwhile, the biggest performer was the Mac, up 15%, while Services grew to a new sales high of $26 billion. During the quarter the company rolled out several new M4 Macs, including a redesigned Mac mini and revamped MacBook Pros.
iPad sales also rose, hitting the $8 billion mark for the first time since the first quarter of 2023. The firm launched a new iPad mini toward the end of last year, and its tablets flew off the shelves during the festive shopping period. If rumors hold, Apple will also introduce a new entry-level iPad this spring.
Apple's earnings call included fine details many have sought for years, including subscription numbers, which the Cupertino firm tends not to disclose on such calls. This year was different, and investors no doubt welcomed the news.
Apple's CEO said the active-device figure hit a new high last year at more than 2.35 billion. The previous figure stood at 2.2 billion devices, itself 400 million more than in 2022; taken as a whole, the install base has grown sharply, adding more than 550 million devices in recent years.
iPhone revenue did shrink year over year to $69 billion, but it remains a serious earner for the company. Cook said the arrival of Apple Intelligence had a positive impact on iPhone sales, though analysts disagreed.
Total revenue for the first fiscal quarter of 2025 stood at $124 billion, and the organization hopes to expand further into areas such as the Apple Vision Pro, projecting more revenue and a bigger install base ahead.
Great as the figures were, many investors were keen to know how they might be affected by a regulatory environment that could change under President Trump.
Asked whether a lighter, more favorable regulatory environment might benefit the company, Apple's CFO Kevan Parekh quoted figures rather than answering directly, focusing on rising customer engagement across all services and in different parts of the world. Tim Cook likewise kept quiet about what changes could affect the company.
Image: DIW-Aigen
Read next: Users Face Legal and Financial Burdens Under DeepSeek’s Strict Terms of Use
by Dr. Hura Anwar via Digital Information World
Thursday, January 30, 2025
Users Face Legal and Financial Burdens Under DeepSeek’s Strict Terms of Use
Most internet users do not read terms of use. They install the app, press agree, and carry on using apps and digital platforms. It has become the norm; consumers seem to have no time to check what is written inside. But sometimes, buried in the long text, there are words that can change everything. DeepSeek's terms of use are one example.
These are not the usual terms that simply set rules for using an app. They shift responsibility in a way that can cost users real money. If a user violates the terms, it is not only about losing access; it is also about financial liability, and not a small one: legal fees, travel costs, evidence-collection expenses, administrative fines. DeepSeek puts all of these on the user's shoulders. How many people notice this before clicking accept?
Companies sometimes play with terms of use because they know people do not read them. Amazon once included a strange line in its AWS Service Terms saying certain rules would not apply in the event of a zombie apocalypse, a joke hidden inside a serious document. DeepSeek's terms are no joke, though; they are serious words with serious effect.
DeepSeek writes clear policy about how they handle rule violations. They decide if user breaks rules. Nobody else. No outside review, no appeal system. If DeepSeek believes user violated terms, they take action. This can mean limiting account, removing content, blocking access, permanently banning user. No warning needed and no explanation required. It is their decision. The AI chatbot platform also have rights to announce it publicly. If they want, they can restore account later. If not, then you've no other options.
Their control does not stop at app usage. If DeepSeek thinks user has done something illegal, they take further steps. They do not only ban you they also keep records and report case to authorities. They cooperate with investigation. What kind of actions are illegal. It is not written clearly. But if DeepSeek believes there is problem, they act immediately.
One of the most concerning parts is the financial responsibility. If a legal problem arises from a user’s actions, DeepSeek does not accept liability; it puts the entire financial burden on the user. If a third party brings a legal claim against DeepSeek over something a user did, the policy says the user must pay. And that covers more than fines: attorney charges, arbitration payments, evidence collection, investigation fees, even DeepSeek’s travel costs for handling the case. It is all listed in the terms.
There is one important note in the document: no contract can strip away consumer rights protected by law. The legal protections that exist in a user’s country remain valid, and DeepSeek’s terms cannot erase those rights.
Most people never check what they are agreeing to; they just press accept and keep using the app. But DeepSeek’s terms raise a serious question: how much risk is too much for using one app?
Image: Solen Feyissa / Unsplash
Read next:
• Apple’s AI Transparency at Risk Amid Growing Privacy and Data Scrutiny
• Cybercrime on the Rise: The Dangers of Phishing Scams and How to Protect Yourself
• Your Weight Loss App Might Be Spying on You, Here’s What You Need to Know!
by Asim BN via Digital Information World