Monday, January 13, 2025

Researchers Explore How Personality and Integrity Shape Trust in AI Technology

AI has become an inescapable part of our lives, no matter how hard we try to avoid it. But do people trust it, and if so, what influences that decision? Researchers from the University of Basel conducted a study to find out to what extent people trust AI chatbots and what factors that trust depends on. For the study, the researchers created a fictional text-based AI platform called Conversea and analyzed the interactions between the chatbot and its users.

Many factors make us trust something: our own personality, how others behave toward us, their personalities, and specific situations that call for trust. The ability to trust develops from childhood and helps us decide how open we want to be with someone. The researchers say that the factors that shape trust between people play the same role in our trust in AI systems.

The characteristics most important for trust are integrity and competence, and these two help humans evaluate whether an AI is reliable. The study also found that participants do not think of an AI in light of the company that created it; they treat it as a separate entity. Whether a chatbot is impersonal or personalized also shapes our perception of it: when a chatbot referred to participants by name and mentioned previous conversations, they rated it as competent and kind.

When an AI chatbot is personalized, users perceive it as more human-like, which is why they tend to share more personal information with it and want to use it more. Yet the study found no difference in how much participants trusted personalized versus impersonal chatbots. The study argues that integrity is the most important factor for trust to develop, so building integrity into AI chatbots should be prioritized. Many lonely people have also come to rely on AI because the interactions feel personal to them.

The researchers said that AI systems should be reliable above all else. They did not take a position on whether trusting AI is good or bad, but they caution that overusing AI as a friend can isolate us from our social environments. AI chatbots should always present advice together with its consequences and risks. They should also stop inventing answers and instead tell users when they do not have one, to keep expectations grounded in reality.

Image: DIW-Aigen

Read next: China’s AI Chatbot Market Sees ByteDance’s Doubao Leading Through Innovation and Accessibility
by Arooj Ahmed via Digital Information World

Sunday, January 12, 2025

China’s AI Chatbot Market Sees ByteDance’s Doubao Leading Through Innovation and Accessibility

Doubao is an AI chatbot created by ByteDance, the parent company of Douyin/TikTok. It was released in August 2024 in China and soon became one of the country's most prominent AI chatbots. It has no clear profit model, yet it became successful and is now used by millions of people in China. As of November 2024, Doubao had nearly 60 million monthly active users, putting it far ahead of its competitors: Baidu's ERNIE Bot had about 13 million monthly active users, while Moonshot AI's Kimi had 12.8 million. Doubao is well liked by users for its high functionality and user experience.

Just like ChatGPT, Doubao offers users advanced image, video, and text processing capabilities. It is a multimodal platform on which users can generate high-quality text, images, and videos, and can even ask for image interpretations and audio-based content. The platform is versatile, and ByteDance promises to bring more innovations to Doubao. It is also helping users with needs such as academic research, content creation, and personal entertainment, resonating well with users by providing helpful, deeper interactions.

ByteDance also has a powerful ecosystem that has contributed to Doubao's success. Douyin, the Chinese equivalent of TikTok, has a massive user base, so the company has integrated Doubao into that digital landscape too. It also offers data-driven personalization, which helps users shape their own experiences and connect with Doubao directly.

Doubao gives users highly relevant, contextualized responses, which sets it apart from its competitors. ByteDance's technological edge and advanced functionality are fueling Doubao's rapid growth. For all these reasons, Doubao currently leads the AI chatbot landscape.

Chinese tech giants are also lowering their LLM prices to make them more accessible. The price of Doubao's main model is 99.3% below average industry prices for business users. As affordability is a major factor in adoption in China, this pricing strategy is a good way to make the chatbot widespread so that all types of users can access it.


Domestic Ranking | AI Product (Company) | November App MAU | MAU Monthly Change
1 | Doubao (Douyin/ByteDance) | 59.98M | +16.92%
2 | ERNIE Bot (Baidu) | 12.99M | +3.33%
3 | Kimi (Moonshot AI) | 12.82M | +27.40%
4 | ChatGLM (Zhipu AI) | 6.37M | +22.18%
5 | iFlyTek Spark (iFlyTek) | 5.94M | +4.23%
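As a rough sanity check on the table, each app's monthly change figure implies its prior-month MAU (the November figure divided by one plus the growth rate). A minimal sketch, assuming the "+X%" column means month-over-month growth into November:

```python
# November 2024 MAU (millions) and reported month-over-month change (%),
# taken from the table above.
november = {
    "Doubao": (59.98, 16.92),
    "ERNIE Bot": (12.99, 3.33),
    "Kimi": (12.82, 27.40),
    "ChatGLM": (6.37, 22.18),
    "iFlyTek Spark": (5.94, 4.23),
}

def prior_month_mau(current_millions, change_percent):
    """Back out the previous month's MAU implied by the growth figure."""
    return current_millions / (1 + change_percent / 100)

for app, (mau, change) in november.items():
    print(f"{app}: ~{prior_month_mau(mau, change):.2f}M in October")
# e.g. Doubao: ~51.30M in October
```

By this estimate Doubao added roughly 8.7 million monthly users in a single month, which is consistent with the article's framing of its rapid growth.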

Read next: Social Media’s Youngest Fans: The Platforms Kids Can’t Stay Away From
by Arooj Ahmed via Digital Information World

Social Media’s Youngest Fans: The Platforms Kids Can’t Stay Away From

TikTok is the most used social media platform among users of all ages, and a new study published in Academic Pediatrics found that it is the most popular platform among underage users too. Platforms like Snapchat, Instagram, and TikTok restrict accounts for children under 13, but children use them anyway. The study says that many 11-to-15-year-olds in America have at least one social media account, while 6.3% of young children also have secret accounts their parents do not know about. The Children's Online Privacy Protection Act was created to protect children online, but many children bypass the apps' age restrictions and get exposed to problematic content, which affects both their mental and physical health.

The study used data from the Adolescent Brain Cognitive Development (ABCD) study, which followed about 11,000 children in the US to track their cognitive development. Participants came from diverse ethnic, socioeconomic, geographic, and racial backgrounds. The researchers analyzed a dataset of 10,092 participants aged 11 to 15, collected between 2019 and 2021. Participants completed surveys about their social media usage, including how much they use social media, which platforms they prefer, and whether they have a secret account. The Social Media Addiction Questionnaire was also included to measure the harmful effects of prolonged social media use in children.

The results showed that 69.5% of participants had at least one social media account, even though most platforms require users to be 13 or older. Among children under 13, 63.8% admitted to having at least one account, and TikTok was the most popular network among them: 68.2% of social media users under 13 used TikTok, 62.9% used YouTube, and Instagram (57.3%) and Snapchat (55.2%) were also among the most used platforms.
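For a sense of scale, the headline percentages can be converted into approximate participant counts. A minimal sketch, assuming the 69.5% figure applies to the full analytic sample of 10,092:

```python
# Convert the study's reported percentages into approximate headcounts.
# Assumes the 69.5% figure refers to the full analytic sample of 10,092.
SAMPLE_SIZE = 10_092

def approx_count(percent, total=SAMPLE_SIZE):
    """Approximate number of participants implied by a reported percentage."""
    return round(total * percent / 100)

with_account = approx_count(69.5)  # participants with at least one account
print(with_account)  # prints 7014
```

So roughly 7,000 of the surveyed 11-to-15-year-olds reported having at least one social media account.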

Study Highlights Adolescent Social Media Habits and Addiction Patterns in the ABCD Dataset

Social Media Addiction Questionnaire | Never | Very Rarely | Rarely | Sometimes | Often | Very Often
I spend a lot of time thinking about social media apps or planning my use of social media apps. | 31.0% | 22.9% | 20.7% | 18.8% | 4.7% | 1.8%
I feel the need to use social media apps more and more. | 43.2% | 19.1% | 22.8% | 10.9% | 3.0% | 0.9%
I use social media apps so I can forget about my problems. | 47.9% | 14.2% | 12.7% | 16.7% | 5.6% | 2.9%
I've tried to use my social media apps less but I can't. | 52.9% | 15.1% | 14.9% | 11.2% | 4.0% | 1.8%
I've become stressed or upset if I am not allowed to use my social media apps. | 58.0% | 15.0% | 12.1% | 10.0% | 3.3% | 1.5%
I use social media apps so much that it has had a bad effect on my schoolwork or job. | 66.6% | 13.3% | 9.3% | 7.6% | 2.3% | 0.9%

It is not surprising that many underage children use social media, because these platforms have no solid age verification systems; children can simply enter an older date of birth to access an app. The study also found that children under 13 had an average of 3.38 social media accounts, while adolescents were more likely than children under 13 to have a secret account hidden from their parents.

There was also a gender difference in social media usage among underage children. Girls were more likely to use platforms like Snapchat, TikTok, and Pinterest, while boys were more likely to use Reddit and YouTube. Girls were also more likely to become emotionally dependent on social media and spend significant time there. The researchers noted that social media usage among underage kids increased during Covid-19 as they became highly dependent on digital communication. The study sheds light on how social media usage among underage children can have serious consequences if platforms do not enforce strict age requirements.

Read next: Downloading Cracked Software? Beware of the Hidden Malware Stealing Your Info
by Arooj Ahmed via Digital Information World

What Are AI Companies Hiding? New Report Exposes Transparency Gaps in Top Models

There are a lot of AI models right now, but are AI companies really transparent about the "technical underpinnings" of their large language models (LLMs)? According to a new report from Americans for Responsible Innovation (ARI), an organization that advocates for AI regulation, many AI startups are far less open about the technical details of their models than the tech giants are. The tech giants are not very open either, but they offer some transparency compared with closed models. ARI reached this conclusion after analyzing AI models from Anthropic, xAI, OpenAI, Google, Meta, and 21 other companies.

David Robusto, a policy analyst at ARI, said there are many reasons companies tend not to be open about each AI update. Producing detailed documentation for every update takes a lot of time, effort, and resources. There is also always a chance that rivals will try to reverse-engineer the work from details in the documents. Keeping the technical details of their models and other products secret gives companies a competitive advantage, so they do not find it necessary to disclose everything about their updates.

The report says that third parties and policymakers need technical details to understand how the models work, especially in areas like defense and healthcare. Because some big foundation models are not transparent, decision-making becomes difficult. There should be regulations and industry-wide standards for AI model transparency, including mandatory details that companies must disclose no matter what. Without details about LLMs, we cannot compare models even with industry benchmarks.

According to the report, Llama 3.2 is the most transparent model, with detailed information about training procedures, model architecture, and computational requirements. GPT-4o and Gemini 1.5 were also somewhat transparent, while Grok-2 was the least transparent. The area where AI models were least transparent overall was technical transparency. The report also found that user-facing documentation was the best-scoring category, with an average score of 3.19 out of 4.0. In systematic risk evaluations, almost all models scored well except Grok-2. All models scored low on security, as many companies provided little information about how they protect their systems.



Read next:

• Downloading Cracked Software? Beware of the Hidden Malware Stealing Your Info

• Privacy Concerns Rise as Hackers Threaten to Expose Data from Top Apps Used by Millions
by Arooj Ahmed via Digital Information World

Saturday, January 11, 2025

Downloading Cracked Software? Beware of the Hidden Malware Stealing Your Info

Many people do not want to pay large sums for software and tools like Lightroom, Photoshop, and AutoCAD, so they use cracked versions from the internet. Although cracked versions appear to cost nothing, they carry a steeper price: malware that steals your sensitive information. Researchers from the security firm Trend Micro found that attackers spread fake installers across the internet and social media platforms like YouTube, and these installers carry information-stealing malware that evades detection.

Many YouTube videos offer cracked links to sought-after software, but as soon as a user clicks the link, it leads to reputable file hosting sites like Mega.nz and Mediafire. Most of the time, the legitimate-looking software installer contains malware that gets into the user's system when they hit download. This malware, called an infostealer, is designed to steal sensitive information from the infected system. Bank accounts, personal data, credentials, and other private information become easily accessible to attackers, who can exploit the data for fraud and identity theft.

The researchers gave the example of Autodesk Keygen, software that generates serial numbers. When a user searches for it on the web, many legitimate websites like OpenSea appear with a shortened link that directs the user to the malicious download.

How does this malware avoid detection? Many threat actors use reputable file hosting services that hide the malware's origin, so many antivirus programs fail to flag it. Many malicious files are also 900MB or larger and password-protected, which keeps the malware from being scanned and detected.

How Your Search for Free Software May Lead Straight to Data Theft
Image: Trend Micro

Read next: Privacy Concerns Rise as Hackers Threaten to Expose Data from Top Apps Used by Millions
by Arooj Ahmed via Digital Information World

Meta CEO Says Apple’s Success Is Restricted to the iPhone While Calling It Out For Limited Innovation

Mark Zuckerberg’s recent podcast appearance is causing a stir online, especially after his comments on Apple.

The Meta CEO did not shy away from speaking his mind on yesterday’s Joe Rogan podcast. This is where he was asked to comment on an array of different topics including the upcoming Trump administration and content moderation. However, what really got people’s attention was the discussion on Apple.

Zuckerberg is no fan of Apple's stringent policies, and we're all quite aware of this. In his latest interview, the Meta CEO argued that the 15 to 30% fees the iPhone maker charges through its App Store are a way to compensate for slowing iPhone sales. If that was not brutal enough, Zuckerberg also reduced the Cupertino firm's success to just the iPhone, calling out its limited innovation on any new product.

According to Zuckerberg, Apple has used the device to impose rules he feels are arbitrary. In his view, the company hasn't really created anything new in a long time: Steve Jobs devised the iPhone, and Apple has been sitting on its success for the past two decades.

Zuckerberg also said he was not sure Apple was doing great in the iPhone department either, as he suspected sales were on the decline. Every generation brings a new device, he continued, but that does not mean it's better than the previous one, so many users don't feel the need to upgrade.

When asked how the company nevertheless makes more money each year, Zuckerberg's logic was interesting: he suggested this might be linked to the growing number of developers and the 30% tax Apple takes from them.

Another leading issue on his mind was AirPods and how the company refused to give the Facebook parent firm the same level of iPhone access for its Meta Ray-Ban glasses.

While Zuckerberg called AirPods a great product, he questioned the point when Apple will not let an archrival build another great product that links to the iPhone in the same manner.

AirPods have exclusive capabilities to pair with iPhones: a very seamless connection that no other device can replicate, because the protocol is closed to everyone else. If others could use it, he argued, there would be a lot of great products competing with AirPods.

Meta has tried long and hard to get Apple to grant it access to improve connectivity for the Meta Ray-Ban glasses, but the answer is clearly no: Apple does not allow it, saying it would violate user privacy and safety. Zuckerberg countered that Apple's own design is what compromises user safety, and that the company simply blames others for it.

In the end, Zuckerberg expressed great optimism that Apple will soon be dethroned from its top position, because its innovation game is certainly not top-notch right now.

The next topic of discussion was Google and RCS for iPhones, and how Apple has failed to approve security and encryption for cross-platform messaging beyond its own designs. The RCS Universal Profile is yet to be fully adopted, but there is hope it's coming soon.

When asked about the Vision Pro, he said Apple failed miserably in that department: a product that pricey had no chance of competing with Meta's Quest, which came in at $300 or $400. He feels the second and third generations will hopefully be better than the first, but only time will tell, because right now it's just not working.

Image: DIW-Aigen

Read next:

• Musk and Experts Agree: AI Training Faces Challenges Amid Declining Real-World Data

• TikTok Reveals 2025 Trends: Partnerships, Generative AI Ads, and Community-Centered Content Drive Growth

• Researchers Highlight A-B Testing Issues Disrupting Digital Advertising Effectiveness

• Marketing Salaries Surge as AI Skills Drive Demand in Creative and Strategic Roles
by Dr. Hura Anwar via Digital Information World

Friday, January 10, 2025

Marketing Salaries Surge as AI Skills Drive Demand in Creative and Strategic Roles

According to a report from Robert Half research, the median annual salary of a corporate chief marketing officer in the US is $200,250. Marketing professionals with AI experience are now in high demand, since AI tools can boost work efficiency and lighten the workload, and this newly sought skill is driving up salaries for marketing professionals. Marketing, content, and PR professionals also have the flexibility to work in remote and hybrid environments.


The report also broke down salaries by profession in the US. The average salary for a creative director with advanced qualifications and skills is $163,500, while employees in starting positions earn a median salary of $101,750. A creative services manager new to the role earns a median salary of $76,500, while an experienced professional earns $113,250.


69% of marketers agree that advancements in AI are reshaping the skill sets needed in some roles. When managers were asked where they use contract talent most, 42% said digital marketing, 41% said traditional marketing, and 33% said content strategy, development, and management. 57% of marketers say a hybrid work arrangement is their top priority, with managers saying their ideal plan for staff is four days on-site.

Some specific skills in marketing and creative fields carry the most potential for salary increases: creative development and art direction (37%), UX and UI design (34%), and content strategy (26%). To bridge skill gaps, 52% of managers said they upskill existing staff, 50% pay for professional certifications, and 45% reskill employees for new roles. The most sought-after marketing and creative jobs right now are copywriter, digital marketing manager, digital project manager, and graphic designer.

When managers were asked how they use AI, 47% use it for data analysis and reporting, 45% for customer service, and 44% for email marketing. Emerging AI marketing roles include prompt engineer, AI graphic designer, and AI trainer.

Read next: LinkedIn Report Highlights Soaring Demand for AI Engineers and Consultants
by Arooj Ahmed via Digital Information World