A group of AI research scientists at Apple has published a paper examining whether machine learning models can really reason and think. The researchers gave AI models a simple arithmetic problem and asked them to solve it. It was an easy problem, and the LLMs solved it correctly. But once some irrelevant extra information was added, the LLMs got confused and answered incorrectly.
Why do most LLMs fail once a little extraneous information is mixed in? Probably because they are trained on clean, straightforward data and are tuned to give direct, to-the-point answers. When a bit of irrelevant information is thrown in and actual reasoning is needed to filter it out, they can no longer answer correctly.
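To illustrate the kind of failure the paper describes (a constructed example in the same spirit as the study's test items, not one quoted from it): ask a model "Sara picks 40 apples on Monday and 50 on Tuesday. How many apples does she have?" and it will reliably answer 90. Rephrase it as "Sara picks 40 apples on Monday and 50 on Tuesday, though five of them are a bit smaller than average. How many apples does she have?" and a model that pattern-matches against its training data may subtract the irrelevant five and answer 85, even though the apples' size changes nothing.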
The researchers conclude that LLMs are not capable of genuine reasoning, which is why they get confused when extra clauses are added to a problem. Instead, they try to reproduce the reasoning steps they encountered in their training data. The research suggests that LLMs can only repeat what they have been taught: they do not reason independently, they draw on their training data to answer specific questions.
An OpenAI researcher says that correct answers can be coaxed out of LLMs with a little prompt engineering. But even if better prompting helps with derivations, it may not help with contextual data. This again suggests that LLMs cannot reason on their own, so it is best not to rely on them for academic work.
Image: DIW-Aigen
Read next: Consumer Advisory Group Which? Warns Users To Keep Mobile Number Active After Serious Security Issues
by Arooj Ahmed via Digital Information World
Saturday, October 12, 2024
New Study Shows Parents Prefer AI for Child Healthcare Advice, Raising Concerns
A new study by the University of Kansas Life Span Institute finds that parents trust AI more than humans when it comes to their children's health. It is a startling result: the study also found that many parents consider AI-written text credible and trustworthy, so much so that some now seek healthcare information from AI instead of from human healthcare workers.
ChatGPT and other AI models are also known to produce errors and false information, so it is concerning that so many parents are turning to AI for their children's health. The study's lead author, Leslie-Miller, says the research was conducted to understand ChatGPT's impact and the concerns it raises for the industry. Before AI, parents searched online for healthcare information about their children; now they ask ChatGPT.
For the study, 116 parents were recruited and given texts about healthcare concerns in children. Half of the texts were generated by AI, mostly ChatGPT, and the other half were written by experts. Most parents could not distinguish the AI-written content from the human-written content, and even though they were not told there were two types of text, most still rated the AI-written texts as the more reliable.
If parents are going to trust AI this readily, it is important that the healthcare information presented to them be backed by human domain expertise. AI is also risky because of its tendency to hallucinate: it can give responses that sound very convincing but are in fact made up. LLMs are trained largely on online text, which means they lack real-world information and experience. The lead author suggests that parents look for an AI system that has been integrated with genuine expertise, stay cautious, and always double-check AI responses.
Image: DIW-Aigen
Read next: Study Shows Many Advanced AI Chatbots Would Rather Give Wrong Answers than Admit They Do Not Know the Answer
by Arooj Ahmed via Digital Information World
Google’s AI Overviews Filter Out HCU-Affected Sites, Experts Speak Out
Google’s AI Overviews are ignoring sites affected by the Helpful Content Update (HCU).
Users have noticed the same pattern for sites hit by core updates, even when the AI Overview is asked about those sites directly, and even for websites that rank well when the same query is run on Google Search.
This is notable because AI Overviews are known to be directly affected by Google's major ranking updates, including the helpful content system, which is now folded into the core updates.
The change was first flagged by SEO expert Lily Ray, who raised the matter on X. She was shocked to see that a search result about a site featured no links to the site itself, only several other sites talking about it.
Affected sites can rank well, even in first position, in traditional search results, yet receive zero links in the AI Overview. It looks as though a filter is in place to stop HCU-hit sites from being linked there. Ray asked others facing similar problems to come forward and share more on the matter.
Another expert, Glenn Gabe, mentioned that he is not seeing any AI Overviews for his brand either, and when one did appear, it had little to offer. Clearly, it's a problem Google needs to look into before it's too late.
Image: DIW-Aigen
Read next: ByteDance’s Shift To AI Content Moderation Results In Hundreds of TikTok Employees Getting Fired
by Dr. Hura Anwar via Digital Information World
Meta Blames Breakdown Of ‘Internal Tool’ For Moderation Mistakes On Threads
Meta says it’s working to fix an internal tool that recently broke down and was responsible for the moderation mistakes on Threads.
The company says it’s working to fix all the errors after receiving numerous complaints about poor content moderation on the Threads app. This latest update came from Instagram and Threads boss Adam Mosseri.
The news comes as users criticized the platform for not doing enough to control the problem, calling the moderation bizarre and overly harsh. One prominent example: users' accounts were suspended for using terms like "cracker" or "saltines".
Mosseri did not say exactly why the errors were occurring, revealing only vaguely that one of the company's internal tools had broken down. The breakdown prevented human moderators from seeing the proper context for the posts they were reviewing. Sadly, there's no word on the fate of those previously affected by these issues.
Meta has tried to reassure everyone that the matter is getting attention and should be resolved soon. The company says it has looked into the issue, found where the error lies, and will make all the necessary changes in due time.
The goal for moderation on the app, Mosseri continued, is to make better decisions and fewer mistakes. He concluded that the entire purpose is to keep users safe and make the experience better.
While Meta may be struggling with this issue right now, it's not the only problem to catch users off guard recently. Earlier in the week, Mosseri promised users that the app would curb engagement bait, which had been getting out of control after users complained.
Image: DIW-Aigen
Read next:
• BlueSky Capitalizes on Threads' User Frustration Amid Content Moderation Controversy
• Meta's Threads Surges to 21.7 Million US Users — But Can It Ever Catch X?
by Dr. Hura Anwar via Digital Information World
Friday, October 11, 2024
Research Highlights Dominance of Supportive Tweets Over Negative Content on X During Elections
A study published in Scientific Reports provides insight into which type of partisan content spreads more quickly on social media, specifically X, by examining activity on the platform during four consecutive elections. It found that in-party love is shared more actively on X than out-party hate. The researchers focused on Twitter activity during Spain's elections in 2015, 2016, and twice in 2019, identifying tweets with keywords related to parties, slogans, campaigns, and candidate names.
The researchers focused on retweets, since a retweet signals that a user endorses the message of the original tweet. They analyzed the data along three variables. First, users who retweeted right-wing parties were labeled right-leaning, and users who retweeted left-wing parties were labeled left-leaning. Second, they measured retweet activity, that is, how much users were retweeting on the platform. Third, they distinguished in-party tweets from out-party tweets.
The research found that users tweet more about the parties they love than the parties they hate, a pattern seen in all four Spanish elections. The study also found that although there were many negative out-party tweets on the platform, they did not tend to spread widely and drew only a few retweets. Positive in-party tweets were both more numerous and more likely to spread widely than out-party hate tweets.
The study reveals an interesting difference between supportive and hateful tweets about parties around election days, but it has limitations. It covered only Twitter, not other social media platforms; it used retweets as its only measure of influence; and it sorted tweets into just two blocs, right-wing and left-wing. A more extensive follow-up study should take all of these points into account.
Image: DIW-Aigen
Read next: Researchers Call For Stricter Laws Requiring Apps To Remove Revenge-Porn As X Accused Of Ignoring Requests Until DMCA Used
by Arooj Ahmed via Digital Information World
BlueSky Capitalizes on Threads' User Frustration Amid Content Moderation Controversy
Threads is in hot water over its content moderation problems, with many users complaining that their follower growth has stalled and engagement on their posts is almost nonexistent. The issues appear concentrated among accounts with large followings and strong engagement, which is bad news for Threads, especially if it hopes to take Twitter's place. Amid all these problems, BlueSky has joined Threads to capitalize on the discussions taking place on the platform.
Many users say Threads is relying entirely on AI for moderation, which is causing problems such as accounts being removed or flagged as underage, posts being blocked, and engagement drying up. Many have threatened to leave Threads for BlueSky.
Taking advantage of the chaos, BlueSky created an account on Threads and posted that it had heard people talking about it, so it was there to share more information. BlueSky then posted a good deal in its own favor, explaining how its platform differs from Threads in several ways. It noted that it has a moderation team too, but that it does not de-rank political content, something Meta, Instagram, and Threads are doing heavily these days.
In February, Meta announced that it would no longer recommend political content to users unless they actively follow political accounts. The move drew backlash from users, who argued it could hide what knowledgeable voices are discussing on the platform.
BlueSky also mentioned that its moderation tools let users filter their feeds according to their interests, and that the company supports algorithmic choice, open-source code, and account portability. Even if many Threads users switch, BlueSky remains small by comparison: Threads has 200 million active users right now, while BlueSky has just 10.7 million.
Some users may still choose to stay on Threads regardless of its problems, and there is a chance Threads will address the moderation issue and give users some answers. BlueSky, for its part, says it is reading all user feedback and will keep trying to improve.
Read next: YouTube Shorts Tests "Save" Button, Replacing Dislike in New Feature Update
by Arooj Ahmed via Digital Information World
YouTube Shorts Tests "Save" Button, Replacing Dislike in New Feature Update
YouTube is experimenting with a new feature for YouTube Shorts: a "Save" button that lets users return to their favorite Shorts. The button would take the place of the dislike button, which would be removed from the user interface. Right now, Shorts shows five buttons on the right side of the screen: like, dislike, comments, share, and remix. YouTube is considering removing the dislike button to put the Save button there.
YouTube explains that it is adding the Save button so users can bookmark their favorite Shorts and come back to watch them whenever they want. The feature is already live for part of a test group, who see it in place of the dislike button. If a user wants to dislike a Short or send feedback, they have to tap the three-dot menu at the top right corner of the screen.
This means users will still be able to like and dislike Shorts: liking remains one tap away on the right side of the screen, but disliking requires digging deeper. YouTube's decision to remove the dislike button from the UI suggests that people do not dislike Shorts very often. The feature is still experimental and is being tested with a small number of viewers on YouTube.
Image: DIW-Aigen
Read next: Meta's Threads Surges to 21.7 Million US Users — But Can It Ever Catch X?
by Arooj Ahmed via Digital Information World