Sunday, October 13, 2024

Study: ChatGPT Matches Terrorist Themes With TRAP-18, Identifies Cybercrimes

A study by Charles Darwin University (CDU) researchers, published in the Journal of Language Aggression and Conflict, found that ChatGPT can be used to identify cyberterrorists. The researchers gave ChatGPT four sample statements made by terrorists and asked it to identify the main themes and topics in each text and the messages behind them.
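
The approach the researchers describe is essentially a prompting exercise. As a rough illustration, here is a minimal sketch of how such a theme-extraction query could be sent to ChatGPT through OpenAI's Python client; the model name and prompt wording are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch of a theme-extraction query like the one the study describes.
# Assumes the official `openai` Python client (v1+); the model name and prompt
# wording are illustrative, not the researchers' actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

statement = "..."  # one of the sample statements under analysis

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Identify the main themes and topics in the following text, "
                "and describe the message and motivation behind it:\n\n"
                + statement
            ),
        },
    ],
)
print(response.choices[0].message.content)
```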

ChatGPT was able to work out what lay behind those statements, identifying the motivation and purpose of each text. The themes it surfaced included opposition to secularism and apostate rulers, criticism of mass immigration, retaliation and self-defense, opposition to multiculturalism, and struggle and martyrdom, among others. It also readily picked up on clues that hinted at violence.

The themes ChatGPT identified also matched those in the Terrorist Radicalization Assessment Protocol-18 (TRAP-18), an established tool for assessing individuals at risk of committing terrorism. The study's lead author said ChatGPT could readily be used as a tool to help identify potential terrorists, and that it needs no additional training to do so.

LLMs cannot take the place of human judgment, but they can act as assistants that surface valuable investigative clues. Despite concerns about the weaponization of AI tools, they can still provide a useful map for spotting the warning signs of terrorism.

Image: DIW-Aigen

Read next: Researchers Say LLMs Do Not Have Any Ability to Reason After Finding Out That AI Models Cannot Even Solve Math Problems with Some Changes
by Arooj Ahmed via Digital Information World

Instagram Usage Jumps Among Teens in 2024, TikTok Popularity Rises

According to the latest research from Piper Sandler, Instagram is the most used app among teens, while TikTok remains their favorite. The survey of 13,515 US teens shows Netflix taking a larger share of their daily video time (30%) than YouTube (27%). TikTok (39%) is the most liked social app among teens, with Instagram (32%) in second place.

Instagram was used by 87% of teens in Fall 2024, up seven percentage points from last year. TikTok usage, meanwhile, rose from 74% in Fall 2023 to 79% in Fall 2024. Snapchat is the third most used app among teens this year, but it slipped three percentage points: 74% of teens used it in 2023, versus 71% now.

Pinterest and Facebook are also seeing usage rise among teens. Pinterest gained six percentage points from last year, with 41% of teens using it this year, making it the fourth most used app. Facebook grew two percentage points, to 30% of teens. Meta has separately reported that 40 million young adults in the US and Canada now use Facebook daily, a figure that may owe something to AI-recommended content.

Read next: Researchers Say LLMs Do Not Have Any Ability to Reason After Finding Out That AI Models Cannot Even Solve Math Problems with Some Changes
by Arooj Ahmed via Digital Information World

Researchers Have Developed a New Algorithm Called RoVi-Aug Which Can Be Used to Train Other Robots

Robots are increasingly used to perform all sorts of tasks, and the algorithms behind them are what make that work possible. Researchers at UC Berkeley have developed RoVi-Aug, a framework that augments robot demonstration data so it can be transferred to other robots. Modern machine learning and generative models generalize well because they are trained on large, diverse datasets, and the researchers wanted to build something similar for robots: a way to make robot data generalize.

The researchers have been working on generalizing robot data since the start of this year, running a series of experiments along the way. In earlier work, they found that generalizing robot data brings its own challenges: when robot data is unevenly distributed, it becomes less effective at teaching other robots the same skills.

They soon found that many robot datasets are uneven in exactly this way, including the Open X-Embodiment (OXE) dataset that is widely used to train robot learning algorithms, and that this kind of imbalance can limit robot performance. To address the issue, the researchers earlier proposed an algorithm called Mirage, which uses a technique called cross-painting to transform unseen robots into source robots. But that algorithm has limitations of its own.

First, Mirage needs exact robot models and camera matrices, and it cannot cope with changes in camera angle. As an alternative, the researchers presented RoVi-Aug, a more flexible and adaptable framework that can create synthetic images showing robot tasks from different angles.

RoVi-Aug also requires no extra processing during deployment and tolerates changes in camera perspective. It can help researchers train other robots because it removes the dependence on the precise camera setups that earlier approaches required. RoVi-Aug is also cost-effective and can help other robots keep improving through further learning and training.
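
To give a flavor of what viewpoint augmentation involves, here is a heavily simplified sketch. RoVi-Aug itself uses generative models to synthesize novel robot and camera views; this stand-in merely applies a random perspective warp with OpenCV to suggest how a single recorded camera view can be multiplied into many synthetic ones. All names and parameters here are illustrative.

```python
# Simplified stand-in for viewpoint augmentation: warp each demonstration
# frame as if the camera had shifted slightly. RoVi-Aug's real pipeline uses
# generative models, not plain homographies; this only conveys the idea.
import cv2
import numpy as np

def random_viewpoint_warp(frame: np.ndarray, max_shift: float = 0.1) -> np.ndarray:
    """Apply a random perspective warp simulating a small camera move."""
    h, w = frame.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = np.random.uniform(-max_shift, max_shift, (4, 2)) * np.float32([w, h])
    warped_corners = (corners + jitter).astype(np.float32)
    homography = cv2.getPerspectiveTransform(corners, warped_corners)
    return cv2.warpPerspective(frame, homography, (w, h))

# One recorded frame becomes several synthetic "camera angles".
frame = cv2.imread("demo_frame.png")  # placeholder path
augmented_views = [random_viewpoint_warp(frame) for _ in range(8)]
```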

Image: DIW-Aigen

Read next: New Study Shows Parents Prefer AI for Child Healthcare Advice, Raising Concerns
by Arooj Ahmed via Digital Information World

Saturday, October 12, 2024

Researchers Say LLMs Do Not Have Any Ability to Reason After Finding Out That AI Models Cannot Even Solve Math Problems with Some Changes

A group of AI researchers at Apple has published a paper examining whether machine learning models can really reason and think. They gave AI models simple arithmetic word problems, the kind that could be solved easily, and the LLMs did solve them. But once the researchers added some extra, useless information, the LLMs got confused and produced wrong answers.
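
To see the kind of failure the paper describes, consider a toy problem with one irrelevant clause bolted on. The problem below is paraphrased in the style of the examples the researchers used, not quoted from the study:

```python
# Toy problem in the style of the paper's examples (paraphrased, not quoted):
# "Oliver picks 44 kiwis on Friday and 58 on Saturday. On Sunday he picks
#  double Friday's amount, but five of them are a bit smaller than average.
#  How many kiwis does Oliver have?"
friday, saturday = 44, 58
sunday = 2 * friday                 # the size remark changes nothing
total = friday + saturday + sunday
print(total)                        # 190 -- yet models reportedly subtract
                                    # the irrelevant 5 and answer 185
```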

Why do most LLMs get such problems wrong once a little extra information is added? Probably because they are trained on straightforward data and have learned to give to-the-point answers. When a bit of irrelevant information is thrown in and actual reasoning is required to dismiss it, they cannot answer correctly.

The researchers say LLMs are not capable of genuine reasoning, which is why they get confused when extra clauses are added to a problem. The models simply try to repeat the reasoning steps found in their training data. In short, LLMs can only reproduce what they have been taught; they do not reason for themselves, they draw on their data to answer specific questions.

An OpenAI researcher has argued that correct answers can be obtained from LLMs with a little prompt engineering. But even if better prompting works for small deviations, it may not hold up against richer contextual distractions. The takeaway is that LLMs cannot reason on their own, so it is best not to rely on them for academic work.

Image: DIW-Aigen

Read next: Consumer Advisory Group Which? Warns Users To Keep Mobile Number Active After Serious Security Issues
by Arooj Ahmed via Digital Information World

New Study Shows Parents Prefer AI for Child Healthcare Advice, Raising Concerns

A new study by the University of Kansas Life Span Institute finds that parents trust AI more than humans when it comes to their children's health. Strikingly, the study also found that many parents consider AI-written text credible and trustworthy, so much so that they now seek healthcare information from AI instead of from human healthcare workers.

ChatGPT and other AI models are known for making frequent errors and presenting false information, which makes parents' reliance on them for their children's health concerning. The study's lead author, Leslie-Miller, says the research was conducted so the team could learn about ChatGPT's impact and the concerns it raises. Before AI, parents searched the web for healthcare information about their children; now they ask ChatGPT.

For the study, 116 parents were recruited and given texts about healthcare concerns in children. Half of the texts were generated by AI, mostly ChatGPT, and the other half were written by experts. The results showed that most parents could not distinguish AI-written content from human-written content. Although they were not told there would be two types of text, most still rated the AI-written texts as the most reliable.

If parents are going to place this much trust in AI, it is important that healthcare information backed by human, domain-specific expertise also reaches them. AI is risky because of its tendency to hallucinate: it can give responses that sound very convincing but are, in reality, made up. LLMs are also trained largely on online text, which means they lack real-world information and experience. The lead author suggests that parents look for AI tools integrated into systems with genuine healthcare expertise, stay cautious, and always double-check AI responses.

Image: DIW-Aigen

Read next: Study Shows Many Advanced AI Chatbots Would Rather Give Wrong Answers than Admit They Do Not Know the Answer
by Arooj Ahmed via Digital Information World

Google’s AI Overviews Filter Out HCU-Affected Sites, Experts Speak Out

Google’s AI Overviews are ignoring sites affected by the Helpful Content Update (HCU).

Users have noticed the same pattern for sites hit by core updates, even when the AI Overview is asked directly about those sites, and even for sites that rank well for the same query in Google Search.

That is notable because AI Overviews are known to be directly influenced by Google's major ranking updates, including the helpful content system, which is now folded into the core updates.

The change was first noted by Lily Ray, who raised the matter on X. She was shocked to see that the search result featured no links to the affected site itself, just several other sites talking about it, nothing else.

These sites can rank well, even in first position, in traditional search results, yet receive zero links in the AI Overview. It looks as though Google has a filter in place to stop HCU-affected sites from being linked there. Ray asked others facing similar problems to come forward and share more on the matter.

Another expert, Glenn Gabe, mentioned that he is not seeing any AI Overviews for his brand either, and the rare one that did appear was unremarkable. Clearly, it's a problem Google needs to look into before it's too late.

Image: DIW-Aigen

Read next: ByteDance’s Shift To AI Content Moderation Results In Hundreds of TikTok Employees Getting Fired
by Dr. Hura Anwar via Digital Information World

Meta Blames Breakdown Of ‘Internal Tool’ For Moderation Mistakes On Threads

Meta says it’s working to fix an internal tool that recently broke down and was responsible for the moderation mistakes on Threads.

The company says it’s working to fix all the errors after receiving numerous complaints about poor content moderation on the Threads app. This latest update came from Instagram and Threads boss Adam Mosseri.

The news comes as users criticized the platform for not doing enough to rein in the problem, calling its moderation bizarre and quite harsh. One prominent example: accounts were suspended simply for using terms like "cracker" or "saltines".

Mosseri did not explain exactly why the errors were occurring, beyond revealing that one of the company's internal tools had broken down, which prevented human moderators from seeing the full context of the posts under review. Sadly, there is no information on the fate of those already affected by these issues.

Meta has tried to reassure everyone that the matter is getting attention and should be resolved soon: the company says it has found where the error lies and will make all the necessary changes in due time.

Mosseri added that the goal for moderation on the app is better decisions and fewer mistakes, and concluded that the whole purpose is to ensure users have the safest experience possible and to keep making things better.

While Meta may be struggling with this issue right now, it's not the only problem that has caught users off guard recently. At the start of the week, Mosseri promised users that the app would curb engagement bait, which had been getting out of control according to user complaints.

Image: DIW-Aigen

Read next:

• BlueSky Capitalizes on Threads' User Frustration Amid Content Moderation Controversy

• Meta's Threads Surges to 21.7 Million US Users — But Can It Ever Catch X?
by Dr. Hura Anwar via Digital Information World