Thursday, December 5, 2024

New Study Shows Instagram Is Promoting Self-Harm on the Platform and Meta Is Turning a Blind Eye to It

According to Danish researchers, Instagram and other social media apps are promoting self-harm content and making it easy for users to find. The researchers behind the study created fake Instagram profiles of people as young as 13 and shared various self-harm related content, including videos and photos encouraging self-harm. The study set out to test Meta's claim that it moderates harmful content, as the company says it removes 99% of content that violates its guidelines. Over the entire course of the experiment, Digitalt Ansvar found that not a single piece of self-harm content was removed from Instagram.

The organization also built its own AI tool to detect harmful content. The tool immediately identified 38% of the self-harm content and 88% of the most harmful material. This suggests that Meta has access to technology capable of identifying such content but is simply choosing not to use it, and that the company may not be complying with EU rules on content moderation.

Under the EU law known as the Digital Services Act, digital services must identify and remove harmful material that poses risks to users' physical or mental health. A Meta spokesperson said the company always removes content that encourages suicide and self-harm, and claims it has taken down 12 million images and videos related to suicide and self-harm on Instagram. Meta has also launched Instagram Teen Accounts, which place teens in a safer environment with stricter content controls.

The study, on the other hand, found that Meta is spreading self-harm content rather than stopping it. Instagram's algorithm is also actively helping self-harm networks grow by connecting children with self-harm groups. Hesby Holm, chief executive of Digitalt Ansvar, said he was extremely alarmed by the findings, because Instagram is contributing to the spread of self-harm when it should be doing everything possible to stop it. He added that the researchers had expected AI tools to be a big help in identifying and removing self-harm content, but that does not appear to be the case. He warned of severe consequences if this is not stopped, as many children may inflict self-harm on themselves without their parents ever knowing. Because Instagram does not appear to moderate small, private groups, self-harm networks of this kind can remain undetected on the platform for a long time.

A leading psychologist who left Meta's global expert group said she quit because the company was not paying attention to harmful content on Instagram. She was shocked that Instagram had not removed the explicit and harmful material. Even with all its technology, she added, Instagram is promoting self-harm among children and young women and no one is stopping it. Right now, moderating content on Instagram is a matter of life and death, yet nobody seems to care.

Image: DIW-Aigen

H/T: TG

Read next: New Security Alert For Android DroidBot Malware That Steals Credentials For More Than 77 Crypto Exchanges and Banking Apps
by Arooj Ahmed via Digital Information World
