Tuesday, August 15, 2023

Newly Developed Tool Identifies Bias in Advanced Generative Artificial Intelligence Model

As technology advances, helpful and powerful new tools come to the forefront.

A recent example is text-to-image (T2I) generative AI tools, which have been widely adopted for their impressive ability to generate images from just a few words of prompting. The photos and videos these tools produce can appear so genuine that people may find it hard to tell whether they are real or AI-generated, and they are useful in many areas, such as advertising, electoral campaigning, and art.

But because developers train these tools on data taken from humans, biases have been reported in the images they generate, particularly around skin color and gender.

However helpful and powerful the tools may be, they can also harm marginalized groups by increasing discrimination and reinforcing stereotypes.

Researchers at UC Santa Cruz's Baskin Engineering, working with Xin Eric Wang, an assistant professor of computer science and engineering, developed a tool they named the Text to Image Association Test. The tool identifies and quantifies the complex human biases that text-to-image generative AI models have absorbed, and the researchers tested it on Stable Diffusion, an advanced generative model. It analyzes biases across aspects such as faith, ethnicity, gender, and occupation.

Jialu Wang, the paper's lead author and a computer science and engineering Ph.D. student at UCSC, says that both the owners and the users of these models are concerned about such biases, for example the strong possibility that a model shows a user from a marginalized community only images of people from privileged groups.

The tool the researchers developed has the user instruct the model to generate an image from a neutral prompt, which could relate to anything, such as "a child learning math." The user then supplies gender-specific variants, such as "a boy learning math" and "a girl learning math." The tool compares the images generated from the neutral prompt with those generated from the specific prompts and measures how different they are, quantifying the extent of the model's bias.
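To make that comparison concrete, here is a minimal sketch of how such a pipeline could be assembled with the open-source diffusers and CLIP libraries. This is not the researchers' actual implementation; the model names, prompts, sample size, and cosine-similarity measure are illustrative assumptions.

```python
# Illustrative sketch only: assumes Stable Diffusion via the open-source
# `diffusers` library and CLIP image embeddings from `transformers`; the
# researchers' exact pipeline and distance measure are not described here.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def generate(prompt, n=8):
    """Generate n images for a prompt (n is an arbitrary illustrative sample size)."""
    return sd([prompt] * n).images

def embed(images):
    """Return L2-normalized CLIP embeddings for a list of PIL images."""
    inputs = proc(images=images, return_tensors="pt").to(device)
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Neutral prompt versus gender-specific prompts, as in the article's example.
neutral = embed(generate("a child learning math"))
boy = embed(generate("a boy learning math"))
girl = embed(generate("a girl learning math"))

# One simple way to quantify the disparity: how much closer the neutral images
# sit, on average, to one gendered group than to the other (cosine similarity).
sim_boy = (neutral @ boy.T).mean()
sim_girl = (neutral @ girl.T).mean()
print(f"association gap (boy - girl): {(sim_boy - sim_girl).item():.4f}")
```

A gap near zero would suggest the neutral prompt is treated even-handedly, while a large gap in either direction would suggest the model leans toward one gender by default; averaged cosine similarity is only one plausible way to measure that distance.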

The researchers concluded that Stable Diffusion both repeats and intensifies human biases by incorporating them into the images it generates. They also used the tool to measure the association between two concepts, such as art and science, and two attributes, such as female and male. The tool then reports a score representing the strength of the association between each concept and attribute, along with a value indicating how confident it is in that score.
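As a rough illustration of that scoring idea, and not the paper's exact formula, an IAT-style effect size can be computed from image embeddings of the two concepts and two attributes, with a permutation test providing the confidence value. The random arrays below are placeholders standing in for embeddings of generated images.

```python
# Illustrative IAT-style scoring sketch; the random arrays are placeholders
# for image embeddings, and this is not the paper's exact formula.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Pairwise cosine similarities between two sets of embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def association(concept, attr_a, attr_b):
    """How much more a concept's images resemble attribute A than attribute B."""
    return cosine(concept, attr_a).mean() - cosine(concept, attr_b).mean()

def effect_size(concept_x, concept_y, attr_a, attr_b):
    """Difference in association between the two concepts (an IAT-style score)."""
    return association(concept_x, attr_a, attr_b) - association(concept_y, attr_a, attr_b)

# Placeholders standing in for embeddings of images generated from
# "art"/"science" concept prompts and "female"/"male" attribute prompts.
art, science = rng.normal(size=(20, 512)), rng.normal(size=(20, 512))
female, male = rng.normal(size=(20, 512)), rng.normal(size=(20, 512))

observed = effect_size(art, science, female, male)

# Permutation test: shuffle the concept images to estimate how confident we
# can be that the observed association did not arise by chance.
pooled = np.vstack([art, science])
extreme, trials = 0, 1000
for _ in range(trials):
    rng.shuffle(pooled)
    if abs(effect_size(pooled[:20], pooled[20:], female, male)) >= abs(observed):
        extreme += 1
print(f"association score: {observed:.4f}, permutation p-value: {extreme / trials:.3f}")
```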

To test the tool further, the team had it examine how the model relates six pairs of contrasting concepts to pleasant or unpleasant attributes. The pairs were musical instruments and weapons, insects and flowers, dark skin and light skin, African American and European American, Christianity and Judaism, and gay and straight.

The results showed that the model largely conformed to the harmful stereotypes associated with these concepts, but it surprised the team by associating dark skin with pleasantness and light skin with unpleasantness. The model also associated art more with women and science more with men, and family more with women and careers more with men.

The new tool is an advance over a previous technique for assessing bias in T2I models, which relied on annotating results generated from a neutral prompt. A researcher would give a neutral prompt, such as "a student studying math," and then label each image the model generated as showing either a boy or a girl studying math. That technique proved inefficient, as it was costly and was itself prone to gender-related biases.

Xin Wang said the team aims to replace that human annotation with a fast, efficient digital tool for assessing these biases. The team's tool also takes the background of the generated images into account, such as the tones and colors used.

Furthermore, the team incorporated the Implicit Association Test, a well-known test for assessing human stereotypes and biases, into their tool. They hope it can help software engineers address and prevent such biases while developing their models.


by Ahmed Naeem via Digital Information World
