USC student develops AI tools to fight hate speech and hate crime

Aida Mostafazadeh Davani is part of USC’s Computational Social Sciences Lab, where researchers and students come together to apply computational methods to problems related to human behavior and psychology. Design / Izzy Lux.

Human choices influence almost every aspect of AI, from collecting data sets to labeling training data. This creates a particularly complex hurdle for computer scientists: how to prevent human biases from entering AI systems, with potentially harmful results.

The stakes are high. With AI now used to inform important banking, hiring, and law enforcement decisions, it can help decide, for example, who gets a bank loan, a job interview, or parole. Enter Aida Mostafazadeh Davani, a computer science PhD student at USC whose work focuses on computational social science and AI ethics.

Advised by Morteza Dehghani, associate professor of psychology and computer science, Davani is part of USC’s Computational Social Sciences Lab, where researchers and students come together to apply computational methods to problems related to human behavior and psychology.

When AI learns social stereotypes

One area where bias can creep in is hate speech detection: the automated task of determining whether a piece of text, especially on social media, contains hate speech. These biases are often rooted in stereotypes, the fixed and over-generalized ideas we hold about a particular type of person or thing.

If the humans who train AI identify hate speech based on social stereotypes, the AI will eventually do the same. This can determine, for example, which tweets go viral and which are deleted.
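As a rough illustration of how that inheritance happens (this is not the lab’s actual system), the toy sketch below trains a simple text classifier on hypothetical annotator labels; whatever patterns sit in those labels, including stereotyped ones, become the model’s decision rule.

```python
# Minimal sketch of a hate speech classifier of the kind described above.
# NOT the researchers' actual model; it only illustrates how a classifier
# learns directly from human-provided labels, and therefore from any
# stereotypes reflected in those labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated posts: 1 = labeled hate speech by human annotators, 0 = not.
texts = [
    "I can't stand people from that group, they ruin everything",
    "Had a great time at the concert last night",
    "Those people should go back where they came from",
    "Looking forward to the weekend with friends",
]
labels = [1, 0, 1, 0]

# TF-IDF features + logistic regression: the model simply reproduces the
# statistical patterns in the human labels it is trained on.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["those people ruin everything"]))  # likely flagged as hate speech
```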

In a recent paper, presented at the CogSci 2020 virtual conference on August 1, Davani and her co-authors found that hate speech classifiers do indeed learn human-like social stereotypes.

Specifically, the team analyzed the data against the Stereotype Content Model (SCM), a social psychology theory which hypothesizes that all group stereotypes and interpersonal impressions form along two dimensions: warmth and competence.

“The data available is the result of biases that already exist in our society.” – Aida Mostafazadeh Davani

In this case, the researchers determined that people are less likely to label text as hate speech when the group it refers to is seen as highly competent but less warm. Conversely, they are more likely to label text as hate speech when the group is perceived as warm but less competent. Two groups that people tend to stereotype as warmer but less competent? Women and immigrants.
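A hedged sketch of what such an analysis might look like in code is shown below; the warmth, competence, and labeling-rate numbers are invented placeholders, not the study’s data, and serve only to show how the two SCM dimensions can be related to annotation behavior.

```python
# Illustrative sketch of the kind of analysis the SCM framing suggests:
# relate each target group's perceived warmth and competence to how often
# annotators label text mentioning that group as hate speech.
# All numbers below are made-up placeholders, not the study's data.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "group":           ["group_a", "group_b", "group_c", "group_d"],
    "warmth":          [0.8, 0.3, 0.7, 0.2],          # hypothetical SCM warmth ratings
    "competence":      [0.4, 0.9, 0.3, 0.8],          # hypothetical SCM competence ratings
    "hate_label_rate": [0.35, 0.10, 0.40, 0.08],      # share of mentions annotators flagged
})

# Correlate each stereotype dimension with how often annotators applied the
# hate speech label (the article describes more labels for warm/low-competence
# groups, fewer for competent/low-warmth groups).
for dim in ["warmth", "competence"]:
    r, p = pearsonr(df[dim], df["hate_label_rate"])
    print(f"{dim}: r={r:.2f}, p={p:.2f}")
```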

Identifying this type of bias is important because, in order to make AI systems fair, we must first be aware that the bias exists – and identifying it requires psychologists and computer scientists working together.

“The data available is the result of biases that already exist in our society,” Davani said. “Some people think we can use machine learning to find bias in the system, but I don’t think it’s that easy. You must first know that these biases exist. That is why, in our lab, we focus on identifying this type of bias in society and then relating it to how the model becomes biased as a result.”

Underreporting of hate crimes

Originally from Iran, Davani earned a master’s degree in software engineering from Sharif University of Technology in Tehran. Justice and fairness are at the heart of her work.

“At some point in your education, I think you ask yourself: what impact will the work I do have on society?” Davani said.

“Machine learning is used in a lot of applications right now and people trust it, but if you have the knowledge to dig in and see whether it causes harm, I think that is more important than trying to make the models more accurate. You want to know who is being hurt, not just look at people as data points.”

In her previous work, Davani and her colleagues examined the issue of underreported hate crimes in the United States using computational methods. Hate crimes in the United States remain vastly underreported by victims, law enforcement, and the local press.

For example, in 2017, agencies as large as the Miami Police Department reported no hate incidents, which “seems unrealistic,” Davani and her co-authors wrote in the paper, published at the conference on Empirical Methods in Natural Language Processing (EMNLP).

Additionally, a large number of US cities have no official reports of hate incidents at all. To address this problem, Davani and her co-authors used event extraction methods and machine learning to analyze news articles and predict hate crime cases.

“You want to know who is being hurt, not just look at people as data points.” – Aida Mostafazadeh Davani

The researchers then used this model to detect hate crime cases in cities for which the FBI lacks statistics. By comparing the model’s predictions to FBI reports, they established that hate incidents are underreported in the local press compared to other types of crime.

Using event detection methods together with local news articles, they were then able to provide conservative estimates of hate crime frequency in cities without official reporting. The researchers say the model’s predictions are lower bounds; in other words, the real numbers are likely even higher.
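The sketch below shows, in heavily simplified form, the shape of such an estimation pipeline; the classifier, city names, and counts are hypothetical stand-ins, not the authors’ code or results.

```python
# Hedged sketch of the estimation idea described above, not the researchers' code:
# run a (previously trained) article classifier over local news stories,
# count predicted hate crime reports per city, and compare against FBI counts
# where they exist. Cities, articles, and counts are placeholders.
from collections import Counter

def predict_is_hate_crime(article_text: str) -> bool:
    """Placeholder for a trained event-detection / classification model."""
    return "hate crime" in article_text.lower()

# (city, article_text) pairs gathered from local news outlets -- hypothetical.
articles = [
    ("city_a", "Police investigate suspected hate crime against local family"),
    ("city_a", "City council approves new park budget"),
    ("city_b", "Man charged with hate crime after vandalizing place of worship"),
]

predicted_counts = Counter(city for city, text in articles if predict_is_hate_crime(text))

# Official FBI counts where available (None = city did not report) -- hypothetical.
fbi_counts = {"city_a": 0, "city_b": None}

# Treat model predictions as conservative lower bounds on the true frequency.
for city, predicted in predicted_counts.items():
    print(f"{city}: predicted >= {predicted}, FBI reported: {fbi_counts.get(city)}")
```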

One possible application of this method is a real-time hate crime detector based on local online news outlets, giving researchers and community workers an estimate of the number of hate crimes for which no official statistics exist.

In her future work, Davani plans to continue designing approaches that reduce social group biases in machine learning and support the fair treatment of groups and individuals, which requires an understanding of social stereotypes and social hierarchies.

As her advisor, Dehghani said he is eager to see Davani make waves in the field.

“Aida is a brilliant computer scientist and, at the same time, she can fully engage with and understand social science theories,” he said. “She is on the way to becoming a star in the AI fairness arena.”

Posted on September 10, 2020

Last updated September 14, 2020
