Release Date: October 8, 2018
BUFFALO, N.Y. — A new system for automatically detecting prejudice in social media posts can help curb abuse and harassment online, according to new research from the University at Buffalo School of Management.
Recently published in Decision Support Systems, the study analyzed online intergroup prejudice — an unjustified opinion that one social group holds about another, rooted in aversion, hatred and hostility rather than an examination of the facts.
“In social media, users often express prejudice without thinking about how members of the other group would perceive their comments,” says study co-author Haimonti Dutta, PhD, assistant professor of management science and systems in the UB School of Management. “This not only alienates the targeted group members but also encourages the development of dissent and negative behavior toward that group.”
Analyzing more than 68,000 tweets from nearly 31,000 Twitter users, collected immediately after the 2013 Boston Marathon bombing, the researchers developed a machine learning system that automatically detects intergroup prejudice in social media messages. The system can flag messages that have the potential to spread misinformation and ill will, assisting crisis information systems managers.
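The release does not describe the researchers' actual model, but the general approach — training a classifier on labeled messages so that new posts can be flagged automatically — can be illustrated with a minimal sketch. The sketch below uses a simple naive Bayes text classifier with entirely hypothetical training examples and labels ("flag" vs. "ok"); it stands in for, and does not reproduce, the published method.

```python
import math
from collections import Counter

def tokenize(text):
    """Split a message into lowercase word tokens."""
    return text.lower().split()

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    word_counts = {"flag": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the most probable label under a naive Bayes model."""
    vocab = set().union(*word_counts.values())
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior for the label.
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in tokenize(text):
            # Laplace-smoothed log likelihood of each word.
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical, hand-labeled training messages (not from the study).
examples = [
    ("those people are all dangerous", "flag"),
    ("that group is all terrible", "flag"),
    ("sending thoughts to everyone affected", "ok"),
    ("hoping everyone stays safe today", "ok"),
]
wc, lc = train(examples)
print(classify("those people are terrible", wc, lc))  # flag
```

In practice, a production system would train on a far larger labeled corpus, use richer features than raw word counts, and route flagged messages to human moderators rather than acting on them automatically.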
Dutta says that manually detecting prejudiced social media messages is daunting given the sheer volume of posts, and that systems like theirs can help with this increasingly important task.
“Prejudice expressed publicly via social media is dangerous because those messages are likely to spread far more rapidly and broadly than privately shared ones,” says Dutta.
Dutta collaborated on the study with K. Hazel Kwon, assistant professor in the Walter Cronkite School of Journalism and Mass Communication at Arizona State University, and H. Raghav Rao, the AT&T Distinguished Chair in Infrastructure Assurance and Security at the University of Texas at San Antonio College of Business.