Addressing Online Misinformation through Soft Moderation Interventions and AI-assisted Fact-Checking


Philip Mai, MA, JD 
Toronto Metropolitan University, Canada

Abstract 
Misinformation is a major problem on social media. Platforms are under public pressure to combat misinformation while protecting free speech and freedom of expression. To strike this delicate balance, platforms have implemented a range of content moderation policies and practices. These include algorithms that detect and remove offensive or harmful content, human moderators who review flagged content and decide whether to remove it, and reporting tools that allow users to flag content that violates a platform's community standards. Platforms have also introduced soft moderation interventions, which aim to educate and inform rather than block access to content. Such interventions include fact-checking, warning labels, and redirecting users to verified sources.
This talk will cover two related projects on misinformation currently underway at the Social Media Lab. The first project is a user study that examines the effectiveness of Facebook's soft moderation interventions in reducing the spread of misinformation. The study aims to independently test these interventions in an ecologically valid setting. The results could inform the future design of soft moderation interventions and help improve their effectiveness in combating misinformation.
The second project explores the use of AI-assisted technology to facilitate and speed up the work of human fact-checkers. Specifically, the talk will introduce the Fact Check Assistant, an AI-powered bot based on OpenAI's GPT-4 model designed to facilitate simple fact-checking, available at factcheckassistant.socialmedialab.ca.
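To make the idea of AI-assisted fact-checking concrete, the sketch below shows one possible way to prompt a GPT-4 model to draft an assessment of a claim for a human fact-checker to review. It is a minimal illustration only: the prompt wording, the draft_fact_check helper, and the output format are assumptions for this example and do not describe how the Fact Check Assistant itself is implemented.

# Minimal sketch of a GPT-4-backed fact-checking helper (illustrative only;
# not the Social Media Lab's implementation). It shows the general pattern of
# prompting a model to assess a claim and to flag uncertainty for human review.

from openai import OpenAI  # requires the `openai` Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a fact-checking assistant. Given a claim, summarize what would "
    "need to be verified, list the kinds of sources a human fact-checker "
    "should consult, and clearly state any uncertainty. Do not present "
    "unverified statements as fact."
)

def draft_fact_check(claim: str) -> str:
    """Ask the model for a draft assessment of a single claim."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Claim to check: {claim}"},
        ],
        temperature=0,  # keep the draft as reproducible as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_fact_check("Drinking hot water cures the flu."))

The design choice illustrated here is that the model only drafts an assessment and flags uncertainty; a human fact-checker remains responsible for the final verdict, consistent with the goal of speeding up, rather than replacing, human fact-checking.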
These projects provide insight into ongoing efforts to study and combat online misinformation and demonstrate the potential for AI-assisted technologies to aid in this fight. However, such technologies also raise concerns about reinforcing existing biases, and their use carries ethical implications that must be weighed carefully. This talk aims to spark further discussion on these topics and to contribute to ongoing efforts to combat online misinformation.

Bio: Philip Mai is a Senior Researcher and Co-Director of the Social Media Lab at Toronto Metropolitan University. He co-founded the International Conference on Social Media & Society and has varied research interests, including misinformation, social media usage, and influencer impact. Together with his longtime collaborator, Dr. Anatoliy Gruzd, Philip developed research tools such as Netlytic.org and Communalytic.org, which are used by thousands of researchers to study topics such as online participation, the spread of misinformation, and the proliferation of anti-social behavior online. His goal is to make societal processes more transparent and to connect people, knowledge, and ideas.
 

Contacts

Bryan Heidorn