Algorithmically supported moderation in children's online communities
Author(s): Tan, Flora, M.Eng., Massachusetts Institute of Technology
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor(s): Andrew Sliwinski and Mitch Resnick.
Abstract: The moderation of harassment and cyberbullying on online platforms has become a heavily publicized issue in the past few years. Popular websites such as Twitter, Facebook, and YouTube employ human moderators to review user-generated content. In this thesis, we propose an automated approach to the moderation of online conversational text authored by children on the Scratch website, a drag-and-drop programming interface and online community. We develop a corpus of children's comments annotated for inappropriate material, the first of its kind. To produce this corpus, we introduce a comment moderation website that supports the review and labeling of comments. The web tool acts as a data pipeline, designed to keep the machine learning models up to date with new forms of inappropriate content and to reduce the need to maintain a blacklist of profane words. Finally, we apply natural language processing and machine learning techniques to the detection of inappropriate content on the Scratch website, achieving an F1-score of 73%.
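The abstract's evaluation metric, the F1-score, balances precision and recall on the "inappropriate" class. As an illustration only — the thesis's actual models, features, and corpus are not reproduced here — the sketch below trains a minimal multinomial Naive Bayes classifier on a tiny, hypothetical set of labeled comments and computes F1 on a held-out set. All example comments and function names are invented for demonstration.

```python
from collections import Counter
import math

# Hypothetical labeled comments (1 = inappropriate, 0 = appropriate);
# these stand in for the annotated corpus described in the abstract.
train = [
    ("i love your game", 0),
    ("great project keep it up", 0),
    ("this art is so cool", 0),
    ("you are stupid and ugly", 1),
    ("nobody likes you loser", 1),
    ("your project is dumb trash", 1),
]
test = [
    ("your game is so cool", 0),
    ("keep it up i love it", 0),
    ("you are a dumb loser", 1),
    ("nobody likes your trash project", 1),
]

def train_nb(data, alpha=1.0):
    """Fit a multinomial Naive Bayes model with add-alpha smoothing."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in data:
        tokens = text.split()
        word_counts[label].update(tokens)
        class_counts[label] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab, alpha

def predict(model, text):
    """Return the class with the highest log-posterior for a comment."""
    word_counts, class_counts, vocab, alpha = model
    total = sum(class_counts.values())
    scores = {}
    for c in (0, 1):
        score = math.log(class_counts[c] / total)  # log prior
        denom = sum(word_counts[c].values()) + alpha * len(vocab)
        for tok in text.split():
            score += math.log((word_counts[c][tok] + alpha) / denom)
        scores[c] = score
    return max(scores, key=scores.get)

def f1_score(gold, pred):
    """F1 on the positive (inappropriate) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

model = train_nb(train)
gold = [label for _, label in test]
pred = [predict(model, text) for text, _ in test]
print(f"F1 on toy test set: {f1_score(gold, pred):.2f}")
```

On this toy data the classifier separates the two classes perfectly; the thesis's reported 73% F1 reflects the much harder task of classifying real, noisy children's comments at scale.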
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 61-64).