Title: Tackling Toxicity Online: Developing Comprehensive Approaches for Online Risk Detection
With the rise of cyberbullying, sexual harassment, and hate speech on social media, there is a growing need to identify and combat risky online behaviors. Most previous research in this space has focused on isolated aspects of online risk detection and often lacks a complete view of the problem. Because online spaces are continuously evolving, it is crucial that detection models and systems can effectively address real-world challenges. To this end, I propose a more holistic approach to online risk detection. Firstly, there is a need for ecologically valid datasets that reflect the real-world complexity of online interactions; rather than relying solely on third-party annotations, I argue for incorporating genuine victim experiences into these datasets. Secondly, my research explores the growing importance of multi-modal feature extraction, as images and videos play an increasingly prominent role in online communication, particularly among younger users. Additionally, the adoption of end-to-end encryption (E2EE) on Meta's platforms poses a unique challenge, as linguistic and semantic features may not suffice in E2EE scenarios. Furthermore, I advocate for proactive and context-aware moderation that can adapt to the evolving language used in online interactions. Lastly, I investigate cross-platform behavior, shedding light on how users migrate to alternative platforms after being suspended and the wider implications of active moderation across social platforms. Ultimately, my research aims to provide a comprehensive approach to mitigating online risks and promoting a safer online environment.
Shiza Ali is a PhD candidate at Boston University, advised by Dr. Gianluca Stringhini in the Security Lab (SeclaBU) in the ECE Department. Her research expertise lies at the intersection of Social Computing, Machine Learning, Privacy, and Online Safety. Specifically, she employs cross-platform and multi-modal approaches to study the dissemination of abusive content online. Her ultimate goal is to bridge the gap between social science and computer science research to gain a more thorough understanding of complex societal issues and develop innovative technology-based strategies to address them. Her doctoral research has been supported by the NSF.
Title: Understanding and Regulating Dark Patterns
"Dark patterns" are generally considered to be design practices in online services that influence user decisions towards unintended or negative outcomes, whether purposefully or in effect—often through manipulative or deceptive means. In recent years, they have come under considerable scrutiny in technology law and policy circles, with primarily Western governments leveraging existing regulations (e.g., Section 5 of the FTC Act) or new regulatory provisions (CCPA/CPRA, EU DSA) against these practices. This talk discusses the presenter's prior and in-progress interdisciplinary work on dark patterns, with a focus on contextualizing dark design practices and identifying opportunities to strengthen regulatory protections against them.
Johanna Gunawan is a PhD Candidate in Cybersecurity at Northeastern University, supervised by David Choffnes, Christo Wilson, and Woodrow Hartzog. Her work broadly spans consumer and data protection issues from the CS, HCI, and legal disciplinary perspectives. Prior to the PhD, Johanna earned an MS in Cybersecurity and a BA in Political Science at Northeastern University, and has worked in industry as a technical/UX writer for cybersecurity products.
Title: Supporting Governance of Dynamic, Diverse Online Communities
Online communities are essential social destinations on the internet. In online groups, people share memes, seek technical advice, and provide emotional support. Many such groups are volunteer-run; laypeople devote their free time to creating and enforcing rules to keep these communities goal-oriented. However, unlike their offline counterparts, online communities possess a unique capacity for rapid, large-scale growth. Further, online communities can connect people from vastly different backgrounds and geographic locations. These factors make governing online communities uniquely difficult for volunteer moderators.
In this talk, I will discuss my work on computational approaches to supporting the governance of dynamic and diverse online communities. First, I will demonstrate how community norms can evolve during growth periods through a large-scale observational study of Reddit subreddits, revealing an apparent trade-off between growth and community distinctiveness, as measured through language use. Second, I will describe a framework for measuring user support for community moderation policies. I use this framework to conduct an empirical evaluation of rule alignment within one online community, r/ChangeMyView. Crucially, this framework can be applied longitudinally, enabling communities to update their rules as their user base changes. I conclude with future directions for supporting the governance of online communities as they evolve over their lifespan.
Vinay Koshy is a computer science PhD student at the University of Illinois, Urbana-Champaign in the Social Spaces group, advised by Karrie Karahalios. In his work, he applies techniques from NLP to study online community evolution and builds tools to improve decision quality in content moderation contexts. He has won awards at top conferences in human-computer interaction, including a Best Paper award at CSCW and an Honorable Mention at CHI.