New Papers Accepted for CSCW 2024
2024 April 30

Very excited to share two new papers accepted to the 27th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2024)!
- The Online Identity Help Center: Designing and Developing a Content Moderation Policy Resource for Marginalized Social Media Users, coauthored by Shannon Li, Hibby Thach, Daniel Delmonaco, Christian Paneda, Andrea Wegner, and Oliver L. Haimson
- “What are you doing, TikTok?”: How Marginalized Social Media Users Perceive, Theorize, and “Prove” Shadowbanning, first-authored by Daniel Delmonaco and coauthored by Samuel Mayworm (myself), Josh Guberman, Hibby Thach, Aurelia Augusta, and Oliver L. Haimson
"The Online Identity Help Center" Abstract:
"Marginalized social media users struggle to navigate inequitable content moderation they experience online. We developed the Online Identity Help Center (OIHC) to confront this challenge by providing information on social media users' rights, summarizing platforms' policies, and providing instructions to appeal moderation decisions.
We discuss our findings from interviews (n = 24) and surveys (n = 75) that informed the OIHC's design, along with interviews about and usability tests of the site (n = 12). We found that the OIHC's resources made it easier for participants to understand platforms' policies and access appeal resources. Participants expressed increased willingness to read platforms' policies after reading the OIHC's summarized versions, but voiced mistrust of platforms after doing so. We discuss the study's implications, such as the benefits of providing summarized policies to encourage digital literacy, and how doing so may enable users to express skepticism of platforms' policies after reading them."
"What are you doing, TikTok?" Abstract:
"Shadowbanning is a unique content moderation strategy receiving recent media attention for the ways it impacts marginalized social media users and communities. Social media companies often deny this content moderation practice despite user experiences online. In this paper, we use qualitative surveys and interviews to understand how marginalized social media users make sense of shadowbanning, develop folk theories about shadowbanning, and attempt to prove its occurrence.
We find that marginalized social media users collaboratively develop and test algorithmic folk theories to make sense of their unclear experiences with shadowbanning. Participants reported direct consequences of shadowbanning, including frustration, decreased engagement, the inability to post specific content, and potential financial implications. They reported holding negative perceptions of platforms where they experienced shadowbanning, sometimes attributing their shadowbans to platforms’ deliberate suppression of marginalized users’ content. Some marginalized social media users acted on their theories by adapting their social media behavior to avoid potential shadowbans.
We contribute collaborative algorithm investigation: a new concept describing social media users’ strategies of collaboratively developing and testing algorithmic folk theories. Finally, we present design and policy recommendations for addressing shadowbanning and its potential harms."