The Chilling: Identifying Strategic Antisocial Behavior Online and Examining the Impact on Journalists

Yian Wang, Mukhilshankar Umashankar, Eshwar Chandrasekharan, Hari Sundaram

Published 2025 in Proc. ACM Hum. Comput. Interact.

ABSTRACT

On social platforms like Twitter, strategic targeted attacks are becoming increasingly common, especially against vulnerable groups such as female journalists. Two key challenges in identifying strategic online behavior are the complex structure of online conversations and the hidden nature of the potential strategies that drive user behavior. To address these, we develop a new tree-structured Transformer model that categorizes replies based on their hierarchical conversation structures, offering insights into the latent strategies underlying these interactions. Extensive experiments demonstrate that our proposed classification model can effectively detect different user groups--namely attackers, supporters, and bystanders--and their latent strategies. To demonstrate the utility of our approach, we apply this classifier to real-time Twitter data and conduct a series of quantitative analyses of the interactions between journalists with diverse cultural backgrounds and the different groups of users--attackers, supporters, and bystanders. Our classification approach allows us to explore the strategic behaviors not only of attackers but also of supporters and bystanders who engage in online interactions. When examining the impact of online attacks, we find a strong correlation between the presence of attackers' interactions and chilling effects, where journalists tend to slow their subsequent posting behavior. Additionally, we find that attackers tend to negatively influence the posting behavior of other users within these conversations. As conversations deepen, replies often deviate from the original posts and become more toxic. This paper provides a deeper understanding of how different user groups engage in online discussions and highlights the detrimental effects of attacker presence on journalists, other users, and conversational outcomes.
Our findings underscore the need for social platforms to develop tools that address coordinated toxicity and foster healthier conversation dynamics. By detecting patterns of coordinated attacks early, platforms could limit the visibility of toxic content to prevent escalation. Additionally, providing journalists and users with tools for real-time reporting and de-escalation could empower them to manage hostile interactions more effectively. Enhanced moderation tools targeting coordinated behaviors, particularly among attackers, could ensure a safer environment for vulnerable groups like female journalists, ultimately supporting constructive discussions and resilient online communities.
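The abstract describes a Transformer that classifies replies using the hierarchical structure of the conversation tree. As a minimal illustration only (the paper's actual architecture is not specified here), the sketch below shows one common way such structure can be made available to a sequence model: flattening a reply tree depth-first while recording each reply's depth and parent index, which can then serve as structural/positional features. The `Reply` class and `flatten_with_structure` function are hypothetical names introduced for this example.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Reply:
    # One node in a conversation tree: a reply and its child replies.
    text: str
    children: List["Reply"] = field(default_factory=list)

def flatten_with_structure(root: Reply) -> List[Tuple[str, int, Optional[int]]]:
    """Depth-first flatten of a reply tree into (text, depth, parent_index)
    tuples -- the kind of structural features a tree-aware Transformer
    could consume alongside token embeddings. This is an assumption about
    a plausible input encoding, not the paper's exact method."""
    out: List[Tuple[str, int, Optional[int]]] = []

    def visit(node: Reply, depth: int, parent: Optional[int]) -> None:
        idx = len(out)
        out.append((node.text, depth, parent))
        for child in node.children:
            visit(child, depth + 1, idx)

    visit(root, 0, None)
    return out

# Example conversation: a post with a supportive reply and a hostile
# reply that itself attracts a nested pile-on reply.
tree = Reply("original post", [
    Reply("supportive reply"),
    Reply("hostile reply", [Reply("pile-on reply")]),
])
flat = flatten_with_structure(tree)
# flat[3] is the deepest reply: ("pile-on reply", 2, 2) -- depth 2,
# parented by the hostile reply at index 2.
```

Encoding depth explicitly is also what makes the paper's depth-related finding testable at scale: with depth attached to every reply, toxicity can be aggregated per level to check whether deeper replies drift from the original post and grow more toxic.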
