ICWSM 2026 Workshop
Los Angeles, CA, USA · May 26, 2026 (Half-Day)
Large Language Models (LLMs) are increasingly used not only as analytic tools, but as socially situated agents that reason, interact, and generate behavior in simulated environments. This shift enables new forms of computational social science, where LLM-driven agents are used to model decision-making, social norms, cooperation, persuasion, and collective dynamics at scale. The SocialLLM workshop invites submissions that explore how LLMs can serve as generative agents to simulate, analyze, and probe social behavior in online and networked contexts. We are particularly interested in work that connects micro-level language interactions to macro-level social phenomena central to ICWSM and computational social science. Beyond technical advances, we encourage contributions that critically examine when LLM-based social simulations are appropriate, what kinds of social processes they can meaningfully capture, and how their outputs should be evaluated and interpreted. The workshop aims to foster a principled, responsible, and empirically grounded research agenda for using LLMs in social reasoning and simulation.
We welcome empirical, methodological, theoretical, and conceptual submissions. Topics include, but are not limited to:
Please follow the ICWSM 2026 style guidelines. Accepted papers will be published in the ICWSM workshop proceedings.
Submissions open on OpenReview.
Note: Submitting authors must have an OpenReview profile. Co-authors can be added by name and email. New profiles created with an institutional email are activated automatically; new profiles created without an institutional email go through a moderation process that can take up to two weeks.
| Milestone | Date |
|---|---|
| Submission Deadline | April 1, 2026, 11:59 PM AoE |
| Paper Notification | April 15, 2026, AoE |
| Camera-Ready Deadline | TBD |
| Workshop Date | May 26, 2026 (Half-Day Workshop) |
| Time | Activity |
|---|---|
| 00:00 – 00:10 | Opening Remarks + Icebreaker |
| 00:10 – 00:50 | Keynote Talk 1 |
| 00:50 – 01:30 | Keynote Talk 2 |
| 01:30 – 02:00 | Coffee Break & Networking Session |
| 02:00 – 03:00 | Oral Session |
| 03:00 – 03:55 | Lightning Talks + Poster / Interactive Presentations |
| 03:55 – 04:00 | Closing Remarks |
Zhijing Jin (she/her) is an Assistant Professor in Computer Science at the University of Toronto and a Research Scientist at the Max Planck Institute in Germany. She is a faculty member at the Vector Institute, a CIFAR AI Chair, an ELLIS advisor, and a faculty affiliate at the Schwartz Reisman Institute in Toronto, CHAI at UC Berkeley, and the Future of Life Institute. She co-chairs the ACL Ethics Committee and the ACL Year-Round Mentorship program. Her research focuses on causal reasoning with LLMs and AI safety in multi-agent LLMs. She has received the ELLIS PhD Award, three Rising Star awards, two Best Paper awards at NeurIPS 2024 workshops, two PhD fellowships, and a postdoc fellowship. She has authored over 100 papers, and her work has been featured in CHIP Magazine, WIRED, and MIT News.
As AI systems take on more autonomous roles across the economy, governance, and daily life, they will increasingly interact with each other. Will these AI agents coordinate for social good, or exploit rival agents and people in ways that put humans at serious risk?
In this talk, I will explain how we assess these dangers with large-scale social simulations and game-theoretic analysis. Across thousands of high-stakes scenarios, from arms race escalation to common pool resource depletion, frontier models choose socially beneficial actions in only 62% of cases, with systematic biases in framing and ordering worsening outcomes. Surprisingly, stronger reasoning capabilities often make models more prone to selfish strategies like free-riding, and recent models consistently defect in unmodified social dilemmas regardless of scale or reasoning ability. However, game-theoretic interventions offer a promising path forward: cooperation mechanisms such as mediation, enforceable contracts, and reputation systems improve collective welfare significantly and become more effective under stronger optimization pressures. Beyond formal mechanisms, self-organizing social structures like elected leadership oriented toward group welfare further sustain cooperation in sequential dilemmas. These results suggest that safer multi-agent AI requires principled institutional design rather than reliance on models' inherent prosociality.
Maarten Sap is an assistant professor in Carnegie Mellon University's Language Technologies Institute (CMU LTI), with a courtesy appointment in the Human-Computer Interaction Institute (HCII). He is also a part-time research scientist and AI safety lead at the Allen Institute for AI. His research focuses on measuring and improving AI systems' social and interactional intelligence, assessing and combating social inequality and biases in language, and building narrative language technologies for prosocial outcomes. He has received paper awards at NeurIPS 2025, NAACL 2025, EMNLP 2023, ACL 2023, and FAccT 2023, and was named a 2025 Packard Fellow and a recipient of the 2025 Okawa Research Award. His work has been covered by the New York Times, Forbes, Fortune, Vox, and more.
Talk details coming soon — stay tuned!
Interested in serving on the program committee? Sign up here: SocialLLM Reviewer Self-Nomination Form.
Please check the OpenReview workshop page for full content.
TBD
Join the SocialLLM Slack channel for updates and discussions.
For questions, please contact us via the SocialLLM Slack channel (preferred) or at social.llm.workshop@gmail.com.