PluriHarms: Benchmarking the Full Spectrum of Human Judgments on AI Harm

¹UC Berkeley, ²Carnegie Mellon University, ³Allen Institute for AI, ⁓New York University

Abstract

Current AI safety frameworks, which often treat harmfulness as binary, lack the flexibility to handle borderline cases where humans meaningfully disagree. To build more pluralistic systems, it is essential to move beyond consensus and instead understand where and why disagreements arise. We introduce PluriHarms, a benchmark designed to systematically study human harm judgments across two key dimensions: the harm axis (benign to harmful) and the agreement axis (agreement to disagreement). Our scalable framework generates prompts that capture diverse AI harms and human values while targeting cases with high disagreement rates, validated against human data. The benchmark includes 150 prompts with 15,000 ratings from 100 human annotators, enriched with annotator demographic and psychological traits and with prompt-level features of harmful actions, effects, and values. Our analyses show that prompts involving imminent risks and tangible harms amplify perceived harmfulness, while annotator traits (e.g., toxicity experience, education) and their interactions with prompt content explain systematic disagreement. We benchmark AI safety models and alignment methods on PluriHarms, finding that while personalization significantly improves prediction of human harm judgments, considerable room remains for future progress. By explicitly targeting value diversity and disagreement, our work provides a principled benchmark for moving beyond "one-size-fits-all" safety toward pluralistically safe AI.
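The agreement axis described above requires a per-prompt measure of how much annotators disagree. The sketch below shows one simple way to quantify this with normalized rating entropy; the function `disagreement_score` and the 1-to-5 rating scale are illustrative assumptions, not the paper's actual metric or data format.

```python
from collections import Counter
import math

def disagreement_score(ratings):
    """Normalized Shannon entropy of one prompt's harm ratings.

    Returns 0.0 when all annotators agree and 1.0 when ratings are
    split as evenly as possible across the observed categories.
    Hypothetical metric for illustration only.
    """
    counts = Counter(ratings)
    n = len(ratings)
    if len(counts) <= 1:
        return 0.0  # unanimous: no disagreement
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(len(counts))  # normalize to [0, 1]

# Toy ratings on an assumed 1-5 harmfulness scale.
consensus = [1, 1, 1, 1, 1]   # all annotators rate the prompt benign
split = [1, 5, 1, 5, 1, 5]    # annotators split evenly on harmfulness
print(disagreement_score(consensus))  # 0.0
print(disagreement_score(split))      # 1.0
```

A benchmark targeting the full harm-by-agreement grid would then select prompts across all combinations of mean rating and disagreement score, rather than only high-consensus extremes.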

šŸ“¢ Paper accepted to ICLR 2026
Code and data are now available on GitHub

BibTeX

@article{li2026pluriharms,
  title={PluriHarms: Benchmarking the Full Spectrum of Human Judgments on AI Harm},
  author={Li, Jing-Jing and Mire, Joel and Fleisig, Eve and Pyatkin, Valentina and Collins, Anne and Sap, Maarten and Levine, Sydney},
  journal={arXiv preprint arXiv:2601.08951},
  year={2026}
}