How Automated Coding Reveals Human Rights Groups' Naming-and-Shaming
Insights from the Field
Human Rights
Naming-and-Shaming
Automated Coding
Text Analysis
NGOs
Methodology
JHR
Dataverse
Advocacy Output: Automated Coding Documents from Human Rights Organizations was authored by Baekkwan Park, Amanda Murdie, and David Davis and published by Taylor & Francis in the Journal of Human Rights (JHR) in 2020.

Human Rights Organizations (HROs) are central actors in promoting rights and frequently deploy naming-and-shaming against states, but systematic measurement of those shaming efforts remains uneven. This paper reviews prior data-collection efforts, introduces scalable quantitative text-as-data approaches, and presents two new automatically coded datasets that map shaming events and their intensity.

🧭 Why measuring shaming matters

  • Existing data projects make important contributions but leave gaps in coverage, consistency, and comparability when it comes to identifying shaming events across many actors and outlets.
  • Establishing systematic measures of shaming enables comparative study of HRO strategies and assessment of their effectiveness against much more powerful state actors.

🧾 What documents were coded and how

  • A class of quantitative, text-as-data approaches is introduced to generate reproducible measures of shaming events from large document collections.
  • The paper describes the automated-coding workflow used to turn raw HRO reports and media coverage into event-level data, including document selection, coding rules, and validation steps.
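A workflow of this kind can be sketched in a few lines. This is a minimal, hypothetical illustration of sentence-level event coding, not the paper's actual rules: the keyword lists and the requirement that a named state actor co-occur with accusatory language are invented stand-ins for the real coding scheme.

```python
import re

# Hypothetical coding rule (NOT the paper's): flag a sentence as a shaming
# event when accusatory language co-occurs with a named state actor.
ACCUSATION_TERMS = re.compile(
    r"\b(condemn\w*|violat\w*|abus\w*|accus\w*|tortur\w*)\b", re.IGNORECASE
)
STATE_ACTORS = {"government", "police", "military", "security forces"}

def code_document(text: str) -> list[dict]:
    """Turn one raw report into event-level records at sentence granularity."""
    events = []
    for sent in re.split(r"(?<=[.!?])\s+", text):  # crude sentence splitter
        lowered = sent.lower()
        if ACCUSATION_TERMS.search(sent) and any(a in lowered for a in STATE_ACTORS):
            events.append({"sentence": sent.strip(), "shaming": True})
    return events

report = ("The government violated press freedoms. "
          "Local volunteers distributed food supplies.")
print(code_document(report))  # only the first sentence is flagged
```

In a real pipeline this rule-based core would sit between document selection (which reports enter the corpus) and validation (how coded events are checked), the two steps the paper describes around it.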

🔎 How automated coding identifies shaming events

  • The automated process detects statements that attribute responsibility, issue accusations, or call out actors: the core behaviors associated with naming-and-shaming.
  • Attention is paid to distinguishing routine reporting from targeted shaming language and to ensuring comparability across different HROs and media sources.
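One toy way to operationalize the routine-reporting versus targeted-shaming distinction is to check whether an accusation names a responsible actor in the active voice. The heuristics below are assumptions for illustration only, not the classifier the authors built:

```python
import re

ACCUSATION_VERBS = ("violated", "detained", "tortured", "suppressed")
STATE_ACTORS = ("government", "police", "military")

def classify(sentence: str) -> str:
    """Toy heuristic: shaming = accusation + named actor, in the active voice."""
    lowered = sentence.lower()
    accusation = any(v in lowered for v in ACCUSATION_VERBS)
    # Passive constructions ("journalists were detained") read as routine
    # reporting because no responsible actor is called out.
    passive = re.search(r"\b(?:was|were|been)\s+\w+ed\b", lowered) is not None
    actor_named = any(a in lowered for a in STATE_ACTORS)
    if accusation and actor_named and not passive:
        return "targeted shaming"
    if accusation:
        return "routine reporting"
    return "no shaming content"

print(classify("The Syrian government has detained hundreds of journalists."))
# targeted shaming
print(classify("Hundreds of journalists were detained last year."))
# routine reporting
```

The same label set applied uniformly to HRO reports and media articles is one simple way to keep coded output comparable across source types.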

📦 New datasets produced

  • Two new datasets are produced using these automated approaches:
      ◦ A cross-sectional dataset coding shaming statements across a broad set of HROs.
      ◦ A dataset coding international media coverage for shaming language using the same coding approach.
  • These datasets are documented with coding logic, coverage, and validation diagnostics.
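A standard form such validation diagnostics take is comparing automated labels against a hand-coded gold sample and reporting precision and recall. The sketch below uses invented labels; it shows the shape of the diagnostic, not the paper's actual numbers:

```python
def precision_recall(gold: list[bool], predicted: list[bool]) -> tuple[float, float]:
    """Compare automated labels against hand-coded gold labels."""
    tp = sum(1 for g, p in zip(gold, predicted) if g and p)        # true positives
    fp = sum(1 for g, p in zip(gold, predicted) if not g and p)    # false positives
    fn = sum(1 for g, p in zip(gold, predicted) if g and not p)    # missed events
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Invented example: 5 documents, hand-coded vs. automatically coded.
gold      = [True, True, False, True, False]
predicted = [True, False, False, True, True]
print(precision_recall(gold, predicted))
# (0.6666666666666666, 0.6666666666666666)
```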

📊 How the new data compare with existing sources

  • The new datasets are systematically compared with existing shaming datasets to illustrate strengths in coverage, replicability, and the ability to capture actor-specific variation.
  • The comparison highlights where prior data succeed and where automated coding provides added breadth or nuance.

โš–๏ธ Measuring intensity and assessing effectiveness

  • A novel intensity metric for HRO shaming statements is introduced to capture variation in forcefulness or emphasis.
  • Intensity measures can be used to differentiate HRO approaches (e.g., mild critique versus emphatic accusation) and to analyze whether stronger shaming correlates with indicators of influence or change.
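One simple way to realize such a metric is to weight shaming vocabulary by forcefulness, so that emphatic accusations score higher than mild critique. The term weights below are assumptions chosen for illustration; the paper's actual intensity measure may be constructed quite differently:

```python
# Hypothetical forcefulness weights (illustrative only).
INTENSITY_WEIGHTS = {
    "concerned": 1,    # mild critique
    "criticizes": 2,
    "condemns": 3,     # emphatic accusation
    "atrocities": 3,
}

def intensity(statement: str) -> int:
    """Sum forcefulness weights over the words of a shaming statement."""
    words = statement.lower().replace(",", "").split()
    return sum(INTENSITY_WEIGHTS.get(w, 0) for w in words)

print(intensity("The group is concerned about reports of detention."))  # 1
print(intensity("The group condemns the atrocities committed there."))  # 6
```

Scores like these can then be correlated with downstream outcomes to ask whether stronger shaming tracks indicators of influence or change.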

Overall, the study offers a transparent, scalable toolkit for measuring naming-and-shaming through automated coding and demonstrates how those measures expand research possibilities on HRO advocacy and its consequences.
