Insights from the Field

Why Experts' Electoral Integrity Ratings Can't Be Compared Across Regions


Tags: measurement invariance, expert surveys, electoral integrity, PEI, comparability, Comparative Politics, Political Analysis
Data archive: Dataverse
"Comparative Research Is Harder Than We Thought: Regional Differences in Experts' Understanding of Electoral Integrity Questions," by Bruno Castanho Silva and Levente Littvay, was published by Cambridge University Press in Political Analysis in 2019.

Expert evaluations of countries are central to comparative political research, but their usefulness depends on experts across regions sharing a common understanding of the survey items. The paper tests this assumption of cross-regional equivalence using techniques that had not previously been applied to expert surveys.

🔎 How comparability was tested

  • Measurement invariance techniques were used to evaluate whether survey items measure the same concepts across groups. These techniques are most often applied to public-opinion scales to test cross-cultural validity and translation effects.
  • The tests assess whether the meaning and measurement properties of scale items are equivalent across regions, a prerequisite for valid cross-country comparisons (a minimal sketch of the test logic follows this list).
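
In practice, such invariance tests work by fitting a series of nested multi-group factor models, a configural model (same structure, parameters free in each region), a metric model (factor loadings constrained equal across regions), and a scalar model (loadings and intercepts constrained equal), and checking whether each added set of constraints significantly worsens fit. The Python sketch below illustrates only that decision logic, using made-up fit statistics rather than the paper's models or estimates.

```python
from scipy.stats import chi2

# Hypothetical chi-square fit statistics for nested multi-group CFA models of
# one PEI dimension. These numbers are illustrative, not taken from the paper.
fits = {
    "configural": {"chisq": 412.3, "df": 180},  # same structure, parameters free by region
    "metric":     {"chisq": 455.9, "df": 204},  # factor loadings constrained equal across regions
    "scalar":     {"chisq": 540.1, "df": 228},  # loadings and intercepts constrained equal
}

def chisq_diff_test(restricted, free, alpha=0.05):
    """Likelihood-ratio (chi-square difference) test between nested models."""
    d_chisq = restricted["chisq"] - free["chisq"]
    d_df = restricted["df"] - free["df"]
    p = chi2.sf(d_chisq, d_df)          # upper-tail probability of the difference
    return d_chisq, d_df, p, p < alpha  # True means the added constraints worsen fit

for restricted, free in [("metric", "configural"), ("scalar", "metric")]:
    d_chisq, d_df, p, rejected = chisq_diff_test(fits[restricted], fits[free])
    verdict = "invariance rejected" if rejected else "constraints hold"
    print(f"{free} -> {restricted}: dChi2 = {d_chisq:.1f}, ddf = {d_df}, p = {p:.4f} ({verdict})")
```

Scalar invariance is the level needed before scores can be compared directly across groups; as the findings below report, that level fails for every PEI dimension.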

📊 Data and scope: Perceptions of Electoral Integrity (PEI)

  • The Perceptions of Electoral Integrity (PEI) dataset is the empirical focus.
  • The analysis covers all eleven PEI dimensions used to capture different facets of electoral integrity.

⚠️ Key findings

  • Cross-regional comparability fails for all eleven PEI dimensions: items do not exhibit full measurement invariance across regions.
  • The analysis identifies specific items that remain comparable across most regions, even where full-scale comparability breaks down (see the sketch after this list).
  • Results reveal systematic regional differences in how experts interpret electoral integrity questions, undermining a core assumption of expert-based comparative measures.
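
When full invariance fails, a common follow-up is a partial-invariance search: the equality constraint on one item at a time is released, items whose release significantly improves fit are flagged as non-invariant, and the remaining items stay usable for cross-region comparison. The sketch below mimics that search; the item names and fit statistics are placeholders, not the paper's results.

```python
from scipy.stats import chi2

# Hypothetical fit of a metric-invariance model for one PEI dimension, plus
# variants in which a single item's loading is freed across regions.
# Item names and numbers are placeholders, not the paper's estimates.
full_metric = {"chisq": 455.9, "df": 204}
freed_one_item = {
    "item_A": {"chisq": 454.8, "df": 200},
    "item_B": {"chisq": 452.6, "df": 200},
    "item_C": {"chisq": 421.4, "df": 200},
}

# Freeing an item's loading relaxes one equality constraint per region; if that
# significantly improves fit, the item is flagged as non-invariant.
for item, fit in freed_one_item.items():
    d_chisq = full_metric["chisq"] - fit["chisq"]
    d_df = full_metric["df"] - fit["df"]
    p = chi2.sf(d_chisq, d_df)
    status = "non-invariant: free this loading" if p < 0.05 else "comparable across regions"
    print(f"{item}: dChi2 = {d_chisq:.1f}, ddf = {d_df}, p = {p:.3f} -> {status}")
```

Item-level diagnostics of this kind are what let the authors point to individual PEI items that still travel across most regions even though the full scales do not.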

💡 Why it matters

  • If experts in different regions interpret items differently, cross-country comparisons based on expert surveys can be misleading. These findings call for more rigorous question development and validation in expert surveys, and for routine measurement invariance testing before scores are pooled or compared across regions.
Find the article: Google Scholar | JSTOR | CUP