Expert evaluations of countries are central to comparative political research, but their usefulness depends on experts across regions sharing a common understanding of survey items. This assumption of cross-regional equivalence is tested here using techniques that have not previously been applied to expert surveys.
🔎 How comparability was tested
- Measurement invariance techniques were used to evaluate whether survey items measure the same concepts across groups. These techniques are most often applied to public-opinion scales to test cross-cultural validity and translation effects.
- The tests assess whether the meaning and measurement properties of scale items are equivalent across regions, a prerequisite for valid cross-country comparisons.
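The logic behind these tests can be illustrated with a toy simulation (hypothetical loadings and region labels, not the PEI data or the paper's estimation method): if experts in two regions map the same latent construct onto an item with different strength, that item's factor loading differs across groups, which is precisely the kind of violation a metric-invariance test detects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated experts per region

def simulate(loadings, n, rng):
    """Generate item responses: item = loading * latent + noise."""
    theta = rng.normal(size=n)  # latent 'electoral integrity' perception
    items = np.column_stack(
        [l * theta + rng.normal(scale=0.5, size=n) for l in loadings]
    )
    return theta, items

# Hypothetical loadings: region B experts interpret item 3 differently,
# so its loading is weaker there (a metric non-invariance scenario).
load_a = [1.0, 0.8, 0.9]
load_b = [1.0, 0.8, 0.4]

theta_a, items_a = simulate(load_a, n, rng)
theta_b, items_b = simulate(load_b, n, rng)

def estimate_loadings(theta, items):
    # OLS slope of each item on the latent score (observable here only
    # because the data are simulated; real analyses estimate it via CFA).
    return items.T @ theta / (theta @ theta)

est_a = estimate_loadings(theta_a, items_a)
est_b = estimate_loadings(theta_b, items_b)
gap = np.abs(est_a - est_b)

print("region A loadings:", np.round(est_a, 2))
print("region B loadings:", np.round(est_b, 2))
print("cross-region gap: ", np.round(gap, 2))
```

With large samples, the gap on items 1 and 2 stays near zero while item 3 shows a substantial gap: pooling or comparing scores built from item 3 across the two regions would conflate real differences with interpretive ones. Full invariance testing additionally constrains intercepts (scalar invariance), but the loading comparison above captures the core idea.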
📊 Data and scope: Perceptions of Electoral Integrity (PEI)
- The Perceptions of Electoral Integrity (PEI) dataset is the empirical focus.
- The analysis covers all eleven PEI dimensions used to capture different facets of electoral integrity.
⚠️ Key findings
- Cross-regional comparability fails for all eleven PEI dimensions: items do not exhibit full measurement invariance across regions.
- The analysis identifies specific items that remain comparable across most regions, even where full-scale comparability breaks down.
- Results reveal systematic regional differences in how experts interpret electoral integrity questions, undermining a core assumption of expert-based comparative measures.
💡 Why it matters
- If experts in different regions interpret items differently, cross-country comparisons based on expert surveys can be misleading. This calls for more rigorous question development and validation procedures for expert surveys, and for routine measurement invariance testing before scores are pooled or compared across regions.