Problem: Ensuring Protocol Compliance in In-Person Surveys
Large, in-person public opinion surveys face a persistent challenge: getting enumerators to follow fieldwork protocols. Quality control procedures such as audio capture and geo-tracking can boost data quality and protect the representativeness of the final sample, yet there is little research comparing which of these tools matters most.
📊 What Was Tested: AmericasBarometer 2016/17
Data come from the 2016/17 wave of the AmericasBarometer study. The evaluation used a large classification task to test whether a limited set of routine indicators can identify the same final set of interviews produced when a full suite of quality control procedures is applied.
Key features of the test:
- Focus on a mix of automated and human-coded variables that are commonly available across popular survey platforms
- Comparison between a small subset of indicators and the outcome of a comprehensive quality-control protocol
- Use of classification techniques to evaluate how well the smaller set recovers the final sample (a sketch of this recovery test follows the list)
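To make the recovery test concrete, here is a minimal Python sketch using pandas and scikit-learn. The file name `interviews.csv`, the indicator columns (`duration_min`, `gps_dist_km`, `audio_flag`, `coder_flag`), the outcome column `retained`, and the random-forest classifier are all illustrative assumptions, not the study's actual variables or model.

```python
# Recovery test sketch: can a handful of routine indicators predict which
# interviews survive the full quality-control protocol?
# All column names and the model choice below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

interviews = pd.read_csv("interviews.csv")   # one row per completed interview

features = [
    "duration_min",   # automated: interview length
    "gps_dist_km",    # automated: distance from the assigned sample point
    "audio_flag",     # human-coded: audio audit passed (1) or failed (0)
    "coder_flag",     # human-coded: back-check / review outcome
]
X = interviews[features]
y = interviews["retained"]   # 1 = kept in the final sample after full QC

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Out-of-fold predictions: every interview is scored by a model that never
# saw it during training, so the report reflects out-of-sample recovery.
pred = cross_val_predict(clf, X, y, cv=5)

print(classification_report(y, pred, target_names=["dropped", "retained"]))
```

High precision and recall on the "retained" class would indicate that the compact indicator set reproduces essentially the same final sample as the full quality-control suite.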
🔎 Key Findings
- A compact set of automated and human-coded variables can recover the final sample of interviews that results when a full suite of quality control procedures is implemented.
- These variables are widely available across popular survey platforms, suggesting practical adoptability.
- Implementing and automating only a few procedures can both streamline quality-control workflows and substantially improve data quality.
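As an illustration of what automating a few procedures might look like in practice, the hypothetical sketch below turns routine paradata into review flags. The file `paradata.csv`, every column name, and all thresholds are assumptions made for the example, not values from the AmericasBarometer protocol.

```python
# Hypothetical sketch: derive a few automated quality-control flags from
# paradata that most CAPI survey platforms already export.
import pandas as pd

para = pd.read_csv("paradata.csv")

flags = pd.DataFrame(index=para.index)
flags["too_short"] = para["duration_min"] < 10        # implausibly fast interview
flags["off_location"] = para["gps_dist_km"] > 0.5     # far from the assigned segment
flags["no_audio"] = para["audio_seconds"] == 0        # audio capture missing
flags["straightlining"] = para["grid_sd"] == 0        # identical answers across a grid

# A single score per interview: cases with more tripped flags are reviewed first.
para["n_flags"] = flags.sum(axis=1)
review_queue = para.sort_values("n_flags", ascending=False)
print(review_queue[["interviewer_id", "n_flags"]].head())
```

Ranking interviews by flag count keeps the costlier human-coded checks focused on the cases most likely to violate protocol.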
⚖️ Why It Matters
- Practical implication: Survey teams can achieve most of the benefits of extensive quality control with far fewer measures, lowering operational costs and complexity.
- Methodological implication: Prioritizing a small, automatable set of indicators offers a replicable approach for improving representativeness and data integrity in large, face-to-face public opinion surveys.