🧾 How attentiveness was detected
Data come from a recent online survey that used instructed-response items (so-called trap questions) to identify inattentive respondents. These items flag participants who fail to follow simple instructions embedded in the instrument.
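A minimal sketch of how such a flag might be computed, assuming a hypothetical response field and instructed answer (the study's actual instrument wording and variable names are not given here):

```python
# Hypothetical sketch: flag respondents who fail an instructed-response
# item (IRI). The field names and the instructed answer are assumptions
# for illustration, not taken from the study's instrument.

respondents = [
    {"id": 1, "iri_answer": "strongly agree"},  # followed the instruction
    {"id": 2, "iri_answer": "disagree"},        # failed the check
    {"id": 3, "iri_answer": "strongly agree"},
]

# e.g., the item read: "To show you are paying attention, select 'strongly agree'."
INSTRUCTED_ANSWER = "strongly agree"

def flag_inattentive(rows, expected=INSTRUCTED_ANSWER):
    """Return ids of respondents whose IRI response differs from the instruction."""
    return [r["id"] for r in rows if r["iri_answer"] != expected]

print(flag_inattentive(respondents))  # [2]
```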
📊 What was compared
The analysis compares responses from those who pass versus fail the instructed-response items across two common survey modalities:
- Standard closed-ended questions
- List experiments designed to measure sensitive attitudes and behaviors
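For context, the standard list-experiment (item-count) estimator takes the difference in mean item counts between a treatment group, whose list includes the sensitive item, and a control group, whose list does not. A sketch with invented counts:

```python
from statistics import mean

# Hypothetical illustration of the standard item-count estimator:
# estimated prevalence of the sensitive item is the mean number of items
# endorsed in the treatment group (baseline list + sensitive item) minus
# the mean in the control group (baseline list only). Data are invented.

control_counts = [1, 2, 2, 3, 1]    # items endorsed from the baseline list
treatment_counts = [2, 2, 3, 3, 2]  # baseline list plus the sensitive item

def list_experiment_estimate(treatment, control):
    """Difference-in-means estimate of the sensitive item's prevalence."""
    return mean(treatment) - mean(control)

print(list_experiment_estimate(treatment_counts, control_counts))
```

With these invented counts the estimate is 2.4 − 1.8 = 0.6, i.e., roughly 60% of respondents endorse the sensitive item under the estimator's usual assumptions (no design effects, no liars).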
🔑 Key findings
- Inattentive respondents are common in the sample and are disproportionately young and less educated.
- Those who fail the trap questions interact with the survey differently: they take less time to answer, are more likely to report nonattitudes, and show lower consistency in their choices.
- Ignoring inattentiveness produces a biased portrait of the distribution of important political attitudes and behaviors.
- This bias appears in both routine closed-ended items and in list experiments intended to reveal sensitive behaviors or opinions.
- Because attentiveness is not random, failing to account for it yields inaccurate prevalence estimates for sensitive and prosaic political attitudes and behaviors alike.
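The bias mechanism can be illustrated with a toy calculation (all numbers invented): if attentive respondents hold a position at a true rate of 30% while inattentive respondents effectively answer at random (50%), pooling the two groups inflates the estimated prevalence.

```python
# Hypothetical illustration of how inattentive respondents bias a simple
# prevalence estimate. Attentive respondents answer with true support
# p = 0.3; inattentive respondents effectively answer at random (p = 0.5).
# Group sizes and rates are invented for illustration.

n_attentive, n_inattentive = 800, 200
p_true, p_random = 0.3, 0.5

attentive_yes = n_attentive * p_true        # expected "yes" among attentive
inattentive_yes = n_inattentive * p_random  # expected "yes" among inattentive

pooled = (attentive_yes + inattentive_yes) / (n_attentive + n_inattentive)
print(pooled)  # 0.34: the pooled estimate overstates the true 0.30
```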
⚠️ Why it matters
Survey-based estimates of public opinion and political behavior can be distorted if inattentive respondents are not identified and handled appropriately. Careful attention checks and analytic strategies to address inattentiveness are therefore critical for accurate measurement of political attitudes, including both sensitive topics and everyday policy preferences.