Participants in online surveys and experiments are often inattentive, which can undermine substantive and causal inferences. Many practitioners therefore include multiple factual or instructional closed-ended manipulation checks to identify low-attention respondents. Closed-ended checks are binary (correct or incorrect), which makes them easier to guess and limits variation in measured attention.
📝 What This Paper Proposes
An automated, standardized text-as-data approach that uses the text respondents provide in an open-ended manipulation check to measure attention. The method converts open-ended answers into a continuous attention score that can be applied consistently across studies.
🔎 How Open-Ended Text Is Used to Measure Attention
- Uses respondents' free-text manipulation-check answers as the raw data source.
- Applies automated text-as-data techniques to produce a continuous attention measure rather than a binary pass/fail indicator.
- Reduces reliance on subjective, paid human coders by standardizing the coding process.
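The paper's own scoring model (implemented in R) is not reproduced here; as a rough illustration of the general text-as-data idea, one simple way to turn free text into a continuous attention measure is to score each answer by its bag-of-words cosine similarity to the treatment text. The function names below (`attention_score`, `tokenize`) are illustrative, not the paper's API:

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split into word-like tokens.
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def attention_score(response, treatment_text):
    # Continuous score in [0, 1]: how closely the open-ended
    # recall of the treatment matches the treatment text itself.
    return cosine_similarity(Counter(tokenize(response)),
                             Counter(tokenize(treatment_text)))

treatment = "The governor proposed raising the state sales tax to fund schools."
attentive = attention_score("Something about the governor raising money for schools", treatment)
inattentive = attention_score("I don't remember", treatment)
```

An attentive paraphrase scores well above an off-topic answer, and the score varies continuously with overlap rather than collapsing to pass/fail.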
📈 Key Benefits and Diagnostics
- Continuous measurement: captures greater variation in attention across respondents compared with closed-ended checks.
- Robust automation: minimizes the cost and variability introduced by human coding of open-ended responses.
- Diagnostic tools: provides procedures for diagnosing how inattentive respondents affect overall results and for estimating the average treatment effect among respondents likely to have actually received the treatment.
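The paper's exact estimators and diagnostics are not detailed in this summary. As a hedged sketch of one diagnostic in this spirit, a researcher could compare a naive difference-in-means estimate with one that weights each respondent by a continuous attention score, to see how much inattentive respondents dilute the estimated effect (toy data and the `diff_in_means` helper are hypothetical):

```python
def diff_in_means(outcomes, treated, weights=None):
    # Weighted difference in mean outcomes between treated and control.
    if weights is None:
        weights = [1.0] * len(outcomes)
    t_num = sum(w * y for y, d, w in zip(outcomes, treated, weights) if d)
    t_den = sum(w for d, w in zip(treated, weights) if d)
    c_num = sum(w * y for y, d, w in zip(outcomes, treated, weights) if not d)
    c_den = sum(w for d, w in zip(treated, weights) if not d)
    return t_num / t_den - c_num / c_den

# Toy data: one inattentive respondent per arm (attention = 0.1)
# dilutes the treatment effect in the naive estimate.
outcomes  = [5, 6, 7, 1, 2, 2, 3, 1]
treated   = [1, 1, 1, 1, 0, 0, 0, 0]
attention = [0.9, 0.8, 0.9, 0.1, 0.9, 0.8, 0.9, 0.1]

naive = diff_in_means(outcomes, treated)
weighted = diff_in_means(outcomes, treated, attention)
```

In this constructed example the attention-weighted estimate exceeds the naive one, illustrating how a continuous attention measure can reveal attenuation caused by low-attention respondents.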
⚙️ Practical Tools and Implementation
Easy-to-use R software is provided to implement the open-ended manipulation-check workflow and the accompanying diagnostics, enabling researchers to adopt the approach without bespoke coding.
Why it matters: this standardized, automated text-based measurement improves attention assessment in survey experiments, clarifies the influence of low-attention respondents on inference, and offers practical code for immediate use.