List experiments are a widely used survey technique for estimating the prevalence of socially sensitive attitudes. However, their design introduces a potential bias: respondents in the treatment group see more list items than those in the control group, which may mechanically inflate treatment-group mean responses and, with them, the estimated prevalence.
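To see why this matters for inference, recall that list experiments estimate prevalence as the difference in mean item counts between the two arms (notation ours):

$$\hat{\pi} = \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}.$$

If longer lists mechanically add an average increment $\delta > 0$ to treatment responses, the estimator's expectation becomes $\pi + \delta$ rather than the true prevalence $\pi$: mechanical inflation passes one-for-one into the estimate.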
Data & Methods
This study uses an original Singapore dataset alongside reanalyses of data from previous studies to identify patterns of mechanical inflation. We examine how differences in item counts between treatment and control lists affect estimated prevalence across demographic groups.
Key Findings
We find clear evidence of mechanical inflation, but, crucially, only among respondents with low educational attainment. Similar heterogeneous effects appear across datasets and contexts, indicating a systematic source of bias rather than an idiosyncrasy of a single sample.
Why It Matters
These findings have significant implications for interpreting list experiment results worldwide, particularly in developing contexts where educational attainment varies widely. As a simple remedy, we recommend adding necessarily false placebo statements to control-group lists so that item counts are equal across arms; this mitigates mechanical inflation without compromising substantive interpretation.
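The logic of the placebo fix can be illustrated with a minimal simulation sketch. It assumes a simple constant-increment inflation model; all parameters (n, pi, J, p_item, delta) are illustrative assumptions, not quantities estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000   # respondents per arm (illustrative)
pi = 0.30     # true prevalence of the sensitive attitude (assumed)
J = 4         # number of non-sensitive baseline items
p_item = 0.5  # endorsement probability per baseline item (assumed)
delta = 0.05  # assumed mechanical inflation per extra item shown

def mean_count(items_shown, include_sensitive):
    """Mean reported item count for one arm under the inflation model."""
    y = rng.binomial(J, p_item, size=n).astype(float)  # baseline items
    if include_sensitive:
        y += rng.binomial(1, pi, size=n)               # sensitive item
    # Mechanical inflation: each item beyond J nudges counts up by ~delta
    y += rng.binomial(1, delta * (items_shown - J), size=n)
    return y.mean()

# Standard design: treatment sees J+1 items, control sees only J
naive = mean_count(J + 1, True) - mean_count(J, False)

# Placebo design: control also sees J+1 items (J real + 1 necessarily
# false placebo, which is never endorsed and adds nothing to the count)
fixed = mean_count(J + 1, True) - mean_count(J + 1, False)

print(f"true prevalence: {pi:.3f}")
print(f"standard design: {naive:.3f}")  # biased upward by ~delta
print(f"placebo design:  {fixed:.3f}")  # approximately unbiased
```

Under this stylized model, the standard design recovers roughly pi + delta while the placebo design recovers pi, because the inflation increment appears in both arms and cancels in the difference.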