Proving Regression Discontinuity Credibility With Equivalence Tests
Insights from the Field
regression discontinuity
equivalence testing
falsification tests
density test
close elections
Methodology
Political Analysis
2 HTML files
2 other files
1 text file
Dataverse
Equivalence Testing for Regression Discontinuity Designs was authored by Erin Hartman. It was published by Cambridge University Press in Political Analysis in 2021.

📌 Background

Regression discontinuity (RD) designs are increasingly common in political science because the treatment assignment mechanism is known and observable. To argue that an RD design is credible, researchers typically run two standard falsification checks: continuity in a pretreatment covariate's regression function and continuity in the density of the forcing variable. Both of these conventional checks use a null hypothesis of no difference at the cutoff.
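To make the conventional check concrete, here is a minimal Python sketch (not code from the paper) of the first falsification test: estimate the discontinuity in a pretreatment covariate at the cutoff with separate local linear fits on each side, then test the null of no difference. The simulated data, uniform kernel, and fixed bandwidth are all simplifying assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated forcing variable and a pretreatment covariate that is
# smooth through the cutoff (no true discontinuity).
x = rng.uniform(-1, 1, 2000)
z = 0.5 * x + rng.normal(0, 1, 2000)

def local_linear_jump(x, y, cutoff=0.0, bw=0.5):
    """Estimate the jump in E[y|x] at the cutoff with separate
    local linear OLS fits on each side (uniform kernel)."""
    est, var = [], []
    for side in (x >= cutoff, x < cutoff):
        m = side & (np.abs(x - cutoff) <= bw)
        X = np.column_stack([np.ones(m.sum()), x[m] - cutoff])
        beta = np.linalg.lstsq(X, y[m], rcond=None)[0]
        resid = y[m] - X @ beta
        s2 = resid @ resid / (m.sum() - 2)          # residual variance
        cov = s2 * np.linalg.inv(X.T @ X)           # homoskedastic OLS cov
        est.append(beta[0])                         # intercept = limit at cutoff
        var.append(cov[0, 0])
    jump = est[0] - est[1]
    se = np.sqrt(var[0] + var[1])
    return jump, se

jump, se = local_linear_jump(x, z)
p = 2 * stats.norm.sf(abs(jump / se))  # two-sided test of H0: no discontinuity
```

A large `p` here is only a failure to reject the null of no difference, which is exactly the inferential gap the paper addresses.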

๐Ÿ” Why current RD checks fall short

  • Using a null of no difference can produce misleading conclusions: failing to reject that null is often (incorrectly) treated as evidence that the design is valid rather than as inconclusive evidence.
  • The well-known equivalence testing framework addresses this inferential problem, but directly applying equivalence tests in the RD context is not straightforward.
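The general equivalence-testing logic the paper builds on can be sketched with the standard two one-sided tests (TOST) procedure; this is the generic framework, not the paper's RD-specific tests, and the numbers below are purely illustrative.

```python
from scipy import stats

def tost_p(est, se, margin):
    """Two one-sided tests (TOST) for equivalence.
    H0: |true difference| >= margin  vs  H1: |true difference| < margin.
    Equivalence is concluded only if BOTH one-sided tests reject,
    so the overall p-value is the larger of the two."""
    p_upper = stats.norm.sf((margin - est) / se)  # H0: diff >= +margin
    p_lower = stats.norm.sf((est + margin) / se)  # H0: diff <= -margin
    return max(p_upper, p_lower)

# A precisely estimated, near-zero difference supports equivalence
# within a margin of 0.5 ...
p_precise = tost_p(est=0.05, se=0.1, margin=0.5)
# ... while the same point estimate with a noisy standard error
# remains inconclusive.
p_noisy = tost_p(est=0.05, se=0.4, margin=0.5)
```

Note the reversal of the inferential burden: rejection now constitutes affirmative evidence that the difference is negligibly small, rather than an absence of evidence for a difference.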

🔬 New approach: equivalence tests tailored to RD

  • Two equivalence tests are developed specifically for RD applications, adapted to the two common falsification contexts: continuity of pretreatment covariate regression functions and continuity of the forcing-variable density.
  • These tests let researchers provide affirmative statistical evidence that any discontinuity at the cutoff is smaller than a researcher-specified, substantively meaningful threshold (the equivalence margin).
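The paper's density test is not reproduced here, but the underlying idea can be roughly illustrated with a binomial sketch: under continuity of the forcing-variable density, an observation falling in a narrow window around the cutoff lands on the right with probability near one half, and one can test equivalence to 0.5 within a margin. The window width and margin below are arbitrary illustrative choices, not values recommended by the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 5000)   # forcing variable with no sorting at the cutoff

# Counts in a narrow window on each side of the cutoff.
h = 0.05
n_r = int(np.sum((x >= 0) & (x < h)))
n_l = int(np.sum((x < 0) & (x > -h)))
n = n_l + n_r

# Share landing on the right; under a continuous density this is ~0.5.
p_hat = n_r / n
se = np.sqrt(p_hat * (1 - p_hat) / n)

# TOST for equivalence to 0.5 within a (researcher-chosen) margin.
margin = 0.1
p_equiv = max(stats.norm.sf((0.5 + margin - p_hat) / se),
              stats.norm.sf((p_hat - (0.5 - margin)) / se))
```

A small `p_equiv` is affirmative evidence that any jump in the density at the cutoff is within the stated margin, rather than a mere failure to detect manipulation.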

📊 Key findings from simulations

  • Simulation studies demonstrate that the equivalence-based tests outperform the traditional tests of difference used in current practice.
  • Equivalence-based tests reduce the risk of falsely accepting a flawed design and give researchers a means to quantify evidence that violations are practically negligible.

🧾 Applied example

  • The proposed methods are illustrated using the close elections RD dataset from Eggers et al. (2015b), showing how equivalence testing can be implemented in an empirical RD application.

✨ Why it matters

  • Offering practical, RD-specific equivalence tests helps move falsification checks from inconclusive non-rejections toward affirmative evidence of design credibility, improving the robustness and interpretability of RD studies.