New research shows that statistically significant results are disproportionately likely to be published in political science. Using a novel dataset tracking quantitative findings across the field, this paper demonstrates that published estimates consistently overstate true effect sizes under a range of prior assumptions. Conventional significance testing (α = 0.05) yields elevated false positive rates when applied to data subject to this selection; lowering α reduces those errors but does not correct the upward bias in estimated effect magnitudes.
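To illustrate the logic behind these claims, the following minimal Monte Carlo sketch (not the paper's dataset or estimation procedure; the share of true nulls, effect scale, and standard error are assumed values chosen for illustration) shows how selecting results on statistical significance inflates published effect sizes, and how tightening α lowers the false-positive share among published findings without removing that inflation.

```python
# Illustrative simulation of publication selection on significance.
# Assumptions (not from the paper): half of hypotheses are true nulls,
# nonzero effects ~ N(0, 0.2), common standard error 0.1 across studies.
import numpy as np

rng = np.random.default_rng(0)
n_studies = 200_000
share_null = 0.5
true_effect_scale = 0.2
se = 0.1

# True effects: a mixture of exact nulls and modest nonzero effects.
is_null = rng.random(n_studies) < share_null
true_effects = np.where(is_null, 0.0,
                        rng.normal(0.0, true_effect_scale, n_studies))

# Each study reports a noisy estimate of its true effect.
estimates = true_effects + rng.normal(0.0, se, n_studies)
z = np.abs(estimates) / se

for alpha, z_crit in [(0.05, 1.96), (0.005, 2.81)]:
    published = z > z_crit                    # only significant results appear in print
    false_pos = np.mean(is_null[published])   # share of published findings that are true nulls
    # Magnitude inflation among published non-null findings:
    # average |published estimate| relative to average |true effect|.
    nonnull_pub = published & ~is_null
    inflation = (np.mean(np.abs(estimates[nonnull_pub]))
                 / np.mean(np.abs(true_effects[nonnull_pub])))
    print(f"alpha={alpha}: false-positive share={false_pos:.2f}, "
          f"magnitude inflation factor={inflation:.2f}")
```

Under these assumed parameters, moving from α = 0.05 to α = 0.005 sharply reduces the share of published findings with no true effect, but the published estimates of genuine effects remain inflated by roughly the same factor, matching the pattern the abstract describes.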