Many researchers, students, and consumers of empirical research have a poor understanding of probability distributions, calculus, and other concepts necessary for mathematical statistics. At the same time, even researchers with PhDs in quantitative fields can have difficulty understanding and interpreting concepts like p-values and confidence intervals. Over time, I’ve found that the best way to help individuals think through their quantitative problems and understand the logic of statistical inference is to focus on the data-generating process as the concept of interest.
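To make this concrete, here is a minimal sketch of what it means to treat the data-generating process as the object of interest. The linear DGP, its parameter values, and the sample sizes below are invented for illustration (they come from no study discussed here): we write the DGP down explicitly, draw repeated samples from it, and watch the sampling distribution of an estimator center on the truth.

```python
import random
import statistics

random.seed(42)

# A hypothetical data-generating process: y = 2.0 + 0.5 * x + noise.
# Every quantity here is an illustrative assumption, not an estimate.
def generate_sample(n, beta0=2.0, beta1=0.5, sigma=1.0):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [beta0 + beta1 * x + random.gauss(0.0, sigma) for x in xs]
    return xs, ys

# Ordinary least squares slope for a single predictor.
def ols_slope(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Repeated sampling from the same DGP traces out the estimator's
# sampling distribution -- the object that p-values and confidence
# intervals are really statements about.
slopes = [ols_slope(*generate_sample(200)) for _ in range(1000)]
print(statistics.fmean(slopes))  # centers near the true beta1 = 0.5
```

Once the DGP is written down this explicitly, a confidence interval stops being a mysterious ritual and becomes a claim about how often this repeated-sampling procedure brackets the true parameter.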

In an article titled “Association Between State Laws Facilitating Pharmacy Distribution of Naloxone and Risk of Fatal Overdose,” in the June 2019 issue of JAMA Internal Medicine, Abouk et al. claim that state laws granting pharmacists direct authority to provide naloxone are associated with greater declines in opioid-related mortality than other laws facilitating access to naloxone. Furthermore, Abouk et al. claim that laws other than these direct-authority laws are not associated with declines in opioid-related mortality.

Social scientists rarely provide explicit justification for choices that directly affect the suitability of their research designs for providing evidence for or against their hypotheses. While recent developments, such as pre-registration plans, encourage researchers to think more carefully about whether their studies can precisely identify the sign and magnitude of the relationships between theoretical constructs, it remains the case that few researchers justify the statistical power of their designs.
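Justifying statistical power is itself a data-generating-process exercise: posit a plausible effect size, simulate the study many times, and count how often the design detects the effect. The sketch below does this for a simple two-group mean comparison; the effect size, sample sizes, and significance threshold are illustrative assumptions, not figures from the naloxone study.

```python
import random
import statistics

random.seed(0)

# Hedged sketch: estimate power by simulation for a two-group z-test.
# effect, sigma, alpha, and n_per_group are all assumed values.
def simulate_power(n_per_group, effect=0.3, sigma=1.0, reps=2000):
    z_crit = 1.96  # two-sided 5% critical value for a z-test
    rejections = 0
    for _ in range(reps):
        control = [random.gauss(0.0, sigma) for _ in range(n_per_group)]
        treated = [random.gauss(effect, sigma) for _ in range(n_per_group)]
        diff = statistics.fmean(treated) - statistics.fmean(control)
        se = (2 * sigma ** 2 / n_per_group) ** 0.5  # known-variance SE
        if abs(diff / se) > z_crit:
            rejections += 1
    return rejections / reps

# A small study is easily underpowered for a modest true effect,
# which is exactly the kind of design choice that deserves justification.
power_small = simulate_power(30)
power_large = simulate_power(200)
print(power_small, power_large)
```

Running this shows the small design rejecting the null only a minority of the time even though the effect is real, while the larger design detects it most of the time. A researcher who cannot produce a calculation like this has not justified the power of their design.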