Social scientists rarely provide explicit justification for choices that directly affect whether their research designs can provide evidence for or against their hypotheses. While recent developments, such as pre-registration plans, encourage researchers to think more carefully about the ability of their studies to precisely identify the sign and magnitude of relationships between theoretical constructs, it remains the case that few researchers justify the statistical power of their designs.
As a Fellow for the Program for Advanced Research in the Social Sciences, I have the opportunity to teach students, faculty, and staff at Duke how to develop research designs, choose quantitative methods, and implement those methods with statistical software.
Recently, a student asked me for help calculating average scale scores from multiple survey items. This provided a good opportunity to teach the student that there are multiple approaches to any programming problem, and that each approach involves different trade-offs in computational cost, verbosity, generality, and the opportunity for mistakes, so I put together a short gist I thought I'd share.
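To give a sense of what that comparison looked like, here is a minimal sketch in Python with pandas (the original gist may well have used different tools); the data frame and the `item_1` through `item_5` column names are hypothetical, but the three approaches illustrate the trade-offs above.

```python
import pandas as pd

# Hypothetical survey data: five Likert-type items measuring one
# construct for three respondents.
df = pd.DataFrame({
    "item_1": [4, 2, 5],
    "item_2": [3, 2, 4],
    "item_3": [5, 1, 4],
    "item_4": [4, 3, 5],
    "item_5": [2, 2, 3],
})
items = [f"item_{i}" for i in range(1, 6)]

# Approach 1: spell out the arithmetic by hand.
# Every step is explicit, but it is verbose and easy to get wrong
# (omit an item, mistype a column name, divide by the wrong count).
df["scale_manual"] = (
    df["item_1"] + df["item_2"] + df["item_3"] + df["item_4"] + df["item_5"]
) / 5

# Approach 2: take the row-wise mean over a list of column names.
# Concise and fairly general: adding or dropping an item only
# requires editing the `items` list.
df["scale_mean"] = df[items].mean(axis=1)

# Approach 3: select the items by a naming pattern.
# The most general, but it silently includes any column that happens
# to match the pattern, which is its own opportunity for mistakes.
df["scale_pattern"] = df.filter(regex=r"^item_\d+$").mean(axis=1)

print(df)
```

The hand-written sum makes every operation visible but invites typos, the column-list version centralizes the item names in one place, and the pattern-based version generalizes most readily while quietly trusting the naming convention.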
As a course instructor and teaching assistant, I aspire not only to help students master course concepts and tools, but also, more generally, to train them to become critical thinkers capable of clearly and effectively articulating well-reasoned and empirically supported arguments about scientific matters. In particular, I aim to give students a clear sense of how to develop an argument, translate that argument into a journal-style document, and then port that document to other forms of communication, such as op-eds, presentations, or blog posts.