Research designs and causal inference

Instructors: Steve Raudenbush, Guanglei Hong, and Ken Frank

While some inquiries in STEM education research are descriptive, many are driven by fundamental questions about causality. The field has seen an increasing number of randomized trials investigating the efficacy, effectiveness, and scale-up feasibility of new interventions. However, it is the norm rather than the exception for a randomized experiment to be compromised by non-random attrition or noncompliance. When a randomized design is infeasible, large-scale natural experiments may offer viable alternative designs. Finally, quasi-experimental designs, some of which are embedded in large-scale longitudinal surveys, may provide additional evidence useful for informing causal thinking.

The objectives of this course are to enable STEM education researchers to rigorously evaluate alternative designs for a wide range of research questions and to become familiar with analytic methods suited to the major challenges in causal inference. Many of these methods are relatively new to researchers in this field. Fellows will become aware of the strengths and limitations of different designs and analyses and will learn to discern the conditions necessary to invalidate an inference. The course will introduce the potential outcomes framework, which provides the theoretical foundation for causal inference, and will highlight the key identification assumptions required for drawing causal conclusions under alternative research designs. These are assumptions relating observable data to counterfactual quantities, the latter being essential to the definition of a causal effect. The course will introduce a wide range of causal inference methods that extend conventional regression-based covariance adjustment methods.
These will include propensity score-based matching, stratification, inverse-probability-of-treatment weighting (IPTW), and marginal mean weighting through stratification (MMWS; Hong, 2010, 2012, 2015). We will teach regression-based and weighting-based sensitivity analyses for assessing the potential consequences of violations of key identification assumptions and thereby determining the robustness of initial causal conclusions (Frank, 2000; Frank et al., 2013; Hong et al., in press). The course will also introduce popular econometric methods, including the instrumental variable (IV) method, methods for analyzing data from a sharp or fuzzy regression discontinuity design (RDD), and difference-in-differences (DID) methods for analyzing cross-sectional and panel data. These designs will be presented in the context of STEM education case studies, typically featuring a causal comparison between an intervention group and a control group. In addition, we will consider research designs and analytic methods for investigating the heterogeneity of treatment effects across subpopulations and settings, as well as the spillover of treatment effects among children in the same classroom or school (Hong & Raudenbush, 2006). The course will feature lectures on statistical and design content as well as whole-class and small-group discussions of applications in STEM research.
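To give a flavor of the weighting methods listed above, the following is a minimal sketch of IPTW on simulated data. It is not course material: the confounder, outcome model, and true effect of 2.0 are all hypothetical, and for simplicity the true propensity score is used directly rather than estimated from a fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical confounder (e.g., prior achievement, standardized)
x = rng.normal(size=n)

# Treatment assignment depends on x, creating selection bias
e = 1 / (1 + np.exp(-x))           # true propensity score P(Z = 1 | X)
z = rng.binomial(1, e)

# Potential outcomes; the true average treatment effect is 2.0
y0 = x + rng.normal(size=n)
y1 = y0 + 2.0
y = np.where(z == 1, y1, y0)       # only one potential outcome is observed

# Naive group comparison is biased: treated units tend to have higher x
naive = y[z == 1].mean() - y[z == 0].mean()

# IPTW: weight treated units by 1/e and control units by 1/(1 - e)
mu1 = np.sum(z * y / e) / np.sum(z / e)
mu0 = np.sum((1 - z) * y / (1 - e)) / np.sum((1 - z) / (1 - e))
iptw = mu1 - mu0

print(f"naive: {naive:.2f}, IPTW: {iptw:.2f}")  # IPTW recovers roughly 2.0
```

In practice the propensity score must be estimated (e.g., by logistic regression), and the weighting relies on the identification assumptions discussed above, notably that all confounders are observed.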

REFERENCES:

Frank, K. A. (2000). Impact of a confounding variable on the inference of a regression coefficient. Sociological Methods and Research, 29(2), 147-194.

Frank, K. A., Maroulis, S., Duong, M., & Kelcey, B. (2013). What would it take to change an inference? Using Rubin's causal model to interpret the robustness of causal inferences. Educational Evaluation and Policy Analysis, 35, 437-460.

Hong, G. (2010). Marginal mean weighting through stratification: Adjustment for selection bias in multilevel data. Journal of Educational and Behavioral Statistics, 35(5), 499-531.

Hong, G. (2012). Marginal mean weighting through stratification: A generalized method for evaluating multi-valued and multiple treatments with non-experimental data. Psychological Methods, 17(1), 44-60.

Hong, G. (2015). Causality in a social world: Moderation, mediation, and spill-over. West Sussex, UK: John Wiley & Sons, Inc.

Hong, G., & Raudenbush, S. W. (2008). Causal inference for time-varying instructional treatments. Journal of Educational and Behavioral Statistics, 33(3), 333-362.

Hong, G., Yang, F., & Qin, X. (in press). Did you conduct a sensitivity analysis? A new weighting-based approach for evaluations of the average treatment effect for the treated. Journal of the Royal Statistical Society, Series A.
