Political Science Scope and Methods
MIT | 17.850 | Fall 2024 | Fri 11:00–1:00 | E53-485 | https://canvas.mit.edu/courses/27086
Last updated: September 5, 2024
Contact
Instructor | Email | Office | Office hours
---|---|---|---
Devin Caughey | caughey@mit.edu | E53-463 | Thu 12:00–12:30 |
Assignments
- MIT COUHES training (Sep. 13) [5%]
- Three APSA DDRIG proposal ideas (Sep. 20) [5%]
- Peer review of DDRIG ideas (Sep. 27) [5%]
- Peer review reflection memo (Oct. 11) [5%]
- Operationalization exercise (Oct. 25) [15%]
- Experimental design slides (Nov. 8) [15%]
- DDRIG draft (Nov. 22) [15%]
- Case study exercise (Nov. 29) [15%]
- DDRIG proposal (Dec. 13) [15%]
- Class participation [5%]
Schedule
N.B. Readings marked “[PDF]” are posted on the course website.
1. Sep. 6: Becoming a political scientist (Zoom)
Today we discuss some foundational questions about the occupation you have chosen. What does it mean to study politics “scientifically”? What does a successful career in political science look like? What skills and traits should political scientists develop? What ethical and professional standards should they follow? We will also discuss how to write a successful research proposal and hear from a current PhD student about their experience writing one.
Required readings
Introduction
- Gerring (2012), xix–xxiii (Preface) and 1–23 (chap. 1) ~ [PDF]
Exemplars
Ethics
Workflow
DDRIG proposal
- Ye Zhang’s 2023 DDRIG application ~ [PDF]
Total: 102 pages
Additional resources
2. Sep. 13: Philosophy of science
What are the philosophical underpinnings of science? How does science work in practice? What, if any, are the differences between the natural and social sciences? How do and should scientists’ values and identities affect their work?
DUE: MIT COUHES training
Required readings
Overview
Science and values
Total: 196 pages
Additional resources
Sep. 20: NO CLASS (STUDENT HOLIDAY)
DUE: DDRIG ideas
3. Sep. 27: The context of discovery
Like many others, Gerring divides the scientific process into two “contexts”: discovery and appraisal (also known as “justification”). This week focuses on discovery—the generation of new questions, models, theories, hypotheses, explanations, and arguments. What motivates good research? How do we come up with research questions? What are attributes of good arguments? What does it mean to “explain” something? How do theories, models, and hypotheses relate to one another? When formulating theories, how should we balance values such as verisimilitude, parsimony, tractability, formalization, and usefulness? Should theories be considered collections of falsifiable statements about the world, or should they be considered objects with certain similarities to real-world structures, which may be more or less useful but are neither true nor false?
DUE: Peer review of DDRIG ideas
Required readings
Overview
Ideas, questions, problems
Theories, models, explanations
- Mahoney and Goertz (2012), 16–38 (chap. 2) ~ [PDF]
- Van Evera (1997), 7–21 (part of chap. 1) ~ [PDF]
- Clarke and Primo (2007) ~ [PDF]
Total: 163 pages
Additional resources
- Spirling and Stewart (2024)
4. Oct. 4: The context of justification
This session covers questions related to the context of justification or appraisal. How should arguments be appraised (i.e., justified, tested, evaluated, falsified, compared)? What makes for convincing appraisal? What are various strategies of appraisal? In a probabilistic world, how should we draw theoretical inferences from data? How does the nature of our ontological claims (i.e., claims about how the world is) affect our methodological choices (i.e., choices about how to evaluate those claims)? What are the advantages and disadvantages of alternative appraisal strategies? We use the literature on the Democratic Peace to unpack these questions in a specific scholarly context.
Required readings
Overview
- Gerring (2012), 74–103 (chap. 4) ~ [book]
Perspectives
Applications: The Democratic Peace
- Russett et al. (1993), 3–23 (chap. 1) and 72–86 (part of chap. 4) ~ [PDF]
- Peterson (1995) ~ [PDF]
- Tomz and Weeks (2013) ~ [PDF]
Total: 188 pages
Additional resources
- Barnhart et al. (2020)
5. Oct. 11: Conceptualization and descriptive arguments
This class session covers concepts (the linguistic containers political scientists use to describe the social world) and descriptive arguments (proposed answers to “what” questions). Following Gerring, these can be viewed as forms of descriptive (as distinct from causal) discovery. While conceptualization and description are sometimes neglected, they are necessary preconditions for causal arguments as well as valuable scientific tasks in themselves.
DUE: Peer review reflection memo
Required readings
Overviews
Perspectives
Applications
Total: 149 pages
Additional resources
- Abdelal et al. (2006)
6. Oct. 18: Measurement and descriptive inference
This session moves from descriptive discovery to descriptive appraisal—that is, from theoretical concepts and arguments to the empirical task of measurement. The readings discuss procedures for constructing operational measures of concepts and criteria for evaluating measurement validity. We consider how choices regarding measurement can affect the appraisal of descriptive arguments. To illustrate the pitfalls and trade-offs of different measurement strategies, we examine various approaches to conceptualizing and measuring democracy.
Required readings
Overview
- Gerring (2012), 155–194 (chap. 7) ~ [book]
Perspectives
Applications: Measuring Democracy
Total: 149 pages
Additional resources
7. Oct. 25: Causal arguments and causal inference
In this session, we transition from description to causation. We will cover both causal arguments (i.e., discovery) and causal inference (i.e., appraisal), though we leave discussion of specific strategies of causal appraisal for subsequent sessions. We will consider alternative perspectives on causation and discuss what constitutes a well-defined (and therefore estimable) causal effect. A primary goal of this session is to establish a framework for reasoning about causation that transcends methodological divides (e.g., quantitative vs. qualitative).
DUE: Operationalization exercise
Required readings
General perspectives
Counterfactuals
- Fearon (1991) ~ [PDF]
Manipulation
- Sen and Wasow (2016) ~ [PDF]
Mechanisms
- Falleti and Lynch (2009) ~ [PDF]
Total: 186 pages
Additional resources
8. Nov. 1: Quantitative I—Design-based causal inference
This is the first in a series of sessions that discuss specific causal inference strategies and research designs. We will begin with what are arguably the most straightforward causal designs, ones in which units are randomly (or “as if” randomly) assigned to different treatment conditions (i.e., different levels of the causal variable of interest). Such experiments, whether controlled by the researcher or implemented by “nature,” offer the most propitious setting for estimating average treatment effects of various kinds. We will read applications illustrating themes such as threats to internal validity (Campbell and Ross), the role of qualitative evidence in validating quantitative designs (Ferwerda and Miller vs. Kocher and Monteiro), field experiments (Broockman and Kalla), and external validity (Barabas and Jerit). We will also discuss the growing use of pre-analysis plans.
Required readings
Overview
- Gerring (2012), 256–290 (chap. 10) ~ [book]
Natural experiments
- Titiunik (2021) ~ [PDF]
Applications
Analysis
- Ofosu and Posner (2023) ~ [PDF]
Total: 166 pages
Additional resources
- Dunning (2008)
9. Nov. 8: Quantitative II—Model-based causal inference
In this session we broaden our focus to consider causal designs that, in Gerring’s words, require going “beyond \(X\) and \(Y\)” (i.e., beyond the treatment and the outcome). These include designs that: (1) adjust for the confounding effect of common causes of \(X\) and \(Y\); (2) examine the mediators and/or moderators of the \(X\)–\(Y\) relationship; or (3) use an instrument to induce or isolate random variation in the causal variable of interest. We also consider selection bias, which arises from inappropriate adjustment or conditioning. We examine all of these issues with the aid of causal diagrams known as directed acyclic graphs (DAGs).
DUE: Experimental design slides
Required readings
Overview
- Gerring (2012), 291–326 (chap. 11) ~ [book]
Causal graphs
- Digitale, Martin, and Glymour (2022) ~ [PDF]
Mediation
- Bullock, Green, and Ha (2010) ~ [PDF]
Selection bias
Applications
Total: 131 pages
Additional resources
10. Nov. 15: Qualitative I—Case studies
In this session, we turn from quantitative research to qualitative research. Our focus is small-\(n\) case studies. We will discuss two classic qualitative methods introduced by John Stuart Mill—the method of agreement and the method of difference—along with their assumptions and limitations. We will also cover methods for small-\(n\) case selection and various ways of leveraging and combining cross-case and within-case analysis. Finally, we will consider how DAGs and Bayesian inference can be used to formally justify and structure qualitative methods such as process tracing.
Required readings
Overviews
Cross-case analysis
Within-case analysis
Combining within- and cross-case analysis
- Falleti and Mahoney (2015) ~ [PDF]
Application
- Tannenwald (1999) ~ [PDF]
Total: 185 pages
Additional resources
11. Nov. 22: Qualitative II—Fieldwork and interpretivism
In our second session on qualitative analysis, we will discuss fieldwork and ethnography. We will give particular (but not exclusive) attention to interpretive methods, which focus on how people understand and give meaning to the world around them. In contrast to positivism, interpretivism emphasizes the importance of inhabiting the perspective of the subjects of study and the impossibility of studying social phenomena neutrally or objectively.
DUE: DDRIG draft
Required readings
Perspectives
Methods
Application
- Michener and SoRelle (2022)
Total: 153 pages
Additional resources
Nov. 29: NO CLASS (THANKSGIVING)
DUE: Case study exercise
12. Dec. 6: Mixing methods
In this, our final session focused explicitly on methodology, we consider the issue of whether and how to reconcile and combine different methodological approaches. Do different methodological approaches share an underlying unity of logic and standards, or are they essentially different “cultures” that can coexist only at a safe distance from one another? Can and should a given study employ multiple methods to complement each other, or is knowledge cumulation best served by methodological specialization?
Required readings
Approaches
Critiques
Application
- Thachil (2020) ~ [PDF]
Total: 150 pages