Excavation: What is good political science?

I was going through some backup folders on my old computer and found a few pieces of text that I figured I’d dispose of here. This first one is an assignment I wrote for a course many years ago. The assignment was to write a “manifesto” on what constitutes good political science. Some of the thoughts are perhaps a bit half-baked and some arguments could be developed much further, but in general I think I still mostly agree with the somewhat younger me. I make no claims to being a philosopher of science, and if I were to clash with someone who was, I’d be more than happy to give up the points I think I made in this essay. Fire away!


A political science manifesto

The following pages constitute a collection of brief thoughts about the nature of political research. The thoughts are only somewhat structured, but my argument will proceed in two steps:

First, I will argue that what makes for good political science is what makes for good science in general. I will argue, hence, that there is very little that in principle separates the social sciences from the natural sciences, and that common arguments to the contrary either do not hold or are based on caricatures of natural science.

Second, I will argue for what makes for good scientific method as applied to the realm of the social and the political. In this regard, I will be careful not to make prescriptions but will speculate about some possible augmentations to current mainstream research methodology.

Why political science?

There is a common set of arguments that purport to highlight the impossibility of breaching the wall between the natural and social sciences. The social sciences can never be scientific in the sense that physics or biology is, it is claimed, and therefore we need to adapt not only our tools and our expectations of what can plausibly be achieved by the social sciences, but also our whole ontology. I will argue that most of the arguments advanced to support this stance are flawed.

Four fundamental arguments are generally made in this regard. Of these, I claim that two are essentially based on a straw man of the natural sciences. The first concerns the absence of predictive capacity, and can be found in Bent Flyvbjerg’s notoriously confused article Five Misunderstandings About Case-Study Research (Flyvbjerg, 2006): “Predictive theories and universals cannot be found in the study of human affairs. Concrete, context-dependent knowledge is, therefore, more valuable than the vain search for predictive theories and universals” (p. 224). It can also be found in his A Perestroikan Strawman Answers Back (Flyvbjerg, 2006b), where he argues that “social science can never be scientific in the natural scientific sense” because it is “[probable] that social science can never be explanatory and predictive”. The argument hinges on an understanding of the natural sciences as being predictive at their very core. Surprisingly, this is a view Flyvbjerg to some extent shares with Rein Taagepera (2007); they disagree on the possibility, not the necessity, of predictive theory: “A major goal of science is to explain in a way that can lead to prediction”. Taagepera’s conclusion is not that this makes social science impossible, but that more effort should be put into the formulation of logical predictive models.

The second argument is that solid science requires controlled experiments, and that experiments at the scale of societies and political systems are either impossible or outrageously unethical.

These, as I said above, I think are straw man arguments. In fact, there is a large segment of the traditional, “hard” natural sciences that can never be predictive, and another part that, like large parts of the social sciences, is unable to perform controlled experiments. To begin with, it has long been recognized that there is a class of phenomena exhibiting characteristics commonly known as “chaos” and “complexity” (buzzwords sometimes misused by poststructuralists et al. to further an anti-science agenda). The defining feature of chaos is that the long-run behavior of a system depends heavily on its initial conditions, so that very minute changes to the starting state of a system can lead to very large changes in its long-run behavior (an informal and common description is that of a “butterfly effect”). Chaotic systems, though actually deterministic, are inherently unstable and in practice unpredictable – and can be found everywhere in nature. Complexity, on the other hand, describes the emergence of macro-phenomena from interactions between the parts of a system (order emerging from chaos, so to speak). In these cases, strictly (or “greedily”) reductionist approaches may fail.
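The sensitivity to initial conditions described above can be made concrete with a classic toy system, the logistic map (my own illustrative sketch, not drawn from any of the works discussed here): two trajectories whose starting points differ only in the tenth decimal place soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), run in its chaotic regime (r = 4).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the tenth decimal place

# Early on, the two trajectories are practically indistinguishable;
# by the end of the run the tiny perturbation has been amplified
# until the trajectories are unrelated.
early_gap = abs(a[5] - b[5])
late_gap = max(abs(a[i] - b[i]) for i in range(40, 51))
```

Since the perturbation roughly doubles each step, even a difference of 1e-10 saturates to order-one divergence well within fifty iterations, which is exactly why long-run prediction of such systems is hopeless in practice.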

A number of prominent examples can be mentioned. The monumental magnum opus of biology, the theory of evolution, in fact satisfies none of the above criteria. It is generally not capable of making substantive, detailed predictions (other than vague ones, like there being a high probability that an ecological niche will sooner or later be occupied by a new species – but due to the practically random nature of mutations it can rarely predict how the adaptation will occur or by which selective mechanism), and the opportunities for experimentation are very limited (speciation often takes millions of years, unless the object of study is bacteria or perhaps fungi). It must still be considered one of the, if not the, crowning achievements of science – a comprehensive, unified understanding of the origin, spread, diversity and continued development of all biological life. Evolutionary biology, instead of producing predictive theory, provides an understanding of the set of processes by which species diverge and evolve.

Another example is the weather, a classic instance of a chaotic system. It is predictable only a few days ahead, due to its extreme sensitivity to initial conditions, and the outlook for meteorology producing long-run predictive models is not very bright. Nevertheless, this discipline too provides an understanding of the processes involved in variations of weather conditions.

When it comes to the possibility of experiments generally, there are obvious examples of completely non-experimental natural sciences. Astronomers cannot possibly randomly put planets in the sky to observe their trajectories. Cosmologists cannot possibly rerun the Big Bang with slightly altered parameters. Geologists cannot possibly grind out tectonic plates or provoke earthquakes. Paleontologists cannot possibly… (the list goes on). Instead, these disciplines rely on systematic observation and are experimental only to the extent that parallels can be drawn to much smaller phenomena that will fit in the lab – much like the social sciences.

Thus, I think both Flyvbjerg and Taagepera judge the social sciences in the light of what is to some extent a straw man of the natural sciences. Predictive capacity is not a necessary concluding stage of the scientific process. In the, most likely limited, areas where fairly accurate prediction is actually possible within the social sciences, it should by all means be attempted (and for those areas, Taagepera’s point is very well taken – the potential benefits of reaching a stage of successful logical predictive model building may be huge). For the other areas, the necessary conclusion is not, speaking with Flyvbjerg, that the only thing we can hope for is “concrete, context-dependent knowledge”.

As we have seen, neither predictive capacity nor controlled experimentation actually separates the natural from the social sciences. What about the other arguments? The second set of arguments appeals to the specific character of social phenomena, namely the importance of context and the problem of reflexivity. Here I will argue that both of these arguments are essentially based on a form of intellectual laziness.

The most common argument is the appeal to the dependence on context. Little (1991) confidently states that “there are no ‘brute facts’ in social science – facts that do not allude to specific cultural meanings”. First of all, this statement taken at face value seems rather absurd. Many of the things social scientists study are painfully tangible, “brute” facts in the most brute sense of the word: war, famine, political violence and torture. It is hard to allude to cultural context to explain away a dead body with a bullet in the neck or a malnourished child. A person is, no matter how much we wish to take context into account, still a biological organism with certain very clear constraints. A bullet or a limited caloric intake will have definite consequences for that organism. Second: what exactly is context? If by context we mean the shared social values and cultural traits in a given setting, then I very much concede that these undoubtedly will have a powerful impact on people’s thoughts and actions. But these shared webs of meaning are also, in an analytical sense, essentially a set of intricate relationships between independent variables we have yet to disentangle and understand. Saying that the vast complexity of the contextual factors involved in a given scientific problem makes understanding it impossible is just saying that we don’t want to do the necessary work.

A second argument about the unique characteristics of social phenomena is the appeal to reflexivity. Social agents talk back, and can change their behavior in response to our previous knowledge of how they act. Again, however, in a sense this is actually not unique to the social sciences either. A pressing example is presented to us by the ability of bacteria to evolve past our antibiotic defenses. As simplistic as the example might seem, it is in principle the same: an empirical law is found (antibiotics kill bacteria), and the practical application of this law eventually renders it useless (bacteria develop resistance). Reflexivity of social phenomena undeniably represents a challenge for social science. But again, from an analytical viewpoint, it is just a particular type of feedback effect. The parallel to antibiotics goes even further: instead of throwing our hands in the air, we conduct research to understand the mechanisms by which bacteria evolve resistance and strive to develop antibiotics that may circumvent these mechanisms. The same process is just as valid for social phenomena, and giving up because of the daunting complexity reflexivity may induce is intellectual laziness.

I will sum up with a question: where would contemporary physics be if Ernest Rutherford and his contemporaries, after discovering the nucleus of the atom in 1911, had simply shrugged their shoulders and said “this all looks incredibly complex, we simply cannot understand it – let’s just conclude that the atom is a mystery?” A hundred years later, we can say that while they would have been right in saying that it was indeed incredibly complex – massively more complex than they could probably have imagined – the defeatist attitude would have been premature. Neither nuclear nor particle physics are completed research programs, but few would argue that nothing of value has been achieved since then.

Good scientific practice in the social realm

So, given that the divide between the natural and social sciences is not as watertight as some would have it, what would good scientific practice be, in a general sense, as applied to the problems that political scientists study?

First and foremost, any scientific enterprise should fulfill some minimal criteria. It should strive for cumulative knowledge, and empirical implications of our theorizing need to be worked out so that we are dealing with statements that are at least in principle (and preferably in practice) falsifiable. Further than this, we need to be extraordinarily careful in setting out a scientific “working order”. If history shows anything at all, it is that insight and discovery can arise under very unexpected circumstances and in a highly unstructured way (methodological anarchists like Paul Feyerabend, even though I take great issue with their ontological and epistemological approach, have effectively shown how science has probably never had such a thing as a unified process anyway). Nevertheless, a few thoughts are justifiable.

Laitin (2006) sets out what he labels the “tripartite” method, consisting of narrative, formal theory and statistics. Narrative studies play the part of hypothesis generation and plausibility checking, while formal modeling puts theorizing on a firm foundation and statistics provide a way of verifying or falsifying the proposed hypotheses. This is an efficient and parsimonious (and fairly flexible) description of a working order for the social sciences. I wish to elaborate on it, however. The thoughts in the following section are greatly indebted to, first and foremost, Scott de Marchi’s excellent book Computational and Mathematical Modeling in the Social Sciences (2005), but also to Rein Taagepera’s Making Social Sciences More Scientific (2008).

First: when developing theory, what should be the assumptions that provide the foundations for our theorizing? Friedman (1953) famously argued that which assumptions we should prefer is an irrelevant question: the only test of a successful theory is whether it provides accurate predictions about its empirical referent. This argument is limited. What should we change if our hypotheses are falsified? Which assumption is the critical one? We might end up randomly shuffling around assumptions and solution concepts to fit an already preconceived result. Since, mathematically speaking, there is an infinite universe of models that can produce any given result, this is good reason to keep track of the suitability and realism of our assumptions.

Narrative or interpretive approaches play an important role in opening up the “black box” of human reasoning and constraining the set of assumptions we can use about the behavior of agents in our theoretical models. What are the preferences of the actors (when dealing with less obvious payoffs than pure monetary gains, or equivalents)? How do people reason when dealing with the problem at hand? Pure experimental approaches (both field and lab experiments) can further put such assumptions about agent behavior on an even more solid foundation.

What kind of modeling should be done, once a plausible set of assumptions has been chosen? This is an area where I’m a bit hesitant about some of the rational choice-oriented work done in the social sciences. The reliance on deductive modeling and game theory allows us to model only a subset of all the possible systematic and structured interactions that humans engage in. Complex systems and systems displaying chaotic behavior often cannot be adequately modeled with deductive approaches, since they seldom have unique equilibria, if they have equilibria at all. Such systems might be more amenable to computational approaches like agent-based modeling (ABM) or microsimulation. While such techniques may not lead to deductive analytical results, they allow us to investigate the dynamics of the phenomenon at hand and the behavior of systems over time. They allow us to study and understand processes and dynamics rather than equilibria and statics. Thus, acknowledging complexity leads to an expanded set of modeling tools. These tools are in fact frequently applied to problems studied by political scientists but have yet to make it into the mainstream of political research.
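As a concrete, purely illustrative sketch of the kind of computational modeling meant here, consider a one-dimensional variant of Schelling’s segregation model, a canonical ABM in the social sciences (though not one the essay itself discusses). A simple micro-rule, “move when too few of your neighbors share your type”, generates macro-level clustering that no equilibrium analysis would directly reveal; all names and parameters below are my own choices.

```python
# Minimal 1-D Schelling-style agent-based model. Cells hold an agent
# of type "A" or "B", or None (empty). Unhappy agents relocate to a
# random empty cell; we iterate until nobody wants to move.

import random

def neighbors(grid, i, radius=2):
    """Occupied cells within `radius` of position i (excluding i)."""
    lo, hi = max(0, i - radius), min(len(grid), i + radius + 1)
    return [grid[j] for j in range(lo, hi) if j != i and grid[j] is not None]

def unhappy(grid, i, threshold=0.5):
    """An agent is unhappy if under half its neighbors share its type."""
    ns = neighbors(grid, i)
    if not ns:
        return False
    like = sum(1 for n in ns if n == grid[i])
    return like / len(ns) < threshold

def step(grid):
    """Move every currently unhappy agent to a random empty cell."""
    movers = [i for i, a in enumerate(grid) if a is not None and unhappy(grid, i)]
    empties = [i for i, a in enumerate(grid) if a is None]
    for i in movers:
        if not empties:
            break
        j = empties.pop(random.randrange(len(empties)))
        grid[j], grid[i] = grid[i], None  # relocate agent, vacate old cell
        empties.append(i)
    return len(movers)

random.seed(1)
grid = [random.choice(["A", "B", None]) for _ in range(60)]
n_a, n_b = grid.count("A"), grid.count("B")
for t in range(200):
    if step(grid) == 0:  # stop once no agent wants to move
        break
```

The point of the exercise is exactly the one made above: we learn about the process (how clusters form and stabilize over time) by running the system, not by solving for an equilibrium.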

Concluding remarks

This essay has failed in at least one sense: it has not brought up any concrete examples of good political science research. Neither has it touched upon the normative aspects of our discipline and the role of political philosophy. However, I believe that I’ve made clear my stance on a few key points. I do not believe that we need a specific ontology for the study of the social world. I do not believe that our subject matter is of such a radically different nature that it motivates a distinct divergence from the natural sciences. Given this, I have argued that what constitutes good political science research is what makes for good scientific research in general. As such, a political science manifesto is little more than a science manifesto.


References

Flyvbjerg, B. (2006). “Five Misunderstandings About Case-Study Research”, Qualitative Inquiry, 12:219-45.

Flyvbjerg, B. (2006b). “A Perestroikan Strawman Answers Back: David Laitin and the Phronetic Political Science”, in Schram & Caterino eds, Making Political Science Matter. New York: New York University Press.

Friedman, M. (1953). Essays in Positive Economics. University of Chicago Press.

Laitin, D. (2006). “The Perestroikan Challenge to Social Science”, in Schram & Caterino eds. Making Political Science Matter. New York: New York University Press.

Little, D. (1991). “Interpretation Theory”, Chap. 4, in Varieties of Social Explanation. Boulder: Westview Press.

de Marchi, S. (2005). Computational and Mathematical Modeling in the Social Sciences. Cambridge University Press.

Taagepera, R. (2007). “Predictive versus Postdictive Models”, European Political Science, 6:114-23.

Taagepera, R. (2008). Making Social Sciences More Scientific. Oxford University Press.
