Election Surveys

Election research has played a decisive part in the development of survey research methods from the beginning. More than market and media research, election research has aroused the curiosity and ambition of researchers in a special way, and has thus strongly shaped empirical social research. It is significant that the breakthroughs of both the modern representative survey and of empirical methods in communication research are connected with studies of electoral choice.

History

The first attempts at analyzing elections using statistical methods can be traced back to the early twentieth century. In 1905, the German researcher R. Blank published a detailed social science analysis of the social democratic party’s electorate (Blank 1905). A decade later, attempts began to predict future electoral behavior on the basis of pre-election polls. In 1916, an American magazine, Literary Digest, began to conduct write-in mass surveys before presidential elections, so-called “straw polls.” This endeavor became rather gigantic over the years, with 20 million ballot forms being sent to addresses taken from telephone registers and automobile licensing offices in the election year of 1932.

At the same time, another approach had established itself in commercial consumer research during the 1920s: interviewing only a few hundred people, but taking care that those interviewed reproduced, in miniature, the socio-demographic structure of the population at large. This period also saw a shift from sending out questionnaires by mail to face-to-face interviewing.

The researchers George Gallup, Elmo Roper, and Archibald Crossley first applied this method to the prediction of election results in the 1936 US presidential election. They predicted Franklin D. Roosevelt’s victory, while Literary Digest had anticipated that the Republican candidate, Alf Landon, would win. Roosevelt won by a clear margin. Gallup, Roper, and Crossley had demonstrated that it was not necessary to interview as many individuals as possible in order to gauge public opinion correctly: given an appropriate representative selection of respondents, a few thousand interviews would suffice to know, with computable confidence, what a population of millions thought – Gallup had interviewed 6,000.

The three avoided two crucial methodological mistakes that caused the failure of the Literary Digest’s forecast:

1 Respondent selection relied on telephone registers and the files of car licensing authorities. The share of well-to-do people was above average among the respondents chosen in this manner, and the well-to-do voted Republican to a larger degree than the average population.

2 The write-in questionnaire method created further problems: only a minority of the people contacted actually filled in and returned the postcards, and those who did were not representative of the total sample. This problem of self-selection is regarded as an especially serious disadvantage of write-in and most online surveys to this day. The principles applied by Gallup reduced the detrimental effect of self-selection considerably and are still regarded as the methodological basis of representative surveys.

The election study The People’s Choice, published in 1944 by Paul F. Lazarsfeld, is of crucial significance for the development of communication research. Lazarsfeld studied voters’ decision-making and the effects of media coverage on voting behavior during the 1940 US presidential campaign in a broadly designed seven-wave panel survey (a panel survey is one in which the same respondents are interviewed repeatedly over a period of time; Lazarsfeld et al. 1944). Several influential communication theories grew out of the results of this study, among them the hypothesis of a two-step flow of communication.

Very soon after George Gallup’s success in 1936, election surveys became a fixture of the media’s election coverage, first in the USA and, after World War II, in western Europe as well. Today, election surveys are conducted in almost all democratic states. From the start, the institutes conducting election surveys were confronted with the allegation that the publication of their results affected the democratic process and thus interfered with voters’ free decision. Studies, however, have clearly shown that the effect of published election survey results is in fact weak. For this reason, and because of fundamental judicial considerations, repeated attempts to ban the publication of election survey results before elections have failed in several countries. Nevertheless, such attempts are still made time and again, and some democratic countries maintain such bans (Donsbach 2001).

Exit Polls And Pre-Election Surveys

Today, there are two fundamentally different types of election surveys: exit polls and pre-election surveys. The former usually dominate media coverage on election day, while the latter are more important for in-depth analyses of the background of elections. The method of the exit poll was essentially developed in the 1960s by the American researcher Warren Mitofsky, who was responsible for the election coverage of the US television network CBS. In exit polls, a representative sample of voters is interviewed immediately upon leaving the polling station. Results are published before the first returns come in on election night, and they usually allow a very precise estimate of the election result. In countries where it is doubtful whether the vote count is conducted correctly, exit polls also serve as a useful check on the official electoral process, provided they can be conducted independently.
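To illustrate the basic logic of an exit-poll projection, the following is a minimal sketch; it is not Mitofsky’s actual estimator, and the station names, turnout figures, and simple turnout weighting are assumptions made for illustration only.

```python
# Minimal sketch of an exit-poll projection: interviews collected at a sample
# of polling stations are aggregated, with each station weighted by its
# (assumed) turnout. Illustrative only; not the estimator used by CBS.

from collections import Counter

# Hypothetical interviews: (polling_station_id, party named by the respondent).
interviews = [
    ("station_01", "Party A"), ("station_01", "Party B"),
    ("station_02", "Party A"), ("station_02", "Party A"),
    ("station_03", "Party B"), ("station_03", "Party A"),
]

# Assumed turnout per sampled station, used as the weighting factor.
turnout = {"station_01": 1200, "station_02": 800, "station_03": 500}

def project(interviews, turnout):
    """Weight each station's party shares by its turnout and aggregate."""
    by_station = {}
    for station, party in interviews:
        by_station.setdefault(station, Counter())[party] += 1

    weighted = Counter()
    for station, counts in by_station.items():
        n = sum(counts.values())
        for party, c in counts.items():
            weighted[party] += turnout[station] * c / n

    total = sum(weighted.values())
    return {party: round(100 * v / total, 1) for party, v in weighted.items()}

print(project(interviews, turnout))  # -> {'Party A': 66.0, 'Party B': 34.0}
```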

In contrast, pre-election surveys use the same methods as surveys on other subjects. They are among the few occasions on which survey research firms can test the validity of their methods against a verifiable external criterion. In many countries, pre-election surveys are still conducted with the traditional method of the face-to-face interview; in the USA and many other western countries, however, telephone interviewing has come to dominate. For some years, there has been an increasing number of experiments with online surveys, but in spite of some spectacular successes (Martin et al. 2005, 359), telephone and face-to-face surveys appear to be more reliable at this stage. Pre-election polls offer much greater analytical potential than exit polls, especially when conducted as panel studies, which can be considered the only truly reliable basis for the analysis of changing electoral preferences.

Methodological Requirements For Pre-Election Polls

The public’s expectations regarding the precision of pre-election surveys are especially high. This increases the risk that the research institutes active in this field will be publicly accused of failure. To meet these high expectations, the institutes conduct election surveys with considerable methodological effort. The following are among the methods applied in such surveys.

In some countries, the shares of parties or candidates cannot simply be determined by the question, “If the election were held tomorrow, which party would you vote for?” Depending on the electoral system, whole series of questions may be necessary. In Germany, for instance, every voter has two votes, one to choose a local candidate for parliament, and the other to determine a party’s number of seats. Questions to model the outcome of an election not only have to consider both votes, but also have to account for the fact that a large share of the population is unaware of the functions of the two votes.
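What “modeling the outcome” can involve is illustrated by the following minimal sketch, which rests on strong simplifying assumptions: it projects a seat distribution from answers to the second-vote question alone, using a Sainte-Laguë divisor allocation and a 5 percent threshold, and it ignores constituency wins, overhang, and levelling seats; all party labels and figures are hypothetical.

```python
# Illustrative sketch: projecting a seat distribution from survey answers to
# the German second-vote question. The Sainte-Lague allocation, the 5%
# threshold, and the fixed house size are simplifications; the real German
# seat calculation (constituency wins, overhang/levelling seats) is more complex.

from collections import Counter

def seat_projection(second_votes, total_seats=598, threshold=0.05):
    counts = Counter(second_votes)
    n = sum(counts.values())
    # Drop parties below the electoral threshold (simplification).
    eligible = {p: c for p, c in counts.items() if c / n >= threshold}

    seats = {p: 0 for p in eligible}
    for _ in range(total_seats):
        # Sainte-Lague highest averages: the next seat goes to the party
        # with the largest quotient votes / (2 * seats_so_far + 1).
        winner = max(eligible, key=lambda p: eligible[p] / (2 * seats[p] + 1))
        seats[winner] += 1
    return seats

# Hypothetical answers to "Which party will receive your second vote?"
answers = ["A"] * 420 + ["B"] * 350 + ["C"] * 120 + ["D"] * 60 + ["E"] * 30
print(seat_projection(answers))  # party E falls below the threshold
```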

One of the biggest problems in predicting election outcomes arises from the fact that, even shortly before election day, a considerable share of respondents is still undecided about whether they will vote and, if so, which party or candidate they will vote for. There is widespread disagreement in election research about the right procedure for dealing with these undecideds. In many cases, they are simply left out of the computation of party shares or allotted proportionally among the candidates standing for election (Mitofsky 1998), a risky procedure when a disproportionately large share of potential adherents of one of the political camps is among the undecideds. The problem seems to be somewhat smaller in countries with a two-party system than in countries with a multiparty system and proportional representation. In the latter systems, a method of distributing undecideds with the help of cluster analyses (“statistical twins”) has proven its worth.
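The “statistical twins” idea can be sketched as follows; the feature set, the simple distance measure, and the nearest-neighbor assignment are assumptions chosen for brevity, whereas the procedures actually used rely on cluster analyses with many more variables.

```python
# Sketch of the "statistical twins" idea: each undecided respondent is assigned
# the stated preference of the decided respondent who is most similar on a set
# of background variables. Features and figures are invented for illustration.

def nearest_twin(undecided_respondent, decided):
    """Return the party of the decided respondent closest to the undecided one."""
    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

    twin = min(decided,
               key=lambda r: distance(undecided_respondent["features"], r["features"]))
    return twin["party"]

# Hypothetical respondents: features code age group, income group, urbanity.
decided = [
    {"features": {"age": 2, "income": 1, "urban": 1}, "party": "A"},
    {"features": {"age": 5, "income": 3, "urban": 0}, "party": "B"},
]
undecided = [{"features": {"age": 4, "income": 3, "urban": 0}}]

for respondent in undecided:
    respondent["imputed_party"] = nearest_twin(respondent, decided)

print(undecided[0]["imputed_party"])  # -> "B"
```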

A special problem for predicting election outcomes arises in situations in which one political camp comes under the pressure of public opinion, so that its adherents no longer (or only hesitantly) admit to belonging to this camp in survey interviews (Noelle-Neumann 1993). In these cases, weighting the data according to the so-called recall question (“Which party/candidate did you vote for last time?”) can become necessary. The difference between the marginal results of the recall question and the actual outcome of the last election serves as the basis for the computation of the weighting factors (Noelle-Neumann & Petersen 2005).
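One common way of turning this difference into weighting factors is the ratio of the actual result to the recalled share, as in the following sketch; the figures are invented, and the simple cell weighting shown here ignores refinements such as capping extreme weights.

```python
# Sketch of recall-question weighting: respondents are weighted so that the
# distribution of their recalled past vote matches the actual result of the
# last election. All figures are hypothetical.

def recall_weights(recalled_share, actual_share):
    """Weighting factor per recalled party: actual share / recalled share."""
    return {p: round(actual_share[p] / recalled_share[p], 3) for p in recalled_share}

# Marginals of the recall question in the sample vs. the real last result (%).
recalled = {"A": 48.0, "B": 32.0, "C": 20.0}   # party A is over-reported
actual   = {"A": 42.0, "B": 36.0, "C": 22.0}

print(recall_weights(recalled, actual))
# -> {'A': 0.875, 'B': 1.125, 'C': 1.1}  (A-voters weighted down, B-voters up)
```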

Time and again, attempts are made to compute long-term election forecasts on the basis of empirical regularities observed over long periods of time. In spite of some remarkable successes of this approach (e.g., Norpoth 1995), it cannot replace traditional pre-election polls, however instructive and analytically useful such procedures may otherwise be. Pre-election polls may, in turn, help to check these forecast models. Attempts to combine the figures gained from long-term experience with new surveys must be considered somewhat problematic: they may improve the precision of a pre-election poll in some cases, but they are always misleading when voting behavior, for whatever reason, does not follow the patterns observed earlier.

Precision Of Election Polls

The reputation of election surveys among the public in many countries is strongly affected by a few erroneous prognoses, while the large number of correct prognoses receives far less attention from the media and the public. Spectacular errors in election forecasts, such as those in the US presidential elections of 1948, 1980, and 1996, the British general election of 1992, the French presidential election of 2002, and the German parliamentary election of 2005, have regularly triggered not only intensive scientific research into the causes of the failure, but also public debate about the quality of survey research and its usefulness as a medium of political information.

In part, these discussions rest on a misunderstanding of the possibilities and limits of survey research: for instance, when surveys were correct within the margin of sampling error but picked the wrong winner in a close race, as was the case with some exit polls in Germany in 2002; or when survey results correctly showed the overall percentage shares of the vote but failed to predict the resulting distribution of seats in parliament reliably because of a first-past-the-post electoral system, as was the case in the Indian parliamentary election of 2004.
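A worked example makes the sampling-error point concrete; it assumes simple random sampling, so the design effects of real surveys would widen the interval somewhat.

```python
# With roughly 1,000 respondents, a candidate polling near 50% has a 95%
# margin of error of about +/-3 percentage points, so a one- or two-point
# lead can vanish without the survey being "wrong". Simple random sampling
# is assumed; clustering and weighting in real surveys widen the interval.

import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a share p, sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(0.50, 1000), 1))  # ~3.1 points
print(round(100 * margin_of_error(0.50, 6000), 1))  # ~1.3 points
```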

A scientific analysis of the quality of election surveys faces the question of how their precision can be measured objectively. As early as 1949, Mosteller et al. listed eight different ways to measure the deviation of election forecasts from election outcomes in their study of the causes of Gallup’s failure in predicting the result of the 1948 US presidential election. Most of these use either the difference between survey and vote shares of individual parties or candidates in percentage points or the gap between the two leading candidates as the basis of their computation.

The latter, especially, are suited to the US political system, in which elections are almost always dominated by two parties or candidates. More complex modern computation models, which try to combine the various advantages of the traditional measures, also assume an electoral or party system that is at least similar to the US system (e.g., Martin et al. 2005). In countries with a more complex party system, the average deviation of the prognosis from the actual outcome across all parties usually provides a practical basis for analysis.
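The measures discussed above can be made explicit in a short sketch: the per-party deviation in percentage points, the error on the gap between the two leading parties (in the spirit of Mosteller’s measures), and the average absolute deviation across all parties used in multiparty systems; the party labels and figures are invented for illustration.

```python
# Sketch of common accuracy measures for election polls. Poll and result
# figures are hypothetical percentage shares.

def per_party_error(poll, result):
    """Deviation of the poll from the result for each party, in points."""
    return {p: round(poll[p] - result[p], 1) for p in result}

def two_leader_gap_error(poll, result):
    """Predicted minus actual gap between the two leading parties."""
    top = sorted(result, key=result.get, reverse=True)[:2]
    return (poll[top[0]] - poll[top[1]]) - (result[top[0]] - result[top[1]])

def mean_absolute_error(poll, result):
    """Average absolute deviation across all parties."""
    return sum(abs(poll[p] - result[p]) for p in result) / len(result)

poll   = {"A": 41.0, "B": 36.0, "C": 13.0, "D": 10.0}
result = {"A": 38.5, "B": 37.0, "C": 14.0, "D": 10.5}

print(per_party_error(poll, result))       # {'A': 2.5, 'B': -1.0, 'C': -1.0, 'D': -0.5}
print(two_leader_gap_error(poll, result))  # 5.0 - 1.5 = 3.5
print(mean_absolute_error(poll, result))   # 1.25
```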

References:

  1. Blank, R. (1905). Die soziale Zusammensetzung der sozialdemokratischen Wählerschaft Deutschlands [The social structure of the social democratic electorate in Germany]. Archiv für Sozialwissenschaft und Sozialpolitik, 20, 507–553.
  2. Donsbach, W. (2001). Who’s afraid of election polls? Normative and empirical arguments for the freedom of pre-election surveys. Amsterdam: Foundation for Information.
  3. Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1944). The people’s choice: How the voter makes up his mind in a presidential campaign. New York: Columbia University Press.
  4. Martin, E. A., Traugott, M., & Kennedy, C. (2005). A review and proposal for a new measure of poll accuracy. Public Opinion Quarterly, 69, 342–369.
  5. Mitofsky, W. (1998). The polls – Review: Was 1996 a worse year for polls than 1948? Public Opinion Quarterly, 62, 230–249.
  6. Mosteller, F., Hyman, H., McCarthy, P. J. et al. (1949). The pre-election polls of 1948: Report to the Committee on Analysis of Pre-Election Polls and Forecasts. New York: Social Science Research Council.
  7. Noelle-Neumann, E. (1993). The spiral of silence: Public opinion – Our social skin, 2nd edn. Chicago, IL: University of Chicago Press.
  8. Noelle-Neumann, E., & Petersen, T. (2005). Alle, nicht jeder. Einführung in die Methoden der Demoskopie [All, not everyone: Introduction to the methods of survey research], 4th edn. Berlin: Springer.
  9. Norpoth, H. (1995). Is Clinton doomed? An early forecast for 1996. PS: Political Science and Politics, 28, 201–207.