Science and the Measurement of Risk

The definition and measurement of risk is controversial and much debated, with risk assessments made by scientists often differing from those of the lay public. Scientific measurements are based on logic and rationality. They tend to ignore or invalidate lay understandings of risk, taking no account of social, experiential, or perceptual influences. However, sociological work has highlighted that responses to risk are governed by a huge number of interrelated factors and have to be considered as part of the social context from which they arise. Lay people also have their own knowledge and expertise to draw upon when making assessments of the imminence and impact of any risk. It is not necessarily the case, therefore, that the public is being “irrational” in its decision making, but that it uses a different rationality to that of scientists. These differences can lead to conflicts over risks, and to resistance from the public to the measurements, and subsequent recommendations, imposed upon it by scientific research.

Risk assessments carried out by scientists are based on technical rationality. Risks are determined by experts, founded on evidence and logic, and described in terms of statistical probabilities. Results are depicted as objective, universal, and value free. They are designed to improve decision making, and are presented as a means for human advancement. The focus is on identifying risks and people’s responses to them in order to build prescriptive models for their avoidance. Such prescriptions often note a divergence between the public fear of a risk and the actual incidence of the danger; the controversy over genetically modified foods is often cited as an example where the risk does not warrant the public alarm it causes. Indeed, scientific rationality about risks is intended to counter any public irrationality. The public is often considered unable to decide accurately and realistically about the existence and magnitude of the risks that it faces. It is deemed to use non-scientific factors and to base its judgments on emotions or misinformation. Scientific assessments may characterize the public as ignorantly or willfully disregarding the neutral, objectively derived facts, and the focus is therefore often on how to overcome this ignorance – or how to act in spite of it. Reassurances may be given about the “real facts” of a risk, and attempts made to educate the public about its true likelihood. This is based on a view that educating people will win their support, and that if they knew the facts, they would behave accordingly – the “public deficit” model, whereby problems arise because people misunderstand or are unable to grasp the facts presented to them. Any continued public concern is then often dismissed as hysteria or hype.

While scientific risk assessments are designed to make people appreciate the irrationality of their position, what they overlook is that people may simply be drawing on another form of rationality: one that is no more or less irrational, but just different from that employed by scientists. This “sociocultural rationality” is based on experience, social values, and the social context in which an individual lives. Assessments of the impacts and implications of a risk are shaped by the circumstances in which that risk is anticipated. Moreover, rather than being ignorant of the methods and results of risk assessment, lay people may draw on their own expertise, definitions, and meanings, and use these to reflect upon the validity and credibility of the technical information they are given.

Debates and assessments of risk are rooted in the context from which they arise. It is not possible to separate the likelihood and impacts of any particular risk from the broader social situation and the everyday social reality in which they are experienced. For example, researchers examining the effects of pollution from petrochemical factories in the northeast of the UK found that people tolerated the discharge from the smokestacks because the factories were an intrinsic part of the local community. They were the main source of employment, and had been for several generations. The factories were central to the identity of the area, and had been the reason for the economic boom and prestige accorded to it. The impacts of the pollution were known about, accommodated, and incorporated into patterns of life (see Phillimore & Moffatt 1999). The scientific risk assessments of the distribution and effects of the pollution reflected nothing of the social context in which the risks of the emissions were perceived.

As well as the influence of the wider social context, people may also bring their own knowledge and experience to bear in assessing risks. The hierarchical, “top-down” scientific model of informing the public about the facts does not incorporate any notion of a two-way relationship between people and scientists, or any negotiation between them. Sociological work, however, highlights the potential validity of a public response to a risk. Williams and Popay (1994) describe the situation in the town of Camelford in Cornwall, UK, where toxic substances were accidentally tipped into the local water reservoir. Residents were concerned about both the short- and long-term risks to their health, but a committee of government scientists convened to look into the issue stated that chronic symptoms were not associated with the toxic dumping. However, in the light of continuing ill health, local residents continued to campaign for recognition of their claims, in what was to become a long-running and contentious battle. At the heart of this dispute is the notion that the shared experiences that formed local knowledge could not be invalidated by reference to standards of objectivity derived from abstract scientific knowledge.

Considering local experience also highlights problems with the presumed universality of scientific assessments of risk. Technical information in any communication about risk is simplified so that people can understand it and be reassured. While risk assessment and prediction may incorporate a great deal of uncertainty, this is rarely discussed in public communications. However, understating uncertainty can antagonize rather than reassure, and damage credibility when such simplification does not fit with everyday experience. The invalidity of idealized and universal versions of “laboratory science” is highlighted when contrasted with “real world” experience and expertise. For example, Wynne (1989) documents differences between scientists’ and farmers’ knowledges in the aftermath of the Chernobyl accident. Concerns about the spread of radiation led to controversial restrictions being placed on farming practices and the movement and sale of livestock in the UK. However, the abstract knowledge applied by scientists in determining the levels of radiation and the subsequent restrictions did not take account of local variations in the distribution of radiation, ignored local farming realities, and neglected the local knowledge and expert judgments of farmers. For the farmers, the supposedly universal scientific knowledge was out of touch with practical reality and thus of no validity. It ignored factors that were obvious to them but invisible to outsiders, and made no attempt to accommodate (or even communicate with) their understandings and knowledge of the situation.

What universal risk assessments also overlook is that the nature of a risk affects the response to it. Sociological research has documented increased public concern over risks perceived as unfamiliar, unfair, invisible, or involuntary. This is not necessarily “irrational,” but rather highlights the contingent nature of risk assessment that technical analyses ignore. For example, people may accept risks many times greater if they are voluntarily assumed rather than forced upon them. Research has found that when people willingly engage in “risky behavior,” such as extreme sports or recreational drug taking, they report greater knowledge, less fear, and more personal control over the risks. The imposition of a risk, as well as the perceived degree of risk and likelihood of danger, is thus an important factor, and people are less likely to accept a risk imposed upon them, with consequences over which they have no control. The outcry over safety on the railways is an example of this. Risk assessments based on technical rationality show that the statistical likelihood of being involved in a car accident is much higher than that of being involved in a train crash. But traveling by train means handing over control of, and responsibility for, the journey, and powerless passengers thus demand to be kept safe and free from risk.

Finally, differences in the definitions of risks relate to issues of trust in experts and scientific decision making. As well as any divergence between public and expert understandings of risk, the knowledge that the public receives from those experts is increasingly being met with skepticism. Accordingly, individuals faced with a risk consider not only the probability of harm but also the credibility of whoever generates the information. The controversies over nuclear power, waste incinerators, and even renewable energy are all examples of this. Assurances from scientists and engineers that the noise from a wind farm will not disturb people living nearby, however authoritative and objective they seem, will be disregarded if the information is presented on behalf of the developer, or if it does not take into account the particular contingencies of the local area. Increasingly, scientific risk assessments are discounted and discredited by people who use their own knowledge and experience to determine the risks they face.

The disjuncture between scientific and lay assessments of risk has therefore led to increased public resistance to both the procedures and the results of expert measurement. Indeed, as discussed above, the division between local and scientific knowledge can be seen as part of a trend challenging scientific work more generally. Public resistance to technical assessment arises because people’s experiences, meanings, and knowledges find no expression in such definitions, and are often ignored, ridiculed, or contradicted. When scientific experts claim control over the basis on which assessments about risk can be made, there may be little similarity between perceptions of risk embedded in lived experience and those based on ideals of rationality and logic. In the light of risk, divisions open up between lay and scientific knowledge, and between those who have access to and expertise in these seemingly opposing epistemologies. Scientific definitions tend to exclude those without access to the technical knowledge needed to understand them, and preclude any negotiation between experts and the public. They are intended to educate and inform, not empower, and are imposed upon a context in which they may be seen to have little validity. What sociology has done is to highlight the differing rationalities that scientists and lay people use in their risk assessments, and to show how conflicts arise because of these. Work in the tradition of the sociology of scientific knowledge (SSK) demonstrates that the “public deficit” model of understanding has little validity, and serves to problematize the public rather than the operation of science. Instead, a more symmetrical view of different knowledge and expertise is required.

References:

  1. Green, J. (1997) Risk and Misfortune: The Social Construction of Accidents. UCL Press, London.
  2. Irwin, A. (1995) Citizen Science: A Study of People, Expertise, and Sustainable Development. Routledge, London.
  3. Leach, M., Scoones, I., & Wynne, B. (Eds.) (2005) Science and Citizens: Globalization and the Challenge of Engagement. Zed Books, London.
  4. Lupton, D. (Ed.) (1999) Risk and Sociocultural Theory: New Directions and Perspectives. Cambridge University Press, Cambridge.
  5. Phillimore, P. & Moffatt, S. (1999) “If We Have Wrong Perceptions of Our Area, We Cannot Be Surprised If Others Do As Well”: Representing Risk in Teesside’s Environmental Politics. Journal of Risk Research 7(2): 171-84.
  6. Williams, G. & Popay, J. (1994) Lay Knowledge and the Privilege of Experience. In: Gabe, J., Kelleher, D., & Williams, G. (Eds.), Challenging Medicine. Routledge, London.
  7. Wynne, B. (1989) Sheep Farming After Chernobyl: A Case Study in Community Scientific Information. Environment 31: 10-15.
