Quantitative research approaches increase our knowledge by gathering data that can be manipulated mathematically. This allows us to answer questions about the meanings of psychological concepts, as well as to determine their levels, their variability, and the relationships among them. Quantitative research approaches may be contrasted with qualitative approaches, which tend to collect data expressed in nonmathematical, symbolic representations sometimes referred to as thick descriptions, and which place less focus on estimating the strength and form of relationships.
The data associated with quantitative approaches can result from simple measurement operations, such as counts or categorizations, or from more complex operations that may involve the creation of measurement scales that function as psychological yardsticks. For example, quantitative research approaches have allowed industrial/organizational (I/O) psychologists to develop self-report measures of a construct called job satisfaction (JS), to determine that JS has a variety of aspects or facets (such as satisfaction with pay, supervisor, or work setting), and to study its relationships with conditions, such as organizational culture or leadership, that raise or lower its general level.
A basic tenet of any science is that scientists must collect and analyze data in a manner that can be replicated by others and is open to public inspection and criticism. In this respect, I/O psychologists are no different; they rely heavily on a wide range of quantitative methods to pursue two broad endeavors. The first is to accurately measure psychological variables of interest, such as performance, personality, intellectual capacity, work attitudes, and many other aspects of the world of work. The second is the systematic, theory-driven search for relationships among variables. Typically, this search involves testing theory-based hypotheses, the results of which allow for scientific inferences about the presence or absence of the relationships of interest. Next, we briefly describe quantitative approaches to measurement, the rationale for significance testing, and quantitative techniques for assessing relationships.
Quantitative Techniques Addressing Measurement Issues
Psychological measurement consists of developing rules that either allow us to classify objects into meaningful categories or identify where aspects of those objects fall on a numerical scale. Importantly, measurement is best when it is theory driven.
Two important characteristics of measures, often addressed using quantitative methods, are reliability and validity. Reliability may be defined in various ways, but all definitions address the extent to which the same (or presumably equivalent) measurement procedures will yield the same results if repeated. A variety of statistical techniques estimate reliability, including classical test theory procedures such as the test-retest correlation and coefficient alpha, as well as more recently developed methods such as generalizability theory. Closely related are indexes of agreement, which tell us the extent to which multiple observers rate the same object in the same way.
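To make one of these indexes concrete, the following minimal Python sketch computes coefficient alpha directly from item and total-score variances; the four-item scale and the response data are invented for illustration:

```python
import numpy as np

def coefficient_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from five people to a four-item satisfaction scale
responses = np.array([[4, 5, 4, 5],
                      [2, 2, 3, 2],
                      [3, 3, 3, 4],
                      [5, 5, 4, 5],
                      [1, 2, 2, 1]])
print(round(coefficient_alpha(responses), 2))
```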
In contrast, validity addresses the issue of whether measures capture the intended psychological construct. Again, a variety of quantitative approaches can be used to assess validity. Construct validity questions are often addressed with factor analytic techniques, which help us better understand the patterns of interrelatedness among measures and thus the number and nature of underlying constructs or latent variables. Exploratory factor analysis (EFA) is primarily inductive, providing empirical guides to the dimensionality of a set of measures. Each separate dimension suggests the presence of a different underlying construct, and EFA also estimates the extent to which specific items or measures appear to be influenced by a common underlying factor. Confirmatory factor analysis (CFA) allows a more deductive approach, because the researcher can prespecify a hypothesized latent factor structure; it also permits tests of how well a given factor model fits the data and allows comparisons of alternative models.
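As an illustrative sketch of the EFA idea, the code below fits scikit-learn's FactorAnalysis to simulated data in which a two-factor structure is built in, so the recovered loadings should show two clusters of items; the factor structure, loadings, and sample size are assumptions chosen for the example:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300
# Simulate two latent constructs, each driving three observed measures
f1, f2 = rng.normal(size=(2, n))
X = np.column_stack([
    0.8 * f1 + rng.normal(scale=0.5, size=n),   # items loading on factor 1
    0.7 * f1 + rng.normal(scale=0.5, size=n),
    0.9 * f1 + rng.normal(scale=0.5, size=n),
    0.8 * f2 + rng.normal(scale=0.5, size=n),   # items loading on factor 2
    0.7 * f2 + rng.normal(scale=0.5, size=n),
    0.9 * f2 + rng.normal(scale=0.5, size=n),
])

efa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(np.round(efa.components_.T, 2))  # loadings: rows = items, cols = factors
```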
Another extremely useful quantitative approach is item response theory (IRT), which relates test item responses to levels of an underlying latent trait such as cognitive ability. This technique helps distinguish good test items that discriminate well between people high or low in a trait from poor items that do not. The IRT technique also enables the development of adaptive tests, allowing researchers to assess an individual’s standing on a trait without having to administer the entire measure.
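A minimal sketch of the two-parameter logistic (2PL) item response function illustrates this point: an item with a high discrimination parameter a produces response probabilities that change sharply around its difficulty b, whereas a low a yields a nearly flat, uninformative curve. The parameter values here are purely illustrative:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """2PL item response function: P(endorse/correct | theta) for an item
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)            # latent trait levels
good = irf_2pl(theta, a=2.0, b=0.0)      # steep curve: discriminates well
poor = irf_2pl(theta, a=0.3, b=0.0)      # flat curve: discriminates poorly
for t, g, p in zip(theta, good, poor):
    print(f"theta={t:+.1f}  good item P={g:.2f}  poor item P={p:.2f}")
```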
Why Significance Tests Are Used
Psychological data typically contain a great deal of noise, because measurements generally reflect not only the level of the desired variable but also extraneous influences such as misunderstandings or impression management attempts by research participants, temporary fluctuations in mood or alertness, and random variability. Focal variables often account for as little as 5% to 10% of the observed variability in responding. Under these common conditions of small to moderate effect sizes, the variability caused by the focal variables is not much larger than what might be expected from sampling error alone. Statistical significance testing helps researchers determine whether observed differences or associations should be attributed to the variables of interest or could simply be an artifact of sampling variability. Significance tests typically pit two mutually exclusive and exhaustive hypotheses against each other, with the desired result being evidence that leads one to reject a null hypothesis of no effect.
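A minimal sketch of this logic, using an independent-samples t test from SciPy on simulated data; the group labels, means, and variances are assumptions chosen to mimic the noisy, small-effect conditions described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulate noisy scores for two groups; the true group difference is small
# relative to the error variance, as is typical of psychological data
control   = rng.normal(loc=50.0, scale=10.0, size=40)
treatment = rng.normal(loc=54.0, scale=10.0, size=40)

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
# Reject the null hypothesis of no effect when p falls below alpha (e.g., .05)
```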
Quantitative Techniques Addressing Relationship Issues
The quantitative techniques used by I/O psychologists were developed primarily in the late 1800s and the 1900s, and development continues into the present century. Research design and quantitative analysis were closely intertwined in their development. We describe some of the most commonly used techniques, which are appropriate when the dependent variable is at least an interval-level measurement. These techniques have tended to rely on least-squares estimation procedures and to carry linearity and fixed-model assumptions.
The experimental method is particularly powerful because it allows causal inference. Experiments are studies in which the researcher systematically manipulates conditions in groups that have been created by random assignment and then compares the effects of those manipulations. Variations of experimental methods, called quasi-experiments, attempt to preserve at least some of the characteristics of experimental designs while acknowledging that researchers cannot always use random assignment or manipulate key variables.
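A minimal sketch of random assignment, the design feature that licenses causal inference; the participant IDs and group sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
participants = np.arange(20)              # hypothetical participant IDs
shuffled = rng.permutation(participants)  # random order balances unknown factors
control, treatment = shuffled[:10], shuffled[10:]
print("control group:  ", np.sort(control))
print("treatment group:", np.sort(treatment))
```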
The most common statistical approach for experimental data analysis is the analysis of variance (ANOVA) model, first developed by Sir Ronald A. Fisher, who was interested in studying differences in crop yields associated with different agricultural practices. In general, ANOVA involves comparing mean levels of a dependent variable across groups created by experimental manipulations. There are many subtypes of ANOVA models, which incorporate mixed and random effects, allow analysis of incomplete design matrices, and control for covariates, among other variations.
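A minimal sketch of a one-way ANOVA on simulated data from three hypothetical experimental conditions, using SciPy; the group means, spread, and sample sizes are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated dependent-variable scores under three experimental conditions
group_a = rng.normal(loc=50, scale=8, size=30)
group_b = rng.normal(loc=55, scale=8, size=30)
group_c = rng.normal(loc=50, scale=8, size=30)

f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f:.2f}, p = {p:.4f}")  # tests equality of the three group means
```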
There is also a strong tradition of survey and questionnaire research in I/O psychology. Although this approach makes causal inference more difficult, at least some researchers argue that this drawback is offset by better generalizability and construct richness. Indeed, for many interesting research questions, experimental designs are impractical or impossible for ethical or logistical reasons.
Correlation and regression analysis, as well as related but more complex path and structural equation modeling approaches, are commonly used to analyze survey and questionnaire data. Sir Francis Galton and Karl Pearson were instrumental in developing correlation and regression. Correlation indicates the extent and direction of association between two variables. For example, a positive correlation between job satisfaction and organizational commitment indicates that employees who are more satisfied with their jobs tend to be more committed. Regression analysis determines whether predictor variables such as grade point average (GPA) and personality linearly relate to a criterion such as job performance, and estimates the proportion of variance in the criterion explained by the predictors. Ironically, given the sharp distinction made historically between ANOVA and regression techniques, in the 1950s statisticians began to recognize that they were in fact subtypes of an umbrella statistical model called the general linear model (GLM). The GLM also subsumes other important techniques such as canonical correlation, discriminant analysis, and multivariate analysis of variance.
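As a concrete sketch of the two core techniques, the following code computes a bivariate correlation and then ordinary least-squares regression weights and variance explained on simulated data; the predictors, criterion, and effect sizes are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
gpa         = rng.normal(3.0, 0.4, n)     # hypothetical predictor: GPA
conscient   = rng.normal(0.0, 1.0, n)     # hypothetical predictor: personality
performance = 0.5 * gpa + 0.3 * conscient + rng.normal(0, 0.5, n)

# Correlation: direction and strength of a bivariate association
r = np.corrcoef(gpa, performance)[0, 1]

# Regression: least-squares weights and proportion of variance explained
X = np.column_stack([np.ones(n), gpa, conscient])   # add intercept column
b, *_ = np.linalg.lstsq(X, performance, rcond=None)
r_squared = 1 - np.sum((performance - X @ b) ** 2) / np.sum(
    (performance - performance.mean()) ** 2)
print(f"r = {r:.2f}, R^2 = {r_squared:.2f}")
```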
Finally, important developments in a set of quantitative techniques called meta-analysis have led to advances in many areas of study over the past 25 years. These techniques allow researchers to cumulate the results from multiple studies of a given relationship. Meta-analysis thus more definitively addresses the question of whether a relationship is nonzero, and better estimates its true effect size.
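A bare-bones sketch of the cumulation idea, using a simple sample-size-weighted mean correlation across hypothetical primary studies; full meta-analytic procedures go further, for example by estimating between-study variability and correcting for artifacts such as measurement unreliability:

```python
import numpy as np

# Hypothetical correlations and sample sizes from five primary studies
r = np.array([0.25, 0.31, 0.18, 0.40, 0.22])
n = np.array([120, 85, 200, 60, 150])

# Larger studies carry more weight because they have less sampling error
r_bar = np.sum(n * r) / np.sum(n)
print(f"weighted mean r = {r_bar:.3f}")
```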
Current Trends
Quantitative research techniques are becoming increasingly sophisticated and, at the same time, easier to implement with specialized computer software. Researchers are beginning to work more with techniques appropriate for dynamic, nonlinear, and longitudinal models; to increase their use of robust or assumption-free statistics and alternative estimation methods; and to critically reexamine aspects of the null hypothesis statistical testing paradigm.