Empirically Based Professional Practice

Professional practice based on reliable and valid empirical findings of effectiveness—empirically based professional practice (EBPP)—has increasingly influenced the work of mental health practitioners over the past two decades. The appeal of EBPP rests on a straightforward conviction: because evidence-based assessments and treatments have been compared systematically with alternatives by appropriate and powerful methods, preferring them to practices without empirical support should give practitioners assurance of superior efficacy.

Many believe that medical practice has long been heavily influenced by the most current available scientific data. That assumption may not be valid, however. The roots of evidence-based medical practice (EBMP) are more recent than many suppose: They are usually traced to a 1992 statement published in the Journal of the American Medical Association by a group of physicians, led by a Canadian internist, that advocated evidence-based medical practice over medicine as an art. The article set off a debate about power, ethics, and responsibility in medicine that appears to have radically altered healthcare practices. Although this change has moved physicians away from intuition (the art in medicine) toward empirical data (the science in medicine), divisions among physicians over the value of intuition continue. Some have characterized the EBMP position as unequivocal: There are data that are reliable and validated, there are data that are not, and the difference between the two is what matters.

Parallels between evidence-based medical practice and evidence-based mental health practice, both in their nature and in the reactions they provoke, are clear. What advocates for evidence-based treatments in psychology, psychiatry, and social work hear from their critics is strikingly similar to what supporters of evidence-based medical practice hear from theirs, and so are both groups’ efforts to counter those criticisms with data.

History of Concern for Evidence of Efficacy in Professional Psychology

The issue of the evidence base of clinical practice in psychology has a lengthy and important history, dating back more than 60 years to the explosive growth of professional psychology during and after World War II that transformed it from a small, primarily academic discipline into one of the core mental health professions. As long ago as 1942, social psychologist Theodore Sarbin predicted, on the basis of some of his own data, that actuarial prediction methods (“science”) would ultimately outperform human judges (“art”) along a variety of judgment dimensions. In 1954, Paul Meehl, who was to become one of the towering figures in clinical psychology over the next several decades, published a book summarizing data that confirmed the consistent superiority of statistical prediction (“science”) over clinical prediction (“art”). Although advocates for art rather than science in clinical and counseling psychology have been heard from since then, support from behavioral scientists for the positions Sarbin and Meehl took so long ago has been strong and consistent.

Notwithstanding opposition to evidence-based practice by some mental health professionals, increasing efforts are being made to require mental health practitioners to follow practice guidelines, and those guidelines are becoming more prescriptive. Managed care organizations, third-party reimbursers, and state and federal agencies increasingly expect psychologists, psychiatrists, and social workers to employ, whenever possible, practices with empirical support. As a consequence, many psychology and social work students and psychiatric residents receive training that emphasizes these practices, as mandated by their professions. Practice guidelines incorporating evidence-based treatments have been put forth by the American Psychological Association and the American Psychiatric Association as well as by the U.S. Department of Veterans Affairs and the U.S. Agency for Health Care Policy and Research. And more and more patients have come to expect their therapists to know and use empirically supported treatments whenever possible.

Despite these developments, disagreement continues to divide mental health professionals on the strength and legitimacy of the evidence base that underlies evidence-based practices. Three pressing unresolved issues underlie this controversy; they are briefly considered below. Resolving them appears to be essential to the future of evidence-based practices. If most or all of them are finally settled, agreement by most mental health professionals on the worth of evidence-based practices will likely follow. By the same token, if few or none are settled, the momentum toward evidence-based practices may well slow.

Issue #1: Efficacy Model versus Effectiveness Model

Which of two psychotherapy outcome research models—the efficacy model or the effectiveness model—best captures the most crucial differences among therapy techniques and procedures and, hence, can be relied on to provide the most accurate picture of therapy outcomes? The efficacy model describes carefully controlled, time-limited psychotherapy outcome research, much of it involving random assignment of patients to treatments. Such research is conducted largely by psychotherapy researchers in laboratory or other controlled settings, uses therapists intensively trained to provide the experimental treatment, and enrolls patients carefully selected by diagnosis to receive it. The effectiveness model describes psychotherapy research done in real-world clinical settings: Clinicians in their usual treatment settings do the kind of psychotherapy they customarily do with the patients who customarily come to see them.

Most of the research that has led to the identification of evidence-based treatments to date has followed the efficacy model. As a result, critics of the evidence underlying evidence-based treatments have claimed that efficacy studies do not reflect therapy outcomes “in the real world.” Although supporters of the efficacy model have mounted a vigorous defense of it, the question of which of the two models provides the more valid picture of psychotherapy outcomes remains unresolved. Until it is answered, the evidence base of evidence-based treatments will remain suspect.

The efficacy model and the effectiveness model represent quite different approaches to studying behavior change. Because neither model by itself appears to capture the entirety of what makes a treatment effective, clinical researchers have begun to try to integrate the two approaches in the design of psychotherapy research in the effort to gather the most broadly based empirical support for the treatments being evaluated and resolve the controversy.

Issue #2: Common Factors versus Treatment Factors

The controversy over Issue #1—the relative validity of efficacy versus effectiveness studies in assessing treatment outcomes—came to a head with the publication of practice guidelines by the American Psychiatric Association and the Division of Clinical Psychology of the American Psychological Association. The appearance of these guidelines, however, also led critics of evidence-based practices to intensify a related, continuing controversy: whether differences in psychotherapy outcomes are more strongly associated with specific types or schools of psychotherapy (as advocates for empirically supported treatments affirm) or with therapist, patient, and therapy process variables common to all psychological treatments.

Treatment factors refer to the array of therapeutic behaviors and techniques a therapist must learn in acquiring the skills appropriate to the practice of a specific intervention. By contrast, common factors refer to influences on outcomes presumed to be independent of the specific treatment employed, including patient attributes such as age, gender, and personality; therapist attributes such as interpersonal and social skills; and the nature of the relationship between therapist and patient.

Therapist variables thought to affect therapy outcomes regardless of the techniques the therapist uses range from the therapist’s demographic characteristics and sociocultural background to subjective factors such as values, attitudes, and beliefs. Prominent psychologist Larry Beutler believes that therapist variables reflecting behaviors specific to the therapeutic relationship, including the therapist’s professional background, style, and choice of interventions, may exert the most powerful effects on therapy outcomes. After an extensive review of outcome research data, Michael Lambert and Allen Bergin concluded that about 30% of psychotherapy outcome variance is attributable to therapist variables, prominently including therapist empathy, warmth, and acceptance of the patient.

In contrast to the voluminous data on the impact of therapist variables, patient variables have failed to demonstrate a robust relationship to therapy outcomes. In the well-known National Institute of Mental Health Treatment of Depression Collaborative Research Program (NIMH-TDCRP) comparison of treatments for depression, for example, no single patient variable correlated significantly with outcome. More recently, the National Institute on Alcohol Abuse and Alcoholism’s Project MATCH failed to identify relationships between patient-treatment matches and outcomes of treatment for alcohol abuse and dependence.

Therapeutic process variables—factors influencing therapists’ reactions to patients’ behavior and attitudes and vice versa—have also been claimed to affect therapy outcomes. In this vein, David Orlinsky and Kenneth Howard concluded that process variables, which they took to include the strength of the therapeutic bond, the skillfulness with which interventions are undertaken, and the duration of the treatment relationship, all bear positively on outcomes. Others have likewise stressed the central role of the therapeutic alliance in determining outcomes. Most recently, Louis Castonguay and his colleagues reviewed the extensive data on relationship factors in the treatment of dysphoric disorders, concluding that these factors positively influenced treatment outcomes. Nonetheless, critics of process research have continued to emphasize the difficulties associated with collecting process data reliably.

Issue #3: The Dodo Bird Effect

The dodo bird effect, so designated by Saul Rosenzweig in a prescient 1936 article on common factors in diverse psychotherapies, is named after the well-known race in Lewis Carroll’s Alice in Wonderland. At the conclusion of that contest, the dodo bird declares that since the racers, including Alice, had failed to keep to the racecourse, it was impossible to know who had won and, so, “Everybody has won and all must have prizes.” The dodo bird effect in psychotherapy research thus refers to findings that there are few or no meaningful differences in effectiveness among psychotherapies and that, accordingly, outcomes of therapy do not really depend on the kind of therapy patients receive.

Acknowledging their debt to Rosenzweig for his initial reference to the dodo bird, Luborsky, Singer, and Luborsky published a review chapter in a 1970s edited volume on psychotherapy evaluation, “Comparative studies of psychotherapies: Is it true that ‘everyone has won and all must have prizes’?” Luborsky and his colleagues compared outcomes from group and individual psychotherapy, time-limited and open-ended psychotherapy, and client-centered and psychodynamic therapy, concluding that most studies of psychotherapy had found nonsignificant differences in the effectiveness of different psychotherapies.

More recently, the dodo bird effect has been taken to refer to efficacy comparisons among psychotherapies by means of meta-analyses that have failed to find differences in efficacy. By now, a number of psychotherapy researchers besides Luborsky and his colleagues have adopted the same position: that most psychotherapies are effective in inducing behavior change but do not differ in efficacy. A number of behavioral researchers, however, have vigorously disputed this position. They point to data from randomized clinical trials strongly suggesting that some psychosocial treatments—most of them behavioral or cognitive-behavioral—do yield significantly better outcomes than other psychotherapies. They also cite data pointing to substantial problems with meta-analysis as a means of comparing the differential efficacy of psychotherapies. As noted above, the drafters of practice guidelines for the American Psychiatric Association, the American Psychological Association, and the Mental Health and Behavioral Science Service of the Veterans Administration, among others, have taken similar positions in recommending some treatments over others on the basis of outcome studies.

Present Situation

The well-known complexity of the psychotherapy process—in particular, the range of variables in the relationship between patient and therapist that are capable of influencing therapy outcomes—has led advocates for and critics of evidence-based treatments to the present situation. On the one hand, advocates point to voluminous data attesting (a) to the power of efficacy studies to differentiate among therapies in outcome, (b) to the association of specific kinds of treatment, most often cognitive-behavioral, with positive outcomes, and (c) to the failure of “art” through history in its struggle with science for primacy. On the other hand, critics have no difficulty citing, on each point, research that supports their view that these questions cannot be resolved nearly as simply and expeditiously as the advocates would like.

While the most attractive solution to this contretemps is to suggest that partisans on each side of this issue tone down their rhetoric until enough data have been gathered to resolve these questions, that solution is unlikely to satisfy either side.

References:

  1. Barlow, D. H. (1996). Health care policy, psychotherapy research, and the future of psychotherapy. American Psychologist, 51, 1050-1058.
  2. Beutler, L. E., Machado, P. P. P., & Neufeldt, S. A. (1994). Therapist variables. In S. L. Garfield & A. E. Bergin (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 229-269). New York: Wiley.
  3. Boisvert, C. M., & Faust, D. (2003). Leading researchers’ consensus on psychotherapy research findings: Implications for the teaching and conduct of psychotherapy. Professional Psychology: Research and Practice, 34, 508-513.
  4. Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. In S. T. Fiske, D. L. Schacter, & C. Zahn-Waxler (Eds.), Annual review of psychology (Vol. 52, pp. 685-716). Palo Alto, CA: Annual Reviews.
  5. Garfield, S. L. (1996). Some problems associated with “validated” forms of psychotherapy. Clinical Psychology: Science and Practice, 3, 218-229.
  6. Guyatt, G. H., Haynes, R. B., Jaeschke, R. Z., Cook, D. J., Green, L., Naylor, C. D., et al. (2000). Users’ guides to the medical literature: XXV. Evidence-based medicine: Principles for applying the users’ guides to patient care. Journal of the American Medical Association, 284, 1290-1296.
  7. Lambert, M. J., & Bergin, A. E. (1994). The effectiveness of psychotherapy. In S. L. Garfield & A. E. Bergin (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 143-189). New York: Wiley.
  8. Luborsky, L., Singer, B., & Luborsky, L. (1976). Comparative studies of psychotherapies: Is it true that “everybody has won and all must have prizes”? In R. L. Spitzer & D. F. Klein (Eds.), Evaluation of psychological therapies (pp. 3-22). Baltimore, MD: Johns Hopkins University Press.
  9. Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press.
  10. Nathan, P. E., Stuart, S. P., & Dolan, S. L. (2000). Research on psychotherapy efficacy and effectiveness: Between Scylla and Charybdis? Psychological Bulletin, 126, 964-981.
  11. Sarbin, T. R. (1942). A contribution to the study of actuarial and individual methods of prediction. American Journal of Sociology, 48, 593-602.
  12. Strupp, H. H. (1973). Psychotherapy: Clinical, research, and theoretical issues. New York: Jason Aronson.
  13. Task Force on Promotion and Dissemination of Psychological Procedures. (1995). Training in and dissemination of empirically-validated psychological treatments: Report and recommendations. The Clinical Psychologist, 48, 3-23.
