Evaluation is a management tool that allows the user to establish whether a project or campaign has had its intended effect. Effective evaluation is at the center of any public relations effort and should be a basic element of any planned public relations action. In reality, however, evaluation is often overlooked or not undertaken for a variety of reasons, including costs, lack of resources, or simply a failure to understand how to conduct basic evaluation. Basic evaluation provides the user not only with information on how well the project worked, but also with an indication during the project as to whether it is "on target" and "on schedule."
Further, effective evaluations rest on four assumptions that are tied to project outcomes and/or business or client needs. First, public relations evaluation assumes that the decision-making process is basically the same across all entities or businesses. This assumption establishes that evaluation is both systematic and applicable to a variety of situations. Second, all evaluation is based on realistic goals with measurable outcomes, grounded in a set strategy and the tactics implemented to bring that strategy to life. That is, all evaluation has expectations that are rooted in daily practice, associated with specific strategies (that, in turn, employ specific tactics), aimed at meeting measurable objectives (that, in turn, are tuned to "reachable" goals tied to client or business goals). Third, evaluation is divided into three general phases: development, refinement, and final evaluation. This assumption suggests that evaluation is a continual process, one that begins with pre-project development, is refined as the project is implemented, and ends with a final evaluation against the project's objectives and, ultimately, the client or business goals. Finally, evaluation is knowledge-based and behavior-driven. This assumption underscores the fact that evaluation decisions are made with forethought and that there are measurable criteria against which project outcomes can be tested.
Public Relations Goals And Objectives
As noted above, all public relations projects must start with realistic goals and measurable objectives. These goals and objectives derive from the problem statement – a statement that succinctly sets out what the project is seeking to do and what ends it is trying to achieve. The problem statement in turn focuses the public relations goal, which will be tied to both tactical (output) and strategic (outtake and outcome) decision-making. An output is a tactic, e.g., a media release, video news release (VNR), or speech. It is the technical element and comprises what is to be done to meet the objectives. The outtake is the initial evaluation of the output: has it accomplished its intended purpose? The outcome is whether or not the strategy that employed the tactics actually "moved the needle" – met or surpassed its objectives – to reach both public relations and client/business goals.
A goal is simply something that is desired. In a political campaign, it is to win the election. In a branding project it is to establish, maintain, or expand the brand. In a corporate project it may be to have employees sign up for certain benefits. The goal should be reasonable; the desire to corner 100 percent of a market with a new brand may be achievable, but it is improbable. Objectives come from goals. They are the things we seek to assess during and at the end of the project. Public relations objectives fall into three areas: informational (was the message sent out, received, and understood?), motivational (did it change or reinforce attitudes and behavioral intentions?), and behavioral (did the targeted audience do what the message asked?). Informational and behavioral objectives are fairly easy to set and evaluate; motivational objectives, however, are harder because the evaluation must assess cognitive, affective, and intended behavioral dimensions. Good evaluation will employ a triangulated research approach – it will use multiple research methods to evaluate each of the three objective types during and after the project.
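To make these distinctions concrete, the sketch below (in Python, purely illustrative – the schema, field names, and threshold values are assumptions, not a standard) shows one way a project's objectives might be recorded so that each is actionable and measurable.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """One measurable public relations objective (illustrative schema)."""
    kind: str        # "informational", "motivational", or "behavioral"
    statement: str   # what the project commits to achieving
    metric: str      # how achievement will be measured
    target: float    # the measurable threshold for "met"

# Hypothetical objectives for a benefits-enrollment campaign
objectives = [
    Objective("informational", "Employees recall the enrollment deadline",
              "share of surveyed employees recalling the date", 0.70),
    Objective("motivational", "Employees intend to review their benefits",
              "share reporting intent to enroll on a post-message survey", 0.50),
    Objective("behavioral", "Employees actually enroll",
              "enrollment rate from HR records", 0.40),
]

for o in objectives:
    print(f"{o.kind}: {o.statement} -> target {o.target:.0%} ({o.metric})")
```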
Evaluation Phases
All evaluation is phase-oriented. The first phase develops the project and its evaluation. This pre-project phase sets or establishes the benchmarks against which the project will be evaluated at selected times. Benchmarking is a form of evaluation in and of itself; it provides the project with a current knowledge base against which to plan and helps to establish realistic objectives. The benchmark phase also provides information about the competition, identifies potential problems, and may help to preplan contingency strategies for cases where objectives are not being met.
The second phase occurs once the project has begun and, based on periodic evaluation, refines the strategy and tactics employed to meet specified objectives (or may result in the altering of the objectives themselves).
The third phase occurs post-project as a final evaluation and establishes whether the objectives have been met and the goal achieved. Final evaluation reviews the entire project, from benchmarking to final outcome, and provides an evaluation of strategy and tactics as well as a cost–benefit analysis.
Evaluation Methods
Evaluation methods employed in a public relations project run the gamut of possible research methods. Some evaluation simply requires counting whether a target audience – possibly an intervening audience, such as editors or reporters – receives, evaluates, and then forwards the outputs to the intended public. Other evaluation may require outtake analyses: were the messages transmitted to targeted publics with the intended results? That is, was the tone of the actual article or broadcast what was intended? This goes beyond a count of simple pick-up; it looks at how the message was evaluated by a third party whose endorsement may add to or detract from its intended effect.
Similar research methods are employed across the three phases of evaluation. They are generally classified as either formal (scientific and quantitative) or informal (humanistic or qualitative). During the developmental phase, however, a third methodology is typically added – historical or secondary research. Throughout the public relations project different methods are employed and compared against one another to ensure that the evaluation is providing decision-makers with reliable and valid information.
Reliability And Validity
Evaluation methods differ in terms of both the reliability and the validity of the "data" collected and analyzed. The terms "qualitative" and "humanistic" are applied to data gathered with the intent of deeply understanding specific individuals or cases, but not meant to be extended to a larger population or public. The difference lies in use. Quantitative data establish norms (or parameters) against which groups can be compared, but in establishing a norm any individual differences are lost.
Quantitative methods have established ways of testing for reliability of response or observation. As such, they can tell the evaluator, within a certain degree of confidence, that the responses will be similar among other members of the public from which they were taken. Once reliability has been established, validity of response can be assessed – in terms of both logical and statistical analyses. Hence, an advantage of the quantitative method is that reliability and validity can be judged and extrapolation to larger groups becomes defensible.
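As one illustration of a quantitative reliability test, a widely used statistic for multi-item survey scales is Cronbach's alpha. The minimal sketch below computes it directly from its definition, assuming responses are stored as a respondents-by-items matrix of numeric ratings; the response data shown are invented.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of numeric ratings."""
    k = len(scores[0])  # number of items in the scale

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented responses: 5 respondents rating 4 attitude items on a 1-5 scale
responses = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # 0.70+ is a common rule of thumb
```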
Qualitative data, on the other hand, come from a much smaller sample and often from interviews with selected individuals. Qualitative data have a deeper meaning, are valid only for those persons being interviewed, and have real reliability problems – that is, they are valid for that group or individual, but may not reliably represent others from the public from which they come.
Evaluation Methodologies
Evaluation methods can be divided into four general classes – historical/secondary, qualitative, quantitative, and content analysis. Historical/secondary methods evaluate extant data. Qualitative methods collect data from individuals or small groups of individuals; their generalizability is limited but they are valid for those individuals, and the data obtained are typically based on what was said or how it was interpreted. Quantitative methods generally seek to gather information that can be reduced to numeric evaluation and, in the case of survey or poll methods, may be generalized to larger groups. Content analysis is a combination of qualitative and quantitative methods and examines the actual messages that are employed in public relations projects or campaigns.
Historical/Secondary Methods
Almost all projects will have some historical context from which to obtain the data required to establish where the project stands before it actually begins. This information may come from association sources, previous research, annual reports, and news reporting on similar industries, products, or persons. In many cases other departments may have the data required to establish a starting point. The Internet has made the gathering of historical and secondary data, as well as access to documentation, much easier than before. The gathering and evaluation of extant information often points to gaps in the project knowledge base – places where additional data are required to gain the "big picture" of the project and its environment. Where data are missing, contemporary information must be obtained, primarily through qualitative and quantitative methods.
Qualitative Methods
Four qualitative methods found in public relations evaluation are the in-depth interview, the focus group, systematic observation/participant observation, and the case study. In-depth interviews are the most controlled of the qualitative methods and are often used when trying to obtain data from opinion leaders, innovators, or people who are held in high esteem through their contact with target audiences. Focus groups, or what have been called "controlled group discussions," allow a degree of control over questions while letting participants qualify their ideas, agree or disagree with others, and "tag on" to current threads of conversation; they provide the researcher with invaluable insight as to why something may or may not work. Observation – whether simple systematic "environmental scanning" or a planned participant observation study – provides information about the real-world activities of people. Observation is often overlooked in planned evaluation, but it is a method that provides additional insight into project management.
The case study is an in-depth look at previous projects or campaigns from a historical perspective and is found in three different forms. The linear approach examines the case from beginning to end, with a focus on the particular elements employed in the project – basic research undertaken, project objectives, project programming, and project evaluation – as a static analysis of the case under study. Process-oriented case studies take into consideration the feedback process associated with the case, with evaluation first appearing at the second of four phases (fact-finding/problem analysis, strategic planning, action, and assessment). The grounded case study takes a management-by-objective (MBO) approach and includes analysis of the project’s financial impact and its impact on the business bottom line. Traditionally, linear and process-oriented case studies have focused on the communication process while the grounded case study has looked more at business strategy. All provide essential information for the planning of a public relations project or campaign.
Quantitative Methods
Quantitative methods can be further divided into scientific and social scientific camps. Most public relations evaluation takes a social science approach and focuses on survey and poll methodology in gathering data on small samples of larger populations or publics.
Survey or poll methodology seeks to understand the attitudes or behavioral intentions (norms) of target audiences. Polls are shorter, more behavior-oriented collections of questions that seek to take snapshots of the target. Surveys are much longer and take an in-depth look at the target audience or public. Both collect samples (representative or nonrepresentative members of the population or universe being studied), most commonly face to face, by telephone, by mail, or on the Internet. Sample selection can be conducted as "probability" or convenience ("nonprobability") sampling. Probability sampling occurs when all members of the population have an equal chance of being selected; convenience sampling occurs when only those present in a given environment are selected to participate in data gathering (e.g., a random-intercept or mall survey).
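The distinction between the two sampling approaches can be shown in a few lines. This is a minimal sketch, assuming a simple list stands in for the sampling frame; real designs (stratification, weighting, intercept protocols) are more involved.

```python
import random

population = [f"member_{i}" for i in range(10_000)]  # stand-in sampling frame

# Probability sample: every member of the frame has an equal chance of selection
probability_sample = random.sample(population, k=400)

# Convenience sample: only those "present" are eligible, e.g., the first 400
# people encountered at one location (simulated here as a simple slice)
convenience_sample = population[:400]

print(len(probability_sample), len(convenience_sample))
```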
Scientific approaches are more experimental: variables of interest ("independent variables," such as messages or channels) are systematically varied and their impact on desired outcome variables ("dependent variables," such as purchase intent, relationship, or reputation) is measured. Very sophisticated projects may even simulate, under differing conditions, the projected impact of the project on the outcome(s) of interest.
Content Analysis
Content analysis spans the qualitative–quantitative gulf. Since the method examines messages, it may be considered qualitative, and as such it may evaluate message content via rhetorical analysis, thematic structure, purpose, and so forth. However, content analysis also allows a quantitative evaluation of the message, such as the number of words, the basic tone of the message, the number of times a certain word or phrase is found, readability indices (e.g., the Flesch Reading Ease or Flesch–Kincaid Grade Level indices), or the type–token ratio. Content analysis has been computerized for faster analysis, with the Internet allowing for almost real-time message acquisition and analysis.
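The quantitative side of content analysis is easy to illustrate. The sketch below computes two of the measures named above – the Flesch Reading Ease score (206.835 − 1.015 × words/sentences − 84.6 × syllables/words) and the type–token ratio. It uses a crude vowel-group heuristic for syllable counting, so the readability score is approximate, and the sample message is invented.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Syllables are estimated by counting vowel groups, so the score is approximate."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

def type_token_ratio(text):
    """Unique words (types) divided by total words (tokens)."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    return len(set(tokens)) / len(tokens)

msg = "Our new plan saves you money. Enroll today and keep more of what you earn."
print(f"Flesch Reading Ease: {flesch_reading_ease(msg):.1f}")
print(f"Type-token ratio: {type_token_ratio(msg):.2f}")
```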
Establishing Metrics
Prior to actually beginning data acquisition, evaluation metrics need to be established. A metric is a way to provide both focused and continuous project evaluation. Metrics run the gamut from dashboards to scorecards. Metrics are management tools that take the results of data gathered through qualitative and quantitative methods and relate them to project objectives, specific outtakes or outcomes, or other indicators that are monitored and evaluated on a regular basis.
Some metrics, such as balanced scorecards, are evaluated on a regular basis, but not continuously. Dashboards, on the other hand, are set up to monitor data as they come in and provide day-by-day, even minute-by-minute evaluations. What is common to each, however, is the data gathered, some of which may be compared to pre-project benchmarks or to benchmarks established throughout the project. Scorecards typically examine specific indicators against other indicators – either competitor-based or project-based – and are typically presented as numeric data. Dashboards are often more graphical and present data through analog displays, such as clocks, fever graphs, or other chart-like presentations.
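At its core, a scorecard-style check reduces to comparing current indicator values against benchmarks. The sketch below is a hypothetical illustration; the indicator names and values are invented, not a standard metric set.

```python
# Hypothetical scorecard: current indicator values checked against benchmarks
benchmarks = {"message_recall": 0.55, "favorable_tone_share": 0.60, "enrollments": 1200}
current    = {"message_recall": 0.61, "favorable_tone_share": 0.52, "enrollments": 1340}

for indicator, target in benchmarks.items():
    value = current[indicator]
    status = "on target" if value >= target else "OFF TARGET"
    print(f"{indicator}: {value} vs benchmark {target} -> {status}")
```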
Evaluation metrics should be established during the pre-project, development phase.
Developmental Phase Methodology
Developmental phase evaluation focuses primarily on gathering data against which to compare project results over the life of the project. As such, it often begins with a set of methods that are neither qualitative nor quantitative: development phase research is often rooted in historical or secondary research. It may, of course, collect new data to update what has been obtained from historical and secondary sources or, because of a lack of historical or secondary research, require that benchmark data be gathered as a pre-project requirement through qualitative or quantitative methods.
In preparing for a project or campaign, certain information should be readily available. This information may be collected, culled, and interpreted to establish an initial baseline or benchmark against which periodic checks can be conducted at later phases. An all too common failing of past public relations evaluation has been the lack of a baseline – often justified by the assumption that data at this stage are too expensive to gather.
During the developmental phase in-depth interviews, focus groups, observation, and previous case studies provide the background against which to compare the public relations activities during the campaign. Selected interviews and focus groups may be employed to gather an in-depth understanding of what strategies and tactics will produce the desired results and when and where secondary benchmarks (employed during the refinement phase) should be gathered. The developmental phase will set the actionable and measurable objectives to be met during the campaign.
Quantitative methods seek to establish the expected attitudinal and behavioral norms that serve as target audience/public benchmarks. Survey methodology is often applied where historical/secondary analysis fails to produce expected attitudinal or behavioral norms. In some cases the projected campaign will be submitted to the experimental method, with a simulated campaign run against differing conditions (market or competition, for instance).
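When survey data are used to set a benchmark norm, it helps to report the sampling error around it. The sketch below computes the familiar margin of error for a sample proportion, z·√(p(1−p)/n), assuming simple random sampling; the awareness figure and sample size are invented.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (simple random sampling assumed)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical benchmark: 42% awareness among n = 400 respondents
p, n = 0.42, 400
me = margin_of_error(p, n)
print(f"Benchmark awareness: {p:.0%} +/- {me:.1%}")  # roughly +/- 4.8 points
```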
Content analyses are often undertaken to better understand how similar messaging has been interpreted by opinion leaders or how focus groups have reacted to proposed messages. This may take the form of pre-project message testing, concept testing, and so forth.
Refinement Phase Methodology
During the refinement phase evaluations are undertaken to see if the project is on target and on schedule. This phase employs survey or poll, in-depth interview, focus group, and content analysis methods, often triangulated to weigh the normative data required for the larger population against the deeper, "richer" data from interviews and focus groups. Content analyses provide indicators that key messaging is getting out and that opinion leaders, editors, or reporters are on message. Observation continues to be an important informal methodology – for example, observing how many times during the day, and how, people communicate on message ("word of mouth").
Data gathered during the refinement phase are evaluated against the set objectives. This evaluation allows for alterations in strategy and tactics once the project has begun. As with most planned events, once the project is kicked off many things may alter the way intended messages are interpreted: the competition may engage in counter-messaging, the target audience may simply not be getting the information, or the audience may receive it but not be motivated to act. Finally, refinement phase evaluation seeks to make better predictions about actual behavior – which, in most cases, is what drives the return on investment and the project goals.
Final Phase Methodology
Final phase methodology is typically divided into three areas. First, was the goal met? Did the project move the needle? Did it meet or surpass expectations, and how did it contribute to the client or company bottom line? Second, each objective is examined to evaluate both the strategy and the tactics employed. Were objectives met? Were they on target? Were they on schedule? All the data gathered during the development and refinement phases are evaluated against the final outcome(s). In some ways this is a meta-evaluation of the campaign, and it may yield an internal case study that can serve as baseline or benchmark data for future projects. Finally, a cost–benefit evaluation is undertaken. Here the evaluation focuses on whether the project was cost-effective and whether the goal(s) and objective(s) were on target: was the project worth its cost, or were developmental estimates off, such that the project could have come in for less? These are the hard questions that are always asked at the final evaluation phase.
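The cost side of this final check often reduces to simple return-on-investment arithmetic. The sketch below shows one common framing, with invented figures; in practice, attributing a dollar value to a public relations project is the hard part.

```python
# Invented figures for a cost-benefit check at the final evaluation phase
project_cost     = 150_000.0  # total spend, development through final phase
attributed_value = 240_000.0  # value attributed to the project (e.g., added revenue)

net_benefit = attributed_value - project_cost
roi = net_benefit / project_cost
print(f"Net benefit: ${net_benefit:,.0f}; ROI: {roi:.0%}")  # $90,000; 60%
```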
Evaluation is an important factor in any public relations project. It should be planned across three phases and take into account both the project goal(s) and the objective(s). Objectives must be actionable and measurable and the methods selected to gather data should be triangulated to gain the best insight into project effectiveness.