The Role Of Evaluation In Prevention Of College Student Drinking Problems

III. Recommendations for Program Managers and College Administrators

Whether or not a project hires a technical consultant for evaluation, there are several key things program personnel or supervising administrators can do to enhance the utility of any evaluation and thus the effectiveness of the intervention itself. These include clarifying the program objectives, describing how the intervention is supposed to work, facilitating the availability of data that can measure program impact, and providing clear guidance to the evaluator.

A. What is the program supposed to do?

Ideally, before any program is adopted or created, there will be clear consensus on its objectives. This is rarely the case, however, and it is common practice to state the objective in very broad language (e.g., “to reduce student drinking”). It is fine for an intervention to have multiple objectives. A policy banning alcohol entirely from a campus may be expected to reduce both alcohol consumption and the problems that follow from it, but it is important to state these as separate outcomes, because the intervention may work at one level and not the other (e.g., a ban may decrease problems on campus but increase them somewhere else). If an intervention is aimed at changing alcohol consumption, it is important to state how or where that change will occur. An intervention may be aimed at reducing the prevalence of “binge” drinking but not the frequency of drinking; another may be aimed at reducing a specific problem related to drinking (e.g., fights, assaults) but not drinking per se. Whatever the objectives are, they should be specified in as much detail as possible.

B. How is the program supposed to work?

How will a new policy, a new program, or the activities of prevention staff lead to the objective(s) described above? What is needed is a picture of the “chain of events” that leads from a policy being adopted, or a publicity campaign being launched, through a series of intermediate effects (e.g., publicity for the policy, informational meetings, enforcement campaigns) to the desired end result. This chain of events is sometimes referred to as a “logic model.” Whatever it is called, it guides an evaluation by articulating the sequence that connects “inputs” with “outputs.”
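
To make the idea concrete, a logic model can be written down as an ordered list of steps, each paired with the evidence that would show whether it occurred. The sketch below is purely illustrative; the policy, intermediate effects, and measures are hypothetical placeholders rather than recommendations.

    # A hypothetical logic model for a campus alcohol policy, expressed as an
    # ordered chain of steps.  Each link names an intermediate effect and the
    # data that would show whether that effect actually occurred.
    logic_model = [
        {"step": "Policy adopted",            "measure": "date of adoption, text of the policy"},
        {"step": "Policy publicized",         "measure": "ads run, posters placed, meetings held"},
        {"step": "Students aware of policy",  "measure": "survey item on awareness of the policy"},
        {"step": "Policy enforced",           "measure": "warnings and citations logged"},
        {"step": "Drinking behavior changes", "measure": "self-reported frequency and quantity"},
        {"step": "Alcohol-related problems decline", "measure": "incident reports, emergency visits"},
    ]

    # An evaluation that follows the logic model checks every link, not just the
    # final outcome, so a shortfall can be traced to the link where it occurred.
    for i, link in enumerate(logic_model, start=1):
        print(f"{i}. {link['step']:<40} evidence: {link['measure']}")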

Given sufficient resources, a good evaluation will then address the question of whether each of the intermediate effects was, in fact, achieved. As a simple example, many educational programs assume that a person’s drinking will be reduced through awareness of its negative effects. An evaluation could then measure people’s awareness of alcohol’s negative consequences and see whether awareness actually changed as a result of the educational message. It has often been found, for instance, that the heaviest drinkers are also the ones most knowledgeable about alcohol’s effects, and discovering this can be quite illuminating.
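
As a hypothetical illustration of that last point, an evaluator with survey data in hand could compare awareness scores across drinking levels; the records, field names, and cutoff below are invented for the sketch.

    # Invented survey records: awareness of alcohol's negative effects (0-10)
    # and self-reported drinks per week.
    records = [
        {"awareness": 8, "drinks_per_week": 18},
        {"awareness": 7, "drinks_per_week": 15},
        {"awareness": 5, "drinks_per_week": 4},
        {"awareness": 6, "drinks_per_week": 2},
        {"awareness": 4, "drinks_per_week": 0},
    ]

    def mean(values):
        return sum(values) / len(values)

    # If the heaviest drinkers already score highest on awareness, an
    # education-only message is unlikely to reduce their drinking by itself.
    heavy = [r["awareness"] for r in records if r["drinks_per_week"] >= 10]
    light = [r["awareness"] for r in records if r["drinks_per_week"] < 10]
    print("mean awareness, heavier drinkers:", mean(heavy))
    print("mean awareness, lighter drinkers:", mean(light))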

It is important to see that the usefulness of an evaluation depends in large part on its following the logic model. If the evaluation measured only the final outcome and the intervention fell short of its aims, the evaluation could not answer the fundamental question of why: whether the concept behind the intervention was wrong, the implementation was flawed, or one link in the intervention sequence broke down. From a manager’s viewpoint, these are crucial distinctions, because the answers suggest different directions to take in the future.

C. What data are available?

If our general goal is to improve prevention interventions through continuous monitoring of their impact, it behooves us to look for ways to make evaluation data available on an ongoing basis and to develop something closer to a management information system than a one-shot evaluation project. Even then, the value of such regular data collection is enhanced to the degree that similar data are collected at several campuses, since this permits comparisons across sites as different interventions are implemented. Disentangling trends, differing student populations, and different types of campuses would still not be trivial, but techniques for doing so have been developed. Those who work in highway safety, for example, are greatly assisted by standardized data collection at the scene of traffic crashes and fatalities.
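
A minimal sketch of the kind of comparison that such standardized, ongoing data collection would allow is given below; the campuses, incident counts, and years are invented, and the simple before-and-after contrast stands in for the more careful analytic techniques mentioned above.

    # Hypothetical yearly counts of alcohol-related incidents per 1,000 students,
    # collected the same way at each campus.  Suppose Campus A adopts a new
    # intervention after 2003 and Campuses B and C do not.
    incidents = {
        "Campus A": {2002: 41, 2003: 44, 2004: 31, 2005: 29},
        "Campus B": {2002: 38, 2003: 40, 2004: 39, 2005: 41},
        "Campus C": {2002: 45, 2003: 43, 2004: 44, 2005: 42},
    }

    def change(series, before=(2002, 2003), after=(2004, 2005)):
        """Average rate in the years after the intervention minus the average before."""
        pre = sum(series[year] for year in before) / len(before)
        post = sum(series[year] for year in after) / len(after)
        return post - pre

    # Contrasting the intervention campus's change with the comparison campuses
    # helps separate a program effect from a general trend affecting all campuses.
    for campus, series in incidents.items():
        print(campus, "change in incident rate:", round(change(series), 1))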

To some degree, the U.S. Department of Education has encouraged colleges and universities to adopt the student survey questionnaire developed by the CORE Institute (through its Fund for the Improvement of Postsecondary Education (FIPSE) and Safe and Drug-Free Schools and Communities grants). The CORE Institute makes its questionnaire, coding, data file construction, and technical assistance available to any school for a nominal fee. If the items in the questionnaire are relevant to a specific evaluation, this is certainly an option for any college or university to consider (though conducting the survey itself remains a labor-intensive task).

But as noted above, a student survey may not always be the preferred source of evaluation data, and it is still expensive in labor, if not in dollars. A desirable alternative or complement would be to institute routine data collection and compilation at those points where student drinking and its negative consequences come into contact with university or community agents. Again, the nature of those data should reflect the objectives and mechanisms of the programs and policies being adopted, but a short list of examples can be given (a minimal shared record layout is sketched after the list):

Campus and/or community police: alcohol involvement in each instance where police are called (i.e., reports to police) or where officers initiate contact with individuals (should not be limited to formal arrests)
Urgent or emergency care: alcohol involvement in injuries (preferably determined via breathalyzer)
Health insurance data: costs associated with medical care when alcohol is involved
Counseling services: alcohol use history
Residence facilities: records of alcohol involvement in complaints, property damage, and calls for police or emergency services
University discipline: records of alcohol involvement in behavior brought to disciplinary hearings
Athletic departments: alcohol involvement in spectator injuries, complaints, or disciplinary actions
Greek student office: records of alcohol involvement in neighbor complaints, student injuries, contacts with police or fire departments (crowd control), or property damage
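
As a concrete illustration of what “routine data collection and compilation” might look like, the sketch below defines a minimal common record that several of the units listed above could share; the field names and example values are hypothetical, not a proposed standard.

    # A hypothetical minimal record that campus units (police, residence life,
    # emergency care, etc.) could complete in a common format, so that alcohol
    # involvement can be tracked consistently over time and compared across units.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class IncidentRecord:
        incident_date: date
        reporting_unit: str              # e.g., "campus police", "residence hall", "emergency care"
        incident_type: str               # e.g., "noise complaint", "injury", "property damage"
        alcohol_involved: Optional[bool] # None when involvement could not be determined
        how_determined: str              # e.g., "breathalyzer", "officer judgment", "self-report"
        bac: Optional[float] = None      # blood alcohol concentration, if actually measured

    # Example entry drawn from a residence hall report.
    example = IncidentRecord(
        incident_date=date(2005, 9, 17),
        reporting_unit="residence hall",
        incident_type="property damage",
        alcohol_involved=True,
        how_determined="officer judgment",
    )
    print(example)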

We should not minimize the difficulty of determining alcohol involvement when direct readings (e.g., via breathalyzer) are not available. Sometimes so-called “passive” breathalyzers can be used (these sample the air in front of a person’s mouth without requiring him or her to blow directly into an instrument), but even highway non-fatal crash data often depend on an officer’s judgment of alcohol involvement. Though these judgments are fallible and variable (and may even change in the context of a prevention intervention), there is still great advantage in having the data available over a long period of time so that relative changes in problem prevalence can be monitored.

D. Provide guidance to the evaluator

By now it should be clear that the success of an evaluation depends partly on its technical quality, but also on clear communication between program managers and evaluators. The objectives must be laid out, the logic model fully detailed, and, in the best of all worlds, useful data made available (which most often happens when upper management makes it a priority). Finally, it is the responsibility of the program manager to find an evaluator who can work within the constraints of the resources available for the task. While those resources are rarely able to cover the costs of a “gold standard” evaluation, many useful answers about program effectiveness can be provided to managers and administrators for less money.

Though early treatises on evaluation recommended that evaluators be “removed” from program planning and personnel to maximize “objectivity,” there is now greater appreciation of the ways in which that “distance” can cripple effective evaluation designs. Many large-scale evaluations are funded today in which the evaluators are, in fact, responsible for program implementation as well. Concerns about the validity of the results are more commonly addressed through the evaluation design, its measures, and the details of the analyses.

E. Concluding remarks

The concept of “process” evaluation is somewhat similar to our earlier discussion of an evaluation that includes measures of the intermediate “links” within the logic model. More traditionally, a process evaluation would confine itself to measuring the quality of program implementation (e.g., the number of people attending a training, the fidelity of the training to the curriculum). Clearly, these factors are important to understanding an intervention’s impact, again so that shortcomings in implementation can be distinguished from cases where implementation was of high quality but impact was still minimal.

On another point, although we have implicitly described evaluations using quantitative methods, there are many occasions on which qualitative methods (including semi-structured interviews, observation, and participant observation) may be superior. Qualitative approaches can be especially valuable where program goals or mechanisms need to be clarified (e.g., via interviews with program staff or students exposed to the program). They are often useful, too, in following up more structured evaluation results and in “troubleshooting” places where the intervention may not have met expectations.

Finally, some consideration should be given to the prospect of “unintended consequences.” With any intervention, there is some likelihood that changes will occur that are unrelated to the program objectives. In the context of college student drinking, for instance, many critics of on-campus drinking restrictions raise the concern that such restrictions will “drive” students to do their drinking off campus, with the consequence that some problems (e.g., DUI) would be exacerbated. Especially where such outcomes are part of public debate, the evaluation should be designed to look for them.

 
