The Role Of Evaluation In Prevention Of College Student Drinking Problems

I. That Sounds Like a Lot of Trouble… Why Do It?

A. Increase likelihood of effectiveness, both locally and in general

The first, if obvious, answer is that successful evaluation increases the likelihood of program effectiveness, both in a specific setting and more generally. Some have argued that merely the exercise of defining “effectiveness” would improve many interventions that are now too diffuse to be effective. Beyond producing specific results on one or another prevention strategy, access to multiple evaluations also improves the general prevention approach, as accumulating instances of effectiveness and ineffectiveness build evidence about program generalizability and sensitivity to the wider context of application.

B. Cycle of improvement blurs distinction between outcome evaluation and formative research

Ideally, evaluation should be a continuous “feedback” mechanism for program managers. Too often, programs that have shown promise under highly controlled experimental conditions fail to achieve the same effect when implemented in the “real world.” This loss of impact may stem from the absence of the halo effect that arose in the experimental setting, from the loss of a dynamic leader who drove the original implementation, or from the difficulty inherent in mounting a scaled-up version of the program. Even modest evaluation data and analysis can show when and where a program is failing to meet its objectives and focus attention on ways to reshape the intervention. Thus, evaluation evolves from a “one-shot” effort into a continuous cycle of improvement.

C. Encourages strategy over activity

Unfortunately, many prevention programs are adopted out of a desire to do “something” about a problem. If the design or selection of an intervention is closely tied to an evaluation design, there is a greater chance that explicit prevention strategies will be articulated and discussed, rather than the prevention activities per se. This provides a good counterbalance to the impulse to choose a program based on superficial qualities (e.g., slick brochures or “fun” activities).

D. Counterbalance to adopting “high-visibility” or popular programs, or to the desire to blindly “do something”

The absence of good empirical data on program effectiveness leaves program managers and administrators with little basis for selecting an intervention other than what some other campus might be doing. Once a program reaches a critical mass of adoption, there is even greater temptation to assume that it must be effective.

E. Discourages complacency

A recurring theme in discussions of evaluation is that it can sharpen the focus on a program’s or intervention’s strategy, goals, and objectives. Novice evaluators are often surprised to discover how often programs, even long-established ones, have poorly articulated objectives and are vague about how the desired goals are supposed to be achieved. In addition, even well-articulated programs may not have been put to the “test” of evaluation. A thorough evaluation design will address not only the expected outcomes of the intervention but also each link in the chain of intermediate effects hypothesized to influence those endpoints.

F. Maximizes resources (almost an ethical responsibility to make best use of funds)

In an era when public funding of any kind is increasingly scarce, it behooves anyone in the field of prevention to demonstrate the effectiveness of their interventions. Going further, a growing number of funding sources expect to see a benefit-cost analysis that puts a dollar value on program activities.
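To make the arithmetic behind such an analysis concrete, the short sketch below computes a simple benefit-cost ratio. It is purely illustrative: the cost and benefit categories and every dollar figure are hypothetical and are not drawn from any actual program or from this document.

# Illustrative benefit-cost sketch using hypothetical figures only.
# A program is generally considered cost-beneficial when the ratio of
# monetized benefits to total costs exceeds 1.0.

program_costs = {
    "staff_time": 42_000,   # hypothetical annual staffing cost
    "materials": 6_500,     # hypothetical brochures and training materials
    "evaluation": 9_000,    # hypothetical cost of the evaluation itself
}

monetized_benefits = {
    "avoided_emergency_visits": 31_000,   # hypothetical savings
    "avoided_property_damage": 12_500,    # hypothetical savings
    "retained_tuition": 28_000,           # hypothetical savings from fewer dropouts
}

total_cost = sum(program_costs.values())
total_benefit = sum(monetized_benefits.values())
ratio = total_benefit / total_cost

print(f"Total cost:    ${total_cost:,}")
print(f"Total benefit: ${total_benefit:,}")
print(f"Benefit-cost ratio: {ratio:.2f}")  # a value above 1.0 suggests benefits exceed costs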

G. Enhances overall program credibility

Many have complained that prevention specialists are given few resources and have low visibility within the institutions they are expected to serve. This marginalization of prevention will only worsen unless and until those specialists can provide evidence of their benefit to the organization. Solid evaluation enhances not only the credibility of a specific intervention but also the credibility of prevention efforts at large. More resources will be committed to prevention when it can be shown to work.

 
