Statistical power and effect size in the field of health psychology

Jason Edward Maddock, University of Rhode Island


Statistical significance testing is one of the most pervasive techniques in psychology for examining treatment effects. Because of the ubiquitous use of these procedures, misuses have plagued the field of psychology (Harlow, 1997). In the first chapter, the significance test controversy is discussed in detail, and the general disregard for statistical power in most psychological research is identified as a major contributor to this controversy (Cohen, 1988). The chapter ends by discussing four methods for improving significance testing: the use of confidence intervals, testing for probable upper bounds, meta-analysis, and power analysis. Each of these methods is explored because it offers a simple way to improve significance testing without great resistance.

In chapter two, power was calculated for 8,266 statistical tests in 187 journal articles published in the 1997 volumes of Health Psychology, Addictive Behaviors, and the Journal of Studies on Alcohol. Power to detect small, medium, and large effects was .34, .74, and .92 for Health Psychology; .34, .75, and .90 for Addictive Behaviors; and .41, .81, and .92 for the Journal of Studies on Alcohol. The mean power estimates of .36, .77, and .91 provide a good estimate for the field of health psychology as a whole. Comparison of these results with more than 30 power studies in other fields indicates that health psychology journals rank among the highest in power. These results are encouraging for the field, although studies examining small effects remain badly underpowered.

In chapter three, a meta-analysis of interventions to reduce college student drinking was conducted. Qualitative reviews of these interventions have produced conflicting results. Twenty-one studies met the criteria for inclusion in the meta-analysis. These criteria included the use of random assignment or statistical control for baseline differences.
Results indicated that, when examined as a group, the interventions significantly reduced drinking in college students. Cognitive-behavioral interventions (d = .53) produced significantly larger effects than traditional educational approaches (d = .17), indicating the superiority of this type of intervention. Power analysis of these articles revealed inadequate power, demonstrating the need for higher-powered studies to help resolve conflicting findings.
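The effect sizes reported above are Cohen's d values: the standardized mean difference between two independent groups. As a sketch (the group means, standard deviations, and sample sizes below are hypothetical illustrations, not data from the meta-analysis), d is the mean difference divided by the pooled standard deviation:

```python
from math import sqrt

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups using the pooled SD."""
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                     / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical drinks-per-week outcomes: control vs. intervention group.
d = cohens_d(12.0, 10.0, 4.0, 4.0, 30, 30)
print(f"d = {d:.2f}")  # → d = 0.50, a "medium" effect by Cohen's conventions
```

Because d divides by the pooled standard deviation, effects from studies with different outcome scales can be compared and aggregated, which is what makes it useful for meta-analysis.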
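The power concerns above follow Cohen's framework: power depends jointly on effect size, sample size, and the alpha level. A minimal sketch of the idea, using the normal approximation to a two-sided, two-sample t test and Cohen's conventional small, medium, and large effects (d = .2, .5, .8); the sample size of 64 per group is an illustrative assumption, not a figure from the dissertation:

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of effect size d,
    via the normal approximation with noncentrality d * sqrt(n / 2)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    delta = d * (n_per_group / 2) ** 0.5
    # Probability of rejecting in either tail under the alternative.
    return (1 - nd.cdf(z_crit - delta)) + nd.cdf(-z_crit - delta)

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power ~ {power_two_sample(d, 64):.2f}")
```

With 64 participants per group this sketch yields power of roughly .20, .81, and .99 for small, medium, and large effects, mirroring the pattern found in the journal survey: samples that comfortably detect medium and large effects are badly underpowered for small ones.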

Subject Area

Psychology, Experimental|Psychology, Psychometrics

Recommended Citation

Jason Edward Maddock, "Statistical power and effect size in the field of health psychology" (1999). Dissertations and Master's Theses (Campus Access). Paper AAI9945214.