Evaluation Activity in Psychology Training Clinics: National Survey Findings
A survey of American psychology training clinics was undertaken to determine the scope, nature, impact, and problems of evaluation research conducted in these settings. Survey questions explored evaluation of both clinical training and client treatment. Seventy-four usable responses (56%) were received, of which 68% reported current quantitative evaluation of client treatment and 61% reported current quantitative evaluation of clinical training. A wide variety of specific outcome measures were used with varying frequency. Most evaluation activities were supported exclusively by internal financing, with the clinic director the most likely collector of evaluation data and the clinic staff the most likely recipients of evaluation findings. Major obstacles to evaluation included resource constraints, staff resistance, pragmatic difficulties, and technological limitations. Forty-eight percent of the directors of clinics conducting treatment evaluation believed evaluation had a significant influence on policy, whereas 42% of those conducting training evaluation reported such influence. Several correlates of policy impact were also identified. Further plans to conduct evaluation were widespread, though not universal. The need for better measures, faculty resistance to evaluation, ways of improving policy impact, and the importance of increased communication across training sites are discussed. © 1985 American Psychological Association.
Publication Title
Professional Psychology: Research and Practice
Stevenson, John F., and John C. Norcross. "Evaluation Activity in Psychology Training Clinics: National Survey Findings." Professional Psychology: Research and Practice 16, no. 1 (1985): 29-41. doi: 10.1037/0735-7028.16.1.29.