How shall we judge ourselves? Training evaluation in clinical psychology programs

Document Type

Article

Date of Original Version

1-1-1984

Abstract

Examined the procedures, practices, and problems associated with clinical training evaluation. 62 directors of American Psychological Association (APA)-accredited clinical psychology doctoral programs completed a questionnaire assessing their use of informal and formal training evaluation procedures, the impact of these procedures, methods of data collection and dissemination, and obstacles to meaningful evaluation. Informal, qualitative evaluation measures (e.g., personal impressions, reputations) were used most frequently, and formal, quantitative comparison measures (e.g., pre-post comparisons) were employed least frequently. Supervisors' written evaluations, APA-accreditation reports, feedback from internship supervisors, and quantitative evaluation of supervisors by supervisees were perceived as having the greatest impact in determining the quality of clinical training. In over 75% of the programs, one person was responsible for the collection and dissemination of training evaluation data, typically to other faculty members. Inadequate evaluation methods and measures, time constraints, and lack of personnel were rated the most serious obstacles to successful evaluation. Clinical training evaluation appears to be in a preparadigmatic stage characterized by diversity, creativity, and informality. The classification of available measures and the establishment of a coordinated national program of training evaluation are discussed as possible correctives. © 1984 American Psychological Association.

Publication Title

Professional Psychology: Research and Practice

Volume

15

Issue

4
