Criteria for the evaluation of in-service education
The purpose of this study was to organize a set of criteria appropriate for use in the evaluation of in-service teacher education programs. After a review of research and literature was conducted, the advocated criteria were organized according to an adaptation of a basic model of evaluation designed by Stufflebeam, CIPP (Context - Input - Process - Product). Organized in three categories of 1) Conceptual Rationale of Evaluation, 2) Administration of Evaluation, and 3) Procedural Methodology of Evaluation, the components were criticized by experts, revised, and listed in questionnaires as fifty-one individual items proposed for use as criteria in evaluating in-service teacher education.

During the summer of 1971, a random sample of sixty Indiana public school administrators and 275 public school teachers judged the criteria on two measures: 1) contribution to the improvement of instruction, and 2) feasibility in actual practice. A five-point response scale was used, ranging from strongest to weakest. Items judged to be both a contribution to the improvement of instruction and feasible in practice by more than fifty percent of administrators and teachers were acceptable; responses in the two strongest categories of the five-point scale had to total fifty percent in order to meet this standard. Thirty-five of the original fifty-one criteria were judged to be both a contribution to the improvement of instruction and feasible in practice.
Items which were accepted in the first section, Conceptual Rationale, concerned: the need for evaluating in-service; a focus on constructive behavior; establishing the relationship between evaluation and improved instruction; the use of cost/benefit analysis; a systematic collection of information; assessment of specific practices; the use of evaluative information in decision-making; developing self-evaluation competency; and, evaluation prior to, during, and following in-service activities.

Acceptable items involving Administration of an evaluation, the second section, included: participant involvement during the evaluation process; developing a commitment to organizational purposes; relationships between evaluation activities, the organization, and purposes of in-service activities; quarterly progress reporting; implementing the evaluation findings; establishing a flow chart of evaluation activities; and, annual revision of the evaluation plan.

The acceptable items for the third section, concerning Procedural Methodology of evaluation, were: reflection of local needs in the design; projection of anticipated decisions; specification of information needed; the use of written, behavioral program objectives; the use of needs assessment as the basis for the development of objectives; data collection prior to the evaluation; communication of the purposes and methods of data collection; training for data collectors; analysis of the data; identification of strengths and weaknesses with specific personnel; limitation of the conclusions; stating implications from the evaluation; identification of information users; effective reporting; overall evaluation effectiveness; and, producing the final report.

Twelve of the original fifty-one criteria were judged by administrators and/or teachers to be a contribution, but as having uncertain feasibility in actual practice.
The twelve items related to: the use of programmed scheduling of evaluation activities; the use of data retrieval systems; generation of alternatives for decision situations; the use of both experimental and descriptive research designs; behavior modification of personnel involved; intergroup relations training; the use of outside consultants; and, setting aside specific facilities and materials for the evaluation. Both administrators and teachers were uncertain about the feasibility of: the use of systems analysis; staff involvement in policy-making; the need for weekly face-to-face contact of all participants; and, required professionally-conducted encounter sessions.

Six items were judged uncertain on both contribution and feasibility. Administrators were uncertain on both counts about the use of outside consultants, the use of reference research and literature, and disseminating evaluation results outside the local system. Teachers were uncertain on both counts about inhibiting destructive behavior through an evaluation and the use of programmed scheduling. Both administrators and teachers were uncertain about whether assigning ten percent of the in-service budget for evaluation would be contributory or feasible. No items were judged to be totally unacceptable, that is, none received fifty percent of responses in the weakest two categories of the scale on both counts. Administrators and teachers agreed in their judgments on twenty-two (seventy-nine percent) of the twenty-eight items which they both judged.

The study concluded that appropriate criteria were available and recommended for use in the evaluation of in-service teacher education programs.