Analysis of Rater Reliability Using a Faculty Developed, Revised OSCE Evaluation Instrument
Nurse educators need valid and reliable instruments to evaluate student performance during summative clinical experiences and Objective Structured Clinical Examinations (OSCEs). The purpose of this retrospective, secondary data analysis was to determine whether there were differences between raters using a faculty-developed, revised OSCE instrument designed to evaluate student performance in a summative course OSCE. Data were extracted from an existing database, collected during a prior study comparing OSCE instruments at a large, urban college of nursing. Data were examined for a sample of 44 subjects whose OSCE performances were rated by faculty using the revised OSCE instrument. Differences between course faculty and non-course faculty were examined for four categories of the OSCE instrument (patient safety, assessment, planning, and medication administration) to determine whether the revised OSCE instrument demonstrated inter-rater reliability. In this study, there were significant differences between rater groups in ratings of the subjects' performances, and there was no agreement between rater groups. Probable reasons for these differences were explored, including grade inflation, failure to fail, and rater error. Nurse educators need to decide whether to continue using faculty-developed instruments or to select evaluation instruments that have been tested and shown to be reliable and valid.
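Inter-rater reliability between two rater groups such as the course and non-course faculty described above is commonly quantified with a chance-corrected agreement statistic like Cohen's kappa. The sketch below is purely illustrative: the pass/fail ratings are hypothetical and do not come from this study (which, as noted, found no agreement between rater groups); it simply shows how such a statistic is computed.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same subjects on
    the same categorical scale. Returns 1.0 for perfect agreement,
    ~0 for chance-level agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement by chance, from each rater's marginal distribution
    ca, cb = Counter(rater_a), Counter(rater_b)
    categories = set(ca) | set(cb)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in categories)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Hypothetical pass/fail ratings of 10 OSCE performances by two rater groups
course_faculty    = ["pass", "pass", "fail", "pass", "pass",
                     "fail", "pass", "pass", "pass", "fail"]
noncourse_faculty = ["pass", "fail", "fail", "pass", "pass",
                     "pass", "pass", "fail", "pass", "fail"]

print(round(cohen_kappa(course_faculty, noncourse_faculty), 3))  # -> 0.348
```

Values below roughly 0.4 are conventionally read as poor-to-fair agreement, so a result like the one above would raise the same reliability concerns the study reports.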