pubmed-article:20838702 | pubmed:abstractText | Over the years, performance assessment (PA) has been widely employed in medical education, with the Objective Structured Clinical Examination (OSCE) being an excellent example. Typically, PA involves multiple raters, and consistency among the scores they provide is therefore a precondition for the accuracy of the assessment. Inter-rater agreement and inter-rater reliability are two indices used to evaluate such scoring consistency. This research primarily examined the relationship between inter-rater agreement and inter-rater reliability. | lld:pubmed |
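A minimal sketch (not from the article) of why the two indices can diverge: the hypothetical scores below have rater B tracking rater A's rank ordering exactly while scoring 2 points harsher on a 10-point OSCE station, so reliability (here, Pearson's r, one common consistency index) is perfect while exact agreement is zero. The data, the choice of Pearson's r, and the exact-agreement rate are all illustrative assumptions, not the article's method.

```python
import numpy as np

# Hypothetical scores for six examinees on a 10-point OSCE station.
rater_a = np.array([9, 8, 7, 6, 5, 4], dtype=float)
rater_b = rater_a - 2.0  # same ordering, constant severity offset

# Inter-rater reliability: consistency of examinees' relative standing.
# Pearson's r is 1.0 here because the rank ordering is fully preserved.
reliability = np.corrcoef(rater_a, rater_b)[0, 1]

# Inter-rater agreement: closeness of the absolute scores.
# The exact-agreement rate is 0.0 because no pair of scores matches.
agreement = np.mean(rater_a == rater_b)

print(f"reliability (Pearson r): {reliability:.2f}")  # 1.00
print(f"exact agreement rate:    {agreement:.2f}")    # 0.00
```

The systematic offset is the crux: reliability indices are insensitive to a rater's overall severity, while agreement indices penalize it, which is why high reliability alone does not guarantee that raters assign interchangeable scores.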