Statements in which the resource exists.
Subject | Predicate | Object | Context
pubmed-article:20838702 | rdf:type | pubmed:Citation | lld:pubmed
pubmed-article:20838702 | lifeskim:mentions | umls-concept:C0031809 | lld:lifeskim
pubmed-article:20838702 | lifeskim:mentions | umls-concept:C0597198 | lld:lifeskim
pubmed-article:20838702 | lifeskim:mentions | umls-concept:C1707455 | lld:lifeskim
pubmed-article:20838702 | lifeskim:mentions | umls-concept:C0680240 | lld:lifeskim
pubmed-article:20838702 | lifeskim:mentions | umls-concept:C0870740 | lld:lifeskim
pubmed-article:20838702 | pubmed:issue | 8 | lld:pubmed
pubmed-article:20838702 | pubmed:dateCreated | 2010-9-14 | lld:pubmed
pubmed-article:20838702 | pubmed:abstractText | Over the years, performance assessment (PA) has been widely employed in medical education, with the Objective Structured Clinical Examination (OSCE) being an excellent example. Typically, performance assessment involves multiple raters, and consistency among the scores they provide is therefore a precondition for an accurate assessment. Inter-rater agreement and inter-rater reliability are two indices used to ensure such scoring consistency. This research primarily examined the relationship between inter-rater agreement and inter-rater reliability. | lld:pubmed
pubmed-article:20838702 | pubmed:language | eng | lld:pubmed
pubmed-article:20838702 | pubmed:journal | http://linkedlifedata.com/r... | lld:pubmed
pubmed-article:20838702 | pubmed:citationSubset | IM | lld:pubmed
pubmed-article:20838702 | pubmed:status | MEDLINE | lld:pubmed
pubmed-article:20838702 | pubmed:month | Aug | lld:pubmed
pubmed-article:20838702 | pubmed:issn | 0304-4602 | lld:pubmed
pubmed-article:20838702 | pubmed:author | pubmed-author:HuntElizabeth... | lld:pubmed
pubmed-article:20838702 | pubmed:author | pubmed-author:ChenWalterW | lld:pubmed
pubmed-article:20838702 | pubmed:author | pubmed-author:LiaoShih... | lld:pubmed
pubmed-article:20838702 | pubmed:issnType | Print | lld:pubmed
pubmed-article:20838702 | pubmed:volume | 39 | lld:pubmed
pubmed-article:20838702 | pubmed:owner | NLM | lld:pubmed
pubmed-article:20838702 | pubmed:authorsComplete | Y | lld:pubmed
pubmed-article:20838702 | pubmed:pagination | 613-8 | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:meshHeading | pubmed-meshheading:20838702... | lld:pubmed
pubmed-article:20838702 | pubmed:year | 2010 | lld:pubmed
pubmed-article:20838702 | pubmed:articleTitle | Comparison between inter-rater reliability and inter-rater agreement in performance assessment. | lld:pubmed
pubmed-article:20838702 | pubmed:affiliation | School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan. | lld:pubmed
pubmed-article:20838702 | pubmed:publicationType | Journal Article | lld:pubmed
pubmed-article:20838702 | pubmed:publicationType | Comparative Study | lld:pubmed
pubmed-article:20838702 | pubmed:publicationType | Research Support, Non-U.S. Gov't | lld:pubmed
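Each row above is an RDF quad: a (subject, predicate, object, context) statement about the resource pubmed-article:20838702. As a minimal sketch, assuming no particular triple store or RDF library, a handful of these statements can be held as plain Python tuples and filtered by predicate; the identifiers are copied verbatim from the table, and `objects_of` is a hypothetical helper, not part of any API.

```python
# A few of the quads from the table, as (subject, predicate, object, context) tuples.
QUADS = [
    ("pubmed-article:20838702", "rdf:type", "pubmed:Citation", "lld:pubmed"),
    ("pubmed-article:20838702", "lifeskim:mentions", "umls-concept:C0031809", "lld:lifeskim"),
    ("pubmed-article:20838702", "pubmed:year", "2010", "lld:pubmed"),
    ("pubmed-article:20838702", "pubmed:volume", "39", "lld:pubmed"),
    ("pubmed-article:20838702", "pubmed:pagination", "613-8", "lld:pubmed"),
]

def objects_of(predicate, quads=QUADS):
    """Return every object asserted for the given predicate."""
    return [o for _, p, o, _ in quads if p == predicate]

print(objects_of("pubmed:year"))        # ['2010']
print(objects_of("lifeskim:mentions"))  # ['umls-concept:C0031809']
```

A real system would load the full record into a quad store and query it with SPARQL, but the tuple view above is enough to see how the Subject/Predicate/Object/Context columns fit together.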