| Predicate | Object |
|---|---|
| rdf:type | |
| lifeskim:mentions | |
| pubmed:issue | 3 |
| pubmed:dateCreated | 1992-4-20 |
| pubmed:abstractText | This paper continues an investigation into the merits of an alternative approach to the statistical evaluation of quality-control rules. In this report, computer simulation is used to evaluate and compare quality-control rules designed to detect increases in within-run or between-run imprecision. When out-of-control conditions are evaluated in terms of their impact on total analytical imprecision, the error detection ability of a rule depends on the relative magnitudes of the between-run and within-run error components under stable operating conditions. A recently proposed rule based on the F-test, designed to detect increases in between-run imprecision, is shown to have relatively poor performance characteristics. Additionally, several issues are examined that have been difficult to address with the traditional evaluation approach. |
| pubmed:language | eng |
| pubmed:journal | |
| pubmed:citationSubset | IM |
| pubmed:status | MEDLINE |
| pubmed:month | Mar |
| pubmed:issn | 0009-9147 |
| pubmed:author | |
| pubmed:issnType | Print |
| pubmed:volume | 38 |
| pubmed:owner | NLM |
| pubmed:authorsComplete | Y |
| pubmed:pagination | 364-9 |
| pubmed:dateRevised | 2000-12-18 |
| pubmed:meshHeading | |
| pubmed:year | 1992 |
| pubmed:articleTitle | Comparing the power of quality-control rules to detect persistent increases in random error. |
| pubmed:affiliation | Department of Pathology, Washington University School of Medicine, St. Louis, MO 63110. |
| pubmed:publicationType | Journal Article |
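The abstract above describes using computer simulation to estimate how well quality-control rules detect increases in between-run or within-run imprecision. The paper's actual rules and simulation parameters are not reproduced in this record, so the following is only a minimal sketch of the general approach under assumed conditions: a hierarchical error model (a between-run component plus a within-run component) and a hypothetical F-ratio rule whose critical value is scaled for the stable-condition variance components. All function names and parameter values here are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy import stats

def ftest_rule_power(sigma_w=1.0, sigma_b=0.5, b_factor=2.0,
                     n_runs=20, n_per_run=2, alpha=0.01,
                     n_trials=20_000, seed=0):
    """Monte Carlo estimate of a hypothetical F-ratio rule's power to
    detect an increase in between-run imprecision.

    Toy model (an assumption, not the paper's): control result
    x[i, j] = r[i] + e[i, j], with run effect r[i] ~ N(0, sigma_b^2)
    and within-run error e[i, j] ~ N(0, sigma_w^2). Out of control,
    sigma_b is inflated by b_factor. The rule flags when the one-way
    ANOVA F ratio exceeds a critical value scaled so the false-alarm
    rate under stable conditions is exactly alpha.
    """
    rng = np.random.default_rng(seed)
    df_between = n_runs - 1
    df_within = n_runs * (n_per_run - 1)
    # Under stable conditions E[MSB] = sigma_w^2 + n*sigma_b^2 and
    # E[MSW] = sigma_w^2, so scaling the F critical value by their
    # ratio gives a size-alpha test for an *increase* in sigma_b.
    lam0 = 1.0 + n_per_run * (sigma_b / sigma_w) ** 2
    crit = lam0 * stats.f.ppf(1.0 - alpha, df_between, df_within)

    flags = 0
    for _ in range(n_trials):
        # Simulate one QC history: run effects broadcast over replicates.
        r = rng.normal(0.0, b_factor * sigma_b, size=(n_runs, 1))
        x = r + rng.normal(0.0, sigma_w, size=(n_runs, n_per_run))
        run_means = x.mean(axis=1)
        msb = n_per_run * run_means.var(ddof=1)
        msw = ((x - run_means[:, None]) ** 2).sum() / df_within
        if msb / msw > crit:
            flags += 1
    return flags / n_trials

if __name__ == "__main__":
    # In this toy model, detection power depends on the relative size
    # of the between-run and within-run components under stable
    # conditions, echoing the abstract's point about relative error
    # magnitudes.
    for sb in (0.25, 0.5, 1.0):
        fa = ftest_rule_power(sigma_b=sb, b_factor=1.0)  # false alarms
        pw = ftest_rule_power(sigma_b=sb, b_factor=2.0)  # detections
        print(f"sigma_b={sb:4.2f}: false-alarm rate {fa:.3f}, power {pw:.3f}")
```

Running the sketch with b_factor=1.0 confirms the false-alarm rate stays near alpha, while the b_factor=2.0 rows show how power shifts with the stable-condition sigma_b/sigma_w ratio; this is a property of the toy model only, not a reproduction of the paper's results.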