Predicate | Object |
---|---|
rdf:type | |
lifeskim:mentions | |
pubmed:issue | 6 |
pubmed:dateCreated | 1995-06-28 |
pubmed:abstractText | The authors evaluated the reproducibility of a clinical algorithm consensus development process across three different physician panels at a health maintenance organization. The panels were composed of primary care internists, who were provided with identical selections from the medical literature and first-draft "seed" algorithms on the management of two common clinical problems: acute sinusitis and dyspepsia. Each panel used a nominal group process and a modified Delphi method to create final algorithm drafts. To compare the clinical logic in the final algorithms, the authors applied a new qualitative and quantitative comparison method, the Clinical Algorithm Patient Abstraction (CAPA). Dyspepsia algorithms from all physician groups recommended empiric anti-acid therapy for most patients, favored endoscopy over barium swallow, and had very similar indications for endoscopy. The average CAPA comparison score among the final physician algorithms was 6.1 on a scale of 0 (different) to 10 (identical). Sinusitis algorithms from all groups proposed empiric antibiotic therapy for most patients. Indications for sinus radiographs were similar in two of the algorithms (CAPA = 4.9) but differed significantly in the third, resulting in lower CAPA scores (average CAPA = 1.9, P < 0.03). The clinical similarity of the algorithms produced by these physician panels suggests a high level of reproducibility in this consensus-driven algorithm development process. However, the differences among the sinusitis algorithms suggest that a consensus process a health maintenance organization can run with limited resources will produce some guidelines that vary, owing to differences in the interpretation of evidence and in physician experience. |
pubmed:language | eng |
pubmed:journal | |
pubmed:citationSubset | IM |
pubmed:status | MEDLINE |
pubmed:month | Jun |
pubmed:issn | 0025-7079 |
pubmed:author | |
pubmed:issnType | Print |
pubmed:volume | 33 |
pubmed:owner | NLM |
pubmed:authorsComplete | Y |
pubmed:pagination | 643-60 |
pubmed:dateRevised | 2007-11-15 |
pubmed:meshHeading | pubmed-meshheading:7760579-Acute Disease, pubmed-meshheading:7760579-Algorithms, pubmed-meshheading:7760579-Consensus Development Conferences as Topic, pubmed-meshheading:7760579-Delphi Technique, pubmed-meshheading:7760579-Dyspepsia, pubmed-meshheading:7760579-Group Processes, pubmed-meshheading:7760579-Humans, pubmed-meshheading:7760579-Practice Guidelines as Topic, pubmed-meshheading:7760579-Reproducibility of Results, pubmed-meshheading:7760579-Research Design, pubmed-meshheading:7760579-Sinusitis |
pubmed:year | 1995 |
pubmed:articleTitle | Is consensus reproducible? A study of an algorithmic guidelines development process. |
pubmed:affiliation | Department of Ambulatory Care and Prevention, Harvard Community Health Plan, Boston, MA, USA. |
pubmed:publicationType | Journal Article, Research Support, Non-U.S. Gov't |
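Because the record above is just a set of RDF predicate/object pairs, it can also be consumed programmatically. Below is a minimal sketch using Python's rdflib; the record URI is an assumption inferred from the PubMed ID 7760579 that appears in the meshHeading values, not something stated in this table, so adjust it to the actual linked-data endpoint you are using.

```python
# A minimal sketch, assuming this record is dereferenceable as RDF.
# The URI below is an assumption based on the PubMed ID 7760579 seen
# in the meshHeading values; it is not given in the table itself.
from rdflib import Graph, URIRef

RECORD = URIRef("http://linkedlifedata.com/resource/pubmed/id/7760579")  # assumed URI

g = Graph()
g.parse(str(RECORD))  # fetch and parse the RDF description of the record

# Print each predicate/object pair, mirroring the two-column table above.
for predicate, obj in g.predicate_objects(subject=RECORD):
    print(predicate, obj)
```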