Statements in which the resource exists.
Subject | Predicate | Object | Context
pubmed-article:12355348 | rdf:type | pubmed:Citation | lld:pubmed
pubmed-article:12355348 | lifeskim:mentions | umls-concept:C0679199 | lld:lifeskim
pubmed-article:12355348 | lifeskim:mentions | umls-concept:C0814897 | lld:lifeskim
pubmed-article:12355348 | lifeskim:mentions | umls-concept:C0205251 | lld:lifeskim
pubmed-article:12355348 | pubmed:issue | 9-10 | lld:pubmed
pubmed-article:12355348 | pubmed:dateCreated | 2002-9-30 | lld:pubmed
pubmed-article:12355348 | pubmed:abstractText | This article deals with the issue of statistical validity when evaluating interventions. The most common study design with two groups and two points of measurement is discussed. In clinical research settings, unsatisfactory statistical validity is often seen due to small sample sizes. In order to resolve this problem, a strategy based on an approach by Hager is proposed which takes both significance testing and effect sizes systematically into account. Using an example from clinical research practice, the problematic issue of statistical power is introduced and methods to increase the power of tests are discussed. Within this framework, Erdfelder's compromise power analysis (computing alpha levels according to a predetermined beta/alpha error ratio) is crucial, as is a lowering of the number of applied tests by data reduction and the improved detection of potential effects by methods to reduce error variance. The results show that significance tests should not be used when sample and effect sizes are small. In these cases, different approaches should be used. | lld:pubmed
pubmed-article:12355348 | pubmed:language | ger | lld:pubmed
pubmed-article:12355348 | pubmed:journal | http://linkedlifedata.com/r... | lld:pubmed
pubmed-article:12355348 | pubmed:citationSubset | IM | lld:pubmed
pubmed-article:12355348 | pubmed:status | MEDLINE | lld:pubmed
pubmed-article:12355348 | pubmed:issn | 0937-2032 | lld:pubmed
pubmed-article:12355348 | pubmed:author | pubmed-author:MüllerJohanne... | lld:pubmed
pubmed-article:12355348 | pubmed:author | pubmed-author:HoyerJürgenJ | lld:pubmed
pubmed-article:12355348 | pubmed:author | pubmed-author:ManzRolfR | lld:pubmed
pubmed-article:12355348 | pubmed:issnType | Print | lld:pubmed
pubmed-article:12355348 | pubmed:volume | 52 | lld:pubmed
pubmed-article:12355348 | pubmed:owner | NLM | lld:pubmed
pubmed-article:12355348 | pubmed:authorsComplete | Y | lld:pubmed
pubmed-article:12355348 | pubmed:pagination | 408-16 | lld:pubmed
pubmed-article:12355348 | pubmed:dateRevised | 2006-11-15 | lld:pubmed
pubmed-article:12355348 | pubmed:meshHeading | pubmed-meshheading:12355348... | lld:pubmed
pubmed-article:12355348 | pubmed:meshHeading | pubmed-meshheading:12355348... | lld:pubmed
pubmed-article:12355348 | pubmed:meshHeading | pubmed-meshheading:12355348... | lld:pubmed
pubmed-article:12355348 | pubmed:meshHeading | pubmed-meshheading:12355348... | lld:pubmed
pubmed-article:12355348 | pubmed:meshHeading | pubmed-meshheading:12355348... | lld:pubmed
pubmed-article:12355348 | pubmed:articleTitle | [What to do if statistical power is low? A practical strategy for pre-post-designs]. | lld:pubmed
pubmed-article:12355348 | pubmed:affiliation | Institut für Klinische, Diagnostische und Differentielle Psychologie, Technische Universität Dresden, Germany. | lld:pubmed
pubmed-article:12355348 | pubmed:publicationType | Journal Article | lld:pubmed
pubmed-article:12355348 | pubmed:publicationType | English Abstract | lld:pubmed
pubmed-article:12355348 | pubmed:publicationType | Review | lld:pubmed
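The abstract above refers to Erdfelder's compromise power analysis: instead of fixing alpha at a conventional level, the critical value is chosen so that the resulting beta/alpha error ratio matches a predetermined value. A minimal sketch of the idea, assuming a one-sided two-sample z-test under the normal approximation (the function name `compromise_alpha` and the bisection approach are illustrative, not the cited method's actual implementation):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def compromise_alpha(effect_size, n_per_group, ratio=1.0):
    """Find (alpha, beta) such that beta/alpha == ratio for a one-sided
    two-sample z-test (normal approximation), given Cohen's d and the
    per-group sample size. Illustrative sketch, not G*Power's algorithm."""
    # Noncentrality parameter of the test statistic under H1.
    delta = effect_size * math.sqrt(n_per_group / 2.0)
    lo, hi = -6.0, 6.0  # bisection bounds on the critical value c
    for _ in range(200):
        c = (lo + hi) / 2.0
        alpha = 1.0 - phi(c)      # P(reject | H0)
        beta = phi(c - delta)     # P(accept | H1)
        if beta / alpha > ratio:  # beta/alpha grows monotonically in c
            hi = c
        else:
            lo = c
    return alpha, beta

# Example: small pilot study, d = 0.5, n = 20 per group,
# balanced error risks (beta/alpha = 1).
alpha, beta = compromise_alpha(0.5, 20, ratio=1.0)
print(alpha, beta)
```

With small samples the balanced alpha comes out far above the conventional 0.05, which is exactly the abstract's point: insisting on alpha = 0.05 at such sample sizes forces beta (and thus the risk of missing a real intervention effect) to be very large.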