Source: http://linkedlifedata.com/resource/pubmed/id/17029033
| Predicate | Object |
|---|---|
| rdf:type | |
| lifeskim:mentions | umls-concept:C0006104, umls-concept:C0015217, umls-concept:C0085862, umls-concept:C0262485, umls-concept:C0449432, umls-concept:C0936012, umls-concept:C1179435, umls-concept:C1299583, umls-concept:C1524073, umls-concept:C1548799, umls-concept:C1549571, umls-concept:C1608386, umls-concept:C1705248, umls-concept:C1708533, umls-concept:C2698172 |
| pubmed:issue | 10 |
| pubmed:dateCreated | 2006-10-9 |
| pubmed:abstractText | In this study, flashing stimuli such as digits or letters are displayed on an LCD screen to induce flash visual evoked potentials (FVEPs). The aim of the proposed interface is to generate desired strings while the user stares at target stimuli one after another. To effectively extract visually induced neural activities with a superior signal-to-noise ratio, independent component analysis (ICA) is employed to decompose the measured EEG, and task-related components are subsequently selected for data reconstruction. In addition, all flickering sequences are designed to be mutually independent in order to remove contamination induced by surrounding non-target stimuli from the ICA-recovered signals. Since FVEPs are time-locked and phase-locked to the flash onsets of the gazed stimulus, epochs segmented from the ICA-recovered signals based on the flash onsets of the gazed stimulus are sharpened by averaging, whereas those based on the flash onsets of non-gazed stimuli are suppressed. The stimulus inducing the largest averaged FVEP is identified as the gazed target, and the corresponding digit or letter is sent out. Five subjects were asked to gaze at each stimulus. The mean detection accuracy resulting from averaging 15 epochs was 99.7%. In another experiment, subjects generated the specified string '0287513694E'. The mean accuracy and information transfer rate were 83% and 23.06 bits/min, respectively. |
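The detection step the abstract describes — segment epochs at each stimulus's flash onsets, average them so onset-locked FVEPs reinforce while unlocked activity cancels, then pick the stimulus with the largest averaged response — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `detect_gazed_target`, the use of averaged-epoch energy as the selection score, and the synthetic single-channel signal are all assumptions for demonstration.

```python
import numpy as np

def detect_gazed_target(signal, onsets_per_stimulus, epoch_len):
    """Return the index of the stimulus whose onset-locked averaged
    epoch has the largest energy (illustrative scoring, assumed here)."""
    scores = []
    for onsets in onsets_per_stimulus:
        # Segment fixed-length epochs starting at each flash onset.
        epochs = np.stack([signal[t:t + epoch_len] for t in onsets])
        # Time- and phase-locked responses survive averaging;
        # activity locked to other stimuli averages toward zero.
        avg = epochs.mean(axis=0)
        scores.append(np.sum(avg ** 2))
    return int(np.argmax(scores))

# Hypothetical demo: noise plus an evoked response after one
# stimulus's flash onsets only.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 2000)
gazed_onsets = [100, 400, 700, 1000, 1300]
other_onsets = [150, 450, 750, 1050, 1350]
for t in gazed_onsets:
    sig[t:t + 20] += 5.0  # simulated FVEP at gazed-stimulus onsets
target = detect_gazed_target(sig, [other_onsets, gazed_onsets], 50)
```

Because the paper's flickering sequences are mutually independent, epochs cut at a non-gazed stimulus's onsets are uncorrelated with the gazed stimulus's response, which is what makes the averaging-based suppression work.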
| pubmed:language | eng |
| pubmed:journal | |
| pubmed:citationSubset | IM |
| pubmed:status | MEDLINE |
| pubmed:month | Oct |
| pubmed:issn | 0090-6964 |
| pubmed:author | |
| pubmed:issnType | Print |
| pubmed:volume | 34 |
| pubmed:owner | NLM |
| pubmed:authorsComplete | Y |
| pubmed:pagination | 1641-54 |
| pubmed:meshHeading | pubmed-meshheading:17029033-Adult, pubmed-meshheading:17029033-Biomedical Engineering, pubmed-meshheading:17029033-Brain, pubmed-meshheading:17029033-Communication Aids for Disabled, pubmed-meshheading:17029033-Electroencephalography, pubmed-meshheading:17029033-Evoked Potentials, Visual, pubmed-meshheading:17029033-Female, pubmed-meshheading:17029033-Humans, pubmed-meshheading:17029033-Male, pubmed-meshheading:17029033-Photic Stimulation, pubmed-meshheading:17029033-Principal Component Analysis, pubmed-meshheading:17029033-Software Design, pubmed-meshheading:17029033-User-Computer Interface |
| pubmed:year | 2006 |
| pubmed:articleTitle | The brain computer interface using flash visual evoked potential and independent component analysis. |
| pubmed:affiliation | Department of Electrical Engineering, National Central University, Taoyuan, Taiwan. |
| pubmed:publicationType | Journal Article, Research Support, Non-U.S. Gov't |