Source: http://linkedlifedata.com/resource/pubmed/id/10719539
Predicate | Object |
---|---|
rdf:type | |
lifeskim:mentions | |
pubmed:issue | 4 |
pubmed:dateCreated | 2000-03-31 |
pubmed:abstractText | Recent developments in networking and computing have enabled collaborative biomedical engineering research by geographically separated participants. One of the most promising goals is to use these technologies to extend human intellectual capabilities in medical decision making. These emerging technologies are poised to drastically reduce healthcare costs by providing service at remote locations. They also increase diagnostic capacity, since information is made available to experts at any location. In this paper, we propose a novel application of a recently developed interactive and distributed system to medical consultation and education. Our approach builds on the notion that the interactive and distributed capabilities of the system are crucial for medical consultation and education. The presented application uses a multiuser, collaborative environment with multimodal human/machine communication in the dimensions of sight, sound, and touch. The experimental setup, consisting of two user stations, and the multimodal interfaces, including sight (eye-tracking), sound (automatic speech), and touch (microbeam pen), were tested and evaluated. The system uses a collaborative workspace as a common visualization space. Users communicate with the application through a fusion agent by eye-tracking, speech, and microbeam pen. Audio/video teleconferencing is also included so that the radiologists can communicate with each other while working on the mammograms simultaneously. The system used in this study has three software agents: a fusion agent, a conversational agent, and an analytic agent. The fusion agent interprets multimodal commands by integrating the multimodal inputs. The conversational agent answers the user's questions, detects human-related or semantic errors, and notifies the user of the results of the image analysis. The analytic agent enhances the digitized images using a wavelet denoising algorithm when requested by the user. To show how the system performs in practice, we used it for medical consultation on mammograms. Results also show that the relevant information about the regions of interest (ROI) of the mammograms chosen by the users is extracted automatically and used to enhance the mammograms. |
pubmed:language | eng |
pubmed:journal | |
pubmed:citationSubset | IM |
pubmed:status | MEDLINE |
pubmed:month | Dec |
pubmed:issn | 1089-7771 |
pubmed:author | |
pubmed:issnType | Print |
pubmed:volume | 2 |
pubmed:owner | NLM |
pubmed:authorsComplete | Y |
pubmed:pagination | 282-91 |
pubmed:dateRevised | 2006-11-15 |
pubmed:meshHeading | |
pubmed:year | 1998 |
pubmed:articleTitle | A system for medical consultation and education using multimodal human/machine communication. |
pubmed:affiliation | Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA. makay@northstar.dartmouth.edu |
pubmed:publicationType | Journal Article; Research Support, U.S. Gov't, Non-P.H.S. |
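
The table above is a flattened view of the RDF triples that Linked Life Data exposes for this PubMed record. As a rough illustration of how such a record could be read programmatically, here is a minimal sketch using Python's rdflib; it assumes the resource URI serves an RDF serialization that rdflib can parse over the network, which is not stated on the page itself.

```python
# Minimal sketch (assumption: the resource URI returns parseable RDF and
# network access is available). Predicate IRIs are printed as returned,
# since the table above only shows prefixed names such as pubmed:issue.
from rdflib import Graph, URIRef

RESOURCE = "http://linkedlifedata.com/resource/pubmed/id/10719539"

g = Graph()
g.parse(RESOURCE)  # rdflib picks a parser from the response where possible

subject = URIRef(RESOURCE)
for predicate, obj in g.predicate_objects(subject):
    # Each pair corresponds to one Predicate/Object row in the table above.
    print(predicate, "->", obj)
```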
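The abstract states that the analytic agent enhances the user-selected region of interest with wavelet denoising, but gives no algorithmic detail. The sketch below is therefore only a generic wavelet soft-threshold denoising routine (assuming NumPy and PyWavelets), not the authors' implementation.

```python
# Generic wavelet soft-threshold denoising sketch (not the paper's algorithm):
# decompose the ROI, shrink the detail coefficients, and reconstruct.
import numpy as np
import pywt

def denoise_roi(roi: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Denoise a 2-D image patch by soft-thresholding its wavelet detail bands."""
    coeffs = pywt.wavedec2(roi, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band (a common heuristic).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(roi.size))
    denoised = [coeffs[0]]  # keep the approximation band untouched
    for detail_bands in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(band, threshold, mode="soft")
                              for band in detail_bands))
    # Reconstruction can be one pixel larger on odd dimensions, so crop to the input size.
    return pywt.waverec2(denoised, wavelet)[: roi.shape[0], : roi.shape[1]]
```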