Predicate | Object |
---|---|
rdf:type | |
lifeskim:mentions | |
pubmed:issue | 4 |
pubmed:dateCreated | 1999-3-12 |
pubmed:abstractText | To efficiently simulate very large networks of interconnected neurons, particular consideration has to be given to the computer architecture being used. This article presents techniques for implementing simulators for large neural networks on a number of different computer architectures. The neuronal simulation task and the computer architectures of interest are first characterized, and the potential bottlenecks are highlighted. Then we describe the experience gained from adapting an existing simulator, SWIM, to two very different architectures: vector computers and multiprocessor workstations. This work led to the implementation of a new simulation library, SPLIT, designed to allow efficient simulation of large networks on several architectures. Different computer architectures put different demands on the organization of both data structures and computations. Strict separation of such architecture considerations from the neuronal models and other simulation aspects makes it possible to construct both portable and extendible code. |
pubmed:language | eng |
pubmed:journal | |
pubmed:citationSubset | IM |
pubmed:status | MEDLINE |
pubmed:month | Dec |
pubmed:issn | 0929-5313 |
pubmed:author | |
pubmed:issnType | Print |
pubmed:volume | 5 |
pubmed:owner | NLM |
pubmed:authorsComplete | Y |
pubmed:pagination | 443-59 |
pubmed:dateRevised | 2006-11-15 |
pubmed:meshHeading | |
pubmed:year | 1998 |
pubmed:articleTitle | Large neural network simulations on multiple hardware platforms. |
pubmed:affiliation | Studies of Artificial Neural Systems, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden. |
pubmed:publicationType | Journal Article, Research Support, Non-U.S. Gov't |