ITEM METADATA RECORD
Title: MASSIVE MIMD NEURAL-NETWORK SIMULATIONS - THE CONNECTIONS DILEMMA
Authors: Tollenaere, T. ×
Saraiva, J. M.
Van Hulle, Marc #
Issue Date: 1994
Publisher: JOHN WILEY & SONS LTD
Series Title: Concurrency: Practice and Experience, vol. 6, issue 3, pp. 153-191
Affiliations: Catholic Univ Leuven, Neuro & Psychofysiol Lab, B-3000 Louvain, Belgium; Catholic Univ Leuven, Interdisciplinary Ctr Neural Networks, B-3000 Louvain, Belgium; Univ Minho, Dept Informat, P-4719 Braga, Portugal
Abstract: We present two strategies for the simulation of massive neural networks on message-passing MIMD machines. In the first strategy, all interconnections between neurons are stored explicitly in interconnection matrices. During simulation, every processor is responsible for certain submatrices of these interconnection matrices. The fact that message-passing MIMD processors do not provide virtual memory seriously limits the size of the networks that can be simulated, since interconnection matrices require huge amounts of memory. An alternative strategy is not to store the connections explicitly, but to generate them as they are needed. This circumvents memory limitations, but because interconnections need to be generated multiple times, it is inherently slower than the first implementation. This yields the connections dilemma: the choice between fast simulation of small networks and slower simulation of massive networks. We present, analyse and benchmark parallel implementations of both strategies. An efficient connection look-up algorithm, which can be used for any network with static interconnections, ensures that simulation times for the second strategy are only marginally longer than for the first. We show that for our users the connections dilemma is no longer a dilemma: by means of our look-up algorithm the simulation of massive networks becomes possible; furthermore, the time to design and construct a network prior to simulation is considerably shorter than for the matrix version, and this time is independent of network size. Although we have implemented both strategies on a parallel computer, the algorithms presented here can be used on any machine with memory limitations, such as personal computers.
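
The dilemma described in the abstract can be illustrated with a small, purely hypothetical sketch (this is not the authors' parallel code): a serial Python/NumPy toy in which an explicit weight matrix and an on-demand connection generator compute the same net inputs, trading O(N^2) storage against repeated regeneration of connections. The network size, neighbourhood radius and weight rule below are illustrative assumptions only.

# Hypothetical illustration only -- not the paper's implementation.
# Contrasts the two strategies from the abstract on a toy 1-D network:
#   (1) explicit interconnection matrix (fast, O(N^2) memory)
#   (2) connections regenerated on demand (low memory, recomputed each pass)
import numpy as np

N = 1000          # assumed number of neurons (toy size)
RADIUS = 5        # assumed local-neighbourhood connection rule
rng = np.random.default_rng(0)
activations = rng.random(N)

def weight(i, j):
    # Assumed deterministic weight rule for a static interconnection scheme.
    return 1.0 / (1 + abs(i - j))

def build_matrix():
    # Strategy 1: store every connection explicitly.
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(max(0, i - RADIUS), min(N, i + RADIUS + 1)):
            if i != j:
                W[i, j] = weight(i, j)
    return W

def net_input_matrix(W):
    # One matrix-vector product per simulation step.
    return W @ activations

def net_input_generated():
    # Strategy 2: regenerate each connection whenever it is needed.
    out = np.zeros(N)
    for i in range(N):
        for j in range(max(0, i - RADIUS), min(N, i + RADIUS + 1)):
            if i != j:
                out[i] += weight(i, j) * activations[j]
    return out

if __name__ == "__main__":
    W = build_matrix()
    assert np.allclose(net_input_matrix(W), net_input_generated())
    print("Identical net inputs; the strategies differ only in memory and time.")

A look-up scheme in the spirit of the paper would cache the generated connection lists so that the generation cost is paid once rather than on every simulation step; the details of the authors' algorithm are given in the article itself.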
ISSN: 1040-3108
Publication status: published
KU Leuven publication type: IT
Appears in Collections:Research Group Neurophysiology
× corresponding author
# (joint) last author


© Web of Science