
  
The Architecture

The self-organizing map [3] consists of a grid of nodes, each assigned an n-dimensional weight vector, to which input signals $x \in \Re^{n}$ are presented in random order during training. An activation function based on some metric determines the winning node (the `winner') as the node showing the highest activation, e.g. the lowest Euclidean distance between its weight vector and the presented input signal. The weight vectors of the winner and of its neighboring nodes are then moved closer to the input pattern according to a learning rate, increasing their similarity to the input in terms of the chosen metric. The result is a topology-preserving mapping of the high-dimensional input space onto the grid of nodes.
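
As an illustration, the following is a minimal sketch of this training procedure in Python, assuming a rectangular grid, Euclidean distance as the metric, and a Gaussian neighborhood with exponentially decaying learning rate and radius; all function and parameter names are illustrative rather than part of the original system.

import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_iter=1000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a rectangular SOM on data of shape (n_samples, n_dim)."""
    rng = np.random.default_rng(seed)
    n_dim = data.shape[1]
    # One weight vector per grid node, randomly initialized.
    weights = rng.random((grid_h, grid_w, n_dim))
    # Grid coordinates, used to measure neighborhood distances on the map.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]          # input presented in random order
        # Winner: node with the lowest Euclidean distance to the input.
        dists = np.linalg.norm(weights - x, axis=-1)
        winner = np.unravel_index(dists.argmin(), dists.shape)
        # Learning rate and neighborhood radius decay over time.
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Gaussian neighborhood around the winner on the grid.
        grid_dist = np.linalg.norm(coords - np.array(winner), axis=-1)
        h = np.exp(-grid_dist**2 / (2 * sigma**2))[..., None]
        # Move the winner and its neighbors closer to the input pattern.
        weights += lr * h * (x - weights)
    return weights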

In situations where not all training data is available at a single location, or where the sheer amount of data hinders effective and frequent training, a series of small maps may be created, each based on the locally available subset of the complete data. After these maps have been created following the standard training procedure, they are integrated into a single higher-order SOM.

To build a higher-order SOM integrating several lower-order maps, we use the weight vectors of the lower-order maps' nodes as the input signals for the higher-order SOM, resulting in a topology-preserving mapping of the nodes of the lower-order maps. We thus obtain a single map integrating the distributed maps.
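
This integration step can be sketched as follows, reusing the hypothetical train_som function from above: the nodes' weight vectors of all lower-order maps are pooled and used as the training input for the higher-order SOM. The variable local_subsets, standing for the locally available data collections, is an assumption of this sketch.

# One lower-order map per location, each trained on the locally
# available subset of the data (local_subsets is assumed here).
lower_maps = [train_som(subset) for subset in local_subsets]

# Pool the nodes' weight vectors of all lower-order maps and use them
# as the input signals for the higher-order SOM.
node_vectors = np.concatenate(
    [w.reshape(-1, w.shape[-1]) for w in lower_maps], axis=0)
higher_map = train_som(node_vectors, grid_h=6, grid_w=6)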

Combining several independent SOMs offers a number of advantages over using one single, huge map. First, training times and computational requirements for small maps are considerably lower, making them more flexible. Second, small maps can easily and independently be retrained when the input data organized on them changes. This is especially important in dynamically changing environments where new input data has to be added to a trained map, e.g. when documents covering new subject matters are added to the digital library; frequent retraining keeps the mapping performance of the SOM at a high level. Note, however, that retraining a single lower-order SOM due to changes in its input data domain does not necessarily require retraining the higher-order map: the `new' domain may already be represented there, originating from its presence in another lower-order SOM. In this case the retrained lower-order SOM only needs to be mapped onto the higher-order map to determine the new node references, as sketched below. Third, integrating several distributed maps allows SOMs that are trained and maintained at different locations to be combined by downloading the weight vectors of the respective SOMs and using them to create a higher-order map. We can thus build systems at a higher level of abstraction on top of locally maintained maps. Finally, using and integrating several smaller maps provides a better comprehension of the underlying structure of large amounts of data, whereas huge maps tend to become confusing.
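
The mapping of a retrained lower-order SOM onto an existing higher-order map can be sketched as follows, again reusing the structures from the examples above; the function name map_onto is purely illustrative. Each lower-order node is simply assigned to its best-matching higher-order node, so no retraining of the higher-order map is needed.

def map_onto(higher_weights, lower_weights):
    """Assign each lower-order node to its best-matching node on the
    higher-order map, yielding the new node references."""
    nodes = lower_weights.reshape(-1, lower_weights.shape[-1])
    flat = higher_weights.reshape(-1, higher_weights.shape[-1])
    refs = []
    for v in nodes:
        # Best-matching higher-order node for this lower-order weight vector.
        dists = np.linalg.norm(flat - v, axis=-1)
        refs.append(np.unravel_index(dists.argmin(),
                                     higher_weights.shape[:2]))
    return refs

# After retraining one lower-order map, only its node references change:
# new_refs = map_onto(higher_map, train_som(updated_subset))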

