Initializing back propagation networks with prototypes - Archive ouverte HAL
Journal article: Neural Networks, 1993

Initializing back propagation networks with prototypes

Thierry Denoeux, Régis Lengellé

Abstract

This paper addresses the problem of initializing the weights in back propagation networks with one hidden layer. The proposed method relies on the use of reference patterns, or prototypes, and on a transformation which maps each vector in the original feature space onto a unit-length vector in a space with one additional dimension. This scheme applies to pattern recognition tasks, as well as to the approximation of continuous functions. Issues related to the preprocessing of input patterns and to the generation of prototypes are discussed, and an algorithm for building appropriate prototypes in the continuous case is described. Also examined is the relationship between this approach and the theory of radial basis functions. Finally, simulation results are presented, showing that initializing back propagation networks with prototypes generally results in (a) drastic reductions in training time, (b) improved robustness against local minima, and (c) better generalization.
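The abstract mentions a transformation mapping each vector in the original feature space onto a unit-length vector in a space with one additional dimension. A minimal sketch of one plausible such mapping (hemisphere lifting) is given below; the function name, the use of the maximum norm as the lifting radius, and the exact formulation are illustrative assumptions, not necessarily the authors' construction.

```python
import numpy as np

def lift_to_sphere(X, radius=None):
    """Map d-dimensional rows of X to unit-length vectors in d+1 dimensions.

    Illustrative sketch: each row x with ||x|| <= radius is mapped to
    (x, sqrt(radius^2 - ||x||^2)) / radius, which lies on the unit sphere.
    The paper's exact transformation may differ.
    """
    X = np.asarray(X, dtype=float)
    norms = np.linalg.norm(X, axis=1)
    if radius is None:
        # Assumption: use the largest input norm so the square root is real.
        radius = norms.max() if norms.max() > 0 else 1.0
    extra = np.sqrt(np.maximum(radius**2 - norms**2, 0.0))
    return np.hstack([X, extra[:, None]]) / radius

# Usage: every lifted row has Euclidean norm 1.
X = np.array([[3.0, 4.0], [0.0, 0.0]])
Z = lift_to_sphere(X)
```

With such a lifting, inner products between lifted vectors depend monotonically on the Euclidean distance between the original vectors, which is what lets prototype positions be translated directly into initial hidden-layer weights.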

Dates and versions

hal-02861437, version 1 (09-06-2020)

Identifiers

DOI: 10.1016/0893-6080(93)90003-F

Cite

Thierry Denoeux, Régis Lengellé. Initializing back propagation networks with prototypes. Neural Networks, 1993, 6 (3), pp.351-363. ⟨10.1016/0893-6080(93)90003-F⟩. ⟨hal-02861437⟩

