Initializing back propagation networks with prototypes

Abstract: This paper addresses the problem of initializing the weights in back propagation networks with one hidden layer. The proposed method relies on the use of reference patterns, or prototypes, and on a transformation which maps each vector in the original feature space onto a unit-length vector in a space with one additional dimension. This scheme applies to pattern recognition tasks, as well as to the approximation of continuous functions. Issues related to the preprocessing of input patterns and to the generation of prototypes are discussed, and an algorithm for building appropriate prototypes in the continuous case is described. Also examined is the relationship between this approach and the theory of radial basis functions. Finally, simulation results are presented, showing that initializing back propagation networks with prototypes generally results in (a) drastic reductions in training time, (b) improved robustness against local minima, and (c) better generalization.
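The transformation described in the abstract — mapping a vector in the original feature space onto a unit-length vector in a space with one extra dimension — can be sketched as follows. This is a minimal illustration of one common such lifting, not necessarily the exact normalization used in the paper; the function name `lift_to_unit_sphere` and the `radius` parameter (an assumed upper bound on input norms) are hypothetical.

```python
import math

def lift_to_unit_sphere(x, radius):
    """Map a vector x in R^d onto a unit-length vector in R^(d+1).

    Illustrative scheme (an assumption, not the paper's exact method):
    scale x by a bound `radius` on the input norms, then append one
    extra coordinate chosen so the lifted vector has unit length.
    """
    scaled = [xi / radius for xi in x]          # now ||scaled|| <= 1
    sq_norm = sum(s * s for s in scaled)
    if sq_norm > 1.0:
        raise ValueError("radius must bound the norm of x")
    # The appended coordinate restores unit Euclidean norm.
    return scaled + [math.sqrt(1.0 - sq_norm)]

z = lift_to_unit_sphere([3.0, 4.0], radius=10.0)
# z lies on the unit sphere in R^3
```

Because all lifted inputs lie on a unit sphere, the inner product between an input and a prototype-derived weight vector behaves like a distance measure on the sphere, which is what makes the connection to radial basis functions natural.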
Document type: Journal article

https://hal-utt.archives-ouvertes.fr/hal-02861437
Contributor: Jean-Baptiste Vu Van
Submitted on: Tuesday, June 9, 2020 - 7:28:45 AM
Last modification on: Wednesday, June 10, 2020 - 10:07:36 AM

Citation

Thierry Denoeux, Régis Lengellé. Initializing back propagation networks with prototypes. Neural Networks, Elsevier, 1993, 6 (3), pp.351-363. ⟨10.1016/0893-6080(93)90003-F⟩. ⟨hal-02861437⟩
