Journal article in Neural Networks, 1996

Training MLPs layer by layer using an objective function for internal representations

Abstract

A new constructive algorithm for designing and training multilayer perceptrons is proposed. This algorithm involves the optimization of an objective function for internal representations, which does not require any computation of the network's outputs. Coupled with a strategy for recruiting units during the learning process, this concept provides a scheme for training a multilayer network layer by layer, until self-encoding of the pattern categories is achieved in the final, highest-level representations. Two objective functions are proposed. For discrimination problems, recent experimental and theoretical results concerning back-propagation training of networks with one hidden layer and linear outputs suggest the introduction of a particular measure of class separability. For problems involving the approximation of a continuous function, we show that the minimization of the mean squared output error can be achieved by maximizing a statistical measure (the sample coefficient of multiple determination) in the last hidden layer. Simulations are used to illustrate the process of network construction, and to demonstrate the improvements brought by this approach over back-propagation in terms of performance and robustness.
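To make the second objective concrete, the following is a minimal numerical sketch, not taken from the paper: the activation matrix H of the last hidden layer, the target vector y, and all sizes are assumed for illustration. It checks the stated relationship that, when the output layer is linear and fit by least squares, the output mean squared error equals var(y) times (1 - R^2), where R^2 is the sample coefficient of multiple determination; maximizing R^2 in the last hidden layer therefore minimizes the achievable output error.

import numpy as np

# Illustrative sketch only; H (last-hidden-layer activations plus a bias
# column), y (targets), and all sizes are assumptions, not the paper's data.
rng = np.random.default_rng(0)
n_patterns, n_hidden = 200, 5
H = np.hstack([np.tanh(rng.normal(size=(n_patterns, n_hidden))),
               np.ones((n_patterns, 1))])          # hidden activations + bias
y = rng.normal(size=n_patterns)                    # continuous target

# Optimal linear output weights by ordinary least squares.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ w

mse = np.mean((y - y_hat) ** 2)                    # output mean squared error
r2 = np.corrcoef(y, y_hat)[0, 1] ** 2              # sample coefficient of multiple determination

# With a linear, least-squares output layer: MSE = var(y) * (1 - R^2),
# so maximizing R^2 in the last hidden layer minimizes the reachable MSE.
assert np.isclose(mse, np.var(y) * (1.0 - r2))
print(f"MSE = {mse:.4f}, R^2 = {r2:.4f}")

In the paper's constructive setting the hidden activations are the quantities being adapted; here they are fixed random features purely to exhibit the identity between the two criteria.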

Dates and versions

hal-02861431, version 1 (09-06-2020)

Identifiers

Cite

Régis Lengellé, Thierry Denoeux. Training MLPs layer by layer using an objective function for internal representations. Neural Networks, 1996, 9 (1), pp.83-97. ⟨10.1016/0893-6080(95)00096-8⟩. ⟨hal-02861431⟩
