
Training MLPs layer by layer using an objective function for internal representations

Abstract: A new constructive algorithm for designing and training multilayer perceptrons is proposed. This algorithm involves the optimization of an objective function for internal representations, which does not require any computation of the network's outputs. Coupled with a strategy for recruiting units during the learning process, this concept provides a scheme for training a multilayer network layer by layer, until self-encoding of the pattern categories is achieved in the final, highest-level representations. Two objective functions are proposed. For discrimination problems, recent experimental and theoretical results concerning back-propagation training of networks with one hidden layer and linear outputs suggest the introduction of a particular measure of class separability. For problems involving the approximation of a continuous function, we show that the minimization of the mean squared output error can be achieved by maximizing a statistical measure (the sample coefficient of multiple determination) in the last hidden layer. Simulations are used to illustrate the process of network construction, and to demonstrate the improvements brought by this approach over back-propagation in terms of performance and robustness.
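The function-approximation objective described above rests on a standard statistical fact: for a linear output layer, the smallest achievable mean squared error on the training set is determined by how linearly predictable the targets are from the last hidden layer's activations, which is exactly what the sample coefficient of multiple determination (R²) measures. The sketch below (a minimal NumPy illustration, not the paper's code; all names and the toy data are assumptions) computes R² for targets regressed on hidden activations:

```python
import numpy as np

def multiple_determination(H, y):
    """Sample coefficient of multiple determination (R^2) of targets y
    regressed linearly on hidden-layer activations H (n_samples x n_units)."""
    # Augment activations with a bias column and fit the best linear output layer.
    X = np.hstack([H, np.ones((H.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = np.sum((y - X @ w) ** 2)        # residual error of the optimal linear readout
    sst = np.sum((y - y.mean()) ** 2)     # total variation of the targets
    return 1.0 - sse / sst

# Toy illustration (hypothetical data): the more linearly predictable y is
# from the hidden representation H, the higher R^2, and the lower the MSE
# attainable by a linear output layer on top of H.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 5))
y = H @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
r2 = multiple_determination(H, y)
```

Maximizing this quantity over the hidden-layer parameters, rather than back-propagating an output error, is the idea the abstract refers to: the output weights never need to be computed during hidden-layer training, since R² already summarizes the best linear fit.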
Document type: Journal articles
Contributor: Jean-Baptiste VU VAN
Submitted on: Tuesday, June 9, 2020 - 7:10:06 AM
Last modification on: Tuesday, November 16, 2021 - 4:30:07 AM

Régis Lengellé, Thierry Denoeux. Training MLPs layer by layer using an objective function for internal representations. Neural Networks, Elsevier, 1996, 9 (1), pp.83-97. ⟨10.1016/0893-6080(95)00096-8⟩. ⟨hal-02861431⟩


