QLTL: a Simple yet Efficient Algorithm for Semi-Supervised Transfer Learning

Abstract: Most machine learning techniques rely on the assumption that training and target data share a similar underlying distribution. When this assumption is violated, they usually fail to generalise; this is one of the situations tackled by transfer learning: achieving good classification performance on different-but-related datasets. In this paper, we consider the specific case where the task is unique and where the training set(s) and the target set share a similar-but-different underlying distribution. Our method, QLTL (Quadratic Loss Transfer Learning), is a semi-supervised learning approach: we train a set of classifiers on the available training data to incorporate knowledge, and we use a centred kernel polarisation criterion to correct the probability density function shift between training and target data. Our method results in a convex problem, leading to an analytic solution. We show encouraging results on a toy example with covariate shift, and good performance on a text-document classification task relative to recent algorithms.
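
To make the criterion mentioned in the abstract concrete, below is a minimal sketch of a centred kernel polarisation (alignment) score between a kernel matrix and the ideal kernel built from labels. The Gaussian kernel, the binary labels, and all function names are illustrative assumptions for this sketch and are not taken from the paper.

    # Minimal sketch of a centred kernel polarisation (alignment) criterion.
    # Assumptions (not from the paper): Gaussian kernel, labels in {-1, +1}.
    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Gaussian (RBF) kernel matrix between rows of X and rows of Y.
        sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        return np.exp(-sq_dists / (2 * sigma**2))

    def centred_kernel_polarisation(K, y):
        # Centred alignment between kernel matrix K and the ideal kernel y y^T.
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
        Kc = H @ K @ H                               # centred data kernel
        Yc = H @ np.outer(y, y) @ H                  # centred ideal (label) kernel
        return np.sum(Kc * Yc) / (np.linalg.norm(Kc, 'fro') * np.linalg.norm(Yc, 'fro'))

    # Usage sketch: evaluate the criterion on labelled source data; in practice it
    # would be maximised over kernel parameters.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))
    y = np.sign(X[:, 0])
    K = gaussian_kernel(X, X, sigma=1.0)
    print(centred_kernel_polarisation(K, y))
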
Document type: Conference papers

https://hal-utt.archives-ouvertes.fr/hal-02291405
Contributor: Jean-Baptiste Vu Van
Submitted on: Wednesday, September 18, 2019 - 4:52:28 PM
Last modification on: Friday, October 18, 2019 - 1:26:09 AM

Identifiers

  • HAL Id: hal-02291405, version 1

Citation

Bruno Muller, Régis Lengellé. QLTL: a Simple yet Efficient Algorithm for Semi-Supervised Transfer Learning. 8th International Conference on Pattern Recognition Applications and Methods, Feb 2019, Prague, Czech Republic. ⟨hal-02291405⟩
