Conference paper, 2023

Unbiased Supervised Contrastive Learning

Abstract

Many datasets are biased: they contain easy-to-learn features that are highly correlated with the target class in the dataset, but not in the true underlying distribution of the data. For this reason, learning unbiased models from biased data has become a highly relevant research topic in recent years. In this work, we tackle the problem of learning representations that are robust to biases. We first present a margin-based theoretical framework that clarifies why recent contrastive losses (InfoNCE, SupCon, etc.) can fail when dealing with biased data. Based on this framework, we derive a novel formulation of the supervised contrastive loss (ϵ-SupInfoNCE) that provides more accurate control of the minimal distance between positive and negative samples. Furthermore, thanks to our theoretical framework, we also propose FairKL, a new debiasing regularization loss that works well even with extremely biased data. We validate the proposed losses on standard vision datasets, including CIFAR10, CIFAR100, and ImageNet, and we assess the debiasing capability of FairKL combined with ϵ-SupInfoNCE, reaching state-of-the-art performance on a number of biased datasets, including real instances of biases "in the wild".
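The abstract does not reproduce the loss itself; as a rough illustration of the margin idea it describes (a minimum gap ϵ between anchor-positive and anchor-negative similarities), the sketch below implements a generic ϵ-margin supervised contrastive objective in PyTorch. The function name eps_sup_infonce, the temperature t, and the cosine-similarity logits are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def eps_sup_infonce(z, labels, eps=0.1, t=0.1):
    """Illustrative epsilon-margin supervised contrastive loss (sketch).

    z:      (B, D) embeddings from an encoder / projection head (assumed)
    labels: (B,) integer class labels
    eps:    margin; negatives only stop being penalized once
            sim(anchor, positive) exceeds sim(anchor, negative) by >= eps
    t:      temperature (assumed hyperparameter)
    """
    z = F.normalize(z, dim=1)                  # cosine similarities via dot products
    sim = (z @ z.t()) / t                      # (B, B) scaled similarity logits
    B = z.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye  # same class
    neg_mask = ~pos_mask & ~eye                                    # different class

    # Shift negative logits up by eps/t, i.e. use (s_neg + eps)/t:
    # a positive "wins" only if its similarity beats every negative by eps.
    neg_sum = ((sim + eps / t).exp() * neg_mask).sum(dim=1, keepdim=True)  # (B, 1)

    # -log( exp(s_pos) / (exp(s_pos) + sum_n exp(s_neg + eps)) ), per positive
    log_prob = sim - torch.log(sim.exp() + neg_sum)                # (B, B)
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```

With eps=0 this reduces to a plain supervised InfoNCE-style objective; a positive eps keeps the loss from vanishing until each anchor-positive similarity exceeds every anchor-negative similarity by at least the margin, which is the "more accurate control of the minimal distance" the abstract refers to. The FairKL debiasing regularizer is a separate loss and is not sketched here.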
Main file
ICLR_2023___Unbiased_Supervised_Contrastive_Learning.pdf (2.59 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03993128, version 1 (16-02-2023)

Identifiers

  • HAL Id: hal-03993128, version 1

Cite

Carlo Alberto Barbano, Benoit Dufumier, Enzo Tartaglione, Marco Grangetto, Pietro Gori. Unbiased Supervised Contrastive Learning. ICLR, May 2023, Kigali, Rwanda. ⟨hal-03993128⟩
