Preprint / Working paper, Year: 2023

Autoregressive GAN for Semantic Unconditional Head Motion Generation

Abstract

In this work, we address the task of unconditional head motion generation to animate still human faces in a low-dimensional semantic space from a single reference pose. Unlike traditional audio-conditioned talking head generation, which seldom puts emphasis on realistic head motions, we devise a GAN-based architecture that learns to synthesize rich head motion sequences over long durations while maintaining low error accumulation. In particular, the autoregressive generation of incremental outputs ensures smooth trajectories, while a multi-scale discriminator on input pairs drives generation toward better handling of high- and low-frequency signals and reduced mode collapse. We experimentally demonstrate the relevance of the proposed method and show its superiority compared to models that attained state-of-the-art performance on similar tasks.
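To make the autoregressive, incremental generation scheme described above concrete, here is a minimal hypothetical sketch (not the authors' released code) of a generator that predicts pose increments from a noise vector and a reference pose, and accumulates them into a smooth trajectory. The pose dimensionality, recurrent backbone, and all names (`IncrementalGenerator`, `POSE_DIM`, etc.) are illustrative assumptions.

```python
# Illustrative sketch only; dimensions and architecture are assumptions.
import torch
import torch.nn as nn

POSE_DIM = 6      # assumed: e.g. 3 rotation + 3 translation parameters
NOISE_DIM = 32
HIDDEN = 128

class IncrementalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(POSE_DIM + NOISE_DIM, HIDDEN)
        self.head = nn.Linear(HIDDEN, POSE_DIM)   # predicts a pose increment

    def forward(self, ref_pose, noise, steps):
        # ref_pose: (B, POSE_DIM), noise: (B, NOISE_DIM)
        h = torch.zeros(ref_pose.size(0), HIDDEN, device=ref_pose.device)
        pose = ref_pose
        trajectory = [pose]
        for _ in range(steps):
            h = self.rnn(torch.cat([pose, noise], dim=-1), h)
            delta = self.head(h)      # incremental output
            pose = pose + delta       # accumulation keeps the trajectory smooth
            trajectory.append(pose)
        return torch.stack(trajectory, dim=1)  # (B, steps + 1, POSE_DIM)

# Usage example
gen = IncrementalGenerator()
ref = torch.zeros(4, POSE_DIM)        # batch of 4 reference poses
z = torch.randn(4, NOISE_DIM)
seq = gen(ref, z, steps=100)
print(seq.shape)                      # torch.Size([4, 101, 6])
```

Predicting increments rather than absolute poses is what keeps consecutive frames close to one another; a discriminator operating on pairs of frames at multiple temporal scales would then be used to penalize both jittery (high-frequency) and overly static (low-frequency) trajectories, as described in the abstract.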

Dates and versions

hal-03833759 , version 1 (28-10-2022)
hal-03833759 , version 2 (13-04-2023)
hal-03833759 , version 3 (19-07-2023)

Identifiers

Cite

Louis Airale, Xavier Alameda-Pineda, Stéphane Lathuilière, Dominique Vaufreydaz. Autoregressive GAN for Semantic Unconditional Head Motion Generation. 2023. ⟨hal-03833759v2⟩