Generative adversarial networks for 2D-based CNN pose-invariant face recognition

Publisher: Springer Journals
Copyright: © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ISSN: 2192-6611
eISSN: 2192-662X
DOI: 10.1007/s13735-022-00249-2

Abstract

The computer vision community considers pose-invariant face recognition (PIFR) one of its most challenging applications. Many works have been devoted to improving face recognition performance on profile samples, focusing mainly on 2D- and 3D-based frontalization techniques that synthesize frontal views from profile ones. In the same context, we propose in this paper a new 2D PIFR technique based on Generative Adversarial Network (GAN) image translation. The GAN used is the paired Pix2Pix architecture, covering several generator and discriminator models that are comprehensively evaluated on a new benchmark proposed in this paper, referred to as the Combined-PIFR database, which is composed of four datasets providing profile images and their corresponding frontal ones. The paired architecture relies on computing the L1 distance between the generated image and its ground-truth counterpart (pairs); therefore, both the generator and discriminator architectures are paired. The Combined-PIFR database is partitioned under person-independent constraints to fairly evaluate both the frontalization and classification sub-systems of the proposed framework. Thanks to the GAN-based frontalization, the recorded results demonstrate an important improvement of 33.57% compared to the baseline.
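The abstract describes the paired Pix2Pix objective: an adversarial term plus an L1 reconstruction term between the generated frontal image and its ground-truth frontal pair. The sketch below illustrates such a combined generator loss in PyTorch; the module names (generator, discriminator), the channel-wise concatenation of input and output fed to the discriminator, and the weight lambda_l1 = 100 (the value used in the original Pix2Pix paper) are illustrative assumptions, not details taken from this article.

```python
# Minimal sketch of a Pix2Pix-style paired generator loss (PyTorch).
# generator, discriminator, profile, frontal_gt are assumed/illustrative names.
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()  # adversarial (cGAN) term
l1_criterion = nn.L1Loss()              # paired L1 reconstruction term
lambda_l1 = 100.0                       # L1 weight; 100 in the original Pix2Pix paper

def generator_loss(generator, discriminator, profile, frontal_gt):
    """Combined cGAN + L1 loss for one batch of (profile, frontal) pairs."""
    fake_frontal = generator(profile)
    # The conditional discriminator sees the input profile concatenated
    # with the candidate frontal along the channel dimension.
    pred_fake = discriminator(torch.cat([profile, fake_frontal], dim=1))
    adv_loss = adv_criterion(pred_fake, torch.ones_like(pred_fake))
    l1_loss = l1_criterion(fake_frontal, frontal_gt)
    return adv_loss + lambda_l1 * l1_loss, fake_frontal
```

The L1 term is what makes the setup "paired": it requires a ground-truth frontal image aligned with each profile input, which is exactly what the Combined-PIFR database is assembled to provide.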

Journal: International Journal of Multimedia Information Retrieval (Springer Journals)

Published: Sep 15, 2022

Keywords: Face recognition challenges; Pose invariant face recognition; GAN image translation; CNN face classification framework; Deep residual networks
