On the (Im)Practicality of Adversarial Perturbation for Image Privacy



Publisher: de Gruyter
Copyright: © 2021 Arezoo Rajabi et al., published by Sciendo
ISSN: 2299-0984
eISSN: 2299-0984
DOI: 10.2478/popets-2021-0006

Abstract

Image hosting platforms are a popular way to store and share images with family members and friends. However, such platforms typically have full access to images, raising privacy concerns. These concerns are further exacerbated by the advent of Convolutional Neural Networks (CNNs), which can be trained on available images to automatically detect and recognize faces with high accuracy.

Recently, adversarial perturbations have been proposed as a potential defense against automated recognition and classification of images by CNNs. In this paper, we explore the practicality of adversarial perturbation-based approaches as a privacy defense against automated face recognition. Specifically, we first identify practical requirements for such approaches and then propose two practical adversarial perturbation approaches: (i) learned universal ensemble perturbations (UEP), and (ii) k-randomized transparent image overlays (k-RTIO), a form of semantic adversarial perturbation. We demonstrate how users can generate effective transferable perturbations under realistic assumptions with modest effort.

We evaluate the proposed methods against state-of-the-art online and offline face recognition models, Clarifai.com and DeepFace, respectively. Our findings show that UEP and k-RTIO achieve more than 85% and 90% success, respectively, against face recognition models. Additionally, we explore potential countermeasures that classifiers can use to thwart the proposed defenses. In particular, we demonstrate one effective countermeasure against UEP.
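To make the k-RTIO idea concrete, below is a minimal, hypothetical sketch of randomized transparent image overlays: k overlay images are chosen with a keyed random generator and alpha-blended onto a face photo at low opacity, so the photo stays recognizable to humans while its CNN-visible features are disturbed. All function names, parameters (k, alpha, key), and file paths are illustrative assumptions; the abstract does not specify the authors' implementation.

```python
# Hypothetical sketch of a k-RTIO-style perturbation: blend k overlay
# images, chosen by a keyed RNG, onto a face image at low opacity.
# Names and parameters are illustrative, not the authors' implementation.
import random
from PIL import Image

def k_rtio_sketch(face_path, overlay_paths, k=3, alpha=0.25, key=42):
    """Blend k keyed-randomly chosen overlays onto the face image."""
    rng = random.Random(key)  # keyed RNG makes the overlay choice reproducible
    face = Image.open(face_path).convert("RGB")
    chosen = rng.sample(overlay_paths, k)
    out = face
    for path in chosen:
        overlay = Image.open(path).convert("RGB").resize(face.size)
        # A low-opacity blend keeps the image human-recognizable while
        # adding semantic structure intended to confuse a recognition model.
        out = Image.blend(out, overlay, alpha)
    return out

if __name__ == "__main__":
    protected = k_rtio_sketch(
        "face.jpg",
        ["ov1.jpg", "ov2.jpg", "ov3.jpg", "ov4.jpg"],
        k=3,
    )
    protected.save("face_protected.jpg")
```

A keyed RNG is used here on the assumption that reproducible overlay selection is useful to the image owner; the abstract itself only characterizes the overlays as semantic adversarial perturbations.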

Journal

Proceedings on Privacy Enhancing Technologies (de Gruyter)

Published: Jan 1, 2021
