FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping

Lingzhi Li1
Jianmin Bao2
Hao Yang2
Dong Chen2
Fang Wen2
1Peking University
2Microsoft Research

Dataset [GitHub]

CVPR 2020 (Oral) [Paper]


Our face swapping results on in-the-wild face images under various challenging conditions. All results are generated using a single well-trained two-stage model.


Abstract

In this work, we propose a novel two-stage face swapping algorithm, called FaceShifter, for high fidelity and occlusion aware face swapping. Unlike many existing face swapping works that leverage only limited information from the target image when synthesizing the swapped face, FaceShifter generates the swapped face with high fidelity by exploiting and integrating the target attributes thoroughly and adaptively. FaceShifter handles facial occlusions with a second synthesis stage consisting of a Heuristic Error Acknowledging Refinement Network (HEAR-Net), which is trained to recover anomaly regions in a self-supervised way without any manual annotations. Experiments show that FaceShifter achieves superior quality over existing methods, and that its results are hard for existing deepfake detection algorithms to flag; however, our recently developed Face X-Ray method [Li et al., CVPR 2020] can reliably detect forged images created by FaceShifter.
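The heuristic-error idea behind HEAR-Net can be sketched as follows: feed the target image into the first-stage network as both source and target; regions the network fails to reconstruct (e.g. occluders such as glasses or hands) show a large reconstruction error, and this error map is passed to the refinement stage as an extra input. A minimal NumPy illustration, where `aei_net` is a hypothetical stand-in for the trained stage-1 model (not the actual network):

```python
import numpy as np

def aei_net(source, target):
    # Hypothetical stand-in for the trained stage-1 network: it
    # reproduces the target except in an "occluded" patch that it
    # cannot model (here, the top-left 4x4 corner).
    out = target.copy()
    out[:4, :4] = 0.0
    return out

# Toy target image with an occluder in the top-left corner.
target = np.full((16, 16), 0.5)
target[:4, :4] = 1.0  # occluded region the network cannot reproduce

# Heuristic error: reconstruct the target from itself and measure
# where the reconstruction disagrees with the input.
recon = aei_net(target, target)
delta = np.abs(target - recon)

# Pixels with large error flag likely occlusions; the refinement
# network receives (swapped_face, delta) and learns to restore them.
occlusion_mask = delta > 0.1
print(occlusion_mask.sum())  # 16 pixels flagged in the 4x4 patch
```

This captures only the self-supervised signal: no manual occlusion labels are needed, because the network's own reconstruction failure marks the anomalous regions.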


Video



Paper


Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen.
FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping
In CVPR, 2020 (oral presentation). (Paper)
[Bibtex]



Acknowledgements

We'd like to thank Sicheng Xu, Yu Deng, and Jiaolong Yang for helpful advice and discussion. We are grateful to Jinpeng Lin for helping with the user study webpage. The source code of this webpage was borrowed from Peter Wang. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of Peking University or Microsoft Corporation.