Open Access
Visual Intelligence, Volume 2, Issue 1, Article 26

Face shape transfer via semantic warping

Zonglin Li¹, Xiaoqian Lv¹, Wei Yu¹, Qinglin Liu¹, Jingbo Lin², Shengping Zhang¹
Publication type: Journal Article
Publication date: 2024-09-03
CiteScore: 4.0
ISSN: 2731-9008, 2097-3330
Abstract

Face reshaping adjusts the shape of a face in a portrait image to make it aesthetically pleasing, and has many potential applications. Existing methods 1) operate on pre-defined facial landmarks, producing artifacts and distortions because the number of landmarks is limited; 2) synthesize new faces from segmentation masks or sketches, yielding unsatisfactory results because skin details are lost and hair and blurred backgrounds are difficult to handle; or 3) project the deformed feature points of a 3D face model onto the 2D image, producing unrealistic results because the feature points are misaligned. In this paper, we propose a novel method, face shape transfer (FST) via semantic warping, which can transfer both the overall face shape and individual components (e.g., eyes, nose, and mouth) of a reference image to a source image. To achieve controllability at the component level, we introduce five encoding networks, each learning a feature embedding specific to one face component. To effectively exploit features obtained from semantic parsing maps at different scales, we directly connect all layers within the global dense network; this dense connectivity maximizes information flow between layers and efficiently uses multi-scale semantic parsing information. To avoid deformation artifacts, we introduce a spatial transformer network that lets the model handle different types of semantic warping. To facilitate extensive evaluation, we construct a large-scale, high-resolution face dataset containing 14,000 images at 1024 × 1024 resolution. Qualitative and quantitative experiments on this benchmark demonstrate the superior performance of our method.
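The abstract describes two mechanisms that are easy to illustrate in code: dense connections that concatenate the outputs of all preceding layers, and a spatial transformer that warps an image through a differentiable sampling grid. The PyTorch sketch below is a minimal, hypothetical illustration, not the authors' implementation; the module names (DenseBlock, AffineSTN), the channel sizes, and the restriction to a single global affine warp are all assumptions made for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    # Dense connectivity: each layer receives the concatenation of the input
    # and all preceding layer outputs, maximizing information flow between layers.
    def __init__(self, in_ch, growth=16, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth  # the input width grows with every dense connection

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class AffineSTN(nn.Module):
    # Spatial transformer: regress warp parameters from features, then
    # resample the source image with a differentiable sampling grid.
    def __init__(self, feat_ch):
        super().__init__()
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, 6))
        # Start from the identity transform so early training applies no warp.
        nn.init.zeros_(self.loc[-1].weight)
        self.loc[-1].bias.data.copy_(
            torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def forward(self, feat, image):
        theta = self.loc(feat).view(-1, 2, 3)
        grid = F.affine_grid(theta, image.size(), align_corners=False)
        return F.grid_sample(image, grid, align_corners=False)

# Usage: dense features computed from a semantic parsing map drive the warp
# of the source portrait (19 parsing classes is an assumption).
parsing = torch.randn(1, 19, 256, 256)
source = torch.randn(1, 3, 256, 256)
dense = DenseBlock(in_ch=19)
stn = AffineSTN(feat_ch=19 + 16 * 4)
warped = stn(dense(parsing), source)  # -> (1, 3, 256, 256)

In this sketch a single affine transform warps the whole image; the paper's semantic warping is component-level (five encoders for eyes, nose, mouth, etc.) and multi-scale, which a full implementation would need to add.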


Top-30 Journals
  • Lecture Notes in Computer Science: 3 publications, 75%

Top-30 Publishers
  • Springer Nature: 3 publications, 75%
  • Association for Computing Machinery (ACM): 1 publication, 25%
  • We do not take into account publications without a DOI.
  • Statistics recalculated weekly.

Cite this

GOST
Li Z. et al. Face shape transfer via semantic warping // Visual Intelligence. 2024. Vol. 2. No. 1. 26
GOST (all authors)
Li Z., Lv X., Yu W., Liu Q., Lin J., Zhang S. Face shape transfer via semantic warping // Visual Intelligence. 2024. Vol. 2. No. 1. 26
RIS
TY - JOUR
DO - 10.1007/s44267-024-00058-7
UR - https://link.springer.com/10.1007/s44267-024-00058-7
TI - Face shape transfer via semantic warping
T2 - Visual Intelligence
AU - Li, Zonglin
AU - Lv, Xiaoqian
AU - Yu, Wei
AU - Liu, Qinglin
AU - Lin, Jingbo
AU - Zhang, Shengping
PY - 2024
DA - 2024/09/03
PB - Springer Nature
IS - 1
VL - 2
SN - 2731-9008
SN - 2097-3330
ER -
BibTeX
@article{2024_Li,
author = {Zonglin Li and Xiaoqian Lv and Wei Yu and Qinglin Liu and Jingbo Lin and Shengping Zhang},
title = {Face shape transfer via semantic warping},
journal = {Visual Intelligence},
year = {2024},
volume = {2},
publisher = {Springer Nature},
month = {sep},
url = {https://link.springer.com/10.1007/s44267-024-00058-7},
number = {1},
pages = {26},
doi = {10.1007/s44267-024-00058-7}
}