TY - JOUR
T1 - Leveraging Feature Extraction and Context Information for Image Relighting
AU - Fang, Chenrong
AU - Wang, Ju
AU - Chen, Kan
AU - Su, Ran
AU - Lai, Chi-Fu
AU - Sun, Qian
N1 - Publisher Copyright:
© 2023 by the authors.
PY - 2023/10
Y1 - 2023/10
N2 - Example-based image relighting aims to relight an input image so that it follows the lighting settings of a target example image. Deep learning-based methods for this task have become highly popular; however, they are often limited by geometric priors or suffer from poor shadow reconstruction and a lack of texture detail. In this paper, we propose an image-to-image translation network called DGATRN that tackles this problem by enhancing feature extraction and exploiting context information to achieve visually plausible example-based image relighting. Specifically, DGATRN consists of a scene extraction network, a shadow calibration network, and a rendering network, and our key contributions lie in the first two. We propose an up- and downsampling approach that improves feature extraction and better captures scene and texture details. We also introduce a feature attention downsampling block and a knowledge transfer scheme that exploit attention effects and the underlying connection between scene and shadow information. Experiments were conducted to evaluate the usefulness and effectiveness of the proposed method.
AB - Example-based image relighting aims to relight an input image so that it follows the lighting settings of a target example image. Deep learning-based methods for this task have become highly popular; however, they are often limited by geometric priors or suffer from poor shadow reconstruction and a lack of texture detail. In this paper, we propose an image-to-image translation network called DGATRN that tackles this problem by enhancing feature extraction and exploiting context information to achieve visually plausible example-based image relighting. Specifically, DGATRN consists of a scene extraction network, a shadow calibration network, and a rendering network, and our key contributions lie in the first two. We propose an up- and downsampling approach that improves feature extraction and better captures scene and texture details. We also introduce a feature attention downsampling block and a knowledge transfer scheme that exploit attention effects and the underlying connection between scene and shadow information. Experiments were conducted to evaluate the usefulness and effectiveness of the proposed method.
KW - attention
KW - image relighting
KW - knowledge transfer
KW - neural network
KW - upsampling and downsampling
UR - http://www.scopus.com/inward/record.url?scp=85175201082&partnerID=8YFLogxK
U2 - 10.3390/electronics12204301
DO - 10.3390/electronics12204301
M3 - Article
VL - 12
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 20
M1 - 4301
ER -