Leveraging Feature Extraction and Context Information for Image Relighting

Chenrong Fang, Ju Wang, Kan Chen, Ran Su, Chi-Fu Lai, Qian Sun

Research output: Contribution to journal › Article › peer-review

Abstract

Example-based image relighting aims to relight an input image to follow the lighting settings of another target example image. Deep learning-based methods for such tasks have become highly popular. However, they are often limited by geometric priors or suffer from poor shadow reconstruction and a lack of texture details. In this paper, we propose an image-to-image translation network called DGATRN to tackle this problem by enhancing feature extraction and unveiling context information to achieve visually plausible example-based image relighting. Specifically, the proposed DGATRN consists of a scene extraction network, a shadow calibration network, and a rendering network, and our key contribution lies in the first two networks. We propose an up- and downsampling approach to improve the feature extraction capability and better capture scene and texture details. We also introduce a feature attention downsampling block and a knowledge transfer mechanism to exploit the attention impact and the underlying knowledge connection between scene and shadow. Experiments were conducted to evaluate the usefulness and effectiveness of the proposed method.
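The abstract does not specify the internals of the feature attention downsampling block. As a rough, hypothetical illustration of the general idea it names (channel attention gating followed by spatial downsampling), a minimal NumPy sketch might look like the following; the function name, weight shapes, and squeeze-and-excitation-style gating are assumptions, not the paper's actual design:

```python
import numpy as np

def feature_attention_downsample(x, w1, w2):
    """Hypothetical sketch: gate channels by attention, then downsample 2x.

    x  : (C, H, W) feature map, H and W even
    w1 : (R, C) weights reducing C channels to R hidden units
    w2 : (C, R) weights expanding back to C attention scores
    """
    # Channel attention: global average pool -> ReLU -> sigmoid gate
    squeeze = x.mean(axis=(1, 2))                 # (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)        # ReLU
    attn = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid, (C,)
    gated = x * attn[:, None, None]               # reweight each channel

    # 2x2 average-pool downsampling of the gated features
    c, h, w = gated.shape
    return gated.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

x = np.ones((4, 8, 8))
out = feature_attention_downsample(x, np.zeros((2, 4)), np.zeros((4, 2)))
print(out.shape)  # (4, 4, 4)
```

With zero attention weights the sigmoid gate is 0.5 everywhere, so a constant input of 1.0 yields a constant 0.5 output at half the spatial resolution, which makes the gating and pooling easy to check by hand.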

Original language: English
Article number: 4301
Journal: Electronics (Switzerland)
Volume: 12
Issue number: 20
Publication status: Published - Oct 2023

Keywords

  • attention
  • image relighting
  • knowledge transfer
  • neural network
  • upsampling and downsampling

