TY - GEN
T1 - Learning to See in the Dark with Events
AU - Zhang, Song
AU - Zhang, Yu
AU - Jiang, Zhe
AU - Zou, Dongqing
AU - Ren, Jimmy
AU - Zhou, Bin
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Imaging in dark environments is important for many real-world applications such as video surveillance. Recently, the development of event cameras has raised promising directions for solving this task thanks to their high dynamic range (HDR) and low computational requirements. However, such cameras record sparse, asynchronous intensity changes of the scene (called events) instead of canonical images. In this paper, we propose learning to see in the dark by translating HDR events captured in low light into canonical sharp images as if captured in daylight. Since it is extremely challenging to collect paired event-image training data, a novel unsupervised domain adaptation network is proposed that explicitly separates domain-invariant features (e.g., scene structures) from domain-specific ones (e.g., detailed textures) to ease representation learning. A detail-enhancing branch is proposed to reconstruct daylight-specific features from the domain-invariant representations in a residual manner, regularized by a ranking loss. To evaluate the proposed approach, a novel large-scale dataset is captured with a DAVIS240C camera, containing both daylight and low-light events with corresponding intensity images. Experiments on this dataset show that the proposed domain adaptation approach outperforms various state-of-the-art architectures.
AB - Imaging in dark environments is important for many real-world applications such as video surveillance. Recently, the development of event cameras has raised promising directions for solving this task thanks to their high dynamic range (HDR) and low computational requirements. However, such cameras record sparse, asynchronous intensity changes of the scene (called events) instead of canonical images. In this paper, we propose learning to see in the dark by translating HDR events captured in low light into canonical sharp images as if captured in daylight. Since it is extremely challenging to collect paired event-image training data, a novel unsupervised domain adaptation network is proposed that explicitly separates domain-invariant features (e.g., scene structures) from domain-specific ones (e.g., detailed textures) to ease representation learning. A detail-enhancing branch is proposed to reconstruct daylight-specific features from the domain-invariant representations in a residual manner, regularized by a ranking loss. To evaluate the proposed approach, a novel large-scale dataset is captured with a DAVIS240C camera, containing both daylight and low-light events with corresponding intensity images. Experiments on this dataset show that the proposed domain adaptation approach outperforms various state-of-the-art architectures.
KW - Domain adaptation
KW - Event camera
KW - Image reconstruction
KW - Low light imaging
UR - https://www.scopus.com/pages/publications/85097832966
U2 - 10.1007/978-3-030-58523-5_39
DO - 10.1007/978-3-030-58523-5_39
M3 - Conference contribution
AN - SCOPUS:85097832966
SN - 9783030585228
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 666
EP - 682
BT - Computer Vision – ECCV 2020 – 16th European Conference, 2020, Proceedings
A2 - Vedaldi, Andrea
A2 - Bischof, Horst
A2 - Brox, Thomas
A2 - Frahm, Jan-Michael
T2 - 16th European Conference on Computer Vision, ECCV 2020
Y2 - 23 August 2020 through 28 August 2020
ER -