TY - JOUR
T1 - PointSee: Image Enhances Point Cloud
T2 - IEEE Transactions on Visualization and Computer Graphics
AU - Gu, Lipeng
AU - Yan, Xuefeng
AU - Cui, Peng
AU - Gong, Lina
AU - Xie, Haoran
AU - Wang, Fu Lee
AU - Qin, Jing
AU - Wei, Mingqiang
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - There is a prevailing trend towards fusing multi-modal information for 3D object detection (3OD). However, challenges related to computational efficiency, plug-and-play capability, and accurate feature alignment have not been adequately addressed in the design of multi-modal fusion networks. In this paper, we present PointSee, a lightweight, flexible, and effective multi-modal fusion solution that facilitates various 3OD networks by semantically enhancing point clouds (e.g., LiDAR or RGB-D data) with their accompanying scene images. Beyond the existing wisdom of 3OD, PointSee consists of a hidden module (HM) and a seen module (SM): HM decorates point clouds with 2D image information in an offline fusion manner, requiring minimal or even no adaptation of existing 3OD networks; SM further enriches the point clouds with point-wise representative semantic features, improving the performance of existing 3OD networks. In addition to the new architecture of PointSee, we propose a simple yet efficient training strategy to mitigate potentially inaccurate regressions from 2D object detection networks. Extensive experiments on popular outdoor/indoor benchmarks show quantitative and qualitative improvements of PointSee over thirty-five state-of-the-art methods.
AB - There is a prevailing trend towards fusing multi-modal information for 3D object detection (3OD). However, challenges related to computational efficiency, plug-and-play capability, and accurate feature alignment have not been adequately addressed in the design of multi-modal fusion networks. In this paper, we present PointSee, a lightweight, flexible, and effective multi-modal fusion solution that facilitates various 3OD networks by semantically enhancing point clouds (e.g., LiDAR or RGB-D data) with their accompanying scene images. Beyond the existing wisdom of 3OD, PointSee consists of a hidden module (HM) and a seen module (SM): HM decorates point clouds with 2D image information in an offline fusion manner, requiring minimal or even no adaptation of existing 3OD networks; SM further enriches the point clouds with point-wise representative semantic features, improving the performance of existing 3OD networks. In addition to the new architecture of PointSee, we propose a simple yet efficient training strategy to mitigate potentially inaccurate regressions from 2D object detection networks. Extensive experiments on popular outdoor/indoor benchmarks show quantitative and qualitative improvements of PointSee over thirty-five state-of-the-art methods.
KW - 3D object detection
KW - PointSee
KW - feature enhancement
KW - multi-modal fusion
UR - http://www.scopus.com/inward/record.url?scp=85177038603&partnerID=8YFLogxK
U2 - 10.1109/TVCG.2023.3331779
DO - 10.1109/TVCG.2023.3331779
M3 - Article
C2 - 37948146
AN - SCOPUS:85177038603
SN - 1077-2626
VL - 30
SP - 6291
EP - 6308
JO - IEEE Transactions on Visualization and Computer Graphics
JF - IEEE Transactions on Visualization and Computer Graphics
IS - 9
ER -