PointSee: Image Enhances Point Cloud

Lipeng Gu, Xuefeng Yan, Peng Cui, Lina Gong, Haoran Xie, Fu Lee Wang, Jing Qin, Mingqiang Wei

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

There is a prevailing trend towards fusing multi-modal information for 3D object detection (3OD). However, challenges related to computational efficiency, plug-and-play capability, and accurate feature alignment have not been adequately addressed in the design of multi-modal fusion networks. In this paper, we present PointSee, a lightweight, flexible, and effective multi-modal fusion solution that facilitates various 3OD networks through semantic feature enhancement of point clouds (e.g., LiDAR or RGB-D data) assembled with scene images. Beyond the existing wisdom of 3OD, PointSee consists of a hidden module (HM) and a seen module (SM): HM decorates point clouds with 2D image information in an offline fusion manner, requiring minimal or even no adaptation of existing 3OD networks; SM further enriches the point clouds by acquiring point-wise representative semantic features, improving the performance of existing 3OD networks. Besides the new architecture of PointSee, we propose a simple yet efficient training strategy to mitigate potentially inaccurate regressions from 2D object detection networks. Extensive experiments on popular outdoor/indoor benchmarks show quantitative and qualitative improvements of PointSee over thirty-five state-of-the-art methods.
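The hidden module described in the abstract decorates point clouds with 2D image information in an offline step. A minimal sketch of what such point decoration can look like is given below, assuming KITTI-style calibration matrices and a per-pixel semantic score map from a 2D network; the function name, argument shapes, and nearest-pixel lookup are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of "decorating" LiDAR points with image semantics,
# in the spirit of an offline point-cloud/image fusion step; names and
# shapes are illustrative assumptions, not the authors' implementation.
import numpy as np

def decorate_points(points, seg_scores, P, Tr_velo_to_cam):
    """Append per-pixel semantic scores to each LiDAR point.

    points:          (N, 3) LiDAR xyz coordinates.
    seg_scores:      (H, W, C) per-pixel semantic scores from a 2D network.
    P:               (3, 4) camera projection matrix.
    Tr_velo_to_cam:  (4, 4) LiDAR-to-camera extrinsics.
    Returns (N, 3 + C) decorated points; points outside the image get zeros.
    """
    N = points.shape[0]
    H, W, C = seg_scores.shape

    # Homogeneous LiDAR coordinates -> camera frame -> image plane.
    pts_h = np.hstack([points, np.ones((N, 1))])        # (N, 4)
    cam = (Tr_velo_to_cam @ pts_h.T).T                   # (N, 4)
    uvw = (P @ cam.T).T                                  # (N, 3)
    u = uvw[:, 0] / np.clip(uvw[:, 2], 1e-6, None)
    v = uvw[:, 1] / np.clip(uvw[:, 2], 1e-6, None)

    # Keep only points in front of the camera that project inside the image.
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    sem = np.zeros((N, C), dtype=seg_scores.dtype)
    ui = u[valid].astype(int)
    vi = v[valid].astype(int)
    sem[valid] = seg_scores[vi, ui]                      # nearest-pixel lookup

    return np.hstack([points, sem])                      # (N, 3 + C)
```

Because the decorated points simply carry extra feature channels, an existing 3OD network can consume them with little or no architectural change, which is the plug-and-play property the abstract emphasizes.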

Original language: English
Pages (from-to): 6291-6308
Number of pages: 18
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 30
Issue number: 9
DOIs
Publication status: Published - 2024

Keywords

  • 3D object detection
  • PointSee
  • feature enhancement
  • multi-modal fusion
