A Diffusion-Based Framework for Occluded Object Movement

Zheng-Peng Duan, Jiawei Zhang, Siyu Liu, Zheng Lin, Chun-Le Guo, Dongqing Zou, Jimmy Ren, Chongyi Li

Research output: Contribution to journal › Conference article › peer-review

Abstract

Seamlessly moving objects within a scene is a common requirement for image editing, yet it remains a challenge for existing editing methods. Real-world images raise the difficulty further because of occlusion: the occluded portion of an object must be completed before the object can be moved. To leverage the real-world knowledge embedded in pre-trained diffusion models, we propose a Diffusion-based framework specifically designed for Occluded Object Movement, named DiffOOM. DiffOOM consists of two parallel branches that perform object de-occlusion and movement simultaneously. The de-occlusion branch uses a background color-fill strategy and a continuously updated object mask to focus the diffusion process on completing the occluded portion of the target object. Concurrently, the movement branch employs latent optimization to place the completed object at the target location and adopts local text-conditioned guidance to integrate the object into its new surroundings. Extensive evaluations demonstrate the superior performance of our method, which is further validated by a comprehensive user study.
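To make the two-branch design described above concrete, the following is a minimal, self-contained sketch of the control flow the abstract outlines. It is an illustration under stated assumptions, not the authors' implementation: stub_denoise stands in for a real pre-trained diffusion denoiser, the continuous mask re-estimation and the local text-conditioned guidance are elided, and every helper name (fill_background, shift_object) is hypothetical.

# Toy sketch of the parallel de-occlusion / movement loop.
# All names and the 50-step schedule are illustrative assumptions.
import torch
import torch.nn.functional as F

def stub_denoise(latent: torch.Tensor, t: int) -> torch.Tensor:
    """Placeholder for one step of a pre-trained diffusion denoiser."""
    return latent * 0.98  # stand-in: slowly attenuate the noise

def fill_background(latent, mask, fill_value=0.0):
    """Background color-fill: keep the object region, flatten the rest."""
    return latent * mask + fill_value * (1.0 - mask)

def shift_object(latent, mask, dy, dx):
    """Paste the (completed) object region at the target offset."""
    moved = torch.roll(latent * mask, shifts=(dy, dx), dims=(-2, -1))
    moved_mask = torch.roll(mask, shifts=(dy, dx), dims=(-2, -1))
    return moved, moved_mask

torch.manual_seed(0)
C, H, W = 4, 32, 32
noise = torch.randn(1, C, H, W)          # shared initial latent noise
mask = torch.zeros(1, 1, H, W)
mask[..., 8:20, 8:20] = 1.0              # visible-object mask (toy example)

deocc, move = noise.clone(), noise.clone()
for t in reversed(range(50)):            # toy denoising schedule
    # De-occlusion branch: the color-filled background keeps the
    # diffusion process focused on completing the object region.
    # (The paper also re-estimates the mask each step; fixed here.)
    deocc = fill_background(stub_denoise(deocc, t), mask)

    # Movement branch: one denoise step, then a latent-optimization
    # step pulling the latent toward the object pasted at its target.
    moved, moved_mask = shift_object(deocc, mask, dy=4, dx=6)
    pred = stub_denoise(move, t).detach().requires_grad_(True)
    loss = F.mse_loss(pred * moved_mask, moved * moved_mask)
    loss.backward()
    move = (pred - 0.1 * pred.grad).detach()
    # Local text-conditioned guidance around the new location would be
    # applied here; omitted because it needs a real text encoder.

print("final movement-branch latent:", tuple(move.shape))

The key design point the sketch mirrors is that the two branches run in lockstep: each movement step consumes the de-occlusion branch's current estimate of the completed object, so the object is inpainted and relocated simultaneously rather than in two separate passes.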

Original language: English
Pages (from-to): 2816-2824
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 3
DOIs
Publication status: Published - 11 Apr 2025
Externally published: Yes
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
