FireEdit: Fine-grained Instruction-based Image Editing via Region-aware Vision Language Model

1. Shenzhen Campus of Sun Yat-sen University, 2. Hunyuan Tencent, 3. Tsinghua University, 4. HKUST

Our framework leverages a vision language model (VLM) to guide instruction-based image editing. Its primary innovation is the introduction of region tokens, which enable the VLM to accurately identify the edited objects or areas in complex scenes while preserving high-frequency details in regions not intended for editing during image decoding. A minimal sketch of how such tokens might enter the VLM follows below.
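For intuition only, the sketch below shows one plausible way region tokens could be realized: learnable placeholder embeddings appended to the VLM's multimodal input sequence, which the backbone grounds on the relevant image patches. The class name, token count, and concatenation order are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class RegionTokenEmbedder(nn.Module):
    """Hypothetical sketch: append learnable region tokens to the VLM input.

    The VLM's self-attention can ground these extra slots on the image
    patches that the instruction refers to; their final hidden states then
    serve as region-aware conditions for the diffusion decoder.
    """
    def __init__(self, num_region_tokens: int = 4, hidden_dim: int = 4096):
        super().__init__()
        self.region_tokens = nn.Parameter(
            torch.randn(num_region_tokens, hidden_dim) * 0.02
        )

    def forward(self, image_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # image_tokens: (B, P, D) visual patch features
        # text_tokens:  (B, L, D) embedded editing instruction
        b = image_tokens.size(0)
        regions = self.region_tokens.unsqueeze(0).expand(b, -1, -1)
        # Multimodal sequence fed to the LLM backbone of the VLM.
        return torch.cat([image_tokens, text_tokens, regions], dim=1)
```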

Abstract

Currently, instruction-based image editing methods have made significant progress by leveraging the powerful cross-modal understanding capabilities of vision language models (VLMs). However, they still face challenges in three key areas: 1) complex scenarios; 2) semantic consistency; and 3) fine-grained editing. To address these issues, we propose FireEdit, an innovative Fine-grained Instruction-based image editing framework that exploits a REgion-aware VLM. FireEdit is designed to accurately comprehend user instructions and ensure effective control over the editing process. Specifically, we enhance the fine-grained visual perception capabilities of the VLM by introducing additional region tokens. However, relying solely on the output of the LLM to guide the diffusion model may lead to suboptimal editing results. Therefore, we propose a Time-Aware Target Injection module and a Hybrid Visual Cross-Attention module. The former dynamically adjusts the guidance strength at various denoising stages by integrating timestep embeddings with the text embeddings. The latter enhances visual details for image editing, thereby preserving semantic consistency between the edited result and the source image. By combining the VLM enhanced with fine-grained region tokens and the time-dependent diffusion model, FireEdit demonstrates significant advantages in comprehending editing instructions and maintaining high semantic consistency. Extensive experiments indicate that our approach surpasses state-of-the-art instruction-based image editing methods.
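To make the Time-Aware Target Injection idea concrete, here is a minimal PyTorch sketch under our own assumptions (the module name, dimensions, and FiLM-style gating are hypothetical, not the paper's released code): a sinusoidal timestep embedding is projected and used to rescale the target text embeddings, so the strength of textual guidance varies across denoising stages.

```python
import math
import torch
import torch.nn as nn

def timestep_embedding(t: torch.Tensor, dim: int) -> torch.Tensor:
    """Standard sinusoidal timestep embedding used by diffusion models."""
    half = dim // 2
    freqs = torch.exp(
        -math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half
    ).to(t.device)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

class TimeAwareTargetInjection(nn.Module):
    """Hypothetical sketch: modulate text-token guidance by denoising step."""

    def __init__(self, text_dim: int = 768, time_dim: int = 256):
        super().__init__()
        self.time_dim = time_dim
        self.time_proj = nn.Sequential(
            nn.Linear(time_dim, text_dim),
            nn.SiLU(),
            nn.Linear(text_dim, 2 * text_dim),
        )

    def forward(self, text_tokens: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # text_tokens: (B, L, text_dim); t: (B,) integer denoising timesteps.
        emb = timestep_embedding(t, self.time_dim)           # (B, time_dim)
        scale, shift = self.time_proj(emb).chunk(2, dim=-1)  # 2 x (B, text_dim)
        # FiLM-style gating: the timestep decides how strongly the target
        # tokens steer the denoiser at this stage.
        return text_tokens * (1 + scale[:, None, :]) + shift[:, None, :]
```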

Framework


The overall framework of our proposed FireEdit.

The core of FireEdit is region-aware fusion of multi-modal tokens, which enhances the VLM and facilitates fine-grained, localized alignment between editing instructions and images. It also introduces a Hybrid Visual Cross-Attention module to better preserve image details and a Time-Aware Target Injection module to edit targets adaptively; a sketch of the cross-attention idea follows below.
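As a rough illustration of the Hybrid Visual Cross-Attention idea (names, shapes, and the single-layer design below are our assumptions, not the released code), the denoiser's latent queries can attend jointly over the VLM's semantic tokens and fine-grained patch features of the source image, so low-level detail stays available alongside the edit semantics:

```python
import torch
import torch.nn as nn

class HybridVisualCrossAttention(nn.Module):
    """Hypothetical sketch: one cross-attention over concatenated conditions.

    Keys/values mix (a) VLM output tokens carrying the instruction and region
    tokens, and (b) patch features of the source image, letting the decoder
    copy high-frequency detail in regions the instruction leaves untouched.
    """
    def __init__(self, latent_dim: int = 320, cond_dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            latent_dim, heads, kdim=cond_dim, vdim=cond_dim, batch_first=True
        )
        self.norm = nn.LayerNorm(latent_dim)

    def forward(self, latents, vlm_tokens, image_feats):
        # latents:     (B, N, latent_dim) UNet latent queries
        # vlm_tokens:  (B, L, cond_dim) instruction + region tokens from the VLM
        # image_feats: (B, P, cond_dim) fine-grained source-image patch features
        cond = torch.cat([vlm_tokens, image_feats], dim=1)  # hybrid key/value set
        out, _ = self.attn(self.norm(latents), cond, cond)
        return latents + out  # residual connection
```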

Comparison


Qualitative comparison. We compare the editing performance of FireEdit with SOTA methods on the Emu Edit test set. Each editing instruction is written below its row of images. Compared with other SOTA methods, our approach more accurately localizes the edited objects or regions and better preserves the details of the input image.

Ablation Studies


Ablation studies for components in our method.