SGEdit: Bridging LLM with Text2Image Generative Model for Scene Graph-based Image Editing

City University of Hong Kong, Microsoft GenAI
Teaser: examples of scene graph-based edits performed by our framework, including changing relationships and replacing, adding, or removing elements.

Abstract

Scene graphs offer a structured, hierarchical representation of images, with nodes and edges symbolizing objects and the relationships among them. This representation can serve as a natural interface for image editing, dramatically improving editing precision and flexibility.

Leveraging this benefit, we introduce a new framework that integrates a large language model (LLM) with a Text2Image generative model for scene graph-based image editing. This integration enables precise modifications at the object level and creative recomposition of scenes without compromising overall image integrity. Our approach involves two primary stages:

  1. Using an LLM-driven scene parser, we construct the image's scene graph, capturing key objects and their interrelationships, and parse fine-grained attributes such as object masks and descriptions. These annotations enable concept learning with a fine-tuned diffusion model, representing each object with an optimized token and a detailed description prompt.
  2. During the image editing phase, an LLM editing controller directs the edits toward specific areas. These edits are then carried out by an attention-modulated diffusion editor, which uses the fine-tuned model to perform object additions, deletions, replacements, and adjustments (a minimal sketch of this pipeline follows the list).
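
As a rough illustration of the data flow between these two stages, here is a minimal Python sketch. Everything in it is hypothetical: SceneObject, ParsedScene, and the four stubbed functions are placeholder names standing in for the LLM scene parser, the diffusion fine-tuning, the editing controller, and the attention-modulated editor, not the actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    node_id: str      # graph node id, e.g. "dog_0" (hypothetical naming)
    token: str        # optimized token bound to this concept, e.g. "<c0>"
    description: str  # fine-grained description parsed by the LLM
    mask: object      # binary segmentation mask (e.g. an H x W array)


@dataclass
class ParsedScene:
    objects: dict[str, SceneObject] = field(default_factory=dict)
    # relations stored as (subject_id, predicate, object_id) triplets
    relations: list[tuple[str, str, str]] = field(default_factory=list)


# ---- Stage 1: LLM-driven scene parsing and concept learning ----

def parse_scene(image) -> ParsedScene:
    """Query the LLM (with grounding/segmentation tools) for key objects,
    their masks and descriptions, and pairwise relations."""
    raise NotImplementedError  # LLM + segmentation call goes here


def learn_concepts(image, scene: ParsedScene):
    """Fine-tune a text-to-image diffusion model so that each object is
    represented by its optimized token plus description prompt."""
    raise NotImplementedError  # diffusion fine-tuning goes here


# ---- Stage 2: LLM editing controller and diffusion editor ----

def plan_edit(scene: ParsedScene, graph_edit: dict) -> dict:
    """The LLM controller maps a scene-graph edit (add / remove / replace /
    adjust a node or edge) to a localized plan: which region to change and
    what prompt to render there."""
    raise NotImplementedError


def apply_edit(image, finetuned_model, plan: dict):
    """The attention-modulated diffusion editor re-synthesizes only the
    planned region with the fine-tuned model, preserving the rest."""
    raise NotImplementedError
```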

Through extensive experiments, we demonstrate that our framework significantly outperforms existing image editing methods in terms of editing precision and scene aesthetics.

Methods

Our approach predicts a scene graph from the input image and exposes it as a user interface: by modifying nodes and edges, users can perform tasks such as changing relationships or replacing, adding, and removing elements. The integration of an LLM with a Text2Image generative model lets users explore diverse compositions of their images while ensuring that the edited results accurately reflect the structural changes in the modified scene graph; a toy sketch of this interface follows.
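
The snippet below sketches what such an editable scene-graph interface could look like in Python. It is an illustrative toy, not our released code: Relation, SceneGraph, and affected_nodes are hypothetical names, and the final diff only identifies which nodes a downstream editor would need to re-render.

```python
import copy
from dataclasses import dataclass, field, replace


@dataclass(frozen=True)
class Relation:
    subject: str    # node id of the subject, e.g. "person_0"
    predicate: str  # relationship label, e.g. "riding"
    object: str     # node id of the object, e.g. "horse_0"


@dataclass
class SceneGraph:
    nodes: dict[str, str] = field(default_factory=dict)  # node id -> label
    relations: list[Relation] = field(default_factory=list)

    def add_object(self, node_id: str, label: str) -> None:
        self.nodes[node_id] = label

    def remove_object(self, node_id: str) -> None:
        self.nodes.pop(node_id, None)
        # Drop every relation that touches the removed node.
        self.relations = [r for r in self.relations
                          if node_id not in (r.subject, r.object)]

    def replace_object(self, node_id: str, new_label: str) -> None:
        self.nodes[node_id] = new_label  # incident relations are preserved

    def change_relation(self, old: Relation, new_predicate: str) -> None:
        self.relations = [replace(r, predicate=new_predicate) if r == old else r
                          for r in self.relations]


def affected_nodes(before: SceneGraph, after: SceneGraph) -> set[str]:
    """Nodes whose label or incident relations changed; only the image
    regions tied to these nodes need to be re-rendered."""
    changed = {n for n in before.nodes.keys() | after.nodes.keys()
               if before.nodes.get(n) != after.nodes.get(n)}
    for r in set(before.relations) ^ set(after.relations):
        changed.update({r.subject, r.object})
    return changed


# Example: change "person_0 beside horse_0" into "person_0 riding horse_0".
g = SceneGraph(nodes={"person_0": "a man", "horse_0": "a brown horse"},
               relations=[Relation("person_0", "beside", "horse_0")])
edited = copy.deepcopy(g)
edited.change_relation(Relation("person_0", "beside", "horse_0"), "riding")
print(affected_nodes(g, edited))  # {'person_0', 'horse_0'}
```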

Results

Comparisons with other baselines

From left to right: (a) Input images; (b) Scene graphs and user edits; (c) SIMSG [Dhamo et al. 2020]; (d) SGDiff [Yang et al. 2022]; (e) Break-a-scene [Avrahami et al. 2023]; (f) InstructPix2Pix [Brooks et al. 2023]; (g) Ours.

Ablation studies

Ablation studies on concept learning (1-2); scene graph-to-layout generation (3); object removal (4); object insertion (5-7).

Applications

The same scene-graph interface extends to a range of applications: by editing nodes and edges, users can change relationships, replace, add, or remove elements, and explore diverse compositions of their images, with the results staying faithful to the modified graph.

BibTeX