Scene graphs offer a structured, hierarchical representation of images, with nodes and edges representing objects and the relationships among them. This structure can serve as a natural interface for image editing, dramatically improving editing precision and flexibility.
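To make the representation concrete, below is a minimal sketch of a scene graph as an editable structure, assuming nodes carry object labels and edges are (subject, predicate, object) triples; the richer per-node attributes a real system would track (masks, descriptions) are not modeled here.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    nodes: set[str] = field(default_factory=set)  # object labels
    edges: set[tuple[str, str, str]] = field(default_factory=set)  # (subj, pred, obj)

# Example: "a cat sitting on a sofa"
g = SceneGraph()
g.nodes |= {"cat", "sofa"}
g.edges.add(("cat", "sitting on", "sofa"))

# Editing the image then reduces to editing the graph, e.g. changing
# a relationship ("sitting on" -> "jumping over") while keeping the nodes.
g.edges.remove(("cat", "sitting on", "sofa"))
g.edges.add(("cat", "jumping over", "sofa"))
```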
Leveraging this benefit, we introduce a new framework that integrates a large language model (LLM) with a Text2Image generative model for scene graph-based image editing. This integration enables precise modifications at the object level and creative recomposition of scenes without compromising overall image integrity. Our approach involves two primary stages: first, predicting a scene graph from the input image to serve as a user interface; second, translating user edits to the graph's nodes and edges into the corresponding image modifications.
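The following is a hypothetical sketch of how the two stages might compose; the function names (llm_parse_scene_graph, t2i_regenerate) and their signatures are illustrative assumptions, not the framework's actual API.

```python
def llm_parse_scene_graph(image):
    """Stage 1 (assumed interface): an LLM-driven parser predicts the
    objects and relationships in the image as a scene graph."""
    ...  # returns a scene graph (nodes = objects, edges = relations)

def t2i_regenerate(image, original_graph, edited_graph):
    """Stage 2 (assumed interface): a Text2Image model re-synthesizes the
    image so it reflects the edited graph, preserving unedited content."""
    ...  # returns the edited image

def edit_image(image, user_edit):
    graph = llm_parse_scene_graph(image)        # stage 1: build the interface
    edited_graph = user_edit(graph)             # user modifies nodes/edges
    return t2i_regenerate(image, graph, edited_graph)  # stage 2: apply edits
```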
Through extensive experiments, we demonstrate that our framework significantly outperforms existing image editing methods in terms of editing precision and scene aesthetics.
Our approach predicts a scene graph as a user interface, enabling modifications to nodes and edges for various tasks, such as changing relationships or replacing, adding, and removing elements. By integrating LLM and Text2Image generative models, users can explore diverse compositions in their images, ensuring that these alterations accurately reflect the structural changes depicted in the modified scene graph.
From left to right: (a) Input images; (b) Scene graphs and user edits; (c) SIMSG [Dhamo et al. 2020]; (d) SGDiff [Yang et al. 2022]; (e) Break-a-scene [Avrahami et al. 2023]; (f) InstructPix2Pix [Brooks et al. 2023]; (g) Ours.
Ablation studies on concept learning (1-2); scene graph-to-layout generation (3); object removal (4); object insertion (5-7).