DreamEdit3D: Personalization of Multi-View Diffusion Models for 3D Editing

Technical University of Munich
ECCV 2026 (Under Review)
DreamEdit3D teaser - 3D editing results

DreamEdit3D enables personalized 3D scene editing with fine-grained control through multi-view diffusion models, maintaining consistency across all viewpoints.

Abstract

This project presents DreamEdit3D, a framework for personalized 3D scene editing built on multi-view diffusion models. Users edit 3D scenes through text prompts with fine-grained, subject-driven control: the diffusion model is fine-tuned on user-provided reference images, enabling precise insertion and modification of objects in a scene. Because the personalized model generates all viewpoints jointly, edits remain coherent across views, avoiding the inconsistencies common to single-view editing approaches. The edited multi-view images are then fused via 3D Gaussian Splatting into a high-quality, real-time renderable 3D reconstruction.

Method

Our pipeline consists of three key stages: (1) personalizing a multi-view diffusion model on reference images, (2) generating consistent edited multi-view images via text-driven editing, and (3) reconstructing the final 3D scene using 3D Gaussian Splatting.
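The three stages above can be sketched as a simple orchestration loop. This is a hypothetical illustration of the data flow only: the function names, shapes, and stubbed bodies are stand-ins and not the released DreamEdit3D API.

```python
# Illustrative sketch of the three-stage DreamEdit3D pipeline.
# All names and implementations here are hypothetical stubs.
import numpy as np

N_VIEWS, H, W = 4, 256, 256  # assumed view count and resolution

def personalize_diffusion_model(reference_images):
    """Stage 1: fine-tune a multi-view diffusion model on user-provided
    reference images (DreamBooth-style, optionally with Textual Inversion).
    Stubbed: returns a token standing in for the personalized model."""
    return {"personalized": True, "num_refs": len(reference_images)}

def edit_multiview(model, views, prompt):
    """Stage 2: apply text-driven editing jointly to all views so the
    diffusion model keeps them mutually consistent. Stubbed as identity."""
    assert model["personalized"], "stage 1 must run first"
    return [v.copy() for v in views]

def reconstruct_gaussians(edited_views):
    """Stage 3: fit a 3D Gaussian Splatting scene to the edited views.
    Stubbed: returns dummy scene statistics."""
    return {"num_gaussians": 10_000, "fitted_views": len(edited_views)}

def dreamedit3d_pipeline(reference_images, scene_views, prompt):
    model = personalize_diffusion_model(reference_images)
    edited = edit_multiview(model, scene_views, prompt)
    return reconstruct_gaussians(edited)

refs = [np.zeros((H, W, 3)) for _ in range(3)]
views = [np.zeros((H, W, 3)) for _ in range(N_VIEWS)]
scene = dreamedit3d_pipeline(refs, views, "wearing sunglasses")
```

The key design point the sketch encodes is ordering: personalization happens once per subject, editing operates on all views at once (never per-view), and reconstruction only ever sees the jointly edited set.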

DreamEdit3D pipeline

Results

Qualitative editing results demonstrating multi-view consistent 3D scene modifications.

Video

Qualitative Comparisons

Visual comparison with MVEdit, PrEditor3D, and Vox-E.

Diverse Editing Results

Demonstrating the diversity and flexibility of our editing framework across various object categories and prompts.

Row 1: Input · Sunglasses (Seed 1) · Sunglasses (Seed 2) · Sunglasses (Seed 3) · Sunglasses (Seed 4) · Wearing Glasses

Row 2: Input · Aged & Wrinkles · Close Eyes · Smile with Teeth · Spiky Hair · White Hair

Ablation Results

Analyzing the contribution of each component in our pipeline.

Loss Function Ablation

Columns: Original · w/o Attn Loss · w/o Masked Loss · w/o All Masks · Ours (Full)

Single-View vs Multi-View

Columns: Original · Single View 1 · Single View 2 · Single View 3 · Ours (Multi-View)

Textual Inversion vs DreamBooth

Columns: Original · TI Only · w/o TI · Ours (TI + DB)

BibTeX

@article{ai2026dreamedit3d,
  title     = {DreamEdit3D: Personalization of Multi-View
               Diffusion Models for 3D Editing},
  author    = {Ai, Jinxin and Nie{\ss}ner, Matthias and Erko\c{c}, Ziya},
  journal   = {arXiv preprint},
  year      = {2026}
}

This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The source code of this website is adapted from here.