ReVersion: Diffusion-Based Relation Inversion from Images

This paper introduces the Relation Inversion task: given a few exemplar images that share a common relation, learn a relation prompt that captures that relation. The proposed method, ReVersion, optimizes the relation prompt in the text embedding space of a frozen pre-trained text-to-image diffusion model; the learned prompt can then be composed with new objects, backgrounds, and styles to generate relation-specific images. The key insight is the "preposition prior": real-world relation prompts can be sparsely activated upon a set of basis prepositional words, which steers the learned prompt toward relation-relevant regions of the embedding space.
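To make the preposition prior concrete, here is a minimal sketch (not the authors' implementation) of the sparse-activation idea: a relation embedding is approximated as a softmax-weighted combination of basis prepositional-word embeddings, fitted by gradient descent. The basis vectors, dimensions, and target relation below are all hypothetical stand-ins for real text-encoder embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings of basis prepositional words (dim 8 for illustration);
# in ReVersion these would come from the frozen text encoder's vocabulary.
PREPOSITIONS = ["on", "under", "inside", "beside", "behind", "near"]
basis = rng.normal(size=(len(PREPOSITIONS), 8))  # (num_prepositions, dim)

# A toy "relation" embedding: mostly "inside" with a little "on".
target = 0.8 * basis[2] + 0.2 * basis[0]

def fit_sparse_coefficients(target, basis, steps=2000, lr=0.1):
    """Fit softmax coefficients over the preposition basis so that their
    weighted combination approximates `target`. The softmax keeps the
    weights on the simplex; fitting concentrates mass on the few
    prepositions that explain the relation (sparse activation)."""
    logits = np.zeros(len(basis))
    for _ in range(steps):
        w = np.exp(logits) / np.exp(logits).sum()  # softmax weights
        err = w @ basis - target                   # reconstruction error
        g_w = basis @ err                          # dLoss/dw
        grad = w * (g_w - w @ g_w)                 # chain rule through softmax
        logits -= lr * grad
    return np.exp(logits) / np.exp(logits).sum()

w = fit_sparse_coefficients(target, basis)
# The fitted weights concentrate on "inside" and "on", the two basis
# prepositions that compose the toy relation.
```

The actual method optimizes the relation token's embedding against the diffusion denoising loss with a contrastive steering term, but the sparse-combination structure above is the essence of the preposition prior.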

Key insights and lessons learned from the paper:

Questions for the authors:

Related topics or future research directions: