DIFFusion Demo

Generate fine-grained edits to images using another class of images as guidance.

For any questions/comments/issues with this demo, please email mia.chiquier@cs.columbia.edu.

Text-based AI image editing can be tricky: language often fails to capture precise visual ideas, and users may not always know exactly what they want. Our image-guided editing method instead learns transformations directly from the differences between two groups of images, removing the need for detailed verbal descriptions. Designed for scientific applications, it highlights subtle differences between visually similar image categories. It also applies nicely to marketing, adapting new products into scenes while handling small interior-design details. Choose one of the four example datasets, then adjust the t-skip (higher = less edit) and the manipulation scale (higher = more edit) to explore the editing effects. A Gradio demo in our GitHub code release lets users upload their own datasets and try the method (GPU required).
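To build intuition for how the two sliders trade off, here is a minimal sketch. It is not taken from the DIFFusion code: the function, the `TOTAL_STEPS` constant, and the formula are hypothetical stand-ins that only model the documented monotonic behavior (higher t-skip means fewer denoising steps are re-run, so the edit is weaker; a higher manipulation scale strengthens the guidance, so the edit is stronger).

```python
TOTAL_STEPS = 50  # assumed length of the diffusion schedule (not from the release)

def effective_edit_strength(t_skip: int, manip_scale: float) -> float:
    """Toy proxy for overall edit strength; illustrative only.

    t_skip: number of early denoising steps skipped (0..TOTAL_STEPS).
    manip_scale: multiplier on the guidance direction.
    """
    if not 0 <= t_skip <= TOTAL_STEPS:
        raise ValueError("t_skip must lie within the diffusion schedule")
    steps_rerun = TOTAL_STEPS - t_skip  # fewer steps re-run when t_skip is high
    return (steps_rerun / TOTAL_STEPS) * manip_scale

# Raising t_skip weakens the edit; raising manip_scale strengthens it.
weak = effective_edit_strength(t_skip=40, manip_scale=1.0)
strong = effective_edit_strength(t_skip=10, manip_scale=2.0)
assert weak < strong
```

In the demo itself these two values are exposed as sliders; the sketch simply shows why moving them in opposite directions produces opposite effects on the result.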

Counterfactual Generation

Custom t-skip value (slider)
Manipulation scale (slider)

Class Examples
Results