Diffusion models have been shown to be capable of generating high-quality images, suggesting that they could contain meaningful internal representations. Unfortunately, the feature maps that encode a diffusion model's internal information are spread not only over layers of the network, but also over diffusion timesteps, making it challenging to extract useful descriptors.
We propose Diffusion Hyperfeatures, a framework for consolidating multi-scale and multi-timestep feature maps into per-pixel feature descriptors that can be used for downstream tasks. These descriptors can be extracted for both synthetic and real images using the generation and inversion processes. We evaluate the utility of our Diffusion Hyperfeatures on the task of semantic keypoint correspondence: our method achieves superior performance on the SPair-71k real image benchmark. We also demonstrate that our method is flexible and transferable: our feature aggregation network trained on the inversion features of real image pairs can be used on the generation features of synthetic image pairs with unseen objects and compositions.
In contrast to prior methods that select a subset of raw diffusion features, we extract feature maps across timesteps and layers of the diffusion process and consolidate them with a lightweight aggregation network to create our Diffusion Hyperfeatures. For real images we extract these features from the inversion process; for synthetic images, from the generation process. Given a pair of images, we find semantic correspondences by performing nearest-neighbor search over their Diffusion Hyperfeatures.
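The pipeline above can be sketched in a few lines of numpy. This is a minimal stand-in, not the paper's implementation: it assumes all extracted feature maps share a channel dimension, uses one hypothetical learned scalar mixing weight per (timestep, layer) map in place of the actual aggregation network, and matches a single query pixel by cosine similarity.

```python
import numpy as np

def bilinear_upsample(fmap, size):
    # fmap: (C, h, w) -> (C, size, size) via bilinear interpolation,
    # so maps from different layers share one spatial resolution.
    C, h, w = fmap.shape
    ys = np.linspace(0, h - 1, size)
    xs = np.linspace(0, w - 1, size)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = fmap[:, y0][:, :, x0] * (1 - wx) + fmap[:, y0][:, :, x1] * wx
    bot = fmap[:, y1][:, :, x0] * (1 - wx) + fmap[:, y1][:, :, x1] * wx
    return top * (1 - wy) + bot * wy

def aggregate(feature_maps, mix_weights, size=16):
    # feature_maps: list over (timestep, layer) of (C, h, w) arrays.
    # mix_weights: one scalar per map (hypothetical learned parameters).
    # Returns a per-pixel hyperfeature map of shape (len(maps)*C, size, size).
    up = [w * bilinear_upsample(f, size)
          for f, w in zip(feature_maps, mix_weights)]
    return np.concatenate(up, axis=0)

def match_keypoint(hyper_a, hyper_b, y, x):
    # Nearest-neighbor correspondence: find the pixel in image B whose
    # hyperfeature is most cosine-similar to pixel (y, x) in image A.
    D, H, W = hyper_b.shape
    q = hyper_a[:, y, x]
    feats = hyper_b.reshape(D, -1)
    sims = (q @ feats) / (np.linalg.norm(q) * np.linalg.norm(feats, axis=0) + 1e-8)
    return divmod(int(np.argmax(sims)), W)
```

In the paper the mixing is learned jointly with the correspondence objective; here the weights are simply given, which is enough to show the shape of the computation.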
We distill Diffusion Hyperfeatures for semantic correspondence by tuning our aggregation network on real images from SPair-71k. The dataset comprises image pairs with annotated common semantic keypoints for a variety of object categories spanning vehicles, animals, and household objects.
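One way to supervise the aggregation network with annotated keypoint pairs is a cross-entropy loss over the similarity map between a source keypoint's hyperfeature and every pixel of the target image. The sketch below is a hedged stand-in for the training objective, not the paper's exact loss; `hyper_a`/`hyper_b` are hyperfeature maps as produced by an aggregation step, and the keypoint lists are assumed to be in feature-map coordinates.

```python
import numpy as np

def keypoint_loss(hyper_a, hyper_b, kps_a, kps_b):
    # hyper_*: (D, H, W) hyperfeature maps for an annotated image pair.
    # kps_*:   lists of matching (y, x) keypoints in feature-map coordinates.
    # Cross-entropy over the similarity map: each source keypoint should
    # place its probability mass on the annotated target location.
    D, H, W = hyper_b.shape
    feats = hyper_b.reshape(D, -1)
    feats = feats / (np.linalg.norm(feats, axis=0, keepdims=True) + 1e-8)
    loss = 0.0
    for (ya, xa), (yb, xb) in zip(kps_a, kps_b):
        q = hyper_a[:, ya, xa]
        q = q / (np.linalg.norm(q) + 1e-8)
        logits = q @ feats                      # (H*W,) similarity map
        m = logits.max()                        # stable log-softmax
        logp = logits - (m + np.log(np.sum(np.exp(logits - m))))
        loss -= logp[yb * W + xb]
    return loss / max(len(kps_a), 1)
```

Gradients of this loss with respect to the mixing weights would drive the aggregation network to up-weight the timesteps and layers most useful for correspondence; an autodiff framework would handle that part in practice.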
This same aggregation network, tuned on real images from a limited set of object categories, can then be applied to synthetic images containing entirely unseen, out-of-domain objects.
@article{luo2023dhf,
author = {Luo, Grace and Dunlap, Lisa and Park, Dong Huk and Holynski, Aleksander and Darrell, Trevor},
title = {Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence},
journal = {arXiv},
year = {2023},
}