Diffusion Hyperfeatures:
Searching Through Time and Space for Semantic Correspondence

UC Berkeley, Google Research
*Equal advising contribution.

NeurIPS 2023


Diffusion models have been shown to be capable of generating high-quality images, suggesting that they could contain meaningful internal representations. Unfortunately, the feature maps that encode a diffusion model's internal information are spread not only over layers of the network, but also over diffusion timesteps, making it challenging to extract useful descriptors.

We propose Diffusion Hyperfeatures, a framework for consolidating multi-scale and multi-timestep feature maps into per-pixel feature descriptors that can be used for downstream tasks. These descriptors can be extracted for both synthetic and real images using the generation and inversion processes. We evaluate the utility of our Diffusion Hyperfeatures on the task of semantic keypoint correspondence: our method achieves superior performance on the SPair-71k real image benchmark. We also demonstrate that our method is flexible and transferable: our feature aggregation network trained on the inversion features of real image pairs can be used on the generation features of synthetic image pairs with unseen objects and compositions.

Diffusion Hyperfeatures

We extract feature maps varying across timesteps and layers of the diffusion process and consolidate them with our lightweight aggregation network to create our Diffusion Hyperfeatures, in contrast to prior methods that select a subset of raw diffusion features. For real images we extract these features from the inversion process, and for synthetic images we extract them from the generation process. Given a pair of images, we find semantic correspondences by performing nearest-neighbor search over their Diffusion Hyperfeatures.

Semantic Keypoint Matching

Tuning on Real Images. We distill Diffusion Hyperfeatures for semantic correspondence by tuning our aggregation network on real images from SPair-71k. The dataset comprises image pairs with annotated common semantic keypoints for a variety of object categories spanning vehicles, animals, and household objects.

Transfer to Synthetic Images. We can apply this same aggregation network, tuned on real images from a limited set of object categories, to synthetic images containing completely unseen, out-of-domain objects and compositions.
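One way such tuning on annotated keypoint pairs is commonly realized is a softmax cross-entropy loss over the similarity map at each annotated point. The sketch below is a hedged illustration of that generic objective; the paper's exact training loss, temperature, and normalization are not specified here and may differ.

```python
import numpy as np

def keypoint_matching_loss(src_feats, tgt_feats, src_xy, tgt_xy, temp=0.07):
    """Cross-entropy over target locations for one annotated keypoint pair
    (sketch of a typical correspondence objective, not the paper's exact loss).

    src_feats, tgt_feats: (C, H, W) hyperfeature maps of the image pair.
    src_xy, tgt_xy:       annotated (x, y) keypoints; tgt_xy is the label.
    """
    C, H, W = tgt_feats.shape
    q = src_feats[:, src_xy[1], src_xy[0]]
    t = tgt_feats.reshape(C, -1)
    # Cosine similarity of the source descriptor to every target location.
    sims = (q / np.linalg.norm(q)) @ (t / (np.linalg.norm(t, axis=0) + 1e-8))
    logits = sims / temp
    logits -= logits.max()                      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    label = tgt_xy[1] * W + tgt_xy[0]           # flattened ground-truth location
    return -log_probs[label]
```

Minimizing this loss pushes the annotated target location to be the nearest neighbor of the source descriptor, which is exactly the matching rule used at inference time.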

Dense Warping

Our aggregation network, which was trained on the semantic keypoint matching task, can also be used for dense warping. We follow the visualization format from Zhang et al. (arXiv 2023). Here, we show examples of warps on both real and synthetic images for cat, dog, and bird (where the synthetic images were synthesized from the prompt "Full body photo of {category}."). When warping, we compute nearest-neighbor matches for all pixels within an object mask, with no visual postprocessing.
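The dense warp described above can be sketched as: for every target pixel inside the object mask, copy the source pixel whose hyperfeature is its nearest neighbor. This is an illustrative sketch under assumed names and shapes, mirroring the no-postprocessing setup above.

```python
import numpy as np

def dense_warp(src_img, src_feats, tgt_feats, tgt_mask):
    """Warp src_img onto the target by per-pixel nearest-neighbor
    hyperfeature matching, with no postprocessing (illustrative sketch).

    src_img:   (H, W, 3) source image.
    src_feats: (C, H, W) source hyperfeatures.
    tgt_feats: (C, H, W) target hyperfeatures.
    tgt_mask:  (H, W) boolean object mask in the target image.
    """
    C, H, W = src_feats.shape
    s = src_feats.reshape(C, -1)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + 1e-8)
    warped = np.zeros_like(src_img)
    for y, x in zip(*np.nonzero(tgt_mask)):
        q = tgt_feats[:, y, x]
        q = q / (np.linalg.norm(q) + 1e-8)
        idx = int(np.argmax(q @ s))              # best-matching source pixel
        warped[y, x] = src_img[idx // W, idx % W]
    return warped
```

Because every masked pixel is matched independently, any artifacts in the warp directly reflect the quality of the underlying descriptors rather than a smoothing step.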

Concurrent Work

We would also like to acknowledge the following concurrent works, which use raw diffusion features for the task of semantic correspondence:

Citation

@inproceedings{luo2023diffusion,
      title={Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence},
      author={Luo, Grace and Dunlap, Lisa and Park, Dong Huk and Holynski, Aleksander and Darrell, Trevor},
      booktitle={Advances in Neural Information Processing Systems},
      year={2023}
}