Generative Models: What do they know? Do they know things? Let's find out!

(Previous Title: Intrinsic LoRA: A Generalist Approach for Discovering Knowledge in Generative Models)

Xiaodan Du¹, Nicholas Kolkin², Greg Shakhnarovich¹, Anand Bhattad¹
¹Toyota Technological Institute at Chicago, ²Adobe
Teaser figure

Generative models of various types (autoregressive, GANs, and diffusion) implicitly encode intrinsic images as a by-product of generative training. We show that a model-agnostic approach, Low-Rank Adaptation (LoRA), can recover this intrinsic knowledge. Applying targeted, lightweight LoRA to the attention layers of VQGAN (a) and Stable Diffusion (d), and to the affine layers of StyleGAN (b and c), recovers fundamental intrinsic images (normals, depth, albedo, and shading) directly from the models' learned representations, eliminating the need for additional task-specific decoding heads or layers.
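The mechanism is small enough to sketch in code. Below is a minimal, hypothetical PyTorch illustration (not the released implementation) of attaching rank-2 LoRA to frozen attention projections; the to_q/to_k/to_v module names follow the diffusers naming convention and are an assumption here.

    # Minimal LoRA-on-attention sketch (illustrative, not the authors' released code).
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen pretrained linear layer plus a trainable low-rank update: W x + (alpha/r) B A x."""
        def __init__(self, base: nn.Linear, rank: int = 2, alpha: float = 1.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # pretrained weights stay frozen
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # B is zero-initialized, so at the start of training the adapted
            # layer reproduces the pretrained layer exactly.
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    def add_lora_to_attention(model: nn.Module, rank: int = 2) -> None:
        """Wrap attention projections (assumed to be named to_q/to_k/to_v) with LoRA."""
        targets = []
        for module in model.modules():
            for name, child in module.named_children():
                if isinstance(child, nn.Linear) and name in ("to_q", "to_k", "to_v"):
                    targets.append((module, name, child))
        for module, name, child in targets:
            setattr(module, name, LoRALinear(child, rank=rank))

Only the A and B matrices receive gradients; everything else, including the decoder head that renders RGB images, is reused as-is to render the intrinsic map.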

Abstract

Generative models excel at mimicking real scenes, suggesting they might inherently encode important intrinsic scene properties. In this paper, we explore the following key questions: (1) What intrinsic knowledge do generative models like GANs, autoregressive models, and diffusion models encode? (2) Can we establish a general framework to recover intrinsic representations from these models, regardless of their architecture or model type? (3) How few learnable parameters and how little labeled data are needed to recover this knowledge? (4) Is there a direct link between the quality of a generative model and the accuracy of the recovered scene intrinsics?

Our findings indicate that small Low-Rank Adaptation (LoRA) modules can recover intrinsic images (depth, normals, albedo, and shading) across different generators (autoregressive, GAN, and diffusion) while using the same decoder head that generates the image. Because LoRA is lightweight, we introduce very few learnable parameters (as little as 0.04% of the Stable Diffusion model weights at rank 2), and we find that as few as 250 labeled images are enough to generate intrinsic images with these LoRA modules. Finally, control experiments show a positive correlation between the generative model's quality and the accuracy of the recovered intrinsics.
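The 0.04% figure is easy to sanity-check with back-of-the-envelope arithmetic: a rank-r adapter on a d_out x d_in linear layer adds r(d_in + d_out) trainable parameters on top of d_in * d_out frozen ones. The sketch below uses an illustrative layer size, not the actual Stable Diffusion layer inventory.

    # Back-of-the-envelope LoRA parameter count (illustrative layer sizes).
    def lora_params(d_in: int, d_out: int, rank: int = 2) -> int:
        # A is (rank x d_in), B is (d_out x rank)
        return rank * (d_in + d_out)

    # Example: a hypothetical 1024 -> 1024 attention projection.
    frozen = 1024 * 1024             # 1,048,576 pretrained weights in this layer
    added = lora_params(1024, 1024)  # 4,096 trainable weights at rank 2
    print(f"{added / frozen:.2%}")   # ~0.39% of this single layer

Because adapters are attached only to the attention projections, the trainable fraction over the full model is far smaller than this per-layer ~0.4%, consistent with the ~0.04% figure quoted above.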

Summary of scene intrinsic extraction capabilities across different generative models, without changing the generator head. For each model, the four intrinsics (normal, depth, albedo, shading) are rated as: extracted with high quality, extracted with medium quality, or cannot be extracted (the per-cell ratings are icon-coded on the project page).

Model                  Pretrain Type    Domain
VQGAN                  Autoregressive   FFHQ
StyleGAN-v2            GAN              FFHQ
StyleGAN-v2            GAN              LSUN Bedroom
StyleGAN-XL            GAN              FFHQ
StyleGAN-XL            GAN              ImageNet
Stable Diffusion-UNet  Diffusion        Open
Stable Diffusion       Diffusion        Open


Figure: Comparison of intrinsic maps generated by our method (augmented Stable Diffusion 2.1) against pseudo ground truth, for four example images. Columns pair each baseline with our result: surface normals (Omnidata-v2 vs. ours), depth (ZoeDepth vs. ours), albedo (Paradigms vs. ours), and shading (Paradigms vs. ours).

BibTeX

@article{du2023generative,
      title={Generative Models: What do they know? Do they know things? Let's find out!},
      author={Du, Xiaodan and Kolkin, Nicholas and Shakhnarovich, Greg and Bhattad, Anand},
      journal={arXiv preprint arXiv:2311.17137},
      year={2023}
}