One-Step is Enough: Sparse Autoencoders for Text-to-Image Diffusion Models
Stable Diffusion XL (multistep version)
Note
If you encounter GPU time limit errors, don't worry: the app still works and you can use it freely.
Demo Overview
This demo showcases the use of Sparse Autoencoders (SAEs) to understand the features learned by the Stable Diffusion XL (Turbo) model.
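For intuition, here is a minimal sketch of a sparse autoencoder of the kind used for this purpose: a linear encoder into an overcomplete feature space with a sparsity constraint (top-k in this sketch) and a linear decoder back to the model's activation space. All names and sizes (`SparseAutoencoder`, `d_model=1280`, `n_features=5120`, `k=32`) are illustrative assumptions, not the demo's actual implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal top-k SAE sketch (illustrative; the real demo loads trained weights)."""
    def __init__(self, d_model: int, n_features: int, k: int = 32):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)
        self.k = k  # number of features kept active per spatial token

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Project into the overcomplete feature space, then keep only the
        # top-k activations per token; everything else is zeroed (sparsity).
        acts = torch.relu(self.encoder(x))
        topk = torch.topk(acts, self.k, dim=-1)
        sparse = torch.zeros_like(acts)
        sparse.scatter_(-1, topk.indices, topk.values)
        return sparse

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encode(x))

sae = SparseAutoencoder(d_model=1280, n_features=5120)
x = torch.randn(1, 16 * 16, 1280)   # (batch, spatial tokens, channels)
recon = sae(x)                      # reconstruction from sparse features
```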
How to Use
Explore
- Enter a prompt in the text box and click the "Generate" button to generate an image.
- You can observe the active features in different blocks plotted on top of the generated image (see the sketch below).
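Concretely, "active features" are the nonzero SAE codes of the chosen block's activations during the forward pass. A hedged sketch using diffusers and the toy `sae` from above; mapping `down.2.1` to `unet.down_blocks[2].attentions[1]` is our assumption, not something the demo documents:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")

captured = {}

def grab(module, inputs, output):
    # Transformer blocks may return a tuple or an output dataclass.
    out = output[0] if isinstance(output, tuple) else getattr(output, "sample", output)
    captured["acts"] = out.detach()

# Assumption: "down.2.1" ~ the second attention block of the third down block.
handle = pipe.unet.down_blocks[2].attentions[1].register_forward_hook(grab)
image = pipe("a photo of a tiger", num_inference_steps=1, guidance_scale=0.0).images[0]
handle.remove()

b, c, h, w = captured["acts"].shape
# Flatten the spatial grid into tokens; move to CPU where the toy SAE lives.
tokens = captured["acts"].permute(0, 2, 3, 1).reshape(b, h * w, c).float().cpu()
features = sae.encode(tokens)              # (b, h*w, n_features)
heatmaps = features.reshape(b, h, w, -1)   # per-feature spatial maps to overlay
active = features.abs().sum(dim=(0, 1)).nonzero().squeeze(-1)
print(f"{active.numel()} active features in down.2.1")
```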
Top Images
- For each feature, you can view the top images that activate it most strongly (a sketch of how such a gallery can be computed follows).
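As a rough illustration: run a dataset through the model, encode each image's block activations, and keep the images with the highest activation for the feature. `get_block_acts` below is a hypothetical helper (it would wrap the hook logic from the previous sketch); none of this is the demo's actual code.

```python
import heapq

def top_images_for_feature(feature_idx, dataset, sae, get_block_acts, k=9):
    """Return the k images whose peak activation of `feature_idx` is highest."""
    best = []  # min-heap of (score, index, image), smallest score first
    for i, sample in enumerate(dataset):
        tokens, image = get_block_acts(sample)   # hypothetical helper
        score = sae.encode(tokens)[..., feature_idx].max().item()
        if len(best) < k:
            heapq.heappush(best, (score, i, image))
        else:
            heapq.heappushpop(best, (score, i, image))
    return [img for _, _, img in sorted(best, reverse=True)]
```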
Paint!
- Generate an image using the prompt.
- Paint on the generated image to apply interventions.
- Use the "Feature Icon" button to understand how the selected brush functions.
Remarks
- Not all brushes mix well with all images. Experiment with different brushes and strengths.
- Feature Icon works best with the down.2.1 (composition) and up.0.1 (style) blocks.
- This demo is provided for research purposes only. We do not take responsibility for the content generated by the demo.
Interesting features to try
To get started, try the following features:
- down.2.1 (composition): 2301 (evil), 3747 (image frame), 4998 (cartoon)
- up.0.1 (style): 4977 (tiger stripes), 90 (fur), 2615 (twilight blur)
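For example, combining the sketches above, painting the "tiger stripes" feature over the whole canvas might look like this (assuming `acts` captured as in the Explore sketch; the block mapping and strength are our assumptions):

```python
# Assumption: "up.0.1" ~ pipe.unet.up_blocks[0].attentions[1].
mask = torch.ones(16 * 16, dtype=torch.bool)   # brush everything
steered = paint_feature(acts, sae, feature_idx=4977, mask=mask, strength=3.0)
```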
Multistep controls
The multistep version additionally exposes a block selector and a "TimedHook Range" slider, which sets the range of denoising steps over which the selected feature is applied.
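In the multistep setting the intervention presumably needs to be gated by denoising step. A minimal sketch of such a timed hook, mirroring the "TimedHook Range" control (the class name and wiring are our assumption):

```python
class TimedHook:
    """Forward hook that applies an activation edit only within [start, end)."""
    def __init__(self, apply_fn, start, end):
        self.apply_fn = apply_fn      # e.g. a paint_feature(...) closure
        self.start, self.end = start, end
        self.step = 0

    def __call__(self, module, inputs, output):
        is_tuple = isinstance(output, tuple)
        out = output[0] if is_tuple else output
        if self.start <= self.step < self.end:
            out = self.apply_fn(out)
        self.step += 1  # assumes one module call per denoising step (no CFG)
        # Returning a value from a forward hook replaces the module's output.
        return (out, *output[1:]) if is_tuple else out
```

A hook like this would be registered on the chosen block before running the pipeline with multiple steps, e.g. `block.register_forward_hook(TimedHook(edit, start=0, end=15))`.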