Run Stable Diffusion Directly Inside Photoshop with the Auto‑Photoshop Plugin
This guide explains how to install and use the Auto‑Photoshop‑StableDiffusion plugin, which brings Stable Diffusion AI image generation directly into Photoshop. It covers the installation steps, core features such as txt2img and img2img, and a real‑world design workflow case study.
AI‑generated art has surged, and Stable Diffusion—thanks to Stability AI’s open‑source model, the new XL version, and ControlNet plugins—has become a practical AI productivity tool. However, Photoshop users traditionally had to switch between the web UI (e.g., Automatic1111) and Photoshop, which was inefficient.
The Auto‑Photoshop‑StableDiffusion‑Plugin lets users access Automatic1111’s capabilities directly inside Photoshop, eliminating the need to toggle between applications.
Installation
Visit the project repository: https://github.com/AbdullahAlfaraj/Auto-Photoshop-StableDiffusion-Plugin
Download the .ccx file.
Double‑click the .ccx file to run it.
In Automatic1111's Extensions tab, install the "Auto‑Photoshop‑SD" extension: copy the plugin's repository URL, paste it into the "Install from URL" field, and click Install.
After installation, click "Apply and restart UI".
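The plugin talks to a running Automatic1111 backend over its HTTP API, so the web UI is typically launched with the `--api` flag. A minimal sketch, assuming a standard stable-diffusion-webui install on Linux/macOS:

```shell
# Launch the Automatic1111 web UI with its HTTP API enabled so the
# Photoshop plugin can connect (default address http://127.0.0.1:7860).
./webui.sh --api
# On Windows, add --api to COMMANDLINE_ARGS in webui-user.bat instead.
```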
Feature Overview
1. txt2Img Workflow
Create a new project in Photoshop.
Click the “Generate” button without changing any settings.
If a cat image appears on the canvas, the setup is correct.
Example prompt used:
Prompt: 1 young man, handsome, super short black hair, hair bun, smiling at the camera, 4k, (((looking at camera))), white hoodie, outside, ((best quality)), solo, <lora:DD姐姐头像_v.1:0.6>
Negative prompt: bad hand, (worst quality, low quality:1.4), nsfw, ...
Steps: 20, Size: 616x480, Seed: -1, Model: 2DMix, DD姐姐头像, Sampler: 20, CFG scale: 7

Choose the best result, click the check‑mark on the right of the viewer, and the image is loaded as a smart object on a new layer.
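The plugin assembles a request like the one above and sends it to the backend for you. For illustration, a minimal sketch of the equivalent JSON body for Automatic1111's `/sdapi/v1/txt2img` endpoint (the parameter values mirror the example settings; the helper function name is our own, not part of the plugin):

```python
def build_txt2img_payload(prompt, negative_prompt="", steps=20,
                          width=616, height=480, cfg_scale=7, seed=-1):
    """Assemble the JSON body for POST /sdapi/v1/txt2img."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "width": width,
        "height": height,
        "cfg_scale": cfg_scale,
        "seed": seed,  # -1 asks the server to pick a random seed
    }

payload = build_txt2img_payload(
    "1 young man, handsome, super short black hair, white hoodie",
    negative_prompt="bad hand, (worst quality, low quality:1.4)",
)
# Send with e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```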
2. img2Img Workflow
Select the layer to refine, make a rough brush edit, then copy the layer.
Switch the plugin mode to “img2img” and adjust settings (batch size, batch count, etc.).
Click the red “Generate Img2img” button to produce variations.
After generation, select the most satisfactory image and add it to the Photoshop layer. Further refinements (e.g., changing clothing color, using liquify) can be applied directly in Photoshop.
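Under the hood, img2img differs from txt2img mainly in that the edited layer is sent along as a base64-encoded init image, with a denoising strength controlling how far the result may drift from it. A hedged sketch of the corresponding `/sdapi/v1/img2img` body (helper name and default values are ours, for illustration):

```python
import base64

def build_img2img_payload(image_bytes, prompt, denoising_strength=0.6,
                          steps=20, batch_size=2, n_iter=1):
    """Assemble the JSON body for POST /sdapi/v1/img2img.

    denoising_strength near 0 stays close to the input image;
    near 1 it behaves more like txt2img.
    """
    return {
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        "denoising_strength": denoising_strength,
        "steps": steps,
        "batch_size": batch_size,  # images generated per batch
        "n_iter": n_iter,          # number of batches (batch count)
    }
```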
3. ControlNet Integration
Enable ControlNet, choose the “lineart” model to constrain the mask, and adjust the control weight for finer results. This allows precise comic‑style generation and further artistic control.
By combining Photoshop with Stable Diffusion and ControlNet, designers can complete an entire AI‑driven workflow within a single application, roughly doubling efficiency compared to traditional manual methods.
58UXD
58.com User Experience Design Center