ControlNet
- Module : ControlNet
- Function : Generate images from a prompt, a negative prompt and a control image, using Stable Diffusion and ControlNet (see the code sketch below)
- Input(s) : Prompt, negative prompt, ControlNet input
- Output(s) : Image(s)
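Under the hood, this kind of generation maps onto the diffusers ControlNet API. Below is a minimal sketch of such a call, assuming a canny control image, the runwayml/stable-diffusion-v1-5 base model and the lllyasviel/control_v11p_sd15_canny ControlNet; it illustrates the general technique, not biniou's actual code, and the file names and generation parameters are placeholders.

```python
# Minimal sketch (assumed illustration, not biniou's code): generate images from a
# prompt, a negative prompt and a control image with Stable Diffusion + ControlNet.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Control image produced by a pre-processor (e.g. canny edges of the source image).
control_image = load_image("control.png")  # hypothetical local file

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

images = pipe(
    prompt="a modern woman seated in an outdoor cafe",     # what you want to see
    negative_prompt="out of frame, bad quality, blurry",   # what you do NOT want to see
    image=control_image,                                   # the ControlNet input
    num_inference_steps=25,
    guidance_scale=7.0,
).images
images[0].save("output.png")
```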
-
HF Stable Diffusion model pages :
- SG161222/Realistic_Vision_V3.0_VAE
- RunDiffusion/Juggernaut-XL-Lightning
- Fluently-XL-v3-Lightning
- fluently/Fluently-XL-v4
- recoilme/ColorfulXL-Lightning
- playgroundai/playground-v2-512px-base
- playgroundai/playground-v2-1024px-aesthetic
- playgroundai/playground-v2.5-1024px-aesthetic
- stabilityai/sd-turbo
- stabilityai/sdxl-turbo
- thibaud/sdxl_dpo_turbo
- SG161222/RealVisXL_V4.0_Lightning
- cagliostrolab/animagine-xl-3.1
- aipicasso/emi-2
- dataautogpt3/OpenDalleV1.1
- dataautogpt3/ProteusV0.4
- digiplay/AbsoluteReality_v1.8.1
- segmind/Segmind-Vega
- segmind/SSD-1B
- gsdf/Counterfeit-V2.5
- stabilityai/stable-diffusion-xl-refiner-1.0
- runwayml/stable-diffusion-v1-5
- nitrosocke/Ghibli-Diffusion
-
HF ControlNet model pages :
- lllyasviel/control_v11p_sd15_canny
- lllyasviel/control_v11f1p_sd15_depth
- lllyasviel/control_v11p_sd15s2_lineart_anime
- lllyasviel/control_v11p_sd15_lineart
- lllyasviel/control_v11p_sd15_mlsd
- lllyasviel/control_v11p_sd15_normalbae
- lllyasviel/control_v11p_sd15_openpose
- lllyasviel/control_v11p_sd15_scribble
- lllyasviel/control_v11p_sd15_softedge
- Nacholmo/controlnet-qr-pattern-v2
- monster-labs/control_v1p_sd15_qrcode_monster
- TheMistoAI/MistoLine
- patrickvonplaten/controlnet-depth-sdxl-1.0
- thibaud/controlnet-openpose-sdxl-1.0
- SargeZT/controlnet-sd-xl-1.0-softedge-dexined
- Nacholmo/controlnet-qr-pattern-sdxl
- monster-labs/control_v1p_sdxl_qrcode_monster
-
Usage :
- (optional) Modify the settings to use another model, change the ControlNet settings or adjust the canny thresholds (see the pre-processing sketch after this list)
- Select a Source image that will be used to generate the control image
- Select a pre-processor for the control image
- Click the Preview button
- If the generated control image suits your needs, continue. Otherwise, modify the settings and generate a new one
- Do not modify the value in the ControlNet Model field, as it is automatically selected based on the chosen pre-processor
- Fill the prompt with what you want to see in your output image
- Fill the negative prompt with what you DO NOT want to see in your output image
- Click the Generate button
- When the generation is complete, the resulting images are displayed in the gallery. Save them individually or create a downloadable zip of the whole gallery
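For reference, the sketch below shows roughly what the canny pre-processor and Preview step produce, and illustrates the idea of pairing each pre-processor with a ControlNet model. It is an assumed illustration only: the threshold values, file names and mapping dictionary are examples, not biniou's actual implementation.

```python
# Rough sketch of a canny pre-processing step (assumed illustration).
import cv2
import numpy as np
from PIL import Image

source = np.array(Image.open("source.png").convert("RGB"))  # hypothetical source image

# 100 / 200 are the low / high canny thresholds, i.e. the adjustable canny settings.
edges = cv2.Canny(source, 100, 200)
edges = np.stack([edges] * 3, axis=-1)   # 1 channel -> 3 channels
control_image = Image.fromarray(edges)
control_image.save("control.png")        # this is what the Preview shows

# Each pre-processor implies a matching ControlNet model, e.g. (assumed mapping):
PREPROCESSOR_TO_CONTROLNET = {
    "canny": "lllyasviel/control_v11p_sd15_canny",
    "lineart_anime": "lllyasviel/control_v11p_sd15s2_lineart_anime",
    "openpose": "lllyasviel/control_v11p_sd15_openpose",
}
```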
-
Models :
- You can place Stable Diffusion-based safetensors models from huggingface.co or civitai.com in the ./biniou/models/Stable Diffusion directory. Restart biniou to see them in the model list (a loading sketch follows below)
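As a side note, such single-file safetensors checkpoints can be loaded in diffusers with from_single_file; the sketch below only illustrates that mechanism (the file name is hypothetical) and is not a description of how biniou loads them internally.

```python
# Minimal sketch of loading a local safetensors checkpoint (assumed illustration).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./biniou/models/Stable Diffusion/my_custom_model.safetensors",  # hypothetical file
    torch_dtype=torch.float16,
).to("cuda")
```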
- Input image :
- Pre-processor : lineart_anime (see the sketch after this example)
- Control image :
- Prompt : a fashion modern woman, seated in an outdoor cafe, background is a nice french village plaza, POV RAW photo, best quality, masterpiece, photo realistic, ultra realistic, highly detailed, intricate, aesthetic, detailed skin, detailed face, detailed eyes, detailed shadow, sharp, perfect lighting, volumetric lighting, ray-tracing
- Negative prompt : out of frame, bad quality, blurry, ugly, (bad hands, ugly hands:1.3), (bad faces, ugly faces:1.3), deformation, mutilation
- Output(s) :
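To tie this example together, the control image above could be reproduced with the lineart_anime detector from the controlnet_aux package, then passed with the prompt and negative prompt to a pipeline loaded with lllyasviel/control_v11p_sd15s2_lineart_anime, as in the first sketch. This is an assumed reconstruction and the file paths are placeholders.

```python
# Sketch of reproducing this example's control image (assumed illustration).
from controlnet_aux import LineartAnimeDetector
from diffusers.utils import load_image

source_image = load_image("input.png")  # hypothetical path to the input image above

processor = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
control_image = processor(source_image)
control_image.save("control_lineart_anime.png")

# control_image, together with the prompt and negative prompt listed above, can then
# be fed to a StableDiffusionControlNetPipeline loaded with
# lllyasviel/control_v11p_sd15s2_lineart_anime (see the first sketch).
```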