Releases: Acly/krita-ai-diffusion
Version 1.16.0
Download krita_ai_diffusion-1.16.0.zip
Important
This release includes an upgrade for ComfyUI and introduces new models.
- If you are using the managed server, the installer will do an upgrade.
- If you are using a custom ComfyUI installation, please update to latest versions!
Per-pixel Strength Masks
This release slightly changes how selection masks are interpreted.
Previously they would be either 1 (selected) or 0 (not selected), with smooth transitions at the edges for blending.
Now you can create masks with any values in between to customize how much the image changes per pixel.
(Images: Original | Mask | Result for the prompt "spooky dense forest")
For example, if you paint a selection mask 50% white in a certain region, and set strength to 80%, the actual strength in that region will be half of 80% = 40%. This is also known as "Differential Diffusion" or "Soft Inpainting". You can allow more drastic changes in some regions and only small adjustments in others - with smooth transitions - in one generation.
(Images: Original | Mask | Result for the prompt "toxic waste, acid lake")
Hint: enable global selection masks in Krita to easily edit selection masks.
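As a rough mental model: each pixel's effective strength is the mask value multiplied by the global strength setting. A minimal sketch of the idea (`effective_strength` is a hypothetical helper, not the plugin's actual code):

```python
def effective_strength(mask_value: float, strength: float) -> float:
    """Per-pixel denoising strength: the selection mask value (0..1)
    scales the global strength setting (0..1)."""
    return mask_value * strength

# A region painted 50% white with global strength at 80%:
effective_strength(0.5, 0.8)  # half of 80% = 40%
```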
Streamlined Cloud GPU
I've been working on a streamlined online service for those who don't want to install or lack the hardware. It is not complete, but can be tested already. Please look for more information and leave your feedback in this discussion!
Note: Local offline will always be the option with the greatest flexibility. But it's not available to everyone, and sometimes convenience is nice.
Sponsors
This project is happily eating all of my time, and it feels like it is only getting hungrier! 😵
So I've had to think about sustainability. Cloud GPU is part of that, but it's an experiment and won't appeal to everyone. If you like the project, please consider donating via GitHub ♥. Much appreciated!
Other Changes
- Live mode: check for changes on client side and only send a request to ComfyUI when there are actual changes #414 #509 #512
- Use depth-anything to generate depth images for depth control layers
- Use more recent DWPose models to estimate pose for pose control layers
- Added EasyNegative as negative embedding (not required, but installed by default if SD1.5 is selected for managed installs)
- Added Flat2D-Animerge checkpoint to installer as optional download
- Support SDXL Tile ControlNet as Blur control layer and when upscaling #473
  - This is an optional model for now; it will be used if installed
- Changed sampling to use a minimum of 4 steps even at low strength (related: #483)
  - Was 6 before for LCM, no minimum for other samplers
  - Will never increase step count beyond what is configured as total steps
- Tiled upscaling now uses same step scaling as other modes
- Added option to filter out built-in styles
- Added link to the folder where settings are stored in the UI
- Added custom hotkey to toggle previews #426
  - While the preview browser has focus, press space bar to toggle
  - You can also assign a custom hotkey in Krita's hotkey settings which will work without focus (not set by default)
- Scroll to bottom of history when switching canvas or opening documents
- Updated to latest IP-Adapter "V2" nodes #524 #531
- Fixed LoRA path splitting when server runs on Windows and client on Linux/Mac #477
- Removed `--force-fp16` as default option for MPS (macOS only) #474
- The `download_models.py` script can now pre-fetch control pre-processor models
Version 1.15.0
Download krita_ai_diffusion-1.15.0.zip
Discard Images from History
For those images we'd prefer to have never happened. You can multi-select with Ctrl/Shift!
New location for User Data
User files such as settings, custom styles and logs are no longer inside the plugin installation directory. This has some advantages:
- Easier to do a clean update/reinstall of the plugin without losing your settings
- Allows system-wide plugin installation with per-user settings
The settings are now located in a subfolder of Krita's user data. Typical paths:
- Windows: `C:\Users\<your-name>\AppData\Roaming\krita\ai_diffusion`
- Linux: `~/.local/share/krita/ai_diffusion`
- MacOS: `~/Library/Application Support/krita/ai_diffusion`
Depending on your system the location may be different. You can find it via the "View Logs" link in the settings.
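For scripting purposes, the typical locations above can be resolved like this (a sketch based on the paths listed; `settings_dir` is a hypothetical helper, and as noted the actual location may differ on your system):

```python
import sys
from pathlib import Path

def settings_dir() -> Path:
    """Return the typical Krita ai_diffusion user-data folder per platform."""
    if sys.platform == "win32":
        base = Path.home() / "AppData" / "Roaming" / "krita"
    elif sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support" / "krita"
    else:  # Linux and other Unix-like systems
        base = Path.home() / ".local" / "share" / "krita"
    return base / "ai_diffusion"
```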
Experimental Features
Lightning
SDXL Lightning is a way to speed up generation of images with SDXL, similar to LCM and Turbo. Instructions and Discussion
Animation Workspace
This version contains a first draft for working with animations. Currently it's just a UI for batch processing, but may become a place to integrate animation checkpoints in the future. Overview and Discussion
Other Changes
Version 1.14.1
Download krita_ai_diffusion-1.14.1.zip
Changes
- Improved speed at which preview results are displayed when working with complex documents #391
- Replaced latent upscale during 2-pass generation (was only used for small factors, now uses image upscaling) #407
- Fixed server process not being terminated when Krita is closed in some cases (eg. crash) #196 #360 #390
- Fixed backup folders being copied instead of moved during server upgrade
- Fixed some spin boxes not working properly (eg. line count settings) #393
- Fixed inconsistencies in server UI; it should now report state (started, stopped, connected...) correctly
- Hotkey to start generation now also works for Live mode #414
- Set initial default grow/feather to 5% for new installs
Version 1.14.0
Download krita_ai_diffusion-1.14.0.zip
Important
This release includes an upgrade for ComfyUI and introduces new models.
- If you are using the managed server, the installer will do an upgrade.
- If you are using a custom ComfyUI installation, find the new requirements at the bottom.
Selection Fill & Expand for Stable Diffusion XL
While SDXL has been supported for a long time, its ability to seamlessly fill selections was very limited; SD 1.5 almost always produced superior results. This release adds the inpaint model developed by Fooocus to dramatically improve results. All SDXL checkpoints, including custom downloads, automatically benefit from the change.
The new model will be downloaded as part of the "Stable Diffusion XL" workload; make sure to select it in the installer if you want to make use of it!
New Inpaint Modes
This release introduces more nuanced actions for filling and expanding areas of the image.
This nudges generation in a certain direction depending on what you want to do. It works especially well for SDXL, but also improves consistency for SD 1.5. Some examples are below; you can read the full documentation here.
(Image examples showing Selection and Result for each mode: Fill, Expand, Add Content, Remove Content, Replace Background)
Discussions and Wiki
Feel free to ask questions, discuss ideas, and share workflows, in the Discussions.
Documentation can now be found in the Wiki. It's a work in progress; contributions are appreciated! (I believe not everybody can edit yet; make a post if you have some content.)
Other Changes
- Added option to enqueue new generations in front so they will run first #380 (by @daniel-richter)
- Fixed crash after using flatten image on a preview layer #381
- Fixed error when using HAT upscale models at certain resolutions #378
- Fixed LoRA not being applied when in a subfolder and connecting to Linux server from Windows #307 #375
- Fixed control layer generate button being visible for "Blur" mode (it does not need generation) #369
- Linux: Prioritize Python 3.11/3.10 over other installed versions (not all dependencies support 3.12 yet) #374
- Load checkpoint first in upscale workflow to enable caching #377
- Fixed error during installation because of encoding issues #370
- Added file extensions to document annotations (makes extracting them from .kra files easier)
New requirements for custom ComfyUI
- New custom node: comfyui-inpaint-nodes
- Required model: MAT_Places512_G_fp16 to `models/inpaint` (create folder if needed)
- Required models (SDXL only): Fooocus Inpaint (Head) and Fooocus Inpaint (Patch) to `models/inpaint`
As usual you can use the download script to fetch models, and the full list is here.
Version 1.13.1
Download krita_ai_diffusion-1.13.1.zip
Changes
- Fixed no checkpoints found after installation #358
- Replaced civitai.com downloads which started to require a login
- Removed old location for IP-Adapter models which no longer works #352
  - Only affects installations originally done by a relatively old version of the plugin
  - See #352 on how to migrate them manually, otherwise the models will be re-downloaded to the new location
- Improved downscaling of control layer images, especially Canny edge #333
Version 1.13.0
Download krita_ai_diffusion-1.13.0.zip
Important
This release includes an upgrade for ComfyUI and custom nodes.
- If you are using the managed server, the installer will do an upgrade.
- If you are using a custom ComfyUI installation, please update to latest versions!
Face Reference
You can now provide a reference image of a face and generate images with close likeness.
Generate entirely new images with flexibility to change style, lighting, etc:
Hint: Use a portrait as reference which includes hair and shoulders. Size of the reference image doesn't have to match your canvas, it's okay to leave transparent areas.
It also works for changing faces in existing images:
Installation:
- Managed server: Choose the IP-Adapter Face option from Control Extension packages. Available for both SD 1.5 and SD XL.
- Custom ComfyUI: See IP-Adapter-FaceID for models and follow the FaceID section of ComfyUI_IPAdapter_plus. I'm testing with FaceID Plus v2 models, but others may work. Installation of insightface is required.
Hand Refiner
Stable Diffusion often struggles with generating hands. Manually sketching hand posture can often be the most reliable solution.
This release adds the Hand control layer as an alternative. It automatically detects hands in the image or selected area and
tries to generate a plausible depth map. This can then be used to guide generation.
Hints and limitations:
- Works for photos and realistic shading. Fails to detect hands in flat-shaded cartoon artwork.
- You can use selections to detect specific hands!
- Refining hands requires sufficient resolution to work with.
- High strength leads to plastic look and color shifts, try around 50%.
- Hands can still be frustrating...
Installation:
- Managed server: Choose ControlNet Hand Refiner from Control Extensions packages.
- Custom ComfyUI: See HandRefiner for model and latest comfyui_controlnet_aux for preprocessor.
Live Animation Painting (experimental)
There is a new record button in the Live tab which imports results as animation. Maybe it's useful?
(Video: LiveRecordingL.mp4)
High resolution version on YouTube
Other Changes
- Added option to save the last ComfyUI prompt for debug and experimentation purposes (Interface > Dump Workflow)
- Reduced default max pixel count for "high" performance profile
- Fixed wrong clip-skip setting applied to SDXL checkpoints (Styles > Checkpoint configuration (advanced) > Clip Skip) #323
  - Existing custom styles are not automatically modified, you may want to revise them
- Line art control layer is no longer excluded from generation image input
  - The lines themselves will become part of your image; if you don't want this, hide the layer
- Fixed wrong image size when height is not a multiple of 8 but width is #306
- Fixed some UI update issues (initial upscale model, canvas change)
- Reorganized model search paths and put them all in one place #304
- Disabled IP-Adapter for AMD/DirectML (it appears to be broken) #335
- Tweaked thumbnail spacing in relation to UI font size #278
Version 1.12.0
Download krita_ai_diffusion-1.12.0.zip
Important
This release introduces new upscale models (Omni-SR):
- If you are using the managed server, the installer will download the new models
- If you are using a custom ComfyUI installation, download them here and place them in `models/upscale_models`
Persistence
The following information will now be stored in `.kra` documents:
- Current workspace, prompt, strength, etc.
- Current control layer setup
- History of generated results
How much of the history is stored can be configured in performance settings. Keep in mind that increasing the limit also makes saving and opening documents slower! The history is compressed, so the default values are actually enough for a lot of images.
Queue Popup Menu
The queue button menu has been extended:
- Batches is like hitting the "Generate" button multiple times (but slightly faster)
- Seed options have been moved here (previously were in settings)
- Cancel buttons to manage the job queue can still be found here
Preview Context Menu
Previews thumbnails now have a context menu button. Alternatively you can always open the menu with right-click. Here you can:
- Copy generation parameters, such as prompt, strength and seed
  - This will directly place them in the UI and overwrite the previous setup.
- Quickly save the generated result to disk
  - It will be stored as WEBP in the same location as your document.
Resolution Performance Options
Hardware limitations are still a big problem, especially when working on a high-resolution canvas. This release provides some more options to avoid running into out-of-memory situations:
- Resolution Multiplier will scale down the resolution at which images are generated, relative to the canvas resolution.
  - For example, setting it to `0.5` will generate images at half the resolution
  - To fit into your image, results are automatically upscaled with a fast AI upscaler
- Maximum Pixel Count allows you to set a resolution limit.
  - It works similarly to the multiplier, but only kicks in when you cross a certain threshold
  - This is enabled by default now! The limit depends on your GPU.
Checkpoint Resolution Override
Stable Diffusion checkpoints have a "preferred" resolution, and the plugin will automatically take it into account. For most checkpoints this is detected automatically and nothing has to be changed!
In some cases a checkpoint may be trained at a different resolution than its base model suggests. For cases where this cannot be detected, you can now set it manually; SDXL Turbo, for example, should be set up this way.
Other Changes
- Disabled area conditioning when a control layer is active - this could lead to bad inpaint results when using text prompt & control layers
- Styles are now sorted alphabetically
- History: create new section when negative prompt changes
- History: add strength value and create new section when it changes
- History: disable item dragging (by @vtvrv)
- Fixed result thumbnails not scaling with Windows display scaling #278
- Fixed crash when setting the upscale factor to a non-0.5 increment #264
- Fixed division by zero when all "Image" control layer (IP-adapter) weights are 0 #271
- Fixed history UI not being updated when old images are pruned
- Crop control layer images before sending them to the server
Version 1.11.1
Download krita_ai_diffusion-1.11.1.zip
UI Improvements
The way thumbnails and previews work is motivated by the following ideas:
- Previewing results directly in the canvas, so you can zoom, pan, and judge how they fit
- Toggling and cycling through multiple results quickly to compare them
- Easily enqueue additional images when none of the results are quite good enough
The preview looks exactly like the result would after accepting it, which is great to judge how it fits in -- but has the unfortunate side effect that it's really easy to forget it's only a preview and still has to be applied! The respective button was also easily missed at the bottom of the UI, and not intuitive for new users to find.
By re-designing the button and making it more prominent right next to the thumbnail I hope to at least mitigate those issues.
There are a few more small tweaks:
- The "workspace" switch now has a more prominent drop-down arrow
- The "queue" indicator now includes the current job and it's more obvious that you can click it
- Results that were applied now receive a tiny indicator
Other Changes
- Grouped checkpoint-specific settings (VAE, Clip skip) under "Checkpoint configuration (advanced)"
- Initial support for V-prediction / Zero Terminal SNR models #211
  - Has to be enabled manually in advanced checkpoint configuration
  - Checkpoint model authors usually indicate when this is required
- Added samplers Euler, Euler A #117 (by @sejoung)
- Added option to randomize the seed in Live mode after copying the image #214
  - Can be enabled in Interface settings
- Allow large number of LoRA files to be displayed by adding scrollbars if needed (by @vtvrv)
- Fixed layer names not being updated in the control layer UI #217
- Fixed Checkpoint / LoRA refresh button not working
- Fixed upscale target resolution mismatch causing broken image
- Upscale now resizes the image only after generation is finished (allows cancel without modifications)
- Settings window is now sized according to the active screen (by @vtvrv)
- Added a progress indicator for the initial image in Live mode #218
- Installer now uses the .safetensors version of the Clip-Vision model (but the old one still works)
- Installer and download scripts default to the new IP-Adapter location in the models folder (old location still works)
Version 1.10.0
Download krita_ai_diffusion-1.10.0.zip
This version updates the managed ComfyUI server to the latest versions. To those who manage their own ComfyUI install, please make sure you are up-to-date (including custom nodes) to avoid issues!
Advanced Text Prompts
This release adds convenient shortcuts for editing text prompts:
- Emphasize (Ctrl+Up) or reduce importance (Ctrl+Down) of words or expressions
  - Applies to a single word at the cursor, or an entire selection
  - Will add a weight: >1 for higher, <1 for lower importance (recommended range: 0.7-1.3)
- Add LoRA directly by writing `<lora:filename>`
  - Must match the filename of an existing LoRA file (extension is optional)
  - Also supports custom weights with Ctrl+Up/Down!
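For illustration, a weighted prompt might look like this (assuming ComfyUI's `(text:weight)` syntax; the LoRA filename is a placeholder):

```
a castle on a hill, (dramatic lighting:1.2), (blurry:0.8) <lora:my-style-lora:0.7>
```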
Thank you @huchenlei and @Danamir for the extensions.
Improved Preview Layer handling
There are a number of small fixes and changes to iron out how preview layers are handled, and avoid confusion when results are buried somewhere in the layer stack:
- Canvas actions (painting, selections) will no longer be interrupted when generation finishes #201
  - Except for the very first generation, which will automatically show up as preview in the canvas
  - You can now disable auto-preview in the Interface settings if you don't want to be interrupted at any time
- Previews are always inserted at the top of the layer stack
- Selecting a thumbnail will move preview to the top
- The active layer now remains unchanged when toggling preview results (preview layers never become active)
- Control images, upscales and applied results are inserted on top and become active layers
Other Changes
- Can now set the strength of reference "Image" control layers (aka. IP-Adapter) individually
- Added a `download_models.py` script to make custom ComfyUI installs easier #113 #165
  - How to use: it lists and downloads all models used by the plugin.
- Fixed missing "Add pose" button #200
- Fixed history widget layout for large width #210
- The installer now explicitly installs wheel and setuptools packages #160 #185 #187
- Checkpoint base models which are not supported (SD2, SSD-1B, SVD) are now filtered out
- More lenient handling and improved error reporting for encoding issues #191
- Improved error message for missing upscale models
- Fixed assertion when closing a document while Live mode is active
Version 1.9.0
Download krita_ai_diffusion-1.9.0.zip
Live Painting on Selections
You can now use selections to control the target area for live painting. This is useful to get good performance even on a larger canvas, or to avoid affecting parts of the image.
(Video: AILens.mp4)
Note that it is not as good for inpainting as the "traditional" workflow. The full inpainting pipeline is rather heavy, and costs too much performance for live mode.
Improved performance at lower strength
The number of samples now scales with the strength. Lower values require fewer samples, but still yield results that are very similar to (and generally just as good as) before. This means such images are now generated much faster. Some examples (SDXL at 1024px, RTX 4070):
Strength | Before | New (1.9.0) | Difference |
---|---|---|---|
100% | 20 samples (7s) | 20 samples (7s) | No change |
50% | 20 samples (7s) | 10 samples (3.5s) | 🟢 2x faster |
30% | 20 samples (7s) | 6 samples (2.3s) | 🟢 3x faster |
Note that the separate "Upscaling steps" settings have been removed, as they automatically scale now. Thanks @Danamir for this improvement!
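The numbers in the table correspond to sampling steps scaling proportionally with strength. A sketch of that relationship (`live_steps` is a hypothetical helper; the plugin's rounding details may differ):

```python
def live_steps(total_steps: int, strength: float) -> int:
    """Sampling steps proportional to strength, with at least one step."""
    return max(1, round(total_steps * strength))

live_steps(20, 1.0)  # 20 steps, no change
live_steps(20, 0.5)  # 10 steps, 2x faster
live_steps(20, 0.3)  # 6 steps, 3x faster
```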
Other Changes
- Added Clip Skip to style settings #75. Some checkpoints recommend setting this to 2 for better results. Thanks @Fezinaru
- LoRA selection now reflects folder structure #148. This allows organizing and finding LoRA in large collections. Thanks @Danamir
- Added sampler UniPC BH2 #117. It can yield good results at fewer steps than other samplers.
- Added sampler DPM++ SDE. The SDXL Turbo LoRAs and merges work best with this sampler. See #155 for more information.
- Support latest version of ComfyUI_IPAdapter_plus #158 by @Danamir
- Support latest version of comfyui_controlnet_aux #165
- Fixed issues where the control layer UI was not consistent
- Fixed issues where text prompt couldn't be entered properly #85
- Adding a new control layer now uses the last used control mode by default #114
- Made preview toggle and selected thumbnail consistent #114
- Fixed newly added layers not showing up in control layer selection #10
- Fixed various issues when switching between workspaces or documents
- Experimental support for explicit masks and context