ComfyUI - Node-based Workflow for Stable Diffusion

October 18, 2023

I have previously introduced the Stable Diffusion WebUI, AUTOMATIC1111, which is currently the most widely used interface.

A few days ago, I also shared how to control Stable Diffusion image generation with Python.

Today, let's talk about ComfyUI: a node-based (node-graph style) user interface for Stable Diffusion.

It roughly looks like this:

Let's compare AUTOMATIC1111 (WebUI) and ComfyUI.

To try it out, I ran the official Colab notebook:

https://github.com/comfyanonymous/ComfyUI/blob/master/notebooks/comfyui_colab.ipynb
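If you would rather run it locally than on Colab, the notebook boils down to a few setup steps. Here is a minimal sketch, assuming git and Python are already installed; which checkpoint you drop into models/checkpoints is up to you.

```python
import subprocess
import sys

# A rough sketch of the steps the Colab notebook performs:
# clone ComfyUI, install its dependencies, then start the server.
subprocess.run(["git", "clone", "https://github.com/comfyanonymous/ComfyUI"], check=True)
subprocess.run([sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
               cwd="ComfyUI", check=True)

# Put at least one Stable Diffusion checkpoint into ComfyUI/models/checkpoints/
# before launching; the UI is then served on http://127.0.0.1:8188 by default.
subprocess.run([sys.executable, "main.py"], cwd="ComfyUI", check=True)
```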

The node graph is quite clear and helps in understanding how Stable Diffusion is structured. This walkthrough was shared by someone on Zhihu (https://zhuanlan.zhihu.com/p/620297462).

The general idea is that each node controls one part of the pipeline: the prompt, the model, the view, the size, and so on, and wiring them together produces the final image.
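ComfyUI also exposes these graphs through a small HTTP API, so the same workflow you build in the browser can be queued from a script. Below is a minimal sketch that POSTs a basic text-to-image graph to the /prompt endpoint of a locally running server; the checkpoint filename, prompts, and sampler settings are placeholders to adjust for your own setup.

```python
import json
import urllib.request

# Minimal text-to-image graph in ComfyUI's API format: each node has a
# class_type and inputs; links are written as ["source_node_id", output_index].
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}},  # assumed checkpoint filename
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "6": {"class_type": "CLIPTextEncode",        # positive prompt
          "inputs": {"text": "a watercolor fox in a forest", "clip": ["4", 1]}},
    "7": {"class_type": "CLIPTextEncode",        # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["4", 1]}},
    "3": {"class_type": "KSampler",
          "inputs": {"seed": 42, "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0,
                     "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0]}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"filename_prefix": "comfyui_demo", "images": ["8", 0]}},
}

# Queue the graph on the local ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # returns a prompt_id for the queued job
```

The SaveImage node writes the result into ComfyUI's output folder, so the script only needs to queue the job and let the server do the rest.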
