How to Use Flux.1 Kontext-dev in ComfyUI (GGUF + GPU): Natural Prompts for Any Style


Natural language image editing has taken a huge leap forward with Flux.1 Kontext-dev—a cutting-edge model that lets you modify existing images using simple, human-like instructions. Whether you’re working with anime, photorealistic portraits, fantasy art, or flat illustrations, Kontext understands the context and delivers meaningful edits while preserving layout, style, or identity.

In this post, we’ll explore how to run Flux.1 Kontext-dev in ComfyUI using the GGUF format. While GGUF makes the model more lightweight, you still need a GPU to run it smoothly. We’ll walk through what makes Kontext different from traditional prompt-based generation, and how to get the best results using clear edit instructions.

Models

  • GGUF Models: You can find the Flux.1 Kontext-dev GGUF models here. I have an RTX 3090 with 24GB VRAM, and I use the flux1-kontext-dev-Q8_0.gguf model. Use Q6 or Q5 if you have less VRAM. The model goes to ComfyUI\models\unet.
  • Diffusion Model: (Optional) If you have 24GB VRAM, you can also use flux1-dev-kontext_fp8_scaled.safetensors instead. This model goes to ComfyUI\models\diffusion_models.
  • Text Encoders: Download clip_l.safetensors and t5xxl_fp8_e4m3fn_scaled.safetensors and place them in ComfyUI\models\text_encoders. If you have used a Flux.1 model before, you probably already have these.
  • VAE: Download ae.safetensors and place it in ComfyUI\models\vae. (A quick check of all these paths is sketched after this list.)
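
If you want to confirm everything landed in the right folder before launching ComfyUI, here is a minimal sketch. The install path, and the Q8 filename if you picked a different quant, are assumptions; adjust them to match your setup.

```python
# Quick sanity check that the downloaded files are where ComfyUI expects them.
from pathlib import Path

COMFYUI_DIR = Path(r"C:\ComfyUI")  # assumption: point this at your own install

expected = {
    "unet": ["flux1-kontext-dev-Q8_0.gguf"],          # or the Q6/Q5 variant you chose
    "text_encoders": ["clip_l.safetensors",
                      "t5xxl_fp8_e4m3fn_scaled.safetensors"],
    "vae": ["ae.safetensors"],
}

for folder, files in expected.items():
    for name in files:
        path = COMFYUI_DIR / "models" / folder / name
        status = "OK     " if path.exists() else "MISSING"
        print(f"{status} {path}")
```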

Installation

  • Update your ComfyUI to the latest version if you haven’t already (a sketch for git-clone installs follows this list).
  • Download the workflow image below and drag it onto the ComfyUI canvas. The workflow is embedded in the image’s metadata.
  • Use ComfyUI Manager to install any missing nodes.
  • Restart ComfyUI.
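
For reference, here is a rough sketch of what “update to the latest version” looks like for a plain git-clone install. The portable build and ComfyUI Manager have their own update buttons, so treat this only as the manual path; the install directory is an assumption.

```python
# Manual update for a git-clone ComfyUI install (sketch).
# Portable builds and ComfyUI Manager have their own update mechanisms.
import subprocess
import sys
from pathlib import Path

COMFYUI_DIR = Path(r"C:\ComfyUI")  # assumption: adjust to your clone

# Pull the latest code, then refresh dependencies in the environment
# that actually runs ComfyUI (here: the current interpreter).
subprocess.run(["git", "pull"], cwd=COMFYUI_DIR, check=True)
subprocess.run([sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
               cwd=COMFYUI_DIR, check=True)
```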

Nodes

Select the GGUF model you downloaded in the Unet Loader node. If you downloaded the fp8_scaled model instead, select it in the Load Diffusion Model node. Whichever loader you use, remember to connect its MODEL output to the KSampler.
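
To make that wiring concrete, below is a tiny fragment of what the graph looks like when exported in ComfyUI’s API (JSON) format: any input that comes from another node is a ["node id", output slot] pair. The node ids and the UnetLoaderGGUF class name (from the ComfyUI-GGUF extension) are assumptions here; export your own workflow to see the exact graph.

```python
# Fragment of a ComfyUI workflow in API (JSON) format, written as a Python dict.
# It shows only the model path: the GGUF loader's MODEL output (slot 0) feeds
# the KSampler's "model" input. Node ids "1"/"2" are placeholders.
workflow_fragment = {
    "1": {
        "class_type": "UnetLoaderGGUF",  # loader node from the ComfyUI-GGUF extension
        "inputs": {"unet_name": "flux1-kontext-dev-Q8_0.gguf"},
    },
    "2": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["1", 0],  # ["source node id", output slot]
            # positive, negative, latent_image, seed, steps, cfg, ... omitted
        },
    },
}
```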

Select the two text encoders you downloaded (clip_l and t5xxl) here.

There are two image-input nodes. The second one is disabled for now; we will use it in a two-image example later.

Positive prompt. This is where you tell the model what to do.

Specify the output size here.

Specify the sampling parameters here. I usually just use the default values.
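
Once the workflow runs in the UI, you can also drive it from a script instead of re-typing prompts: enable dev mode, export the graph with “Save (API Format)” (or the equivalent export option in newer UIs), swap the text of the positive-prompt node, and POST it to the local /prompt endpoint. The filename workflow_api.json and the node id "6" below are assumptions; check your own export.

```python
# Minimal sketch: queue one Kontext edit through ComfyUI's local HTTP API.
# Assumes ComfyUI is running on the default port (8188) and the workflow was
# exported with "Save (API Format)". The positive CLIPTextEncode node id ("6")
# is an assumption -- look it up in your own workflow_api.json.
import json
import urllib.request

with open("workflow_api.json", encoding="utf-8") as f:
    workflow = json.load(f)

# Replace the edit instruction, then queue the graph.
workflow["6"]["inputs"]["text"] = "Change the outfit color to white."

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt_id
```

The result lands in ComfyUI’s usual output folder, the same as when you queue from the UI.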

Examples

Input image

Prompt:

Remove the girl from the picture.

Output image:

Prompt:

Add a dog to the right of the girl.

Output image:

Prompt:

Change the outfit color to white.

Output image:

Prompt:

Generate a ghibli anime style of this picture

Output image:

Input image:

Prompt:

Generate a realistic photo of this beautiful asian woman while preserving facial features, pointy chin, and race.

Output image:

Two image inputs:

Prompt:

The woman on the left hugs the woman on the right in front of a great waterfall while preserving the facial features

Output image:

Note that I found this one harder to get right. You might need to try a few times to get the image you want.

Conclusion

Flux.1 Kontext-dev brings a flexible, language-driven editing experience to all kinds of images—from stylized characters to cinematic portraits. With GGUF support and ComfyUI integration, it’s now more accessible for local workflows—as long as you’re running on a GPU.

The key to success with Kontext isn’t prompt stacking or keyword engineering, but writing clear, focused edit instructions: what to keep, what to change, and how. Whether you’re swapping outfits, shifting scenes from day to night, or transforming a drawing into a 3D render, Kontext empowers you to control your edits like never before.

Further Reading

Testing OmniGen2 in ComfyUI vs. Flux.1 Kontext: A Promising Tool That’s Not Quite There Yet

Reference

https://docs.comfy.org/tutorials/flux/flux-1-kontext-dev

https://docs.bfl.ai/guides/prompting_guide_kontext_i2i
