ComfyUI Tutorial: Flux Kontext Dev Workflow, Prompts & Best Practices
<iframe width="560" height="315" src="https://www.youtube.com/embed/Y7L_cbNJHj0" frameborder="0" allowfullscreen></iframe>
Introduction
Welcome to this ComfyUI tutorial, where we’ll explore how to use the Flux.1 Kontext Dev model inside ComfyUI. This guide will walk you through the ComfyUI Flux workflow, from model setup and workflow configuration to prompt techniques and best practices.
Who is this tutorial for?
- Beginners who are new to ComfyUI and want a step-by-step introduction.
- AI art enthusiasts who want to create high-quality edits, style transfers, or consistent characters.
- Designers and researchers who need advanced workflows, structured prompts, and reliable editing tools.
By the end, you’ll understand how to install Flux Kontext Dev, run workflows, optimize prompts, and solve common problems while working with ComfyUI.
What is Flux.1 Kontext Dev?
Flux.1 Kontext Dev is an open-source multimodal model developed by Black Forest Labs. Unlike standard text-to-image models, it can interpret both images and text together, making it especially powerful for local image editing inside ComfyUI.
Key Features
- Character consistency: Maintain the same character across multiple edits and images.
- Targeted editing: Modify only specific regions or elements while keeping the rest of the image unchanged.
- Style transfer: Apply the look and feel of a reference image to a new scene through prompts.
- Fast interaction: Runs locally with low latency, allowing quick experimentation and iteration.
Difference from Flux.1 Krea
Although both Flux.1 Krea and Flux.1 Kontext Dev are part of the Flux model family and can be used in ComfyUI, they serve different creative purposes:
- Flux.1 Krea focuses on text-to-image generation. It produces detailed and photorealistic results, aiming for a clean aesthetic without the “AI look.” It is ideal for users who want high-quality outputs directly from prompts.
- Flux.1 Kontext Dev, on the other hand, specializes in image editing and iterative workflows. It is better suited for multi-step edits, role consistency across projects, and tasks where maintaining context is essential.
This distinction is important for anyone following a ComfyUI Krea vs ComfyUI Kontext Dev workflow tutorial, as it helps you choose the right model for your creative needs.
How to Use ComfyUI Flux

If you are new to ComfyUI and want to understand the basics of working with Flux.1 Kontext Dev, this section will guide you step by step.
Step-by-Step Guide
Below is the essential process for setting up and running Flux.1 Kontext Dev inside ComfyUI.
Installation & Model Setup
- Download the model weights
  - Get the Flux.1 Kontext Dev weights from Hugging Face or the official repository. Files are usually in .safetensors format.
- Place the models in the correct folders
  - Copy the downloaded diffusion model into ComfyUI/models/unet.
  - If there is a VAE or CLIP model, place them in their respective folders: ComfyUI/models/vae and ComfyUI/models/clip.
  - Text encoder: t5xxl_fp16.safetensors or t5xxl_fp8_e4m3fn_scaled.safetensors.
- Restart ComfyUI
  - Once the models are in place, restart ComfyUI to ensure it recognizes the new files.
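Before restarting, it can help to verify that the files are actually where ComfyUI expects them. Here is a minimal Python sketch, assuming the folder layout above; the file names in `EXPECTED` are examples, so substitute the exact weights you downloaded:

```python
# Check that the expected Flux.1 Kontext Dev files exist under a ComfyUI
# install. Folder names follow the layout above; the file names below are
# examples and may differ from the weights you actually downloaded.
from pathlib import Path

EXPECTED = {
    "models/unet": ["flux1-kontext-dev.safetensors"],
    "models/vae": ["ae.safetensors"],
    "models/clip": ["clip_l.safetensors", "t5xxl_fp16.safetensors"],
}

def missing_files(comfy_root: str) -> list[str]:
    """Return relative paths of expected model files that are absent."""
    root = Path(comfy_root)
    return [
        f"{folder}/{name}"
        for folder, names in EXPECTED.items()
        for name in names
        if not (root / folder / name).is_file()
    ]
```

Run `missing_files("path/to/ComfyUI")` and restart ComfyUI only once it returns an empty list.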
Running the Basic Workflow
- Download the example workflow JSON (provided in the next section).
- Open ComfyUI and load the workflow JSON file.
- Connect your input image or text prompt.
- Press Queue Prompt to start generation.
- Review the output image and adjust nodes or prompts if necessary.
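Instead of clicking Queue Prompt, you can also submit the workflow programmatically: ComfyUI's local server exposes a `/prompt` endpoint (port 8188 by default) that accepts workflows exported via "Save (API Format)". A sketch, assuming a default local install:

```python
# Queue a workflow JSON through ComfyUI's local HTTP API. Assumes ComfyUI
# is running on the default port 8188 and that the JSON file was exported
# with "Save (API Format)", not the regular UI save.
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap a workflow dict in the JSON body the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(path: str, host: str = "127.0.0.1", port: int = 8188) -> dict:
    """POST a workflow file to ComfyUI's /prompt endpoint, return its reply."""
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # reply includes the queued prompt id
```

Calling `queue_workflow("flux_kontext_basic_workflow.json")` queues one generation, exactly as pressing Queue Prompt would.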
Common Errors & Fixes
- Black images or empty output → Ensure the model weights are in the correct folder.
- Missing node error → Update ComfyUI and make sure required custom nodes are installed.
- Out of memory (OOM) → Reduce resolution (e.g., 1024px → 768px), or lower batch size.
- Character changes too much → Strengthen the prompt with identity-related keywords, or use grouped workflow for consistency.
Basic Workflow Tutorial
The Basic Workflow is the easiest way to get started with Flux.1 Kontext Dev in ComfyUI.
It allows you to quickly edit or enhance an image with minimal setup.
Download the Workflow File

- You can download a ready-to-use JSON workflow file: [Download Flux.1 Kontext Dev Basic Workflow JSON](#) (replace with actual link).
- Place the file in your ComfyUI workflows folder, or simply drag it into the ComfyUI interface.
Step by Step Instructions
- Open ComfyUI.
- Go to Load Workflow and select the downloaded JSON file.
- In the input node, upload your reference image (or keep it blank for text-to-image).
- Enter your desired prompt in the text box (e.g., “portrait of a futuristic samurai, dramatic lighting”).
- Click Queue Prompt to generate.
- Review the output and tweak settings such as seed, CFG scale, or denoising strength.
Input & Output Example
Input Image:

Output Image:

This workflow is ideal for beginners because it uses the minimum number of nodes and shows how Flux.1 Kontext Dev integrates smoothly with ComfyUI.
Grouped Workflow Tutorial

When working with Flux.1 Kontext Dev inside ComfyUI, you can choose between a basic workflow and a grouped workflow. The grouped version is designed for more complex creative tasks and provides better control over multiple characters, consistent styles, and batch editing.
Difference from Basic Workflow
- Basic Workflow → Best for simple edits, single character consistency, and quick generations.
- Grouped Workflow → Ideal for multi-character scenes, style-mixing, and handling multiple prompts in one session.
Use Cases
- Large scenes with multiple characters.
- Keeping a consistent look across a set of images.
- Combining style transfer + character edits in one pipeline.
Download & Demo
You can download grouped workflow JSON files directly from the official Flux tutorial resources.
After downloading, simply drag and drop the file into ComfyUI, and follow the step-by-step guide.
Prompt Engineering in Flux Kontext Dev
Crafting effective prompts is the key to unlocking the full power of ComfyUI Flux workflows. Here are the major use cases:
Basic Modifications
- Adjust colors, backgrounds, and lighting.
- Example: "A portrait of a girl, blue background, cinematic lighting."
Style Transfer
- Convert between illustration and photo styles.
- Example: "Anime character redrawn in cinematic photography style."
Character Consistency
- Maintain the same character across multiple images.
- Use reference images + descriptive prompts.
- Example: "The same character with short black hair, wearing a red jacket, different poses."
Text Editing
- Replace or add text in posters, covers, and banners.
- Example: "A movie poster with the title 'Flux Adventure' in bold typography."
Prompt Templates (Best Practices)
- Base prompt → "high-quality, consistent character design, cinematic lighting"
- Style transfer → "oil painting style, soft brush strokes, warm color palette"
- Text editing → "clean typography, bold sans-serif, centered layout"
Troubleshooting
Even with the best setup, you may encounter issues when running ComfyUI Flux workflows. Here are common problems and solutions:
Common Issues
- Black Image Output → Check if the model is properly installed and loaded.
- Missing Nodes → Update ComfyUI and install required custom nodes.
- Out of VRAM (CUDA OOM) → Lower resolution, use batch size = 1, or switch to a smaller model.
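When lowering resolution to escape an OOM, it helps to preserve the aspect ratio and snap dimensions to multiples of 8, which latent-diffusion pipelines generally expect. A sketch of the 1024px → 768px fix above; the helper name is mine:

```python
# Shrink a resolution toward a VRAM-friendly long side while keeping the
# aspect ratio and snapping both dimensions to multiples of 8.
def downscale(width: int, height: int, target_long_side: int = 768) -> tuple[int, int]:
    """Return (width, height) scaled so the longer side is ~target_long_side."""
    long_side = max(width, height)
    if long_side <= target_long_side:
        return width, height  # already small enough, leave untouched
    scale = target_long_side / long_side
    snap = lambda v: max(8, int(round(v * scale / 8)) * 8)
    return snap(width), snap(height)
```

Feed the result into your latent/image-size node before re-queueing the workflow.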
FAQ: ComfyUI Flux Error Fixes
- If only black images are produced → check the model path and file name.
- If a node cannot be found → update ComfyUI to the latest version and install the required custom nodes.
- Out-of-VRAM error (CUDA OOM) → lower the resolution or set the batch size to 1.
With these fixes, most issues in ComfyUI Flux workflows can be resolved quickly.
Best Practices & Templates
To get the most out of Flux.1 Kontext Dev in ComfyUI, it’s essential to apply structured workflows and prompt templates that enhance control, consistency, and repeatability.
Keep Prompts Concise
- Use clear, specific language with minimal fluff. For instance:
"Change the car to red, keep the background unchanged."
This aligns with best-practice prompts highlighted in recent guides.
Build Workflows Step by Step
- Use grouped nodes where possible: they make workflows modular, reusable, and easier to manage. Flux now supports quick group-node insertion for faster workflow setup.
- Start with a basic workflow and gradually add complexity—this “incremental development” method improves debugging and clarity.
Recommended Settings for Different GPUs
- If working on hardware with lower VRAM (e.g., ≤12 GB), choose lightweight model versions like FP8 or GGUF.
- For powerful GPUs (24 GB+), the full .safetensors models deliver higher fidelity, as confirmed by community examples of Flux weight versions.
Copyable Template Library
Use the following prompt templates to streamline editing tasks:
| Use Case | Template Prompt |
|---|---|
| Object Modification | "Change [object] to [new state], keep [elements to preserve] unchanged." |
| Style Transfer | "Transform to [specific style], while maintaining [composition/character] unchanged." |
| Background Replacement | "Change the background to [new background], keep the subject in the same pose." |
| Text Editing | "Replace '[original text]' with '[new text]', maintain the same font style." |
Tip: Copy these templates as starting points and tweak them based on your target context and output consistency needs.
Conclusion & Next Steps
Conclusion
In this ComfyUI tutorial, you’ve learned how to:
- Install and configure Flux.1 Kontext Dev, including handling grouped workflows for advanced editing.
- Leverage prompt engineering and structured best practices to achieve:
- Color and background modifications
- Style transfers
- Consistent character edits across multiple images
- Text replacements in designs like posters or covers
- Troubleshoot common issues ranging from black outputs to missing nodes and memory constraints.
- Apply concise prompt templates and adapt them for various AI art needs.
Next Steps
- Explore other Flux model variants such as Flux.1 Krea Dev (for text-to-image generation) or Flux.1 Fill Dev (for inpainting/outpainting workflows).
- Visit the ComfyUI documentation hub to browse tutorials covering Flux.1 Text-to-Image, ControlNet, API Nodes, and more.
- Stay updated with new Flux.1 Kontext features and prompting strategies from community hubs and blogs like ComfyUI-Wiki or Next Diffusion.
Now go ahead and start creating—your next masterpiece is just a prompt away!
