Bringing Spaces to Life: An Advanced Interior Design Visualization Workflow

ComfyUI.org
2025-06-04 08:25:31

1. Workflow Overview


This workflow chains multiple ControlNets for precise image generation, tailored to interior design visualization. Key features:

  1. Dual-ControlNet: Depth + Canny for spatial and edge control.

  2. Flux Framework: Uses Interior Design Flux FP8 Model (1024x1536 output).

  3. LoRA Enhancement: Metallic Typography LoRA for material textures.

Use Case: Design proposals, furniture layout previews.


2. Core Models

| Model Name | Function | Source/Installation |
| --- | --- | --- |
| Flux Interior Model | Base generative model (FP8) | Manual install to `models/checkpoints` |
| FLUX-ControlNet-Depth | Depth map control | Manual install to `models/controlnet` |
| FLUX-ControlNet-Canny | Edge detection control | Manual install to `models/controlnet` |
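Assuming a default ComfyUI install, the on-disk layout for the models above looks like this (the `.safetensors` filenames below are placeholders; use whatever names the model downloads ship with):

```shell
# Standard ComfyUI model folders, relative to the ComfyUI root.
mkdir -p models/checkpoints models/controlnet

# Place the downloaded files like this (placeholder filenames):
#   models/checkpoints/flux_interior_fp8.safetensors
#   models/controlnet/flux_controlnet_depth.safetensors
#   models/controlnet/flux_controlnet_canny.safetensors

ls models/checkpoints models/controlnet
```

After restarting ComfyUI, the checkpoint and both ControlNets should appear in the respective loader node dropdowns.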


3. Key Nodes

| Node Name | Functionality | Installation |
| --- | --- | --- |
| AIO_Preprocessor | Image preprocessing (Depth/Canny) | Requires `comfyui_controlnet_aux` |
| ControlNetApplyAdvanced | Multi-ControlNet fusion | Built-in |
| FluxGuidance | Conditioning boost (strength: 3.5) | Requires `ComfyUI-Flux` |

Dependencies:

  • Plugins: ComfyUI-Flux and comfyui_controlnet_aux


4. Workflow Groups

  • Group 1: Model Loading

    • Nodes: CheckpointLoaderSimple → LoRA Stack.

    • Loads base model, LoRA, and ControlNets.

  • Group 2: Preprocessing

    • Nodes: LoadImage → Dual AIO_Preprocessor (Depth + Canny).

    • Resolution: 1024x1536.

  • Group 3: Generation

    • Nodes: ControlNetApplyAdvanced → KSampler (20 steps, Euler).

    • Output: HD interior render (auto-saved).
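The Canny branch of Group 2 turns the reference image into a white-on-black edge map, which is the control image the edge ControlNet follows. As an illustration only (the real AIO_Preprocessor Canny is more elaborate, with hysteresis thresholding), a toy gradient-magnitude edge map can be sketched in plain NumPy; everything here is hypothetical and independent of ComfyUI:

```python
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Toy gradient-magnitude edge detector (central differences).

    Returns a binary white-on-black edge map -- the kind of control
    image an edge ControlNet conditions on.
    """
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical differences
    mag = np.hypot(gx, gy)                     # gradient magnitude
    return (mag > threshold * mag.max()).astype(np.uint8) * 255

# Synthetic "floor plan": a dark room with one bright wall on the right.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = edge_map(img)   # white pixels only along the wall boundary
```

Clean, high-contrast contours in the input produce crisp edge maps, which is why the demo below recommends reference images with clear outlines.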


5. Inputs & Outputs

  • Inputs:

    • Reference image (e.g., floor plan, 580x750 recommended).

    • Prompt: "Gilt on a black background, words" (customizable).

  • Output:

    • 1024x1536 render (filename prefix ComfyUI).


6. Notes

⚠️ VRAM: Minimum 12GB (dual ControlNets + high-res).
⚠️ Troubleshooting:

  • Control failure → Verify ControlNet/preprocessor match.

  • Distortion → Adjust strength in ControlNetApplyAdvanced (default: 0.8).

⚠️ Tips:

  • Use DepthAnythingV2Preprocessor for better depth estimation.

  • Try dpmpp_2m sampler for finer details.
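The strength value can also be edited outside the UI, in a workflow exported in API format. A minimal sketch, assuming the standard API-format layout where each node is keyed by ID and carries a `class_type` plus an `inputs` dict (the node IDs and the extra inputs below are made-up stand-ins for a real export):

```python
def set_controlnet_strength(workflow: dict, strength: float) -> dict:
    """Set `strength` on every ControlNetApplyAdvanced node in an
    API-format ComfyUI workflow dict."""
    for node in workflow.values():
        if node.get("class_type") == "ControlNetApplyAdvanced":
            node["inputs"]["strength"] = strength
    return workflow

# Tiny stand-in for an exported workflow (real exports are much larger).
wf = {
    "12": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"strength": 0.8, "start_percent": 0.0, "end_percent": 1.0}},
    "13": {"class_type": "KSampler", "inputs": {"steps": 20}},
}
set_controlnet_strength(wf, 0.6)  # tame distortion by lowering strength
```

Lowering strength loosens the ControlNet's grip on composition; raising it enforces the reference geometry more strictly.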


Demo Steps

  1. Replace image in LoadImage (clear contours work best).

  2. Edit prompt in CLIPTextEncode (e.g., "Modern living room, warm tones").

  3. Run Queue Prompt. Output saves to ComfyUI/output.
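Step 3 clicks Queue Prompt in the UI, but the same queue can be driven from a script via ComfyUI's HTTP API. A hedged sketch using only the standard library, assuming a locally running server on the default port 8188 and an API-format workflow dict (the one-node workflow below is a placeholder):

```python
import json
import urllib.request

def build_request(workflow: dict,
                  server: str = "http://127.0.0.1:8188") -> urllib.request.Request:
    """Wrap an API-format workflow in the JSON body that ComfyUI's
    POST /prompt endpoint expects."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Placeholder workflow; in practice, load your exported API-format JSON.
req = build_request({"3": {"class_type": "KSampler", "inputs": {"steps": 20}}})

# To actually queue the job (requires a running ComfyUI server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))  # response includes the queued prompt_id
```

The rendered image still lands in `ComfyUI/output` with the configured filename prefix, exactly as when queuing from the UI.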