Inference-only release package for a unified blind and reference-based face restoration adapter built on FLUX.2-klein-base-4B.
Selected qualitative results from the bundled examples are shown below.
Each row shows the degraded input together with blind, single-reference, and multi-reference restoration results.
A ComfyUI extension for this project is available at https://github.com/cosmicrealm/ComfyUI-Flux-FaceIR.
It packages the aligned-face restoration workflow from flux-restoration as a ComfyUI custom node, intended for cropped/aligned face inputs with optional reference images.
- Base model: black-forest-labs/FLUX.2-klein-base-4B (Hugging Face: https://huggingface.co/black-forest-labs/FLUX.2-klein-base-4B)
- Default LoRA checkpoint inside this release package: `pretrained_models/lora_weights.safetensors` (https://huggingface.co/zhangjinyang/flux-restoration/blob/main/pretrained_models/lora_weights.safetensors)
The LoRA path is relative. Do not hardcode an absolute path in code.
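One way to honor the relative-path requirement is to resolve the LoRA location from the package root at runtime. A minimal sketch, assuming you pass the package root in; the helper name is illustrative and not part of the release:

```python
from pathlib import Path

def resolve_lora_path(package_root: str) -> str:
    """Resolve the bundled LoRA checkpoint relative to the given package root.

    Avoids hardcoding an absolute path: the relative layout
    pretrained_models/lora_weights.safetensors comes from the release package.
    """
    return str(Path(package_root).resolve() / "pretrained_models" / "lora_weights.safetensors")
```

Call it with wherever you unpacked the release, e.g. `resolve_lora_path("release/flux-restoration")`.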
Download the FLUX.2-klein-base-4B snapshot and pass its directory via `--model_dir`, for example:

```
--model_dir /path/to/FLUX.2-klein-base-4B
```

The directory should contain:

```
flux-2-klein-base-4b.safetensors
vae/
text_encoder/
tokenizer/
```
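To fail fast with a clear message instead of a mid-run loading error, you can verify this layout before launching inference. A small sketch (the helper name is illustrative, not part of the release):

```python
from pathlib import Path

# Expected contents of the --model_dir snapshot, as listed above.
EXPECTED = ["flux-2-klein-base-4b.safetensors", "vae", "text_encoder", "tokenizer"]

def check_model_dir(model_dir: str) -> list[str]:
    """Return the names from EXPECTED that are missing under model_dir."""
    root = Path(model_dir)
    return [name for name in EXPECTED if not (root / name).exists()]
```

If `check_model_dir(...)` returns a non-empty list, the snapshot is incomplete.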
```
cd release/flux-restoration
pip install -r requirements.txt
```

Recommended runtime:

- Python 3.9+
- PyTorch 2.1+
The entry point is `scripts/infer.py`. It supports two usage patterns:

- Direct single-image inference
- Batch inference from a manifest JSON

Modes:

- `blind`
- `ref-single`
- `ref-multi`
```
CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
--model_dir /path/to/FLUX.2-klein-base-4B \
--mode blind \
--degraded_image examples/lq/bill_gates/1.png \
--output_dir outputs/demo
```

```
CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
--model_dir /path/to/FLUX.2-klein-base-4B \
--mode ref-single \
--degraded_image examples/lq/elon_musk/3.png \
--reference_image examples/hq/elon_musk/1.png \
--output_dir outputs/demo
```

```
CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
--model_dir /path/to/FLUX.2-klein-base-4B \
--mode ref-multi \
--degraded_image examples/lq/zhang_ziyi/3.png \
--reference_image examples/hq/zhang_ziyi/1.png \
--reference_image examples/hq/zhang_ziyi/2.png \
--reference_image examples/hq/zhang_ziyi/4.png \
--output_dir outputs/demo
```

Notes:

- `blind` ignores any reference input.
- `ref-single` uses the first provided `--reference_image`.
- `ref-multi` uses up to `--max_reference_images` references (default: 3).
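The mode semantics above can be sketched as a small selection function (the helper name is illustrative, not part of the release):

```python
def select_references(mode: str, references: list[str],
                      max_reference_images: int = 3) -> list[str]:
    """Mirror the documented reference-selection rules per mode."""
    if mode == "blind":
        return []                                 # blind ignores all references
    if mode == "ref-single":
        return references[:1]                     # first --reference_image only
    if mode == "ref-multi":
        return references[:max_reference_images]  # up to --max_reference_images
    raise ValueError(f"unknown mode: {mode}")
```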
Three manifests are bundled under `examples/manifests/`:

- `blind.json`
- `ref_single.json`
- `ref_multi.json`
```
CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
--model_dir /path/to/FLUX.2-klein-base-4B \
--manifest_json examples/manifests/blind.json
```

```
CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
--model_dir /path/to/FLUX.2-klein-base-4B \
--manifest_json examples/manifests/ref_single.json
```

```
CUDA_VISIBLE_DEVICES=0 python scripts/infer.py \
--model_dir /path/to/FLUX.2-klein-base-4B \
--manifest_json examples/manifests/ref_multi.json
```

Each manifest item contains:

- `degraded_image`
- `target_image`
- `reference_images`
- `output_path`
- `mode`
- sample metadata such as `identity` and `index`
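A manifest with these fields could be read as follows. This is a sketch: the field names come from the list above, but the exact top-level layout (a JSON list of items) is an assumption, so adjust to the bundled files if it differs:

```python
import json

def load_manifest(path: str) -> list[dict]:
    """Load a manifest JSON, assumed to be a list of item objects."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def summarize(items: list[dict]) -> list[tuple[str, str]]:
    """Pair each item's mode with its degraded input path."""
    return [(item["mode"], item["degraded_image"]) for item in items]
```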
Bundled example assets are organized as:

```
examples/
  lq/
    <identity>/
      1.png
      2.png
      ...
  hq/
    <identity>/
      1.png
      2.png
      ...
  manifests/
    blind.json
    ref_single.json
    ref_multi.json
    summary.json
  outputs/
    blind/
    ref_single/
    ref_multi/
```
Path convention:

- Degraded inputs come from `examples/lq/<identity>/<index>.png`.
- References come from `examples/hq/<identity>/<other_index>.png`.
The bundled manifests in examples/manifests/ point to these files and to the corresponding output locations under examples/outputs/.
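The path convention can be expressed as two small helpers (function names are illustrative, not part of the release):

```python
def degraded_path(identity: str, index: int) -> str:
    """Degraded input for one sample, per the examples/lq convention."""
    return f"examples/lq/{identity}/{index}.png"

def reference_paths(identity: str, index: int, all_indices: list[int]) -> list[str]:
    """HQ references: other indices of the same identity, per examples/hq."""
    return [f"examples/hq/{identity}/{i}.png" for i in all_indices if i != index]
```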
For manifest runs, predictions are written to the paths stored in the JSON files, for example:

```
examples/outputs/blind/bill_gates/001/pred.png
examples/outputs/ref_single/elon_musk/003/pred.png
examples/outputs/ref_multi/zhang_ziyi/003/pred.png
```
For direct single-image inference, outputs default to `outputs/release_lora_ref/`.