

A 284B-parameter Mixture-of-Experts model engineered for fast, affordable inference without sacrificing reasoning depth. Thirteen billion parameters active per forward pass. One million tokens of context.
DeepSeek V4 Flash is the efficiency-first member of DeepSeek's fourth-generation model family, released in preview on April 24, 2026. It sits alongside V4 Pro as a complementary option: Pro optimizes for maximum intelligence, while Flash optimizes for throughput, latency, and cost per token without falling far short on quality.
The model uses a sparse Mixture-of-Experts design: while it carries 284 billion parameters in total, only 13 billion are active during any single inference call. That translates directly into lower compute and lower cost while keeping outputs sharper than a dense 13B model would achieve on its own.
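The compute savings follow directly from the active-parameter ratio. A small arithmetic sketch, using the parameter counts above and the standard rule of thumb of roughly 2 FLOPs per active parameter per generated token:

```python
# Rough per-token compute comparison: sparse MoE vs. a hypothetical dense model
# of the same total size. Rule of thumb: ~2 FLOPs per active parameter per token.
TOTAL_PARAMS = 284e9   # all experts stored in memory
ACTIVE_PARAMS = 13e9   # parameters actually evaluated per token

flops_sparse = 2 * ACTIVE_PARAMS   # MoE forward pass, per token
flops_dense = 2 * TOTAL_PARAMS     # dense 284B forward pass, per token

print(f"active fraction: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")        # → active fraction: 4.6%
print(f"compute saving vs dense: {flops_dense / flops_sparse:.0f}x")  # → compute saving vs dense: 22x
```

Only about one parameter in twenty-two is touched per token, which is where the cost and latency advantage over an equally sized dense model comes from.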
Several architectural decisions separate V4 Flash from earlier DeepSeek releases and from the broader open-source field.
Compressed Sparse Attention (CSA): Compresses KV caches along the sequence dimension (compression rate 4 in Flash), then applies DeepSeek Sparse Attention. A lightning indexer picks the top 512 most relevant compressed KV entries per query, plus a 128-token sliding window so local context is never missed.
Heavily Compressed Attention (HCA): Applies a much more aggressive compression rate of 128, then performs dense attention over that compressed representation, giving the model a cheap global view of distant tokens in every layer. CSA and HCA layers are interleaved throughout the network.
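The compress-then-select pattern behind both layer types can be sketched in a few lines. Block-mean pooling and dot-product indexer scores are illustrative assumptions here; the actual compression and lightning-indexer details are not specified in this overview.

```python
import numpy as np

def block_compress(kv: np.ndarray, rate: int) -> np.ndarray:
    """Compress along the sequence axis by mean-pooling blocks of `rate` tokens.
    (Illustrative stand-in for the real learned compression.)"""
    seq, dim = kv.shape
    n_blocks = seq // rate
    return kv[: n_blocks * rate].reshape(n_blocks, rate, dim).mean(axis=1)

rng = np.random.default_rng(0)
seq_len, dim = 4096, 64
keys = rng.standard_normal((seq_len, dim))
query = rng.standard_normal(dim)

# CSA path: rate-4 compression, then an indexer keeps the top-512 compressed
# entries most relevant to the query, plus a 128-token local sliding window.
csa_kv = block_compress(keys, rate=4)       # 4096 -> 1024 compressed entries
scores = csa_kv @ query                     # stand-in indexer relevance scores
top512 = np.argsort(scores)[-512:]          # entries attended to sparsely
local_window = keys[-128:]                  # most recent tokens, always kept

# HCA path: rate-128 compression, then *dense* attention over the tiny summary.
hca_kv = block_compress(keys, rate=128)     # 4096 -> 32 global-summary entries

print(csa_kv.shape, len(top512), hca_kv.shape)  # → (1024, 64) 512 (32, 64)
```

At 4,096 tokens the HCA path already attends over only 32 summary entries; at 1M tokens that same global view costs roughly 8K entries rather than a full million, which is what makes per-layer global context affordable.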
Manifold-Constrained Hyper-Connections (mHC): Strengthens conventional residual connections to stabilize signal propagation across layers while preserving model expressivity, a key factor in maintaining quality at high compression ratios.
Muon optimizer: Used during training for faster convergence and greater stability. Alongside FP4/FP8 mixed precision (expert weights in FP4, most other weights in FP8), this keeps training costs low while preserving model quality.
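Muon's core step, per the public reference implementation, is to approximately orthogonalize each weight matrix's momentum-averaged gradient via a Newton-Schulz iteration before applying the update. A minimal sketch of that iteration (coefficients from the public Muon reference; whether V4 Flash modifies them is not stated here):

```python
import numpy as np

def newton_schulz_orth(G: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximately orthogonalize G: drive its singular values toward 1
    using the quintic Newton-Schulz iteration from the Muon reference."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)   # Frobenius normalization
    transposed = X.shape[0] > X.shape[1]
    if transposed:                        # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 8))          # stand-in gradient matrix
O = newton_schulz_orth(G)
# Singular values of O now cluster near 1 (approximately orthogonal rows),
# so the update has uniform scale across directions regardless of G's spectrum.
```

Orthogonalizing the update is what decouples step size from the gradient's conditioning, which is the usual explanation for Muon's faster, more stable convergence.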
The model uses one shared expert plus a pool of routed experts. The first three MoE layers use Hash routing (expert assignment by a fixed hash of the token ID), while the remaining layers use standard DeepSeekMoE learned routing. Multi-Token Prediction is enabled at depth 1 — the same strategy used in V3.
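The early-layer hash-routing rule is simple enough to state in code. The expert count and the specific hash function below are illustrative assumptions; the release does not specify them here.

```python
NUM_ROUTED_EXPERTS = 64  # illustrative; the real expert count is not stated

def hash_route(token_id: int, num_experts: int = NUM_ROUTED_EXPERTS) -> int:
    """Early-layer MoE routing: a fixed, learning-free hash of the token ID.
    Every occurrence of a given token always lands on the same expert, giving
    a deterministic, balanced assignment before learned routing takes over."""
    # Multiplicative (Knuth-style) hash as a stand-in for the actual fixed hash.
    return (token_id * 2654435761) % (2**32) % num_experts
```

Because the mapping never changes during training, the first three MoE layers avoid the load-balancing instabilities that learned routers can exhibit early on; the shared expert is evaluated for every token regardless of routing.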
Pre-trained on more than 32 trillion diverse, high-quality tokens. Post-training used a two-stage pipeline: first, independent cultivation of domain-specific experts via supervised fine-tuning and reinforcement learning with GRPO; second, unified model consolidation via on-policy distillation, integrating distinct proficiencies into a single model.
V4 Flash supports three configurable reasoning effort modes, giving developers direct control over the latency/quality trade-off without switching models entirely.
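If the effort modes are exposed as a request parameter, selection might look like the sketch below. This is a hypothetical illustration only: the mode names, the `reasoning_effort` field, and the request shape are all assumptions, since the exact API surface is not documented in this overview.

```python
# Hypothetical sketch: field names and mode labels are assumptions,
# not a documented DeepSeek API.
EFFORT_MODES = ("low", "medium", "high")

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat request selecting a reasoning effort mode."""
    if effort not in EFFORT_MODES:
        raise ValueError(f"unknown effort mode: {effort!r}")
    return {
        "model": "deepseek-v4-flash",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,  # trades latency/cost for reasoning depth
    }
```

The point of a per-request knob is that one deployment serves both latency-sensitive chat (low effort) and harder reasoning workloads (high effort) without a model swap.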
On the Artificial Analysis Intelligence Index (v4.0 — covering GDPval-AA, GPQA Diamond, HLE, IFBench, SciCode, Terminal-Bench, and others), V4 Flash in reasoning mode scores 47 versus an open-weight median of 28. Selected highlights below.
V4 Flash is positioned as the cost-effective default for most serving scenarios, the model you reach for first unless maximum frontier intelligence is explicitly required. Its combination of speed, long context, and low cost makes it a natural fit across a wide range of production workloads.
Long-context repo understanding, diff review, autocomplete at high throughput.
High-volume retrieval synthesis where cache hits reduce input costs to fractions of a cent.
Multi-step tool-calling loops; performs on par with V4 Pro on simple agent tasks.
1M-token context absorbs entire contracts, codebases, or report archives in a single call.
Think Max mode produces frontier-level formal reasoning at a fraction of Pro pricing.
Sub-second TTFT and 84 t/s throughput keep conversational latency imperceptible.
Pro carries 1.6T total / 49B active params. Flash is roughly 3–4× cheaper and faster, with reasoning that closely approaches Pro quality. Simple agent tasks: parity. Knowledge-intensive or highly complex agentic chains: Pro leads.
Flash uses 10% of V3.2's FLOPs and 7% of its KV cache at 1M-token context, a generational efficiency leap, while introducing hybrid attention and configurable reasoning modes that V3.2 lacked.
V4 Flash is currently the cheapest among small capable models, undercutting GPT-5.4 Nano on price while offering open weights and a 1M-token context that most nano-class models do not provide.