This repository hosts and maintains an up-to-date systematization of Gradient Inversion Attacks and defenses in Federated Learning, following the categorization and threat-model taxonomy introduced in our paper "SoK: Gradient Inversion Attacks in Federated Learning", published at USENIX Security 2025.
The tables are structured to provide a clear, comparative overview of the state-of-the-art, including:
- Attack techniques and their properties
- Defense mechanisms and their effectiveness
- Threat models with real-world applicability
Our goal is to offer a living resource for researchers and practitioners, reflecting the latest developments and evaluations in the field. Tables will be updated regularly as new works and insights emerge.
Feel free to open issues or pull requests to suggest updates or corrections.
For further information, please contact: [email protected]
If you find this repository useful for your research, please consider citing our work:
```bibtex
@inproceedings{carletti2025sok,
  title     = {SoK: Gradient Inversion Attacks in Federated Learning},
  author    = {Vincenzo Carletti and Pasquale Foggia and Carlo Mazzocca and Giuseppe Parrella and Mario Vento},
  booktitle = {34th USENIX Security Symposium (USENIX Security 25)},
  year      = {2025},
  pages     = {6439--6459},
  url       = {https://www.usenix.org/conference/usenixsecurity25/presentation/carletti},
  publisher = {USENIX Association},
}
```

Threat model taxonomy:

| ID | Threat Model | Model Updates | Basic Knowledge | Training Details | Surrogate Data | Client Data Distribution | Active Manipulation | Client Selection | Real-World Applicability |
|---|---|---|---|---|---|---|---|---|---|
| A | Eavesdropper | ✔️ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ★★★ |
| B | Informed Eavesdropper | ✔️ | ✔️ | ✗ | ✗ | ✗ | ✗ | ✗ | ★★★ |
| C | Parameter-Aware Eavesdropper | ✔️ | ✔️ | ✔️ | ✗ | ✗ | ✗ | ✗ | ★★★ |
| D | Data-Enhanced Eavesdropper | ✔️ | ✔️ | ✔️ | ✔️ | ✗ | ✗ | ✗ | ★★☆ |
| E | Statistical-Informed Eavesdropper | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✗ | ✗ | ★★☆ |
| F | Active Manipulator | ✔️ | ✔️ | ✔️ | ✗ | ✗ | ✔️ | ✗ | ★☆☆ |
| G | Data-Enhanced Manipulator | ✔️ | ✔️ | ✔️ | ✔️ | ✗ | ✔️ | ✗ | ★☆☆ |
| H | Active Client Manipulator | ✔️ | ✔️ | ✔️ | ✔️ | ✗ | ✔️ | ✔️ | ★☆☆ |
Legend for Applicability:
★★★ = highly applicable and less detectable
★★☆ = potentially applicable (depends on specific configuration)
★☆☆ = less applicable and more detectable
Gradient inversion attacks:

| Category | Technique | Work | Threat Model | Learning Algorithm | Image Resolution | Batch Size | Shared Model | Learning Task | Label Recovery | Open Source |
|---|---|---|---|---|---|---|---|---|---|---|
| Optimization-based | Basic Optimization | Zhu et al. (2019) [1] | A | FedSGD | 64×64 | 8 | LeNet | Object and Face Classification | ★ | link |
| | | Zhao et al. (2020) [2] | A | FedSGD | 32×32 | 1 | LeNet | Object and Face Classification | 🆕 | link |
| | | Geiping et al. (2020) [3] | B | FedSGD | 32×32 | 100 | ResNets | Object Classification | ◦ | link |
| | | Yin et al. (2021) [4] | E | FedSGD | 224×224 | 48 | ResNet-50 | Object Classification | 🆕 | ✗ |
| | | Hatamizadeh et al. (2022) [5] | E | FedSGD | 224×224 | 30 | ViT | Object and Face Classification | [4] | ✗ |
| | | Dimitrov et al. (2022) [6] | C | FedAVG | 32×32 | 10×5 | CNNs | Object Classification | link | link |
| | | Hatamizadeh et al. (2022) [7] | E | FedAVG | 224×224 | 512×8 | ResNet-18 | X-Ray Image Classification | ★ | ✗ |
| | | Kariyappa et al. (2023) [8] | B | FedSGD | 224×224 | 1024 | VGG-16 | Object Classification | ★ | link |
| | | Usynin et al. (2023) [9] | D | FedSGD | 224×224 | 1 | VGG, ResNet | Object Classification | [2] | ✗ |
| | | Li et al. (2023) [10] | E | FedSGD | 224×224 | 256 | ResNet-50 | Object Classification | 🆕 | link |
| | | Ye et al. (2024) [11] | E | FedSGD | 224×224 | 8 | ResNet-34 | Object Classification | 🆕 | link |
| | | Li et al. (2025) [12] | B | FedSGD | 224×224 | 128 | ResNet-18 | Object Classification | [2] | ✗ |
| | Augmented Optimization | Yang et al. (2023) [13] | D | FedSGD | 32×32 | 1 | LeNet, ResNet-18 | Object and Face Classification | [2] | ✗ |
| | | Yue et al. (2023) [14] | D | FedAVG | 128×128 | 5×16 | LeNet, ResNet-18 | Object Classification | [2] | link |
| | | Sun et al. (2024) [15] | D | FedSGD | 64×64 | 4 | ResNet-18 | Object Classification | ◦ | link |
| | | Liu et al. (2025) [16] | D | FedSGD | N/A | 1 | N/A | Object Classification | ◦ | ✗ |
| Generative Model-based | Online Opt. | Wang et al. (2019) [17] | D | FedSGD | 64×64 | 1 | LeNet, ResNet-18 | Object and Face Classification | [2] | ✗ |
| | Direct Reconstruction | Ren et al. (2022) [18] | B | FedSGD | 64×64 | 256 | LeNet, ResNet-18 | Object and Face Classification | ★ | link |
| | | Xue et al. (2023) [19] | D | FedSGD | 224×224 | 8 | ResNet-50 | Object Classification | ◦ | ✗ |
| | Latent-Space Optimization | Jeon et al. (2021) [20] | D | FedSGD | 64×64 | 4 | ResNet-18 | Object and Face Classification | ◦ | link |
| | | Li et al. (2022) [21] | D | FedSGD | 224×224 | 1 | ResNet-18 | Object and Face Classification | [2] | link |
| | | Fang et al. (2023) [22] | D | FedSGD | 64×64 | 1 | ResNet-18 | Object and Face Classification | [2] | link |
| | | Xu et al. (2023) [23] | D | FedSGD | 128×128 | 1 | LeNet, ResNet-18 | Object and Face Classification | [2] | ✗ |
| | | Gu et al. (2024) [24] | D | FedSGD | 256×256 | 1 | LeNet-7, ResNet-18 | Object and Face Classification | - | ✗ |
| Analytic-based | Closed Form | Zhu et al. (2021) [25] | B | FedSGD | 64×64 | 1 | 6-layer CNN | Object Classification | 🆕 | link |
| | | Lu et al. (2022) [26] | B | FedSGD | 224×224 | 1 | ViT | Object Classification | [2] | ✗ |
| | | Dimitrov et al. (2024) [27] | B | FedSGD | 256×256 | 25 | 6-layer FC-NN | Object Classification | - | ✗ |
| | Gradient Sparsification | Fowl et al. (2022) [28] | G | Both | Input Size | 256 | Model Agnostic² | Object Classification | - | link |
| | | Wen et al. (2022) [29] | F | FedSGD | Input Size | 1 | Model Agnostic | Object Classification | - | ✗ |
| | | Boenisch et al. (2023) [30] | G | Both | Input Size | 100 | FC Networks³ | Object Classification | - | colab |
| | Gradient Isolation | Boenisch et al. (2023) [31] | H | FedSGD | Input Size | 100 | FC Networks³ | Object Classification | - | ✗ |
| | | Zhao et al. (2023) [32] | G | FedAVG | Input Size | 1×64 | Model Agnostic² | Object Classification | - | ✗ |
| | | Zhao et al. (2024) [33] | G | FedAVG | Input Size | 5×8 | Model Agnostic² | Object Classification | - | link |
| | | Wang et al. (2024) [34] | F | FedSGD | Input Size | 100 | LeNet, VGG-16 | Object Classification | - | link |
| | | Garov et al. (2024) [35] | G | Both | Input Size | 512 | Model Agnostic² | Object Classification | - | link |
| | | Shi et al. (2025) [36] | G | Both | Input Size | 1024 | Various | Object Classification | - | link |
Symbol legend:
🆕 = new label recovery method
◦ = assumes label knowledge
★ = optimization-based label reconstruction
- = no label restoration algorithm used
² = adds linear layers to the shared model (potentially detectable by clients)
³ = targets linear layers; extendable to full models by modifying preceding layers to transmit inputs
"Batch Size" reflects the maximum reported; for attacks on FedAVG it is expressed as E×B, where E is the number of local epochs/iterations and B the local batch size.
"Open Source" contains clickable links to available repositories.
References:
[1] Deep Leakage from Gradients. NeurIPS. paper
[2] iDLG: Improved Deep Leakage from Gradients. paper
[3] Inverting Gradients - How Easy is it to Break Privacy in Federated Learning? paper
[4] See Through Gradients: Image Batch Recovery via GradInversion. paper
[5] GradViT: Gradient Inversion of Vision Transformers. paper
[6] Data Leakage in Federated Averaging. paper
[7] Do Gradient Inversion Attacks Make Federated Learning Unsafe? paper
[8] Cocktail Party Attack: Breaking Aggregation-based Privacy in Federated Learning Using Independent Component Analysis. paper
[9] Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks. paper
[10] E2EGI: End-to-End Gradient Inversion in Federated Learning. paper
[11] High-Fidelity Gradient Inversion in Distributed Learning. paper
[12] Temporal Gradient Inversion Attacks with Robust Optimization. paper
[13] Using Highly Compressed Gradients in Federated Learning for Data Reconstruction Attacks. paper
[14] Gradient Obfuscation Gives a False Sense of Security in Federated Learning. paper
[15] GI-PIP: Do We Require Impractical Auxiliary Dataset for Gradient Inversion Attacks? paper
[16] Mjölnir: Breaking the Shield of Perturbation-Protected Gradients via Adaptive Diffusion. paper
[17] Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning. paper
[18] GRNN: Generative Regression Neural Network—A Data Leakage Attack for Federated Learning. paper
[19] Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients. paper
[20] Gradient Inversion with Generative Image Prior. NeurIPS. paper
[21] Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage. paper
[22] GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization. paper
[23] CGIR: Conditional Generative Instance Reconstruction Attacks Against Federated Learning. paper
[24] Federated Learning Vulnerabilities: Privacy Attacks with Denoising Diffusion Probabilistic Models. paper
[25] R-GAP: Recursive Gradient Attack on Privacy. paper
[26] APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers. paper
[27] SPEAR: Exact Gradient Inversion of Batches in Federated Learning. paper
[28] Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models. paper
[29] Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification. paper
[30] When the Curious Abandon Honesty: Federated Learning is Not Private. paper
[31] Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation. paper
[32] The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning. paper
[33] LOKI: Large-scale Data Reconstruction Attack Against Federated Learning Through Model Manipulation. paper
[34] Maximum Knowledge Orthogonality Reconstruction with Gradients in Federated Learning. paper
[35] Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning. paper
[36] Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction. paper
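The closed-form entries above (e.g., R-GAP [25]) and the malicious linear-layer attacks (e.g., Robbing the Fed [28]) exploit the fact that, for a layer z = Wx + b, the weight gradient is the outer product δxᵀ while the bias gradient is δ itself, so dividing a row of ∂L/∂W by the matching entry of ∂L/∂b recovers the input exactly. A minimal sketch, with illustrative sizes and loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# For any layer z = W x + b under a per-sample loss L, the chain rule gives
#   dL/dW = outer(delta, x)  and  dL/db = delta,   with delta = dL/dz.
# Each row i of dL/dW is therefore delta_i * x, so dividing it by the
# matching bias-gradient entry yields x exactly.
W = rng.normal(size=(5, 8))
b = rng.normal(size=5)
x_secret = rng.normal(size=8)
y = rng.normal(size=5)

delta = (W @ x_secret + b) - y        # dL/dz for L = 0.5 * ||z - y||^2
grad_W = np.outer(delta, x_secret)    # dL/dW, shared in FedSGD
grad_b = delta                        # dL/db, shared in FedSGD

# Attacker: pick any row with a non-negligible bias gradient and divide.
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]
print(np.allclose(x_rec, x_secret))   # → True: exact recovery
```

Malicious-model attacks such as [28] engineer the shared model so that this ratio structure appears for individual samples even inside large aggregated batches.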
Defense mechanisms:

| Cat. | Technique | Work | Year | Where | Threat Models | Intuition | Main Weakness | Open Source |
|---|---|---|---|---|---|---|---|---|
| F | DP-based | [37] | N/A | Server | A–C 🟢, D–E 🟠, F–H 🔴 | Server adds noise to clipped client contributions | Requires a trusted (passive) server and ideal sampling conditions | ~ |
| | | [38] | N/A | Client | A–C 🟢, D–H 🟠 | Clients add noise to their own updates | Significantly compromises model utility; may be weakened by tailored GIAs [39, 14, 22] | ~ |
| | Cryptography-based | [40] | N/A | Client | A–E 🟢, F–H 🔴 | Server has access to aggregated client contributions only | Vulnerable to active malicious servers; adds communication overhead | ~ |
| | | [41] | N/A | Client | A–H 🟢 | Enables computations on encrypted data without decryption | High computational and communication overhead | ~ |
| H | Gradient Perturbation | [42, 43] | N/A | Client | A 🟢, B–C 🟠, D–H 🔴 | Transmits only the most significant gradient elements | Bypassed by modern GIAs [14, 22, 19, 16] | ~ |
| | | [14] | N/A | Client | A 🟢, B–C 🟠, D–H 🔴 | Reduces gradient precision with fewer bits | Bypassed by modern GIAs [14, 22, 19, 16] | ~ |
| | | [44] | N/A | Client | A 🟢, B–C 🟠, D–H 🔴 | Limits the magnitude of gradients | Bypassed by modern GIAs [14, 22, 19, 16] | ~ |
| | | [45] | 2021 | Client | A–B 🟢, C 🟠, D–H 🔴 | Perturbs the data representation in the FC layer to modify the gradient pattern | Bypassed by modern GIAs [14, 22] | link |
| | | [46] | 2022 | Client | A–B 🟢, C–H 🟠 | Adds Gaussian noise to high-sensitivity components of model weights | Not tested against recent generative model-based GIAs | link |
| | | [47] | 2024 | Client | A–B 🟢, C–H 🟠 | Adaptive noise injection with a sensitivity-informed perturbation strategy | Not tested against recent generative model-based GIAs | ✗ |
| | | [48] | 2025 | Client | A–D 🟢, E–H 🟠 | Perturbs gradients in a subspace orthogonal to the original one | Not evaluated against attacks with stronger threat models | link |
| H | Learning Algorithm Modification | DigestNN [49] | 2021 | Client | A–B 🟢, C–H 🟠 | Transforms data into dissimilar representations | Not tested against generative model-based GIAs | ✗ |
| | | [50] | 2022 | Client | A–B 🟢, C–H 🟠 | Slices and encrypts gradients between clients | Not tested against generative model-based GIAs | link |
| | | [51] | 2022 | Client | A–D 🟢, E–H 🟠 | Dynamically modifies each client's learning rate to make gradient estimation difficult | Uncertain impact on optimization dynamics | ✗ |
| | | [52] | 2023 | Client | A–B 🟢, C–H 🟠 | Uses augmentation to balance privacy and utility | Vulnerable during early training phases [53] | link |
| | | [54] | 2023 | Client | A–B 🟢, C–H 🟠 | Decomposes weight matrices into cascading sub-matrices, creating a nonlinear mapping between gradients and raw data | Not tested against generative model-based GIAs | ✗ |
| | | [55] | 2024 | Client | A–B 🟢, C–H 🟠 | Plug-and-play defense using vicinal distribution augmentation of training data | Not tested against generative model-based GIAs | link |
| | | [56] | 2024 | Client | A–B 🟢, C 🟠, D 🟢, E 🟠, F 🟢, G–H 🟠 | Uses visually different synthesized concealed samples to compute model updates | Introduces computational overhead to synthesize concealed images | link |
| H | Model Modification | [57] | 2020 | Client | A–B 🟢, C–H 🟠 | Parallel branch with weights hidden from the server | May be vulnerable to branch simulation scenarios or recent GIAs | ✗ |
| | | [58] | 2022 | Client | A–B 🟢, C–H 🟠 | Variational block adding randomness | Proven ineffective against advanced GIAs [14] | link |
| | | [59] | 2023 | Client | A 🟢, B–H 🟠 | Extends the model with a branch hidden from the server | May be vulnerable to branch simulation scenarios or recent GIAs | link |

Color legend: 🟢 = defense effective against the threat model, 🟠 = partially effective, 🔴 = ineffective.
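The server-side DP-based row ([37]) can be sketched in a few lines: clip each client update to a fixed L2 norm, sum, add Gaussian noise scaled to the clipping bound, and average. The function name and the hyperparameters below are placeholders, not calibrated to any (ε, δ) privacy budget.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_aggregate(updates, clip_norm=1.0, noise_std=0.1):
    """Server-side DP-style aggregation: clip every client update to an L2
    ball of radius clip_norm, sum, add Gaussian noise proportional to the
    clipping bound, and average. Placeholder hyperparameters only."""
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    noise = rng.normal(scale=noise_std * clip_norm, size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(updates)

updates = [rng.normal(size=10) for _ in range(4)]
noisy_mean = dp_aggregate(updates)
print(noisy_mean.shape)   # → (10,)
```

Clipping bounds each client's influence on the aggregate, which is what lets the noise scale be set independently of how large any single (possibly revealing) update is.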
References:
[37] Learning Differentially Private Recurrent Language Models. paper
[38] FedCDP: Client-level Differential Privacy for Federated Learning. paper
[39] Does Differential Privacy Really Protect Federated Learning From Gradient Leakage Attacks? paper
[40] SoK: Secure Aggregation Based on Cryptographic Schemes for Federated Learning. paper
[41] Practical Secure Aggregation for Privacy-Preserving Machine Learning. paper
[42] Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks. paper
[43] Preserving Data Privacy in Federated Learning Through Large Gradient Pruning. paper
[44] Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage. paper
[45] Soteria: Provable Defense Against Privacy Leakage in Federated Learning. GitHub paper
[46] Protect Privacy from Gradient Leakage Attack in Federated Learning. GitHub paper
[47] More Than Enough is Too Much: Adaptive Defenses Against Gradient Leakage in Production Federated Learning. paper
[48] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling. GitHub paper
[49] Digestive Neural Networks: A Novel Defense Strategy Against Inference Attacks in Federated Learning. paper
[50] Enhanced Security and Privacy via Fragmented Federated Learning. GitHub paper
[51] Enhancing Privacy Preservation in Federated Learning via Learning Rate Perturbation. paper
[52] Automatic Transformation Search Against Deep Leakage from Gradients. GitHub paper
[53] Bayesian Framework for Gradient Leakage. paper
[54] Privacy-Encoded Federated Learning Against Gradient-Based Data Reconstruction Attacks. paper
[55] Gradient Inversion Attacks: Impact Factors Analyses and Privacy Enhancement. GitHub paper
[56] Concealing Sensitive Samples Against Gradient Leakage in Federated Learning. GitHub paper
[57] Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks. paper
[58] PRECODE - A Generic Model Extension To Prevent Deep Gradient Leakage GitHub paper
[59] Gradient Leakage Defense with Key-Lock Module for Federated Learning GitHub paper
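The core idea behind the secure-aggregation defenses in the table above ([40, 41]) can be sketched with pairwise additive masking: every pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel and the server only learns the sum. This sketch strips out the key agreement, dropout recovery, and everything else that makes the real protocol secure; it keeps only the cancellation idea.

```python
import numpy as np

rng = np.random.default_rng(3)

def pairwise_masks(n_clients, dim):
    """One shared random mask per client pair (i, j), i < j: client i adds
    it, client j subtracts it, so every mask cancels in the global sum."""
    masks = [np.zeros(dim) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

updates = [rng.normal(size=6) for _ in range(5)]
masked = [u + m for u, m in zip(updates, pairwise_masks(5, 6))]

aggregate = np.sum(masked, axis=0)   # server only ever sees masked updates
print(np.allclose(aggregate, np.sum(updates, axis=0)))  # → True
```

Because each individual masked update is statistically independent of the client's true gradient, eavesdropper-style threat models (A–E) see nothing useful, while an actively malicious server (F–H) can still attack the aggregate or the protocol itself, matching the 🔴 entries in the table.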