Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain (Paper ID: 2700)
Preliminaries
- PyTorch
- pywt (`pip install PyWavelets`)
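As a quick check that PyWavelets is installed, and to illustrate the spatial-to-transform-domain step the method builds on, here is a minimal sketch of a one-level 2-D Haar DWT (the specific wavelet and decomposition level used by the repo's code may differ):

```python
import numpy as np
import pywt

# Toy 8x8 "image"; a one-level 2-D DWT splits it into an approximation
# band (LL) and three detail bands (LH, HL, HH).
img = np.arange(64, dtype=np.float64).reshape(8, 8)
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")

print(LL.shape)  # each band is half the input resolution: (4, 4)

# The transform is invertible, so no information is lost.
rec = pywt.idwt2((LL, (LH, HL, HH)), "haar")
print(np.allclose(rec, img))  # True
```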
0. Train the dual model:
`TrainDualModel.py`
Pretrained primal and dual models for testing have been placed in the folder `pre_trained`.
```shell
python TrainDualModel.py --dataset=cifar10 --net_type=resnet --lr=1e-6 --wd=0.005
```
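Conceptually, the dual model is a classifier whose input first passes through a fixed spatial-transform front end, so it reacts to perturbations differently than the primal model does. The sketch below is illustrative only: the actual transform and architecture live in `TrainDualModel.py`, and the Haar band-zeroing front end here (the names `haar_lowpass` and `DualNet` are ours) is an assumption, not the paper's exact operator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def haar_lowpass(x):
    """One-level Haar DWT with the detail bands zeroed, then inverted.
    For the Haar wavelet this equals 2x2 block averaging, so it can be
    written with avg_pool2d + nearest-neighbour upsampling."""
    low = F.avg_pool2d(x, 2)
    return F.interpolate(low, scale_factor=2, mode="nearest")

class DualNet(nn.Module):
    """A classifier that only ever sees the transformed (low-pass) input;
    the primal classifier sees x directly."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Linear stand-in backbone; the repo uses a ResNet.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

    def forward(self, x):
        return self.backbone(haar_lowpass(x))

torch.manual_seed(0)
net = DualNet()
x = torch.rand(4, 3, 32, 32)
print(net(x).shape)  # torch.Size([4, 10])
```

A constant image is unchanged by the low-pass front end, while high-frequency perturbations are averaged away, which is what makes the two models' sensitivities differ.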
1. Prepare correctly classified images, the corresponding adversarial examples, and natural noise examples:
`ADV_Samples.py`
The produced examples for testing are available on Google Drive: https://drive.google.com/file/d/1AbYkSKaOb7RozZ2TJlD4bvkxrSus12JJ/view?usp=sharing
```shell
python ADV_Samples.py --dataset=cifar10 --net_type=resnet --adv_type=BIM --adv_parameter=0.006
```
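`ADV_Samples.py` generates adversarial examples with the attack named by `--adv_type`. As a reference for what BIM does, here is a hedged, self-contained sketch of the Basic Iterative Method; the repo's implementation and hyperparameter conventions (e.g. exactly what `--adv_parameter` maps to) may differ.

```python
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, eps=0.006, alpha=0.002, steps=5):
    """Basic Iterative Method (I-FGSM): repeated signed-gradient ascent on
    the loss, keeping the perturbation inside an L-infinity ball of radius
    eps and the pixels inside [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Demo on a random linear "classifier" (a stand-in, not the repo's ResNet).
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = bim_attack(model, x, y)
print((x_adv - x).abs().max() <= 0.006 + 1e-6)  # perturbation stays in the ball
```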
2. Train SID on the data generated by `ADV_Samples.py`:
`KnownAttack.py`
Four pretrained SID detectors have been placed in `./ExperimentRecord/KnownAttack/`.
```shell
python KnownAttack.py
```
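The detector's input is built from how the primal and dual models respond to the same sample: natural images yield consistent predictions across the two domains, while adversarial ones do not. Below is a minimal sketch of that idea, with stand-in linear classifiers; the feature layout (concatenated logits) is our assumption and may differ from what `KnownAttack.py` actually extracts.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-ins for the pretrained primal and dual classifiers.
primal = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
dual = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def sid_features(x):
    """Concatenate both models' logits; a detector trained on these can
    learn that attacks move the primal prediction much more than the
    dual one."""
    with torch.no_grad():
        return torch.cat([primal(x), dual(x)], dim=1)

# Binary detector head: natural (0) vs adversarial (1).
detector = nn.Linear(20, 2)

x = torch.rand(8, 3, 32, 32)
feats = sid_features(x)
print(feats.shape)  # torch.Size([8, 20])
scores = detector(feats)
```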
3. Validate the generalizability of SID using the detectors obtained by running `KnownAttack.py`:
`TransferAttack.py`
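Transfer performance is typically reported as detection AUROC on attacks unseen during detector training. For reference, AUROC can be computed without extra dependencies via the Mann–Whitney identity; this helper is ours, not part of `TransferAttack.py`.

```python
import numpy as np

def auroc(natural_scores, adversarial_scores):
    """AUROC via the Mann-Whitney U identity: the probability that a
    randomly chosen adversarial sample gets a higher detector score than
    a randomly chosen natural one (ties count as half)."""
    neg = np.asarray(natural_scores, dtype=float)[:, None]
    pos = np.asarray(adversarial_scores, dtype=float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

print(auroc([0.1, 0.2, 0.3], [0.7, 0.8, 0.9]))  # 1.0 -- perfect separation
print(auroc([0.1, 0.9], [0.1, 0.9]))            # 0.5 -- chance level
```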
About
Code for our AAAI 2021 paper "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain".