
Non-Uniform_Illumination

We introduce a novel attack technique, Non-Uniform Illumination (NUI), in which images are subtly altered using varying NUI masks.

The NUI attack mask is created using several non-linear equations that generate non-uniform variations of brightness and darkness, exploiting the spatial structure of the image. We demonstrate how the proposed NUI attack degrades the performance of VGG, ResNet, MobileNetV3-small and InceptionV3 models on several well-known datasets, including CIFAR10, Caltech256 and TinyImageNet. We also evaluate the attack's effectiveness across different classes of images.
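As a minimal sketch of the idea, the snippet below builds a multiplicative illumination mask from a non-linear function of the pixel coordinates and applies it to an image. The specific sinusoidal mask form and the strength parameter `alpha` are illustrative assumptions, not the equations from the paper.

```python
import numpy as np

def nui_mask(h, w, alpha=0.3):
    """Illustrative NUI-style mask: a hypothetical non-linear function of
    normalised pixel coordinates, so brightness varies with position."""
    ys, xs = np.mgrid[0:h, 0:w]
    spatial = np.sin(np.pi * xs / w) * np.cos(np.pi * ys / h)
    return 1.0 + alpha * spatial  # multiplicative illumination factor

def apply_nui(image, alpha=0.3):
    """Apply the mask to a float image in [0, 1] and clip to a valid range."""
    h, w = image.shape[:2]
    mask = nui_mask(h, w, alpha)
    if image.ndim == 3:          # broadcast the mask over colour channels
        mask = mask[..., None]
    return np.clip(image * mask, 0.0, 1.0)
```

Because the mask is multiplicative and smooth, the perturbation stays visually subtle while still shifting the intensity statistics the model relies on.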

Overview

[Figure: overview of the NUI attack]

Explanation of the Non-Uniform Illumination (NUI) attack used to fool CNN models on the task of image classification. The first row shows training on the original training data, the second row shows testing on the clean test data, and the third row shows testing on the transformed (NUI-attacked) test set.

Method

[Figure: workflow of the proposed method]

Here is the workflow of the proposed method and the experimental settings used for training and testing CNN models under the NUI attack.

Result

[Figure: images perturbed by the 12 NUI masks]

We apply 12 different masks to perform the NUI attack. The figure above shows the effect of the NUI attack on different images: the first row contains the original images, and the subsequent rows contain the transformed images after the NUI attack. The second row contains images perturbed using mask function mask(1), and the remaining rows use the mask functions mask(2) through mask(12), respectively.
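The family of masks can be sketched as a list of non-linear functions over normalised coordinates, each producing one perturbed variant of an image. The four example functions below are assumptions for demonstration only; the paper defines 12 specific equations.

```python
import numpy as np

# Illustrative mask functions over normalised coordinates u, v in [0, 1].
# These specific forms are hypothetical stand-ins for the paper's 12 masks.
MASK_FUNCS = [
    lambda u, v: np.sin(np.pi * u),
    lambda u, v: np.cos(np.pi * v),
    lambda u, v: np.sin(2 * np.pi * u * v),
    lambda u, v: (u - 0.5) ** 2 + (v - 0.5) ** 2,
]

def perturbed_variants(image, alpha=0.3):
    """Yield one NUI-perturbed copy of `image` per mask function."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u, v = xs / (w - 1), ys / (h - 1)
    for f in MASK_FUNCS:
        mask = 1.0 + alpha * f(u, v)
        if image.ndim == 3:      # broadcast over colour channels
            mask = mask[..., None]
        yield np.clip(image * mask, 0.0, 1.0)
```

Each mask emphasises a different spatial pattern of brightening and darkening, which is why the rows in the figure look visibly distinct.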

Below are the accuracy, precision, recall and F1-score for the MobileNetV3-small model.

[Figures: accuracy, precision, recall and F1-score for MobileNetV3-small]

To illustrate the change in the data distribution caused by the NUI attack, we show t-SNE plots.

The graphs below are the t-SNE embeddings of the original data and of the data after applying the NUI attack with one of the masks, computed using MobileNetV3-small features.
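A t-SNE comparison of this kind can be sketched as below. The feature matrices are synthetic stand-ins for penultimate-layer MobileNetV3-small features of clean and NUI-perturbed images; the perplexity value is an assumption.

```python
import numpy as np
from sklearn.manifold import TSNE

# Synthetic stand-ins for model features: 100 clean and 100 NUI-perturbed
# samples, each a 64-dimensional feature vector.
rng = np.random.default_rng(0)
clean_feats = rng.normal(0.0, 1.0, size=(100, 64))
nui_feats = rng.normal(0.5, 1.2, size=(100, 64))

# Embed both sets jointly so the two clouds share one 2-D space.
feats = np.vstack([clean_feats, nui_feats])
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(feats)

clean_2d, nui_2d = emb[:100], emb[100:]  # plot these two clouds separately
```

Embedding clean and attacked features together is important: running t-SNE separately on each set would produce incomparable coordinate systems.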

[Figures: t-SNE of the original data and of the NUI-attacked data]

Links and Citation

Our paper is available at https://ieeexplore.ieee.org/document/10916770; a preprint PDF (arxiv_paper) is included in the repository.

Cite our work using: A. Jain, S. R. Dubey, S. K. Singh, K. Santosh and B. B. Chaudhuri, "Non-Uniform Illumination Attack for Fooling Convolutional Neural Networks," IEEE Transactions on Artificial Intelligence, March 2025.

About

Image perturbation attack based on non-linear illumination.
