Speaker: Debdoot Sheet, PhD, SMIEEE
Assistant Professor, Department of Electrical Engineering and Centre for Artificial Intelligence
Adversarial attacks in machine learning are a set of techniques that attempt to fool models through maliciously crafted input, causing standard machine learning based models to malfunction or produce incorrect predictions. A well-cited example of such an attack on a deep learning based image recognition system is the addition of carefully chosen noise to an image of a primate, which leads to its misclassification as a dog. Deep learning is a genre of machine learning algorithms that attempts to solve tasks by data-driven modelling of abstractions, following a stratified description paradigm built from non-linear transformation architectures. Put in simple terms, a deep learning system for classifying primates vs. dogs would be constructed by feeding a large collection of labelled images of primates and dogs to a multi-layer deep neural network (DNN), computing the error in its inference, and backpropagating that error through the DNN to update its free parameters so as to minimize the inference error. Intuitively, the earlier layers of the DNN, those closest to the image, start by aggregating pixel-wide information into multiple attributes termed low-level features, viz. edges, gradients, etc.; these are hierarchically aggregated to obtain complex features, viz. blobs, lines, curves, etc., which are further aggregated to make a decision in favor of the presence or absence of an object. Being entirely data driven, these hierarchical abstractions are not always intuitively explainable, and it turns out that, more often than not, adversarial attacks tend to make such models behave very erratically in ways that cannot be humanly explained, viz. the cited case where the addition of visually imperceptible, very low-magnitude noise makes the model classify a primate as a dog.
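The noise-addition attack described above can be sketched with the fast gradient sign method (FGSM) on a toy linear classifier. Everything below (the weights, the input, the class names) is made up purely for illustration, not drawn from any particular trained model; the point is only that a tiny signed-gradient perturbation flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pre-trained logistic-regression classifier:
# class 1 (say, "primate") vs class 0 (say, "dog").
w = np.array([2.0, -1.0])  # assumed weights, for illustration only
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y, eps):
    # Gradient of the cross-entropy loss w.r.t. the input x
    # (for logistic regression: (sigmoid(z) - y) * w).
    grad = (sigmoid(w @ x + b) - y) * w
    # FGSM: step in the sign direction that increases the loss.
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5])        # clean input, classified as class 1
x_adv = fgsm(x, y=1, eps=0.6)   # small per-feature perturbation
```

Here `predict(x)` returns 1 on the clean input, while `predict(x_adv)` returns 0: the bounded perturbation is enough to cross the decision boundary, mirroring the primate-to-dog misclassification.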
While these challenges persist in destabilizing deep learning based systems deployed in practical scenarios, recent research on generative modelling has shown the affirmative might of using such attacks to train robust deep learning systems. The general perception is to associate adversarial learning only with generative modelling, whereas many contributions essentially use adversarial learning to address the perception-distortion tradeoff in cost functions. This has helped us generate images from random vectors using generative adversarial networks (GAN), develop single-image super-resolution algorithms (Deep SR GAN), perform adversarial image-to-image transformation, and carry out semantic segmentation under adversarial losses.
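For concreteness, a generator objective that trades distortion against perceptual quality can be sketched as a weighted sum of a pixel-wise distortion term and an adversarial term. The NumPy function below is a minimal illustration, not any specific paper's recipe; the weight `lam` and the discriminator probability `d_prob` are assumed inputs.

```python
import numpy as np

def generator_loss(sr, hr, d_prob, lam=1e-3):
    """Combined loss for a generator (e.g. super-resolution network).

    sr     -- generated (super-resolved) image, as an array
    hr     -- ground-truth high-resolution image
    d_prob -- discriminator's probability that sr is a real image
    lam    -- tradeoff weight between distortion and perception terms
    """
    # Distortion term: pixel-wise mean squared error vs. ground truth.
    mse = np.mean((sr - hr) ** 2)
    # Adversarial (perceptual) term: non-saturating GAN loss -log D(G(y)).
    adv = -np.log(d_prob + 1e-12)
    return mse + lam * adv
```

When the reconstruction is pixel-perfect and the discriminator is fully fooled (`d_prob` near 1), the loss approaches zero; lowering `d_prob` raises the loss even at zero distortion, which is exactly the adversarial pressure toward perceptually convincing outputs.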
This tutorial will focus on understanding the perception-distortion tradeoff and its mathematical constructs with respect to a loss function. Subsequently, this understanding will be used to discern the computational mechanics of implementing adversarial losses within generative models, regression learning problems like super-resolution and image-to-image transformation, and classification problems like semantic segmentation. The material will be delivered through standard lectures intertwined with hands-on Python based implementations which participants can carry out on their standard laptops. Setup instructions will be provided to the audience prior to the tutorial sessions.
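As a point of reference for the mathematical constructs mentioned above, one common formalization of the tradeoff (due to Blau and Michaeli) defines the perception-distortion function as the best achievable perceptual quality under a distortion budget:

```latex
P(D) \;=\; \min_{p_{\hat{X} \mid Y}} \; d\!\left(p_X,\, p_{\hat{X}}\right)
\quad \text{s.t.} \quad
\mathbb{E}\!\left[\Delta(X, \hat{X})\right] \le D
```

Here $\Delta$ is a full-reference distortion measure (e.g. MSE) between the true signal $X$ and the estimate $\hat{X}$, and $d$ is a divergence between the distributions of natural and reconstructed images; the adversarial loss of a GAN discriminator can be viewed as approximating this divergence term.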