Security of Generative Adversarial Networks
The overarching goal of this work is to explore the security landscape of Generative Adversarial Networks (GANs). In recent years their adoption has gained traction, and they are now used in many critical domains where security is paramount. Since a GAN is a system of two or more neural networks, a security weakness in one component can be exploited against the system as a whole. This is the attack vector considered in this work. Specifically, this research evaluated the threat potential of an adversarial attack against the discriminator component of the system. Such an attack aims to distort the generated output by injecting maliciously modified input during training. The attack was empirically evaluated against four types of GANs (CGAN, ACGAN, WGAN, and WGAN-GP), injections of 10% and 20% malicious data, and two datasets (MNIST and F-MNIST). The attack was developed by improving an existing attack on GANs. The lower bound for the injection size turned out to be 10% for the improved attack and 10-20% for the baseline attack. It was shown that the attack on WGAN-GP can overcome a filtering-based defence on F-MNIST. Furthermore, it was demonstrated that differentially private GANs are likely impossible to defend using current countermeasures.
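The abstract only names the attack vector, not its mechanics; as a rough illustration, the sketch below shows what a discriminator-side poisoning step during GAN training might look like in PyTorch. This is a minimal sketch, not the thesis's method: the noise-based poison() function standing in for "maliciously modified input", the tiny MLP architectures, and the poison_fraction parameter are all illustrative assumptions.

```python
# Minimal sketch of discriminator poisoning during GAN training.
# Assumptions (not from the thesis): additive-noise poisoning,
# tiny MLP networks, and the 10% injection rate from the abstract.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in discriminator and generator for 28x28 images (flattened).
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
G = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 784), nn.Tanh())

opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

poison_fraction = 0.10  # 10% injection, matching the smaller evaluated size


def poison(batch):
    """Illustrative malicious modification: heavy additive noise."""
    return (batch + 0.5 * torch.randn_like(batch)).clamp(-1, 1)


def discriminator_step(real):
    n = real.size(0)
    k = int(poison_fraction * n)
    # Replace a fraction of the real batch with poisoned samples while
    # still labelling them "real", so the discriminator learns a
    # corrupted decision boundary that distorts the generator's output.
    tainted = real.clone()
    tainted[:k] = poison(real[:k])

    fake = G(torch.randn(n, 64)).detach()  # generator update omitted
    loss = (bce(D(tainted), torch.ones(n, 1)) +
            bce(D(fake), torch.zeros(n, 1)))
    opt_D.zero_grad()
    loss.backward()
    opt_D.step()
    return loss.item()


# One illustrative step on random stand-in "real" data in [-1, 1].
real_batch = torch.rand(32, 784) * 2 - 1
print("D loss:", discriminator_step(real_batch))
```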
Language
- eng
Degree
- Master of Science
Program
- Computer Science
Granting Institution
- Ryerson University
LAC Thesis Type
- Thesis