Toronto Metropolitan University

Security of Generative Adversarial Networks

Thesis, posted on 2024-03-18, authored by Kyrylo Rudavskyy

The overarching goal of this work is to explore the security landscape of Generative Adversarial Networks (GANs). In recent years their adoption has gained traction, and they are now used in many critical domains where security is paramount. Because a GAN is a system of two or more neural networks, a security weakness in one component can be exploited against the whole system. This is the attack vector considered in this work. Specifically, this research evaluated the threat potential of an adversarial attack against the discriminator component. Such an attack aims to distort the generator's output by injecting maliciously modified input during training. The attack was empirically evaluated against four types of GANs (CGAN, ACGAN, WGAN, and WGAN-GP), injections of 10% and 20% malicious data, and two datasets (MNIST and F-MNIST). The attack was created by improving an existing attack on GANs. The lower bound for the injection size turned out to be 10% for the improved attack and 10-20% for the baseline. It was shown that the attack on WGAN-GP can overcome a filtering-based defence on F-MNIST. Furthermore, it was demonstrated that differentially private GANs are likely impossible to defend using current countermeasures.
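The core mechanism described above, replacing a fraction of the discriminator's training data with maliciously modified samples, can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the function name, the data shapes, and the use of NumPy are assumptions chosen for a self-contained toy example. The `injection_rate` values 0.1 and 0.2 correspond to the 10% and 20% injection sizes evaluated in the thesis.

```python
import numpy as np

def poison_training_set(clean_data, malicious_data, injection_rate, seed=0):
    """Illustrative sketch (not the thesis's code): replace a fraction of
    the clean training set with maliciously modified samples, modeling a
    data-poisoning attack on a GAN discriminator's training input."""
    rng = np.random.default_rng(seed)
    n = len(clean_data)
    n_poison = int(n * injection_rate)
    # Choose which clean samples to overwrite with malicious ones.
    idx = rng.choice(n, size=n_poison, replace=False)
    poisoned = clean_data.copy()
    poisoned[idx] = malicious_data[:n_poison]
    return poisoned, idx

# Toy example: 100 "clean" samples of 4 features each, 20% injection.
clean = np.zeros((100, 4))
malicious = np.ones((100, 4))  # stand-in for maliciously modified inputs
mixed, idx = poison_training_set(clean, malicious, injection_rate=0.2)
print(len(idx))     # 20 samples replaced
print(mixed.sum())  # 20 samples x 4 features = 80.0
```

In a real attack the malicious samples would be crafted (e.g. perturbed or mislabeled) rather than constant, and `mixed` would then be fed to the discriminator during GAN training.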

History

Language

eng

Degree

  • Master of Science

Program

  • Computer Science

Granting Institution

Ryerson University

LAC Thesis Type

  • Thesis

Thesis Advisor

Ali Miri

Year

2022
