Toronto Metropolitan University

Progressively Growing of Least Squares Generative Adversarial Networks

thesis
posted on 2023-08-29, 16:09 authored by Sharif Mansour

In the past decade, generative models have seen exponential growth in use within computer vision. One architecture that has consistently contributed to this domain is the generative adversarial network (GAN). These networks can produce outstanding results and highly realistic images, but they do not come without drawbacks: they tend to be extremely unstable when trained at resolutions beyond 64x64. As a result, several solutions have been proposed to combat instability and other issues encountered during training, such as a lack of variation in the generated images. One set of solutions focuses on alternative loss functions, such as the Wasserstein distance loss or the least squares loss, while others propose altering the network architecture or the training methodology. Building upon the success of these approaches, this paper proposes an architecture that grows during training to allow high-resolution images to be produced. The proposed solution combines several existing techniques while also contributing novel changes to the GAN architecture. As an outcome, this report showcases the proposed approach and its ability to produce results comparable to other state-of-the-art solutions.
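
For reference, the least squares objective mentioned in the abstract is conventionally written as follows. This is the standard LSGAN formulation from the literature, given here only as an illustration; the coding labels a, b, c shown below are the usual convention and may differ from the exact values used in this work:

\min_{D} V(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[(D(x) - b)^{2}\right] + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_{z}}\!\left[(D(G(z)) - a)^{2}\right]

\min_{G} V(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_{z}}\!\left[(D(G(z)) - c)^{2}\right]

A common choice is a = 0 for generated samples and b = c = 1 for real samples, so the generator is penalized quadratically whenever the discriminator places its samples far from the real-data label, which is what gives the least squares loss its smoother gradients compared to the original cross-entropy GAN objective.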

History

Language

English

Degree

  • Master of Engineering

Program

  • Electrical and Computer Engineering

Granting Institution

Ryerson University

LAC Thesis Type

  • MRP

Thesis Advisor

Dr. Cungang Yang

Year

2021
