Progressive Growing of Least Squares Generative Adversarial Networks
In the past decade, generative models have seen exponential growth in use within the world of computer vision. One architecture that has consistently contributed to this domain is the generative adversarial network (GAN). These networks can produce outstanding, highly realistic images. They do not, however, come without drawbacks: they tend to be extremely unstable when trained at resolutions beyond 64x64. As a result, several solutions have been proposed to combat instability and other issues encountered during training, such as a lack of variation in the images produced. One family of solutions focuses on alternative loss functions, such as the Wasserstein distance loss or the least squares loss, while other solutions propose altering the architecture or the training methodology the networks undergo. Building upon the success of these approaches, this paper proposes an architecture that grows during training to allow high-resolution images to be produced. The proposed solution combines the efforts of multiple existing methods while also contributing novel changes to the GAN architecture. As an outcome, this report showcases the new approach and its ability to produce results comparable to other state-of-the-art solutions.
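For reference, the least squares loss mentioned above replaces the standard GAN cross-entropy objective with a quadratic penalty. The standard LSGAN formulation (Mao et al.), with target labels $a$ for fake data, $b$ for real data, and $c$ for the value the generator wants the discriminator to assign to fakes (typically $a = 0$, $b = 1$, $c = 1$), is:

$$
\min_D V_{\text{LS}}(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\big[(D(x) - b)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - a)^2\big]
$$

$$
\min_G V_{\text{LS}}(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - c)^2\big]
$$

Unlike the sigmoid cross-entropy loss, this objective penalizes generated samples in proportion to their distance from the decision boundary, which is one reason it is credited with more stable training.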
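To make the growing idea concrete, below is a minimal PyTorch-style sketch of the fade-in mechanism used in progressive growing, where a newly added higher-resolution block is alpha-blended with an upsampled copy of the previous stage's output. All module names, channel counts, and resolutions here are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FadeInGenerator(nn.Module):
    """Illustrative sketch: a new high-resolution block is blended with
    the upsampled low-resolution output via a fade-in weight alpha."""

    def __init__(self):
        super().__init__()
        # Existing stage: upsamples feature maps to 32x32 (hypothetical sizes).
        self.low_res_block = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.to_rgb_low = nn.Conv2d(64, 3, 1)
        # Newly added stage: doubles resolution to 64x64.
        self.high_res_block = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.to_rgb_high = nn.Conv2d(32, 3, 1)

    def forward(self, features, alpha):
        # alpha ramps from 0 to 1 while the new stage is trained in.
        x = self.low_res_block(features)
        low_rgb = F.interpolate(self.to_rgb_low(x), scale_factor=2,
                                mode="nearest")             # old, upsampled path
        high_rgb = self.to_rgb_high(self.high_res_block(x))  # new path
        return (1.0 - alpha) * low_rgb + alpha * high_rgb

# Usage: ramp alpha linearly over the fade-in period of the new stage.
gen = FadeInGenerator()
z_features = torch.randn(4, 128, 16, 16)
img = gen(z_features, alpha=0.3)   # partially faded-in 64x64 output
print(img.shape)                   # torch.Size([4, 3, 64, 64])
```

The blend lets the new layers start as a near-identity perturbation of an already-trained lower-resolution network, avoiding the shock of introducing untrained weights all at once.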
Language
- English
Degree
- Master of Engineering
Program
- Electrical and Computer Engineering
Granting Institution
- Ryerson University
LAC Thesis Type
- MRP