Cyclic Adversarial Framework with Implicit Autoencoder and Wasserstein Loss (CAFIAWL)


Bonabi Mobaraki, Ehsan (2020) Cyclic Adversarial Framework with Implicit Autoencoder and Wasserstein Loss (CAFIAWL). [Thesis]


Official URL: https://risc01.sabanciuniv.edu/record=b2486358 (Table of contents)


Since the invention of the Simple Perceptron, Artificial Neural Networks (ANNs) have attracted many researchers. Technological improvements in computers and the internet paved the way for unprecedented computational power and an immense amount of data, which boosted interest (and therefore progress), particularly in the last decade. Today, neural networks play a vital role in virtually every type of machine learning research and serve as the main engine of many applications. Beyond learning from data with machines in order to make informed decisions, "creating" something new, unseen, and novel with machines is also a very appealing area of research. Generative models are among the most promising models for this goal and may eventually lead to "computational creativity". Recently, Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have shown tremendous success in terms of their generative performance. However, conventional VAEs have produced outputs of limited quality, and GANs have suffered severely from a problem that limits the diversity of the generated outputs, namely the mode collapse problem. One line of research that aims to eliminate these weaknesses is the development of hybrid models that capture the strengths of both algorithms while avoiding their weaknesses. In this research, we propose a novel generative model composed of four adversarial networks. Two of the adversarial networks are very similar to conventional GANs, and the remaining two are essentially WGANs, which are based on the Wasserstein loss function. The way these adversarial networks are put together also incorporates two implicit autoencoders into the proposed model and provides a cyclic framework that addresses the mode collapse problem. The performance of the proposed model is evaluated in various aspects using the MNIST data.
The analysis suggests that the proposed model generates good-quality output while avoiding the mode collapse problem.
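The Wasserstein loss mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard WGAN objective, not code from the thesis; the function names are ours.

```python
def wasserstein_critic_loss(critic_real, critic_fake):
    """WGAN critic objective: the critic maximizes
    E[critic(real)] - E[critic(fake)], so as a loss to
    minimize we return the negation of that difference."""
    mean = lambda xs: sum(xs) / len(xs)
    return -(mean(critic_real) - mean(critic_fake))


def wasserstein_generator_loss(critic_fake):
    """The generator minimizes -E[critic(fake)], i.e. it tries
    to raise the critic's scores on generated samples."""
    return -sum(critic_fake) / len(critic_fake)


# Example with illustrative critic scores:
# mean(real) = 1.5, mean(fake) = 0.5, so the critic loss is -1.0.
print(wasserstein_critic_loss([1.0, 2.0], [0.0, 1.0]))  # -1.0
print(wasserstein_generator_loss([0.0, 1.0]))           # -0.5
```

Unlike the cross-entropy loss of a conventional GAN discriminator, the critic's raw (unbounded) scores are compared directly, which is what gives WGANs their smoother training signal and resistance to mode collapse.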

Item Type:Thesis
Uncontrolled Keywords:Auto-Encoder. -- Bi-GAN. -- Wasserstein loss. -- Cycle-GAN. -- VGH. -- Mode Collapse. -- Autoencoders. -- GAN. -- Bidirectional. -- Wasserstein metric. -- Mode collapse problem.
Subjects:T Technology > TK Electrical engineering. Electronics Nuclear engineering > TK7800-8360 Electronics > TK7885-7895 Computer engineering. Computer hardware
ID Code:41180
Deposited By:IC-Cataloging
Deposited On:24 Oct 2020 12:11
Last Modified:24 Oct 2020 12:11
