MSBD5012 Term Paper
Due Date: 7 December 2019
As announced by the university administration, there won’t be any proctored examinations this semester and alternative assessment arrangements are to be made by the course instructors. For CSIT6000G/MSBD5012, you are asked to write a term paper in lieu of the final exam.
You need to choose one topic from the following list, and explain it informally:
· Adversarial attack
· Variational autoencoders
· Generative adversarial networks
· Deep reinforcement learning
The targeted readers of the paper are computer science students who are about to take the course. In other words, you are asked to explain the chosen topic to your past self on 1 September 2019. Obviously, there isn’t enough space for you to include all the details; however, you need to cover the key concepts and key ideas.
You can follow the outlines of the relevant lectures, and you might need to include content covered before those lectures as background.
You can include diagrams and mathematical formulae. However, avoid mathematical formulae as much as possible because the purpose is to give informal explanations, not formal proofs.
The term paper should be no more than 4 pages in length, and the font size of the main text should be 12pt. You are encouraged to use the LaTeX template of ICLR 2020. Generate a PDF file for submission via Canvas and name your file “[Last Name]_[First Name]_[Student ID].pdf”. Discussions among students are encouraged; however, you need to write up your paper independently. A plagiarism checker will be run on all submitted reports.
The term paper will be graded using the following scheme:
· Overall understanding of the topic: 50%
· Clarity of explanations: 30%
· Effort (how polished the report is): 20%
The term paper is due by 23:59 on 7 December, the scheduled final exam date. No late submissions will be accepted.
In a machine learning course, we learn how to teach a program to extract information from data. Many kinds of tasks can be done, such as classification, regression, and generating new data that looks natural. Generating new data is a particularly interesting job. Imagine that you could teach your program to learn the painting style of Picasso. Picasso’s paintings are famous and expensive; his most expensive painting, _Les femmes d’Alger_, is worth $179 million. If we could easily teach a program to paint an image in his style, that would be amazing. Today we can do this with an algorithm called GANs.
GAN stands for Generative Adversarial Network. A GAN consists of two complex algorithms (neural networks) competing against each other: one is called the generator, and the other is called the discriminator.
The generative model and the discriminative model play a game against each other and thereby learn to produce quite good output. Taking pictures as an example: the generator’s main task is to learn from the set of real pictures, so that the pictures it generates look ever closer to real ones and can “disguise” themselves to the discriminator. The discriminator’s main task is to pick out the pictures produced by the generator, distinguish them from the real pictures, and judge which are true and which are false. Throughout the iterations, the generator continuously strives to make its generated images more and more realistic, while the discriminator continuously strives to tell real pictures from fake ones. This is the game between the generator and the discriminator. After repeated iterations, the two finally reach a balance: the pictures produced by the generator are very close to real pictures, and the discriminator can hardly tell the difference between real and fake. In practice, this means that for both real and fake pictures, the discriminator’s probability output is close to 0.5.
Let’s again assume that I want to replicate Picasso’s style. After studying every detail of Picasso’s paintings, I think I have learned a lot, so I find a collector to help me improve. The collector has rich experience and sharp eyes; no imitation Picasso on the market can escape him. The collector tells me: “When your painting can deceive me, you will have succeeded.”
Then I show him this one:
The collector glanced at it and was very angry: “0 points! You call this a painting? Far too poor!” After hearing the collector’s words, I began to reflect on myself and kept drawing without hesitation, even though what I produced was little more than a black image. So I drew another picture:
The collector looked at it: “1 point! Repaint!” I realized it was still not good enough; the painting was too poor. So I went back to study Picasso’s painting style, and I kept improving and re-creating, until one day I showed my new painting to the collector:
This time, the collector put on his glasses and analyzed it carefully. After a long while, he patted my shoulder and said the painting was very good. I was delighted to be praised and affirmed by the collector.
This example is actually the GAN training process. I am the generator: my goal is to output a picture that can fool the collector, making it difficult for him to distinguish true from false. The collector is the discriminator: his goal is to identify my paintings and judge them to be fake. The whole process is a game of “generation versus confrontation.” In the end, I (the generator) produce a picture so convincing that even the collector (the discriminator) can hardly tell whether it is real.
Having covered the basic idea, let us see what Generative Adversarial Networks (GANs) are more precisely. Generally, GANs are a model architecture for training a generative model, and deep learning models are most commonly used in this architecture.
The GAN architecture was first described in the 2014 paper by Ian Goodfellow, et al. titled “Generative Adversarial Networks.” After this paper appeared, many related papers followed. A standardized approach called Deep Convolutional Generative Adversarial Networks, or DCGAN, which led to more stable models, was later formalized by Alec Radford, et al. in the 2015 paper titled “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.”
The GAN model architecture involves two sub-models: a _generator model_ for generating new examples and a _discriminator model_ for classifying whether generated examples are real, from the domain, or fake, generated by the generator model.
- Generator. Model that is used to generate new plausible examples from the problem domain.
- Discriminator. Model that is used to classify examples as real (_from the domain_) or fake (_generated_).
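As a concrete (and deliberately tiny) sketch, the two sub-models can be viewed as nothing more than two functions with trainable parameters. The sizes, layer shapes, and activation choices below are illustrative assumptions, not the architecture from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): a 10-dim latent vector and
# 784-dim flattened "images", with one small hidden layer each.
LATENT_DIM, DATA_DIM, HIDDEN = 10, 784, 32

# Generator parameters: maps a random latent vector to a fake sample.
G_W1 = rng.normal(0.0, 0.1, (LATENT_DIM, HIDDEN))
G_W2 = rng.normal(0.0, 0.1, (HIDDEN, DATA_DIM))

def generator(z):
    h = np.tanh(z @ G_W1)       # hidden layer
    return np.tanh(h @ G_W2)    # fake sample in [-1, 1], like a normalized image

# Discriminator parameters: maps a sample (real or fake) to P(real).
D_W1 = rng.normal(0.0, 0.1, (DATA_DIM, HIDDEN))
D_W2 = rng.normal(0.0, 0.1, (HIDDEN, 1))

def discriminator(x):
    h = np.tanh(x @ D_W1)
    return 1.0 / (1.0 + np.exp(-(h @ D_W2)))   # sigmoid: probability "real"

z = rng.normal(size=(1, LATENT_DIM))   # latent vector drawn from a Gaussian
fake = generator(z)                    # shape (1, 784)
p = discriminator(fake).item()         # a probability strictly between 0 and 1
```

Before training, the discriminator’s output is essentially arbitrary; training is what gives both functions their meaning.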
Generative adversarial networks are based on a game theoretic scenario in which the generator network must compete against an adversary. The generator network directly produces samples. Its adversary, the discriminator network, attempts to distinguish between samples drawn from the training data and samples drawn from the generator.
The generator model takes a fixed-length random vector as input and generates a sample in the domain. The vector is drawn randomly from a Gaussian distribution and is used to seed the generative process. After training, points in this multidimensional vector space will correspond to points in the problem domain, forming a compressed representation of the data distribution.
This vector space is referred to as a latent space, or a vector space comprised of latent variables. Latent variables, or hidden variables, are those variables that are important for a domain but are not directly observable.
We often refer to latent variables, or a latent space, as a projection or compression of a data distribution. That is, a latent space provides a compression or high-level concepts of the observed raw data such as the input data distribution. In the case of GANs, the generator model applies meaning to points in a chosen latent space, such that new points drawn from the latent space can be provided to the generator model as input and used to generate new and different output examples.
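To make the idea of a latent space concrete, the snippet below stands in a fixed random linear map for a trained generator (purely an assumption for illustration) and walks a straight line between two latent points; each intermediate point decodes to a different output:

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM, DATA_DIM = 10, 4   # illustrative sizes

# Stand-in for a trained generator: any fixed map from latent to data space.
W = rng.normal(size=(LATENT_DIM, DATA_DIM))

def generator(z):
    return np.tanh(z @ W)

# Two random latent points and a straight-line walk between them.
z0 = rng.normal(size=LATENT_DIM)
z1 = rng.normal(size=LATENT_DIM)

samples = [generator((1 - t) * z0 + t * z1) for t in np.linspace(0.0, 1.0, 5)]
# Nearby latent points give nearby outputs; the two endpoints differ.
```

With a real trained generator, such an interpolation produces a smooth visual morph between two generated images, which is one way to see that the latent space has acquired meaning.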
After training, the generator model is kept and used to generate new samples.
Example of the GAN Generator Model
The discriminator model takes an example from the domain as input (real or generated) and predicts a binary class label of real or fake (generated).
The real example comes from the training dataset. The generated examples are output by the generator model.
The discriminator is a normal (and well understood) classification model.
After the training process, the discriminator model is discarded as we are interested in the generator.
Sometimes, the discriminator can instead be retained and repurposed, as it has learned to effectively extract features from examples in the problem domain. Some or all of its feature extraction layers can be used in transfer learning applications using the same or similar input data.
Generative modeling is an unsupervised learning problem, although a clever property of the GAN architecture is that the training of the generative model is framed as a supervised learning problem. The two models, the generator and the discriminator, are trained together. The generator generates a batch of samples, and these, along with real examples from the domain, are provided to the discriminator and classified as real or fake. The discriminator is then updated to get better at discriminating real and fake samples in the next round, and, importantly, the generator is updated based on how well, or how poorly, its generated samples fooled the discriminator.
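The alternating update described above can be sketched end-to-end on a toy one-dimensional problem. Everything here is an illustrative assumption: real data drawn from a Gaussian around 4, a linear generator, a logistic-regression discriminator, and hand-derived gradients (real GANs use deep networks and automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy setup (assumptions): real data ~ N(4, 0.5); generator g(z) = a*z + b
# with z ~ N(0, 1); discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr, batch = 0.02, 64

for step in range(5000):
    # --- discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    real = rng.normal(4.0, 0.5, batch)
    fake = a * rng.normal(size=batch) + b
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # --- generator update: push d(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    fake = a * z + b
    p_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - p_fake) * w * z)   # chain rule through g(z) = a*z + b
    b += lr * np.mean((1 - p_fake) * w)

# After training, the generated mean b should sit near the real mean of 4.
```

Note the structure: each iteration first improves the discriminator on labeled real/fake batches (the supervised framing), then updates the generator using the discriminator’s feedback.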
In this way, the two models compete against each other; they are adversarial in the game-theoretic sense and are playing a zero-sum game. Here, zero-sum means that when the discriminator successfully identifies real and fake samples, it is rewarded, or no change is needed to its parameters, whereas the generator is penalized with large updates to its parameters. Conversely, when the generator fools the discriminator, it is rewarded, or no change is needed to its parameters, but the discriminator is penalized and its parameters are updated.
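This zero-sum game is summarized compactly by the minimax objective of the original Goodfellow et al. (2014) paper, which the discriminator D tries to maximize and the generator G tries to minimize:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

When the generator’s distribution matches the data distribution, the optimal discriminator outputs D(x) = 1/2 everywhere, which is exactly the 0.5 probability output mentioned earlier.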
In the limit, the generator generates perfect replicas from the input domain every time, and the discriminator cannot tell the difference and predicts “unsure” (e.g. 50% for real and fake) in every case. This is just an idealized case; we do not need to reach this point to arrive at a useful generator model.
Example of the Generative Adversarial Network Model Architecture