StyleGAN2: Near-Perfect Human Face Synthesis…and More

Dear Fellow Scholars, this is Two Minute Papers
with Károly Zsolnai-Fehér. Neural network-based learning algorithms are
on the rise these days, and even though it is common knowledge that they are capable
of image classification, or in other words, looking at an image and saying whether it
depicts a dog or a cat, nowadays, they can do much, much more. In this series, we covered a stunning paper
that showcased a system that could not only classify an image, but write a proper sentence
on what is going on, and could cover even highly non-trivial cases. You may be surprised, but this thing is not
recent at all. This is 4-year-old news! Insanity. Later, researchers turned this whole problem
around, and performed something that was previously thought to be impossible. They started using these networks to generate
photorealistic images from a written text description. We could create new bird species by specifying
that it should have orange legs and a short yellow bill. Later, researchers at NVIDIA recognized and
addressed two shortcomings: one was that the images were not that detailed, and two, even
though we could input text, we couldn’t exert too much artistic control over the results. In came StyleGAN to the rescue, which was
then able to perform both of these difficult tasks really well. These images were progressively grown, which
means that we start out with a coarse image and refine it over and over again, adding
new details at each step. This is what the results look like, and we
can marvel at the fact that none of these people are real. However, some of these images were still contaminated
by unwanted artifacts. Furthermore, some features are
highly localized: as we exert control over these images, you can see how
the teeth and eyes are pinned to a particular location, and the algorithm just refuses to
let them go, sometimes to the detriment of their surroundings. This new work is titled StyleGAN2, and it addresses
all of these problems in one go. Perhaps this is the only place on the internet
where we can say that finally, teeth and eyes are now allowed to float around freely, and
mean it with a positive sentiment. Here you see a few hand-picked examples from
the best ones, and I have to say, these are eye-poppingly detailed and correct-looking
images. My goodness! The mixing examples you have seen earlier
are also outstanding. Way better than the previous version. Also, note that as there are plenty of training
images out there for many other things beyond human faces, it can also generate cars, churches,
horses, and of course, cats. Now that the original StyleGAN 1 work has
been out for a while, we have a little more clarity and understanding as to how it does
what it does, and the redundant parts of the architecture have been revised and simplified. This clarity comes with additional advantages
beyond faster and higher-quality training and image generation. Interestingly, despite the fact that the quality
has improved significantly, images made with the new method can be detected more easily. Note that the paper does much, much more than
this, so make sure to have a look in the video description! In this series, we always say that two more
papers down the line, this technique will be leaps and bounds beyond the first iteration. Well, here we are, not two, only one more
paper down the line. What a time to be alive! The source code of this project is also available. What’s more, it even runs in your browser. This episode has been supported by Weights
& Biases. Weights & Biases provides tools to track your
experiments in your deep learning projects. It can save you a ton of time and money in
these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. Here you see a beautiful final report on one
of their projects on classifying parts of street images, and see how these learning
algorithms evolve over time. Make sure to visit them through
the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping
us make better videos for you. Thanks for watching and for your generous
support, and I’ll see you next time!
