High Resolution Generative Adversarial Networks (GANs)

  • Development
  • Feb 28, 2025

High Resolution Generative Adversarial Networks (GANs), available at $54.99, has an average rating of 4.4, with 57 lectures, based on 86 reviews, and has 1003 subscribers.

You will learn how to create a GAN capable of generating high-resolution images using TensorFlow 2.0, distribute training across a TPU or multiple GPUs, implement the R2 loss function, a scaled convolutional layer, up-sampling and down-sampling layers, and mini-batch standard deviation to capture dataset variation, generate unlimited random images from a trained generator, apply a perceptual path length filter to generated images, and generate interpolations between two different generated images. This course is ideal for machine learning developers who want to create high-resolution images with GANs.

Enroll now: High Resolution Generative Adversarial Networks (GANs)

Summary

Title: High Resolution Generative Adversarial Networks (GANs)

Price: $54.99

Average Rating: 4.4

Number of Lectures: 57

Number of Published Lectures: 57

Number of Curriculum Items: 57

Number of Published Curriculum Objects: 57

Original Price: $29.99

Quality Status: approved

Status: Live

What You Will Learn

  • Create a GAN capable of generating high resolution images using TensorFlow 2.0
  • Distribute training on a TPU or multiple GPUs
  • Implement the R2 loss function
  • Implement a scaled convolutional layer
  • Implement up-sampling and down-sampling layers
  • Implement mini-batch standard deviation to capture dataset variation
  • Generate infinite random images from a trained generator
  • Apply a perceptual path length filter to generated images
  • Generate interpolations between two different generated images
Who Should Attend

  • Machine learning developers who want to create high resolution images with GANs

Target Audiences

  • Machine learning developers who want to create high resolution images with GANs

This course covers the fundamentals necessary for a state-of-the-art GAN. Anyone who has experimented with GANs on their own knows that it’s easy to throw together a GAN that spits out MNIST digits, but it’s another level of difficulty entirely to produce photorealistic images at a resolution higher than a thumbnail.

This course comprehensively bridges the gap between MNIST digits and high-definition faces. You’ll create and train a GAN that can be used in real-world applications.

And because training high-resolution networks of any kind is computationally expensive, you’ll also learn how to distribute your training across multiple GPUs or TPUs. For training itself, we’ll leverage Google’s TPU hardware for free in Google Colab, which allows students to train generators at up to 512×512 resolution with no hardware costs at all.

The material for this course was pulled from the ProGAN, StyleGAN, and StyleGAN 2 papers, which have produced ground-breaking and awe-inspiring results. We’ll even use the same Flickr-Faces-HQ dataset to replicate their results.

Finally, what GAN course would be complete without having some fun with the generator? Students will learn not only how to generate an unlimited quantity of unique images, but also how to filter them down to the highest-quality images using a perceptual path length filter. You’ll even learn how to generate smooth interpolations between two generated images, which make for some really interesting visuals.

Course Curriculum

    Chapter 1: Introduction

    Lecture 1: Introduction

    Lecture 2: Myths About GANs And the Truth About What They Really Are

    Chapter 2: Architecture

    Lecture 1: Generator – High Level

    Lecture 2: Generator – Details

    Lecture 3: Discriminator – High Level

    Lecture 4: Discriminator – Details

    Chapter 3: Weight Scaling

    Lecture 1: Theory

    Lecture 2: Conv2d

    Lecture 3: Dense

    Lecture 4: LeakyRelu
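The weight scaling covered in this chapter (ProGAN's "equalized learning rate") can be illustrated with a toy dense layer. A minimal NumPy sketch, not the course's TensorFlow code:

```python
import numpy as np

class ScaledDense:
    """Dense layer with ProGAN-style runtime weight scaling.

    Weights are stored as plain N(0, 1) draws and multiplied by the He
    constant sqrt(2 / fan_in) on every forward pass, so all layers see
    an effectively equal learning rate regardless of their width.
    """
    def __init__(self, fan_in, fan_out, rng):
        self.w = rng.standard_normal((fan_in, fan_out))  # unscaled storage
        self.b = np.zeros(fan_out)
        self.scale = np.sqrt(2.0 / fan_in)               # He constant

    def __call__(self, x):
        return x @ (self.w * self.scale) + self.b

rng = np.random.default_rng(0)
layer = ScaledDense(512, 256, rng)
y = layer(rng.standard_normal((4, 512)))
print(y.shape)  # (4, 256)
```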

    Chapter 4: Resampling

    Lecture 1: Resampling Theory

    Lecture 2: Blurring Theory

    Lecture 3: Blurring Code

    Lecture 4: Resampling Code
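The resampling and blurring steps from this chapter can be sketched in NumPy. This is a simplified single-channel illustration (nearest-neighbour upsampling, a separable [1, 2, 1] blur, and average-pool downsampling), not the course's TensorFlow implementation:

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling of an HxW image."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def blur(img):
    """Separable [1, 2, 1]/4 blur along both axes with edge padding.

    ProGAN/StyleGAN apply a small low-pass filter like this around
    resampling steps to suppress checkerboard artifacts.
    """
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    p = np.pad(img, 1, mode="edge")
    img = k[0] * p[:-2, 1:-1] + k[1] * p[1:-1, 1:-1] + k[2] * p[2:, 1:-1]
    p = np.pad(img, 1, mode="edge")
    return k[0] * p[1:-1, :-2] + k[1] * p[1:-1, 1:-1] + k[2] * p[1:-1, 2:]

def downsample2x(img):
    """2x downsampling by 2x2 average pooling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
up = blur(upsample2x(img))       # smooth 8x8 version
down = downsample2x(up)          # back to 4x4
print(up.shape, down.shape)      # (8, 8) (4, 4)
```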

    Chapter 5: Combined Resampling + Convolution

    Lecture 1: Theory

    Lecture 2: Downsampling Code

    Lecture 3: Upsampling Code

    Chapter 6: Minibatch Standard Deviation

    Lecture 1: Theory

    Lecture 2: Code
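A simplified, single-group version of the minibatch standard deviation layer can be written in a few lines of NumPy (ProGAN's actual layer splits the batch into groups, and the course implements it in TensorFlow):

```python
import numpy as np

def minibatch_stddev(x, eps=1e-8):
    """Append a minibatch-standard-deviation feature map.

    x has shape (batch, height, width, channels). The per-pixel,
    per-channel standard deviation is computed across the batch,
    averaged to a single scalar, and broadcast as one extra channel,
    giving the discriminator a direct signal about sample variety
    (a collapsed generator produces a tiny value here).
    """
    std = np.sqrt(x.var(axis=0) + eps)          # (H, W, C) stddev over batch
    mean_std = std.mean()                        # one scalar for the batch
    n, h, w, _ = x.shape
    feat = np.full((n, h, w, 1), mean_std)       # broadcast as a new channel
    return np.concatenate([x, feat], axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4, 16))
y = minibatch_stddev(x)
print(y.shape)  # (8, 4, 4, 17): one extra channel
```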

    Chapter 7: PixelNorm and Image Conversion

    Lecture 1: Pixelwise Normalization Theory

    Lecture 2: Pixelwise Normalization Code

    Lecture 3: Image Conversion
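The pixelwise normalization from this chapter is essentially a one-liner. A NumPy sketch of the operation, not the course's TensorFlow layer:

```python
import numpy as np

def pixel_norm(x, eps=1e-8):
    """Pixelwise feature-vector normalization from ProGAN.

    Each spatial location's channel vector is rescaled to unit average
    square, which keeps generator activation magnitudes from escalating.
    x has shape (batch, height, width, channels).
    """
    return x / np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)

rng = np.random.default_rng(0)
x = 100.0 * rng.standard_normal((2, 4, 4, 8))   # deliberately large scale
y = pixel_norm(x)
print(np.mean(y ** 2, axis=-1).round(3))        # ~1 at every pixel
```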

    Chapter 8: Model Code

    Lecture 1: Generator

    Lecture 2: Discriminator

    Chapter 9: Loss and Training Step

    Lecture 1: High Level Training Overview

    Lecture 2: Why the Wasserstein Loss Doesn’t Scale to High Resolutions

    Lecture 3: R2 Loss Theory

    Lecture 4: Lazy Regularization Theory

    Lecture 5: Step Function Code
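The R2 penalty from this chapter (Mescheder et al.'s gradient regularizer, evaluated on generated rather than real samples) can be sketched without a deep-learning framework by using a discriminator whose input gradient is available in closed form. This toy linear discriminator is an assumption for illustration only; the course computes the gradient with TensorFlow's autodiff:

```python
import numpy as np

# Sketch of the R2 gradient penalty: gamma/2 * E[ ||grad_x D(x)||^2 ]
# evaluated on *fake* samples (R1 is the same penalty on real samples).
# For D(x) = x @ w + b, the gradient w.r.t. the input is simply w.

rng = np.random.default_rng(0)
DIM = 32
w = rng.standard_normal(DIM)
b = 0.0

def discriminator(x):
    return x @ w + b

def r2_penalty(x_fake, gamma=10.0):
    grad = np.tile(w, (x_fake.shape[0], 1))          # dD/dx per fake sample
    return 0.5 * gamma * np.mean(np.sum(grad ** 2, axis=1))

x_fake = rng.standard_normal((16, DIM))
print(r2_penalty(x_fake))   # equals gamma/2 * ||w||^2 for this linear D
```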

    Chapter 10: Using a TPU With a Distributed Strategy

    Lecture 1: Theory

    Lecture 2: Simple Example

    Lecture 3: Tips

    Lecture 4: Distributing Our Training Loop

    Chapter 11: Supporting Callbacks

    Lecture 1: Calling Back From the Training Loop

    Lecture 2: Visualization Callback – Introduction

    Lecture 3: Visualization Callback – Visualization Generator

    Lecture 4: Visualization Callback – Callback Itself

    Lecture 5: Checkpoint Callback – Overview

    Lecture 6: Checkpoint Callback – Checkpointer

    Lecture 7: Checkpoint Callback – Serialization

    Lecture 8: Checkpoint Callback – Pickling the TrainingState

    Chapter 12: Training

    Lecture 1: Dataset

    Lecture 2: Options

    Lecture 3: train() Function

    Lecture 4: Main Training Script

    Lecture 5: Main Training Script Demo

    Lecture 6: Training in Colab

    Lecture 7: BFloat16

    Chapter 13: Generating Images

    Lecture 1: Perceptual Path Length Filter – Theory

    Lecture 2: Perceptual Path Length Filter – Code

    Lecture 3: Perceptual Path Length Filter – Effect On Variety

    Lecture 4: Main Image Generation Script

    Lecture 5: Results!

    Lecture 6: Interpolations – Point to Point

    Lecture 7: Interpolations – Circular
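A point-to-point interpolation like the one in Lecture 6 is commonly done spherically rather than linearly, since spherical interpolation keeps intermediate latents at a typical norm for a Gaussian prior. A NumPy sketch (slerp is one standard choice; the course's exact scheme may differ):

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    Interpolating along the great circle keeps the intermediate vectors
    at a typical norm, which tends to produce cleaner in-between images
    than a straight line (lerp) does.
    """
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(z0n @ z1n, -1.0, 1.0))  # angle between them
    so = np.sin(omega)
    if so < 1e-6:                                     # nearly parallel: lerp
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / so

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(512), rng.standard_normal(512)
frames = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 8)]
print(np.allclose(frames[0], z0), np.allclose(frames[-1], z1))  # True True
```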

    Chapter 14: Wrapping Up

    Lecture 1: Creating TFRecords

    Lecture 2: Conclusion

Instructors

  • Brad Klingensmith
    Machine Learning Instructor

Rating Distribution

  • 1 star: 2 votes
  • 2 stars: 3 votes
  • 3 stars: 8 votes
  • 4 stars: 29 votes
  • 5 stars: 44 votes
Frequently Asked Questions

    How long do I have access to the course materials?

    You can view and review the lecture materials indefinitely, like an on-demand channel.

    Can I take my courses with me wherever I go?

    Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!