
Advanced Reinforcement Learning in Python: cutting-edge DQNs

  • Development
  • Apr 01, 2025
Synopsis

Advanced Reinforcement Learning in Python: cutting-edge DQNs is available for $69.99 and has an average rating of 4.85 based on 81 reviews, with 104 lectures and 1,430 subscribers.

In this course you will master some of the most advanced Reinforcement Learning algorithms and learn to create AIs that act in complex environments to achieve their goals. You will build advanced Reinforcement Learning agents from scratch using Python's most popular tools (PyTorch Lightning, OpenAI Gym, Optuna), perform hyperparameter tuning (choosing the best experimental conditions for your AI to learn), gain a fundamental understanding of each algorithm's learning process, debug and extend the algorithms presented, and implement new algorithms from research papers. The course is ideal for developers who want a job in Machine Learning, data scientists/analysts and ML practitioners seeking to expand their breadth of knowledge, and robotics and engineering students and researchers.

Enroll now: Advanced Reinforcement Learning in Python: cutting-edge DQNs

Summary

Title: Advanced Reinforcement Learning in Python: cutting-edge DQNs

Price: $69.99

Average Rating: 4.85

Number of Lectures: 104

Number of Published Lectures: 103

Number of Curriculum Items: 104

Number of Published Curriculum Objects: 103

Original Price: $199.99

Quality Status: approved

Status: Live

What You Will Learn

  • Master some of the most advanced Reinforcement Learning algorithms.
  • Learn how to create AIs that can act in a complex environment to achieve their goals.
  • Create advanced Reinforcement Learning agents from scratch using Python's most popular tools (PyTorch Lightning, OpenAI Gym, Optuna).
  • Learn how to perform hyperparameter tuning (choosing the best experimental conditions for your AI to learn).
  • Fundamentally understand the learning process for each algorithm.
  • Debug and extend the algorithms presented.
  • Understand and implement new algorithms from research papers.
Who Should Attend

  • Developers who want to get a job in Machine Learning.
  • Data scientists/analysts and ML practitioners seeking to expand their breadth of knowledge.
  • Robotics students and researchers.
  • Engineering students and researchers.
    This is the most complete Advanced Reinforcement Learning course on Udemy. In it, you will learn to implement some of the most powerful Deep Reinforcement Learning algorithms in Python using PyTorch and PyTorch Lightning. You will implement, from scratch, adaptive algorithms that solve control tasks based on experience, and you will learn to combine these techniques with Neural Networks and Deep Learning methods to create adaptive Artificial Intelligence agents capable of solving decision-making tasks.

    This course will introduce you to the state of the art in Reinforcement Learning techniques. It will also prepare you for the next courses in this series, where we will explore other advanced methods that excel in other types of tasks.

    The course is focused on developing practical skills. Therefore, after learning the most important concepts of each family of methods, we will implement one or more of their algorithms in Jupyter notebooks, from scratch.
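As an illustration of this from-scratch style, one of the core components the course builds (in the PyTorch Lightning chapter) is an experience replay buffer. The sketch below is a minimal, hypothetical version of the idea, not the course's actual code; the class and method names are chosen here for illustration only:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions for off-policy learning."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition automatically
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, stabilizing gradient updates
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buffer = ReplayBuffer(capacity=3)
for t in range(5):
    buffer.push(t, 0, 1.0, t + 1, False)
print(len(buffer))  # 3: the capacity caps how many transitions are kept
```

Bounding the buffer's capacity matters in practice: very old transitions come from a much worse policy and can slow learning if replayed forever.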

    Leveling modules: 

    – Refresher: The Markov decision process (MDP).

    – Refresher: Q-Learning.

    – Refresher: Brief introduction to Neural Networks.

    – Refresher: Deep Q-Learning.
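The Q-Learning refresher above centers on the temporal-difference update rule, Q(s,a) ← Q(s,a) + α(r + γ maxₐ' Q(s',a') − Q(s,a)). A minimal tabular sketch of that update (the state/action counts and hyperparameter values here are arbitrary, chosen only for illustration):

```python
import numpy as np

n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))  # tabular action-value estimates
alpha, gamma = 0.1, 0.99             # learning rate and discount factor

def q_learning_update(Q, s, a, r, s_next, done):
    # Bootstrap from the greedy action in the next state,
    # unless the episode terminated there
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# One update from transition (s=0, a=1, r=1.0, s'=1):
q_learning_update(Q, s=0, a=1, r=1.0, s_next=1, done=False)
print(round(Q[0, 1], 3))  # 0.1 = alpha * reward, since Q[s'] is still all zeros
```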

    Advanced Reinforcement Learning:

    – PyTorch Lightning.

    – Hyperparameter tuning with Optuna.

    – Reinforcement Learning with image inputs

    – Double Deep Q-Learning

    – Dueling Deep Q-Networks

    – Prioritized Experience Replay (PER)

    – Distributional Deep Q-Networks

    – Noisy Deep Q-Networks

    – N-step Deep Q-Learning

    – Rainbow Deep Q-Learning
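Several of the techniques listed above change only how the bootstrap target is computed. For instance, Double Deep Q-Learning reduces maximization bias by letting the online network *select* the next action while the target network *evaluates* it. A hedged NumPy sketch of that target (the Q-value arrays below are made-up numbers for illustration):

```python
import numpy as np

gamma = 0.99  # discount factor

def double_dqn_target(reward, done, q_online_next, q_target_next):
    """Double DQN target: online net picks the action, target net scores it."""
    best_action = np.argmax(q_online_next)   # action selection (online network)
    bootstrap = q_target_next[best_action]   # action evaluation (target network)
    return reward + gamma * bootstrap * (1.0 - float(done))

q_online_next = np.array([1.0, 3.0, 2.0])   # online net favors action 1
q_target_next = np.array([1.5, 2.0, 2.5])   # target net's estimates, same state
target = double_dqn_target(reward=1.0, done=False,
                           q_online_next=q_online_next,
                           q_target_next=q_target_next)
print(round(target, 2))  # 2.98 = 1.0 + 0.99 * 2.0
```

Compare with vanilla DQN, which would bootstrap from `q_target_next.max()` (2.5 here): using one network for both selection and evaluation systematically overestimates values, which is exactly the bias this decoupling addresses.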

    Course Curriculum

    Chapter 1: Introduction

    Lecture 1: Introduction

    Lecture 2: Reinforcement Learning series

    Lecture 3: Google Colab

    Lecture 4: Where to begin

    Lecture 5: Complete code

    Lecture 6: Connect with me on social media

    Chapter 2: Refresher: The Markov Decision Process (MDP)

    Lecture 1: Module overview

    Lecture 2: Elements common to all control tasks

    Lecture 3: The Markov decision process (MDP)

    Lecture 4: Types of Markov decision process

    Lecture 5: Trajectory vs episode

    Lecture 6: Reward vs Return

    Lecture 7: Discount factor

    Lecture 8: Policy

    Lecture 9: State values v(s) and action values q(s,a)

    Lecture 10: Bellman equations

    Lecture 11: Solving a Markov decision process

    Chapter 3: Refresher: Q-Learning

    Lecture 1: Module overview

    Lecture 2: Temporal difference methods

    Lecture 3: Solving control tasks with temporal difference methods

    Lecture 4: Q-Learning

    Lecture 5: Advantages of temporal difference methods

    Chapter 4: Refresher: Brief introduction to Neural Networks

    Lecture 1: Module overview

    Lecture 2: Function approximators

    Lecture 3: Artificial Neural Networks

    Lecture 4: Artificial Neurons

    Lecture 5: How to represent a Neural Network

    Lecture 6: Stochastic Gradient Descent

    Lecture 7: Neural Network optimization

    Chapter 5: Refresher: Deep Q-Learning

    Lecture 1: Module overview

    Lecture 2: Deep Q-Learning

    Lecture 3: Experience replay

    Lecture 4: Target Network

    Chapter 6: PyTorch Lightning

    Lecture 1: PyTorch Lightning

    Lecture 2: Link to the code notebook

    Lecture 3: Introduction to PyTorch Lightning

    Lecture 4: Create the Deep Q-Network

    Lecture 5: Create the policy

    Lecture 6: Create the replay buffer

    Lecture 7: Create the environment

    Lecture 8: Define the class for the Deep Q-Learning algorithm

    Lecture 9: Define the play_episode() function

    Lecture 10: Prepare the data loader and the optimizer

    Lecture 11: Define the train_step() method

    Lecture 12: Define the train_epoch_end() method

    Lecture 13: [Important] Lecture correction

    Lecture 14: Train the Deep Q-Learning algorithm

    Lecture 15: Explore the resulting agent

    Chapter 7: Hyperparameter tuning with Optuna

    Lecture 1: Hyperparameter tuning with Optuna

    Lecture 2: Link to the code notebook

    Lecture 3: Log average return

    Lecture 4: Define the objective function

    Lecture 5: Create and launch the hyperparameter tuning job

    Lecture 6: Explore the best trial

    Chapter 8: Double Deep Q-Learning

    Lecture 1: Maximization bias and Double Deep Q-Learning

    Lecture 2: Link to the code notebook

    Lecture 3: Create the Double Deep Q-Learning algorithm

    Lecture 4: Check the resulting agent

    Chapter 9: Dueling Deep Q-Networks

    Lecture 1: Dueling Deep Q-Networks

    Lecture 2: Link to the code notebook

    Lecture 3: Create the dueling DQN

    Lecture 4: Observation and reward normalization

    Lecture 5: Create the environment – Part 1

    Lecture 6: Create the environment – Part 2

    Lecture 7: Implement Deep Q-Learning

    Lecture 8: Check the resulting agent

    Chapter 10: Prioritized Experience Replay

    Lecture 1: Prioritized Experience Replay

    Lecture 2: Link to the code notebook

    Lecture 3: DQN for visual inputs

    Lecture 4: Prioritized Experience Replay Buffer

    Lecture 5: Create the environment

    Lecture 6: Implement the Deep Q-Learning algorithm with Prioritized Experience Replay

    Lecture 7: Errata Lecture

    Lecture 8: Launch the training process

    Lecture 9: Check the resulting agent

    Chapter 11: Noisy Deep Q-Networks

    Lecture 1: Noisy Deep Q-Networks

    Lecture 2: Link to the code notebook

    Lecture 3: Create the noisy linear layer class

    Lecture 4: Create the Deep Q-Network

    Lecture 5: Create the policy

    Lecture 6: Create the environment

    Lecture 7: Train the algorithm

    Lecture 8: Check the results

    Chapter 12: N-step Deep Q-Learning

    Lecture 1: N-step Deep Q-Learning

    Lecture 2: Link to the code notebook

    Lecture 3: N-step Deep Q-Learning – Part 1

    Lecture 4: N-step Deep Q-Learning – Part 2

    Lecture 5: N-step Deep Q-Learning – Part 3
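The n-step return this module builds up to sums n discounted rewards before bootstrapping from the value of the nth next state, which propagates reward information faster than the 1-step target. A small illustrative sketch (the discount factor and example numbers are arbitrary):

```python
gamma = 0.99  # discount factor

def n_step_return(rewards, bootstrap_value):
    """Sum of n discounted rewards plus a discounted bootstrap estimate."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r            # reward k steps ahead, discounted
    return g + (gamma ** len(rewards)) * bootstrap_value

# 3-step return with rewards [1, 1, 1] and a bootstrap estimate of 10:
# 1 + 0.99 + 0.9801 + 0.99**3 * 10
print(round(n_step_return([1.0, 1.0, 1.0], 10.0), 4))  # 12.6731
```

With `rewards` of length 1 this reduces to the familiar 1-step target r + γV(s'), so 1-step Q-learning is just the n = 1 special case.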

    Instructors

  • Escape Velocity Labs
    Hands-on, comprehensive AI courses
Rating Distribution

  • 1 star: 2 votes
  • 2 stars: 1 vote
  • 3 stars: 6 votes
  • 4 stars: 9 votes
  • 5 stars: 63 votes
Frequently Asked Questions

    How long do I have access to the course materials?

    You can view and review the lecture materials indefinitely, like an on-demand channel.

    Can I take my courses with me wherever I go?

    Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!