
Modern Reinforcement Learning: Deep Q Agents (PyTorch & TF2)

  • Development
  • Mar 02, 2025

Modern Reinforcement Learning: Deep Q Agents (PyTorch & TF2), available at $79.99, has an average rating of 4.39 based on 1057 reviews, comprises 50 lectures, and has 6202 subscribers.

You will learn how to read and implement deep reinforcement learning papers; how to code Deep Q learning agents; how to code Double Deep Q learning agents; how to code Dueling Deep Q and Dueling Double Deep Q learning agents; how to write modular and extensible deep reinforcement learning software; and how to automate hyperparameter tuning with command line arguments. This course is ideal for Python developers eager to learn about cutting-edge deep reinforcement learning.

Enroll now: Modern Reinforcement Learning: Deep Q Agents (PyTorch & TF2)

Summary

Title: Modern Reinforcement Learning: Deep Q Agents (PyTorch & TF2)

Price: $79.99

Average Rating: 4.39

Number of Lectures: 50

Number of Published Lectures: 50

Number of Curriculum Items: 50

Number of Published Curriculum Objects: 50

Original Price: $199.99

Quality Status: approved

Status: Live

What You Will Learn

  • How to read and implement deep reinforcement learning papers
  • How to code Deep Q learning agents
  • How to code Double Deep Q learning agents
  • How to code Dueling Deep Q and Dueling Double Deep Q learning agents
  • How to write modular and extensible deep reinforcement learning software
  • How to automate hyperparameter tuning with command line arguments
Who Should Attend

  • Python developers eager to learn about cutting edge deep reinforcement learning
Target Audiences

  • Python developers eager to learn about cutting edge deep reinforcement learning
In this complete deep reinforcement learning course you will learn a repeatable framework for reading and implementing deep reinforcement learning research papers. You will read the original papers that introduced the Deep Q learning, Double Deep Q learning, and Dueling Deep Q learning algorithms. You will then learn how to implement these in Pythonic and concise PyTorch and TensorFlow 2 code that can be extended to include any future deep Q learning algorithms. These algorithms will be used to solve a variety of environments from the OpenAI Gym's Atari library, including Pong, Breakout, and Bank Heist.
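
To give a rough sense of how the Double Deep Q target differs from the vanilla Deep Q target, here is a minimal numpy sketch. The batch values, array names, and sizes are all made up for illustration; the course implements the real thing with PyTorch and TensorFlow 2 networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Q-values for a batch of 4 next-states and 3 actions,
# standing in for the outputs of the online and target networks.
q_online_next = rng.normal(size=(4, 3))   # Q(s', a; theta)
q_target_next = rng.normal(size=(4, 3))   # Q(s', a; theta-minus)
rewards = np.array([1.0, 0.0, -1.0, 0.5])
dones = np.array([0, 0, 1, 0])            # episode-termination flags
gamma = 0.99

# Vanilla DQN target: the target network both selects and evaluates
# the next action, which tends to overestimate values.
dqn_target = rewards + gamma * q_target_next.max(axis=1) * (1 - dones)

# Double DQN target: the online network selects the action,
# the target network evaluates it.
best_actions = q_online_next.argmax(axis=1)
ddqn_target = rewards + gamma * q_target_next[np.arange(4), best_actions] * (1 - dones)
```

Note that the Double Q target can never exceed the vanilla target for the same batch, since evaluating the online network's chosen action is at most the target network's own maximum; terminal transitions bootstrap nothing in either case.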

    You will learn the key to making these Deep Q learning algorithms work: how to modify the OpenAI Gym's Atari library to meet the specifications of the original Deep Q learning papers. You will learn how to:

  • Repeat actions to reduce computational overhead

  • Rescale the Atari screen images to increase efficiency

  • Stack frames to give the Deep Q agent a sense of motion

  • Evaluate the Deep Q agent's performance with random no-ops to deal with model overtraining

  • Clip rewards to enable the Deep Q learning agent to generalize across Atari games with different score scales

If you do not have prior experience in reinforcement learning or deep reinforcement learning, that's no problem. Included in the course is a complete and concise course on the fundamentals of reinforcement learning, taught in the context of solving the Frozen Lake environment from the OpenAI Gym.

    We will cover:

  • Markov decision processes

  • Temporal difference learning

  • The original Q learning algorithm

  • How to solve the Bellman equation

  • Value functions and action value functions

  • Model-free vs. model-based reinforcement learning

  • Solutions to the explore-exploit dilemma, including optimistic initial values and epsilon-greedy action selection
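
To preview how these pieces fit together, here is a minimal tabular Q learning loop with epsilon-greedy action selection on a hypothetical five-state chain, a stand-in for Frozen Lake. All names and hyperparameters are illustrative, not the course's own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic chain MDP: states 0..4, action 0 moves left,
# action 1 moves right, and reaching state 4 pays +1 and ends the episode.
n_states, n_actions, goal = 5, 2, 4

def step(s, a):
    s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == goal), s2 == goal       # next state, reward, done

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1

for _ in range(1000):
    s = 0
    for _ in range(200):                           # step cap per episode
        if rng.random() < eps:                     # explore with probability eps
            a = int(rng.integers(n_actions))
        else:                                      # exploit, breaking ties randomly
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2, r, done = step(s, a)
        # temporal-difference update toward the Bellman target
        target = r + gamma * (0.0 if done else Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

greedy_policy = Q[:goal].argmax(axis=1)            # learned policy: always move right
```

The same update rule drives Deep Q learning; the table `Q` is simply replaced by a neural network so it can handle state spaces far too large to enumerate.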

Also included is a mini course in deep learning using the PyTorch framework. This is geared toward students who are familiar with the basic concepts of deep learning but not the specifics, or those who are comfortable with deep learning in another framework, such as TensorFlow or Keras. You will learn how to code a deep neural network in PyTorch, as well as how convolutional neural networks function. This will be put to use in implementing a naive Deep Q learning agent to solve the CartPole problem from the OpenAI Gym.
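
To give a flavor of what such a network computes, here is a tiny numpy stand-in: a two-layer network mapping a CartPole-like four-dimensional state to one Q-value per action. The weights are random and purely illustrative; the course builds the real network in PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weights for a 4 -> 32 -> 2 network: four state variables in,
# one Q-value out per action (CartPole has two actions: push left or right).
W1, b1 = rng.normal(0, 0.1, (4, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 2)), np.zeros(2)

def q_values(state):
    hidden = np.maximum(state @ W1 + b1, 0.0)   # ReLU hidden layer
    return hidden @ W2 + b2                     # one Q-value per action

state = np.array([0.01, -0.02, 0.03, 0.04])     # cart position/velocity, pole angle/velocity
q = q_values(state)
action = int(q.argmax())                        # greedy action selection
```

Training then consists of nudging these weights so that `q_values` moves toward the Bellman targets, which is exactly what the PyTorch optimizer does in the course's naive Deep Q agent.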

    Course Curriculum

    Chapter 1: Introduction

    Lecture 1: What You Will Learn In This Course

    Lecture 2: Required Background, software, and hardware

    Lecture 3: How to Succeed in this Course

    Chapter 2: Fundamentals of Reinforcement Learning

    Lecture 1: Agents, Environments, and Actions

    Lecture 2: Markov Decision Processes

    Lecture 3: Value Functions, Action Value Functions, and the Bellman Equation

    Lecture 4: Model Free vs. Model Based Learning

    Lecture 5: The Explore-Exploit Dilemma

    Lecture 6: Temporal Difference Learning

    Chapter 3: Deep Learning Crash Course

    Lecture 1: Dealing with Continuous State Spaces with Deep Neural Networks

    Lecture 2: Naive Deep Q Learning in Code: Step 1 – Coding the Deep Q Network

    Lecture 3: Naive Deep Q Learning in Code: Step 2 – Coding the Agent Class

    Lecture 4: Naive Deep Q Learning in Code: Step 3 – Coding the Main Loop and Learning

    Lecture 5: Naive Deep Q Learning in Code: Step 4 – Verifying the Functionality of Our Code

    Lecture 6: Naive Deep Q Learning in Code: Step 5 – Analyzing Our Agent's Performance

    Lecture 7: Dealing with Screen Images with Convolutional Neural Networks

    Chapter 4: Human Level Control Through Deep Reinforcement Learning: From Paper to Code

    Lecture 1: How to Read Deep Learning Papers

    Lecture 2: Analyzing the Paper

    Lecture 3: How to Modify the OpenAI Gym Atari Environments

    Lecture 4: How to Preprocess the OpenAI Gym Atari Screen Images

    Lecture 5: How to Stack the Preprocessed Atari Screen Images

    Lecture 6: How to Combine All the Changes

    Lecture 7: How to Add Reward Clipping, Fire First, and No Ops

    Lecture 8: How to Code the Agent's Memory

    Lecture 9: How to Code the Deep Q Network

    Lecture 10: Coding the Deep Q Agent: Step 1 – Coding the Constructor

    Lecture 11: Coding the Deep Q Agent: Step 2 – Epsilon-Greedy Action Selection

    Lecture 12: Coding the Deep Q Agent: Step 3 – Memory, Model Saving and Network Copying

    Lecture 13: Coding the Deep Q Agent: Step 4 – The Agent's Learn Function

    Lecture 14: Coding the Deep Q Agent: Step 5 – The Main Loop and Analyzing the Performance

    Chapter 5: Deep Reinforcement Learning with Double Q Learning

    Lecture 1: Analyzing the Paper

    Lecture 2: Coding the Double Q Learning Agent and Analyzing Performance

    Chapter 6: Dueling Network Architectures for Deep Reinforcement Learning

    Lecture 1: Analyzing the Paper

    Lecture 2: Coding the Dueling Deep Q Network

    Lecture 3: Coding the Dueling Deep Q Learning Agent and Analyzing Performance

    Lecture 4: Coding the Dueling Double Deep Q Learning Agent and Analyzing Performance

    Chapter 7: Improving On Our Solutions

    Lecture 1: Implementing a Command Line Interface for Rapid Model Testing

    Lecture 2: Consolidating Our Code Base for Maximum Extensibility

    Lecture 3: How to Test Our Agent and Watch it Play the Game in Real Time

    Chapter 8: Conclusion

    Lecture 1: Summarizing What We've Learned

    Chapter 9: Bonus Lecture

    Lecture 1: Bonus Video: Where to Go From Here

    Chapter 10: Tensorflow 2 Implementations

    Lecture 1: Differences Between Tensorflow 2 and PyTorch

    Lecture 2: Coding the Deep Q Network Class in Tensorflow 2

    Lecture 3: Coding the Deep Q Learning Agent in Tensorflow 2

    Lecture 4: Testing the Tensorflow 2 Deep Q Learning Agent

    Lecture 5: Coding the Tensorflow 2 Double Q Learning Agent

    Lecture 6: Coding the Dueling Network and Agent in Tensorflow 2

    Lecture 7: Coding the Dueling Double DQN Agent in Tensorflow 2

    Chapter 11: Appendix

    Lecture 1: Installing the New OpenAI Gym in a Virtual Environment

    Lecture 2: Making the DQN Agent Compliant with the New Gym Interface

    Instructors

  • Phil Tabor
    Machine Learning Engineer
Rating Distribution

  • 1 stars: 8 votes
  • 2 stars: 17 votes
  • 3 stars: 72 votes
  • 4 stars: 318 votes
  • 5 stars: 642 votes
Frequently Asked Questions

    How long do I have access to the course materials?

    You can view and review the lecture materials indefinitely, like an on-demand channel.

    Can I take my courses with me wherever I go?

    Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!