
AWS Data Architect Bootcamp 43 Services 500 FAQs 20+ Tools

  • Development
  • Jan 23, 2025
Synopsis: AWS Data Architect Bootcamp – 43 Services 500 FAQs 20+ Tools

AWS Data Architect Bootcamp – 43 Services 500 FAQs 20+ Tools is available for $109.99, has an average rating of 4.68 based on 1310 reviews, contains 295 lectures, and has 16735 subscribers.

You will learn to confidently architect AWS solutions for ingestion, migration, streaming, storage, big data, analytics, machine learning, cognitive solutions and more; understand the use-cases, integration and cost of 40+ AWS services to design cost-effective and efficient solutions for a variety of requirements; answer detailed technical questions from your design and development teams regarding implementation and build; and practice hands-on labs on complex AWS services like IoT, EMR, SageMaker, Redshift, Glue, Comprehend and many more. This course is ideal for database professionals who are starting fresh on the AWS platform or are looking to learn a variety of AWS services to widen their knowledge; beginner or experienced data architects who want to increase the breadth of their AWS knowledge to start working at the next level; technology executives who want to quickly assess the suitability of given AWS services for their use-cases; and AWS professionals who are preparing for the Big Data Specialty certification or for a technical interview.

Enroll now: AWS Data Architect Bootcamp – 43 Services 500 FAQs 20+ Tools

Summary

Title: AWS Data Architect Bootcamp – 43 Services 500 FAQs 20+ Tools

Price: $109.99

Average Rating: 4.68

Number of Lectures: 295

Number of Published Lectures: 295

Number of Curriculum Items: 295

Number of Published Curriculum Objects: 295

Original Price: $199.99

Quality Status: approved

Status: Live

What You Will Learn

  • Confidently architect AWS solutions for Ingestion, Migration, Streaming, Storage, Big Data, Analytics, Machine Learning, Cognitive Solutions and more
  • Learn the use-cases, integration and cost of 40+ AWS Services to design cost-economic and efficient solutions for a variety of requirements
  • Answer detailed technical questions of your design and development teams regarding implementation and build
  • Practice hands-on labs on complex AWS services like IoT, EMR, SageMaker, Redshift, Glue, Comprehend and many more
Who Should Attend

  • Database professionals who are starting new on the AWS platform or are looking to learn a variety of AWS Services to widen their knowledge
  • Beginner or Experienced Data Architects who want to increase the breadth of their knowledge on AWS to start working at the next level
  • Technology Executives who want to quickly assess the suitability of any given AWS services for their use-cases
  • AWS Professionals who are preparing for Big Data Specialty Certification or preparing for a technical interview
  • Hi! Welcome to the AWS Data Architect Bootcamp course, the only course you need to learn everything about data architecture on AWS and play the role of an Enterprise Data Architect. This is the most comprehensive course related to AWS data architecture on the market. Here’s why:

  • This is the only online course taught by an Enterprise Cloud Architect who leads large teams of junior architects in the real world, who has close to two decades of experience in the IT industry, who is a published author, and who leads the technology architecture of XXX million dollar cloud projects for multi-national clients. Data Architects draw a salary in the range of $150K – $250K on average, and this course trains you for that job! This is my 10th course on Udemy, and my 3rd on AWS topics (the previous 2 are best-sellers).

  • Typical AWS classroom trainings on data architecture, which contain a fraction of the topics covered in this course, cost $3,000 – $5,000. This course teaches you 5 to 7 times more topics than AWS training (40+ AWS services) at a fraction of the cost.

  • Everything covered in this course is kept up to date. Services which are in beta and were launched at re:Invent (last November) are already covered in the course. AWS innovates and adds features to its stack very fast, and I keep my course constantly updated with those changes. Think of this course as an AWS architecture updates subscription.

  • Developers have questions, architects have questions, clients have questions – all technically curious minds have questions. This course also has 500+ questions and answers (FAQs) curated from the AWS FAQs, to equip you with as many ready-to-use answers as you will need in your architect role.

  • The entire course covers 40+ services. Every service is composed of the sections listed below, with their approximate proportion of each service’s coverage.

  • Architecture (12%) – Diagrams, Integration, Terminology

  • Use-Cases (6%) – Whether and When to use the AWS Service

  • Pricing (2%) – Cost estimation methods to assess overall solution cost

  • Labs (75%) – To-the-point labs for architectural understanding, covering all major and important features

  • Frequently Asked Questions (5%) – Selected questions from the AWS FAQs, explained concisely (500+ in total)

  • Apart from AWS services, we will use a number of client tools to operate on AWS services, databases and the rest of the technology stack. Here is the list of tools we will be using:

    1. EC2
    2. PuTTY
    3. Cloud9
    4. HeidiSQL
    5. MySQL Workbench
    6. pgAdmin
    7. SSMS
    8. Oracle SQL Developer
    9. Aginity Workbench for Redshift
    10. SQL Workbench/J
    11. WinSCP
    12. AWS CLI
    13. FoxyProxy
    14. Oracle VirtualBox
    15. Linux shell commands
    16. FastGlacier
    17. RStudio
    18. Redis client
    19. Telnet
    20. S3 Browser
    21. Jupyter Notebooks

    Below is a detailed description of the curriculum: the AWS services we will be learning, to understand how they fit in the overall cloud data architecture on AWS and address various use-cases. If you have any questions, please don’t hesitate to contact me.

    1. AWS Transfer for SFTP (Nov 2018 release) – We will start our journey in this course with this service and learn how to ingest file-based data into AWS in a self-service manner, using an SFTP server on AWS and SFTP tools on-premises.

    2. AWS Snowball – Large data volumes spanning hundreds of TBs are not ideal for ingestion over the network. Using this service, we will learn how to ingest mega-volume data into the AWS cloud using a device-based, offline data transport mechanism.

    3. AWS Kinesis Data Firehose – One of the data ingestion mechanisms is streaming. We will learn how to channel streamed data from Kinesis Data Streams to AWS data storage and analytics repositories like S3, Redshift, Elasticsearch and more using this service.

    4. AWS Kinesis Data Streams – Clients can have streaming infrastructure or even devices (IoT) which may stream data continuously. Using this service, we will learn how to collect streaming data and store it on AWS.
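    To make the producer side of this flow concrete, here is a minimal sketch in Python. The stream name, sensor fields and the boto3 call are illustrative assumptions, not part of the course material:

    ```python
    import json

    def make_kinesis_record(stream_name, sensor_id, reading):
        """Build the keyword arguments for a Kinesis put_record call.

        The partition key decides which shard receives the record, so
        records from the same sensor stay ordered within one shard.
        """
        payload = {"sensor_id": sensor_id, "reading": reading}
        return {
            "StreamName": stream_name,  # hypothetical stream name
            "Data": json.dumps(payload).encode("utf-8"),
            "PartitionKey": sensor_id,
        }

    # With boto3 and valid AWS credentials (not executed here):
    # import boto3
    # kinesis = boto3.client("kinesis")
    # kinesis.put_record(**make_kinesis_record("demo-stream", "sensor-1", 21.5))
    ```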

    5. AWS Managed Streaming for Kafka (MSK) (Nov 2018 release) – AWS recently added Kafka, which has a lot of similarities with Kinesis, to its technology stack. We will learn its comparative features as well as how to stand up a Kafka cluster on AWS to accept streaming data.

    6. AWS Schema Conversion Tool – Database migration is a complex process and can be homogeneous (for example, SQL Server on-premises to SQL Server on AWS) or heterogeneous (for example, MySQL to PostgreSQL). We will use this offline tool to learn how to assess migration complexity, generate migration assessment reports, and even perform schema migration.

    7. AWS Database Migration Service (DMS) – Database migration / replication is a very common need for any federated data solution. We will use this service to learn how to migrate and/or replicate data from on-premises databases to relational databases hosted on AWS RDS.

    8. AWS Data Sync (Nov 2018 release) – Continuous synchronization of data from on-premises to cloud-hosted data repositories becomes a key requirement in environments where data is generated or changes very fast. We will use this service to learn how it solves this requirement.

    9. AWS Storage Gateway – This service bears a striking resemblance to AWS Data Sync, and is one of the alternatives for standing up cached volumes and stored volumes on AWS to build a bridge between on-premises data storage and AWS. We will briefly review the similarities between AWS Data Sync and AWS Storage Gateway.

    10. AWS ElastiCache (Memcached) – Having covered most of the mechanisms of data ingestion, we will shift focus to caching data before moving on to databases. We will start learning about caching with the Memcached flavor of this service, which offers powerful caching capabilities for simpler data types.

    11. AWS ElastiCache (Redis) – We will learn the comparative differences between Memcached and Redis for caching, and learn how to use the Redis flavor, which can build cache clusters and host complex data types.

    12. AWS S3 (Advanced) – AWS S3 is the basis of data storage and data lakes in AWS. We will learn advanced tactics like locking data for legal compliance, cross-region global replication, data querying with the S3 Select feature, and life-cycle management to move data to cold storage.
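    As a taste of the S3 Select feature mentioned above, the sketch below assembles the request for a select_object_content call, which runs a SQL expression server-side so only matching rows cross the network. The bucket, key and the `amount` column are hypothetical:

    ```python
    def build_s3_select_request(bucket, key, min_amount):
        """Assemble parameters for s3.select_object_content against a
        CSV object whose first row is a header."""
        return {
            "Bucket": bucket,
            "Key": key,
            "ExpressionType": "SQL",
            # 's' aliases the S3 object; 'amount' is an assumed CSV column
            "Expression": (
                "SELECT * FROM s3object s "
                f"WHERE CAST(s.amount AS FLOAT) > {float(min_amount)}"
            ),
            "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
            "OutputSerialization": {"JSON": {}},
        }

    # With boto3 (not executed here):
    # import boto3
    # s3 = boto3.client("s3")
    # resp = s3.select_object_content(**build_s3_select_request("my-bucket", "sales.csv", 100))
    ```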

    13. AWS Glacier – Data keeps accumulating on the cloud and can increase storage costs dramatically. Infrequently used data is suitable for cold storage, which is where this service comes into play. We will learn archival, archive retrieval and archive querying using this service.

    14. AWS Relational Database Service (MariaDB) – We will be focusing heavily on AWS RDS, which offers six different database engines. We will learn the basic concepts of AWS RDS using MariaDB, stand up an instance, and query it with a client tool.

    15. AWS Relational Database Service (SQL Server) – Data needs to be imported and exported between data centers and cloud-hosted database instances. We will learn such tactics for dealing with backups and restores across the cloud using a SQL Server database on RDS with a client tool.

    16. AWS Relational Database Service (Oracle) – We will spend some time learning how to stand up Oracle on AWS RDS, especially for Oracle professionals.

    17. AWS Relational Database Service (MySQL) – After practicing the basic concepts with a MySQL database on AWS RDS, we will start practicing advanced concepts for high availability and performance, like the Read Replicas and Performance Insights features.

    18. AWS Relational Database Service (PostgreSQL) – There can be use-cases where one database needs to be converted to another on the cloud, for example PostgreSQL to MySQL. We will learn about compatibility features where we can create a MySQL read replica from a PostgreSQL instance and promote the read replica to an independent database.

    19. AWS Relational Database Service (Aurora) – Aurora on AWS RDS is AWS’s native database service. It comes in two flavors, cluster-hosted and serverless, suitable for different use-cases. The storage architecture of Aurora is also shared by various other AWS services like AWS Neptune and DocumentDB. We will learn this service in depth.

    20. AWS Neptune – Relational databases are just one of the database types in the industry as well as on AWS. Graph is a special use-case for very densely connected data, where the value of relationships is much higher than usual. We will learn the graph theory of RDF vs. Property Graph, see how Neptune fits in this picture, stand up a Neptune server as well as a client, and operate on it with query languages like Gremlin (TinkerPop) and SPARQL.

    21. AWS DocumentDB (Nov 2018 release) – MongoDB is one of the industry leaders in NoSQL document databases. AWS recently introduced this new service, a native AWS implementation that provides an equivalent database with MongoDB compatibility. We will learn the details of the same.

    22. AWS DynamoDB – Key-value databases are important for housing voluminous data, typically logs, tokens, etc. We will learn this database implementation in depth, with advanced features like streaming, caching, data expiration and more.
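    The data-expiration feature works by storing a Unix-epoch timestamp in a designated TTL attribute, which DynamoDB uses to delete the item automatically. A minimal sketch, with a hypothetical sessions table:

    ```python
    import time

    def make_session_item(session_id, user, ttl_seconds=3600):
        """Build a DynamoDB item whose 'expires_at' attribute lets the
        table's TTL setting remove it roughly ttl_seconds from now."""
        return {
            "session_id": session_id,  # assumed partition key
            "user": user,
            "expires_at": int(time.time()) + ttl_seconds,
        }

    # With boto3 (not executed here), assuming TTL is enabled on 'expires_at':
    # import boto3
    # table = boto3.resource("dynamodb").Table("sessions")
    # table.put_item(Item=make_session_item("abc123", "alice"))
    ```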

    23. AWS API Gateway – REST APIs are today’s standard mechanism of data ingestion. We will learn how to build a data ingestion and access pipeline with APIs using this service together with AWS DynamoDB.

    24. AWS Lambda – Microservices are often tied to APIs, and are the cornerstone of any programmatic integration with AWS services, typically AWS’s artificial intelligence and machine learning services. We will learn how to develop Lambda functions.
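    At its core, a Lambda function is just a handler that receives an event and a context object. A minimal, self-contained sketch; the event shape and the API-Gateway-style response are illustrative assumptions:

    ```python
    import json

    def handler(event, context):
        """Entry point AWS Lambda would invoke. Here it echoes a greeting
        in the response shape API Gateway proxy integrations expect."""
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

    # Invoked locally (Lambda passes a real context object; None suffices here):
    print(handler({"name": "architect"}, None))
    ```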

    25. AWS CloudWatch – System logging is at the center of all programmatic logic execution, and it ties very closely with microservices and metrics logging for a variety of AWS services. We will learn how to log data from microservices to CloudWatch Logs and access it.

    26. AWS Internet of Things (IoT) – Today IoT is one of the fastest growing areas, and from a data perspective, it’s one of the most valued sources of data. The first challenge enterprises face is the mechanism of ingesting data from devices and then processing it. With the prime focus on ingestion, we will learn how to solve this with an end-to-end practical example which reads data from a device and sends text messages to your cell phone.

    27. AWS Data Pipeline – With data lakes already overflowing with data, moving data within cloud repositories and from on-premises to AWS requires an orchestration engine which can move the data around with some processing. We will learn how to solve this use-case with this service.

    28. Amazon Redshift and Redshift Spectrum – All stored data, in relational or non-relational format, needs to be analyzed and warehoused. We will learn how to cater to the requirement of a petabyte-scale, massively parallel data warehouse using this service.

    29. AWS ElasticSearch – Elasticsearch is one of the market leaders in search frameworks, along with its alternative Apache Solr. AWS provides its own managed implementation of Elasticsearch, which can be used as one of the options to search data from different repositories. We will learn how to use this service to address search use-cases, and understand how tools like Logstash and Kibana fit in the overall solution.

    30. AWS CloudSearch – Standing up AWS ElasticSearch requires some Elasticsearch-specific understanding. For use-cases which need a more managed solution, AWS provides an alternative packaged search solution based on Apache Solr. We will learn how to stand up this service and use it to build search solutions in an express manner.

    31. AWS Elastic MapReduce (EMR) – After spending sufficient time on ingestion, migration, storage, databases, search and processing, we will enter the world of big data analytics. We will spend a significant amount of time learning how to stand up a Hadoop-based cluster and process data with frameworks like Spark, Hive, Oozie, EMRFS and Tez; use Jupyter Notebooks, EMR Notebooks, dynamic port forwarding and RStudio on EMR; read and process data from S3 in EMR; integrate Glue with Hive; integrate DynamoDB with Hive; and much more.

    32. AWS Backup (Nov 2018 release) – Creating backup routines for various data repositories is a standard operating procedure in production environments. AWS made this job easier for support teams with this brand-new service. We will learn about the details of this service.

    33. AWS Glue – AWS has centralized data cataloging and ETL for any and every data repository in AWS with this service. We will learn how to use features like crawlers, the Data Catalog, SerDes (serialization/de-serialization libraries), Extract-Transform-Load (ETL) jobs and many more features that address a variety of use-cases.

    34. AWS Athena – A serverless data lake is formed using a handful of major services: S3, Glue, Redshift, Athena and QuickSight. This service sits at the tail end of the process, acting as the query engine for the data lake. We will learn how it serves that purpose and completes the picture.
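    Athena queries are submitted asynchronously: you start a query execution and Athena writes the results back to S3. The sketch below builds the start_query_execution request; the database, table and output bucket names are hypothetical:

    ```python
    def build_athena_query(database, table, output_bucket):
        """Assemble parameters for athena.start_query_execution. Athena
        scans the data in S3 (as catalogued by Glue) and writes the
        result set to the given S3 location."""
        return {
            "QueryString": f"SELECT COUNT(*) AS n FROM {table}",
            "QueryExecutionContext": {"Database": database},
            "ResultConfiguration": {
                "OutputLocation": f"s3://{output_bucket}/athena-results/"
            },
        }

    # With boto3 (not executed here):
    # import boto3
    # athena = boto3.client("athena")
    # resp = athena.start_query_execution(**build_athena_query("datalake", "sales", "my-bucket"))
    ```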

    35. AWS QuickSight – AWS filled the gap of a cloud-native reporting service in 2017 with the launch of this service. We will learn how it fits in the serverless data lake picture and allows you to create reports and dashboards.

    36. AWS Rekognition – We will start our journey into the world of cognitive services powered by artificial intelligence with this service. Images and video are vital sources of data, and extracting information from these sources and processing it programmatically has various applications. We will learn how to perform this integration with Rekognition.

    37. AWS Textract (Nov 2018 release) – Optical character recognition is another vital source of data; for example, we are very much used to scanning bar codes, tax forms, ebooks, etc. We will learn how to extract text from documents using this AI-powered, brand-new service from AWS.

    38. AWS Comprehend – Natural language processing (NLP) is a very big practice area of data analytics, typically performed using data science languages like R and Python. AWS makes the job of NLP easier by wrapping it in an AI-powered service. We will learn the use of this service and understand how it complements services like Textract and Rekognition.

    39. AWS Transcribe – One major source of data that we have not touched so far is speech. We will learn how to use this AI-powered service to extract text from speech, and how it can be used effectively for a number of use-cases.

    40. AWS Polly – Having covered many use-cases of processing textual data from one form to another, we will learn to process text to speech, the exact opposite function of Transcribe, with this AI-powered service from AWS. We will also learn the use of Speech Synthesis Markup Language (SSML) to control the details of the speech that gets generated.

    41. AWS SageMaker – After comfortably using AI-powered services, which abstract the complexity of machine learning models from end-users, we will venture into the world of machine learning with this service. We will execute a machine learning model end-to-end and learn how to access data from S3, create a model, create notebooks for executing code to explore and process data, train, build and deploy a machine learning model, tune hyper-parameters, and finally access it from a load-balanced infrastructure using API endpoints.

    42. AWS Personalize – Recommendation engines require building reinforced deep-learning neural networks. Amazon has been in the business of recommending products to customers for decades. They have packaged their recommendation method as a product and launched it as a service, which is making its debut in the form of Personalize. We will perform an end-to-end exercise to understand how to use this service to generate recommendations.

    43. AWS Lake Formation (Nov 2018 release) – As forming data lakes is a tedious process, AWS has introduced a set of orchestration steps in the form of a service to expedite the creation of data lakes. As this service is in early preview (beta) and subject to change, we will look at a preview of its GUI before concluding the curriculum of this course.

    If you are not sure whether this course is right for you, feel free to drop me a message and I will be happy to answer your questions about its suitability. I hope you will enroll in the course, and I hope to see you soon in class!

    Course Curriculum

    Chapter 1: Introduction

    Lecture 1: Instructor Introduction

    Lecture 2: Course Coverage and Format

    Lecture 3: Pre-requisites – What you’ll need for this course

    Chapter 2: AWS Transfer for SFTP

    Lecture 1: AWS SFTP Architecture Diagram, Use-Case and Pricing

    Lecture 2: Lab: AWS SFTP – Create a AWS SFTP Server and Users

    Lecture 3: Lab: AWS SFTP – Configure SFTP Client

    Lecture 4: Lab: AWS SFTP – Create a service role and testing AWS SFTP

    Lecture 5: Lab: AWS SFTP – Delete Users and SFTP Server

    Lecture 6: AWS Transfer for SFTP – FAQs

    Lecture 7: Resources

    Chapter 3: AWS Snowball

    Lecture 1: AWS Snowball Architecture Diagram, Use-Case, and Pricing

    Lecture 2: Lab: AWS Snowball – Create a Snowball job

    Lecture 3: AWS Snowball – FAQs

    Lecture 4: Resources

    Chapter 4: Amazon Kinesis Data Streams and Firehose

    Lecture 1: AWS Kinesis Architecture Diagram, Use-Case and Pricing

    Lecture 2: AWS Kinesis Streams and Kinesis Firehose Basics

    Lecture 3: Lab: AWS Kinesis Firehose – Create a Delivery Stream

    Lecture 4: Lab: AWS Kinesis Firehose – Post data to a delivery stream

    Lecture 5: Lab: AWS Kinesis Firehose – Delete Delivery Stream

    Lecture 6: Lab: AWS Kinesis – Create a Kinesis Data Stream

    Lecture 7: Lab: AWS Kinesis – Create a Kinesis Data Stream Consumer

    Lecture 8: Lab: AWS Kinesis – Create a Kinesis Data Stream Producer

    Lecture 9: Lab: AWS Kinesis – Post data to a Kinesis Delivery Stream

    Lecture 10: Lab: AWS Kinesis Data Streams – Delete Streams

    Lecture 11: AWS Kinesis Data Streams & Firehose – FAQs

    Lecture 12: Resources

    Chapter 5: Amazon Managed Streaming for Kafka

    Lecture 1: AWS MSK – Architecture Diagram, Use-Case and Pricing

    Lecture 2: Lab: AWS MSK – Create a network for hosting brokers

    Lecture 3: Lab: AWS MSK – Create Kafka Cluster using MSK

    Lecture 4: Lab: AWS MSK – Create a Kafka Client to connect to MSK Kafka Cluster

    Lecture 5: Lab: AWS MSK – Delete Kafka Cluster Instance

    Lecture 6: AWS MSK – FAQs

    Lecture 7: Resources

    Chapter 6: AWS Database Migration Service (DMS) and Schema Conversion Tool (SCT)

    Lecture 1: AWS DMS and SCT – Architecture Diagram, Use-Case and Pricing

    Lecture 2: Lab: AWS SCT – Analyze Relational Database Schema

    Lecture 3: Lab: AWS SCT – Create a Redshift Cluster as destination

    Lecture 4: Lab: AWS SCT – Compare schema mapping from SQL Server OLAP to Redshift

    Lecture 5: Lab: AWS SCT – Assess schema objects for conversion

    Lecture 6: Lab: AWS SCT – Apply Schema migration to Redshift Data Warehouse

    Lecture 7: Lab: AWS SCT – Delete SQL Server and Redshift Instance

    Lecture 8: Lab: AWS DMS – Create a DMS Target

    Lecture 9: Lab: AWS DMS – Create a DMS Source

    Lecture 10: Lab: AWS DMS – Configure DMS Source

    Lecture 11: Lab: AWS DMS – Configure Security and Data Structures

    Lecture 12: Lab: AWS DMS – Replication Instance and Database endpoints

    Lecture 13: Lab: AWS DMS – Create Migration or Replication Task

    Lecture 14: Lab: AWS DMS – Delete DMS Instance and other resources

    Lecture 15: AWS DMS and SCT – FAQs

    Lecture 16: Resources

    Chapter 7: AWS Data Sync and Storage Gateway

    Lecture 1: AWS Data Sync – Architecture Diagram, Use-Case and Pricing

    Lecture 2: Lab: AWS Data Sync – Create an Agent

    Lecture 3: Lab: AWS Data Sync – Configure an Agent

    Lecture 4: Lab: AWS Data Sync – Create a data transfer Task

    Lecture 5: Lab: AWS Data Sync – Execute a data transfer task

    Lecture 6: Lab: AWS Data Sync – Delete agent and tasks

    Lecture 7: Lab: AWS Data Sync – Comparison with Storage Gateway

    Lecture 8: AWS Data Sync – FAQs

    Lecture 9: Resources

    Chapter 8: Amazon ElastiCache – Redis and Memcached

    Lecture 1: Amazon ElastiCache – Caching Strategies and Architecture

    Lecture 2: Amazon ElastiCache – Memcached vs Redis, Use-Cases and Pricing

    Lecture 3: Lab: Amazon ElastiCache – Create ElastiCache Redis Clustered Instance

    Lecture 4: Lab: Amazon ElastiCache – Access Elasticache Redis Instance from a client

    Lecture 5: Lab: Amazon ElastiCache – Delete Instance

    Lecture 6: Lab: Amazon ElastiCache – Create ElastiCache Memcached Instance

    Lecture 7: Lab: Amazon ElastiCache – Access Elasticache Memcached Instance from a client

    Lecture 8: Lab: Amazon ElastiCache – Delete Instance

    Lecture 9: Amazon ElastiCache – FAQs

    Lecture 10: Resources

    Chapter 9: Amazon S3 and Glacier – Advanced

    Lecture 1: Lab: Amazon S3 – Transfer Acceleration

    Lecture 2: Lab: Amazon S3 – Storage Tiers and Life-cycle Management Rules

    Lecture 3: Lab: Amazon S3 – Cross Region Replication

    Lecture 4: Lab: Amazon S3 – Storage Analytics, Usage Metrics, and Inventory Reports

    Lecture 5: Lab: Amazon S3 – Object Locking

    Lecture 6: Lab: Amazon S3 – S3 Select for Big Data

    Lecture 7: Lab: Amazon Glacier – Create an Archive Vault

    Lecture 8: Lab: Amazon Glacier – Move data from S3 to Glacier

    Lecture 9: Lab: Amazon Glacier – Retrieve data from Glacier

    Lecture 10: Lab: Amazon Glacier – Capacity Units and Glacier Select

    Lecture 11: Amazon S3 and Glacier – FAQs

    Lecture 12: Resources

    Chapter 10: Amazon ( AWS ) RDS – Basics

    Lecture 1: AWS RDS – Overview, Use-case and coverage

    Lecture 2: AWS RDS – Architecture Diagram and Details

    Lecture 3: AWS RDS – Pricing

    Lecture 4: AWS RDS – Basic FAQs

    Chapter 11: Amazon ( AWS ) RDS – Maria DB

    Lecture 1: Lab: AWS RDS Maria DB – Basic Settings and Pricing

    Lecture 2: Lab: AWS RDS Maria DB – Advanced Settings

    Lecture 3: Lab: AWS RDS Maria DB – Instance Settings & Monitoring

    Lecture 4: Lab: AWS RDS Maria DB – Accessing database instance with HeidiSQL

    Lecture 5: Lab: AWS RDS Maria DB – Backup and Snapshot Actions

    Instructors

  • Siddharth Mehta
    Enterprise Cloud Architect, Published Author, Cloud Geek
Rating Distribution

  • 1 star: 27 votes
  • 2 stars: 28 votes
  • 3 stars: 128 votes
  • 4 stars: 442 votes
  • 5 stars: 685 votes
Frequently Asked Questions

    How long do I have access to the course materials?

    You can view and review the lecture materials indefinitely, like an on-demand channel.

    Can I take my courses with me wherever I go?

    Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!