
Apache Spark with Scala – Hands On with Big Data!

  • Development
  • Jan 14, 2025

Apache Spark with Scala – Hands On with Big Data!, available at $99.99, has an average rating of 4.5, with 73 lectures, 6 quizzes, based on 17803 reviews, and has 98157 subscribers.


Enroll now: Apache Spark with Scala – Hands On with Big Data!

Summary

Title: Apache Spark with Scala – Hands On with Big Data!

Price: $99.99

Average Rating: 4.5

Number of Lectures: 73

Number of Quizzes: 6

Number of Published Lectures: 69

Number of Published Quizzes: 6

Number of Curriculum Items: 79

Number of Published Curriculum Objects: 75

Original Price: $19.99

Quality Status: approved

Status: Live

What You Will Learn

  • Develop distributed code using the Scala programming language
  • Transform structured data using SparkSQL, DataSets, and DataFrames
  • Frame big data analysis problems as Apache Spark scripts
  • Optimize Spark jobs through partitioning, caching, and other techniques
  • Build, deploy, and run Spark scripts on Hadoop clusters
  • Process continual streams of data with Spark Streaming
  • Traverse and analyze graph structures using GraphX
  • Analyze massive data sets with Machine Learning on Spark
Who Should Attend

  • Software engineers who want to expand their skills into the world of big data processing on a cluster
  • If you have no previous programming or scripting experience, you'll want to take an introductory programming course first.
Target Audiences

  • Software engineers who want to expand their skills into the world of big data processing on a cluster
  • If you have no previous programming or scripting experience, you'll want to take an introductory programming course first.
    New! Completely updated and re-recorded for Spark 3, IntelliJ, Structured Streaming, and a stronger focus on the DataSet API.

    “Big data” analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You’ll learn those same techniques, using your own Windows system right at home. It’s easier than you might think, and you’ll be learning from an ex-engineer and senior manager from Amazon and IMDb.

    Spark works best when using the Scala programming language, and this course includes a crash-course in Scala to get you up to speed quickly. For those more familiar with Python however, a Python version of this class is also available: “Taming Big Data with Apache Spark and Python – Hands On”.
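To give a flavor of what that crash course covers, here is a short, illustrative Scala sketch (not taken from the course materials): immutable values, a small function, and higher-order collection methods – the same style of transformations you later apply to distributed data with Spark.

```scala
// Illustrative Scala basics: immutable values, functions, and
// higher-order collection operations (map, sum).
object ScalaTaste {
  // Immutable list with inferred element type Double
  val ratings = List(3.0, 4.5, 5.0, 2.0)

  // A small pure function: the arithmetic mean of a list
  def mean(xs: List[Double]): Double = xs.sum / xs.size

  def main(args: Array[String]): Unit = {
    // map applies a function to every element, capping scores at 5.0
    val boosted = ratings.map(r => math.min(r + 0.5, 5.0))
    println(f"mean rating: ${mean(boosted)}%.2f") // prints "mean rating: 4.00"
  }
}
```

The same `map`/`filter`/reduce vocabulary carries over almost unchanged to Spark's RDD and Dataset APIs, which is why the crash course pays off quickly.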

    Learn and master the art of framing data analysis problems as Spark problems through over 20 hands-on examples, and then scale them up to run on cloud computing services in this course.

  • Learn the concepts of Spark’s Resilient Distributed Datasets, DataFrames, and Datasets.

  • Get a crash course in the Scala programming language

  • Develop and run Spark jobs quickly using Scala, IntelliJ, and SBT

  • Translate complex analysis problems into iterative or multi-stage Spark scripts

  • Scale up to larger data sets using Amazon’s Elastic MapReduce service

  • Understand how Hadoop YARN distributes Spark across computing clusters

  • Practice using other Spark technologies, like Spark SQL, DataFrames, DataSets, Spark Streaming, Machine Learning, and GraphX

    By the end of this course, you’ll be running code that analyzes gigabytes worth of information – in the cloud – in a matter of minutes.
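A typical first script has the shape sketched below, in the spirit of the course's word-count examples: read a file into an RDD, transform it, and aggregate by key. This is a hedged illustration, not code from the course; it assumes a Spark 3 dependency on the classpath and a local text file named `book.txt`.

```scala
// Sketch of a minimal Spark word-count job in Scala.
// Assumes the org.apache.spark:spark-sql_2.12 dependency is available
// and that "book.txt" exists in the working directory.
import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("WordCountSketch")
      .master("local[*]")   // run locally on all cores
      .getOrCreate()

    val words = spark.sparkContext
      .textFile("book.txt")                  // RDD of lines
      .flatMap(_.toLowerCase.split("\\W+"))  // split lines into words
      .filter(_.nonEmpty)

    val counts = words
      .map(w => (w, 1))     // key/value pairs
      .reduceByKey(_ + _)   // count occurrences per word

    counts.sortBy(-_._2).take(10).foreach(println) // top 10 words
    spark.stop()
  }
}
```

The same pattern – load, transform, aggregate – scales from this local run to an EMR cluster with essentially no code changes, which is the framing skill the course drills.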

    We’ll have some fun along the way. You’ll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you’ve got the basics under your belt, we’ll move to some more complex and interesting tasks. We’ll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you like in the process! We’ll analyze a social graph of superheroes, and learn who the most “popular” superhero is – and develop a system to find “degrees of separation” between superheroes. Are all Marvel superheroes within a few degrees of being connected to Spider-Man? You’ll find the answer.

    This course is very hands-on; you’ll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon’s Elastic MapReduce service. Over 8 hours of video content is included, with over 20 real examples of increasing complexity you can build, run, and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX.

    Enroll now, and enjoy the course!

    “I studied Spark for the first time using Frank’s course ‘Apache Spark 2 with Scala – Hands On with Big Data!’. It was a great starting point for me, gaining knowledge in Scala and, most importantly, practical examples of Spark applications. It gave me an understanding of all the relevant Spark core concepts: RDDs, Dataframes & Datasets, Spark Streaming, and AWS EMR. Within a few months of completion, I used the knowledge gained from the course to propose to my current company that I work primarily on Spark applications. Since then I have continued to work with Spark. I would highly recommend any of Frank’s courses, as he simplifies concepts well and his teaching manner is easy to follow and continue with!” – Joey Faherty

    Course Curriculum

    Chapter 1: Getting Started

    Lecture 1: Udemy 101: Getting the Most From This Course

    Lecture 2: Alternate download link for the ml-100k dataset

    Lecture 3: WARNING: DO NOT INSTALL JAVA 21+ IN THE NEXT LECTURE

    Lecture 4: Introduction, and installing the course materials, IntelliJ, and Scala

    Lecture 5: Introduction to Apache Spark

    Lecture 6: Important note

    Chapter 2: Scala Crash Course [Optional]

    Lecture 1: [Activity] Scala Basics

    Lecture 2: [Exercise] Flow Control in Scala

    Lecture 3: [Exercise] Functions in Scala

    Lecture 4: [Exercise] Data Structures in Scala

    Chapter 3: Using Resilient Distributed Datasets (RDDs)

    Lecture 1: The Resilient Distributed Dataset

    Lecture 2: Ratings Histogram Example

    Lecture 3: Spark Internals

    Lecture 4: Key / Value RDDs, and the Average Friends by Age example

    Lecture 5: [Activity] Running the Average Friends by Age Example

    Lecture 6: Filtering RDDs, and the Minimum Temperature by Location Example

    Lecture 7: [Activity] Running the Minimum Temperature Example, and Modifying it for Maximum

    Lecture 8: [Activity] Counting Word Occurrences using Flatmap()

    Lecture 9: [Activity] Improving the Word Count Script with Regular Expressions

    Lecture 10: [Activity] Sorting the Word Count Results

    Lecture 11: [Exercise] Find the Total Amount Spent by Customer

    Lecture 12: [Exercise] Check your Results, and Sort Them by Total Amount Spent

    Lecture 13: Check Your Results and Implementation Against Mine

    Chapter 4: SparkSQL, DataFrames, and DataSets

    Lecture 1: Introduction to SparkSQL

    Lecture 2: [Activity] Using SparkSQL

    Lecture 3: [Activity] Using DataSets

    Lecture 4: [Exercise] Implement the Friends by Age example using DataSets

    Lecture 5: Exercise Solution: Friends by Age, with Datasets.

    Lecture 6: [Activity] Word Count example, using Datasets

    Lecture 7: [Activity] Revisiting the Minimum Temperature example, with Datasets

    Lecture 8: [Exercise] Implement the Total Spent by Customer problem with Datasets

    Lecture 9: Exercise Solution: Total Spent by Customer with Datasets

    Chapter 5: Advanced Examples of Spark Programs

    Lecture 1: [Activity] Find the Most Popular Movie

    Lecture 2: [Activity] Use Broadcast Variables to Display Movie Names

    Lecture 3: [Activity] Find the Most Popular Superhero in a Social Graph

    Lecture 4: [Exercise] Find the Most Obscure Superheroes

    Lecture 5: Exercise Solution: Find the Most Obscure Superheroes

    Lecture 6: Superhero Degrees of Separation: Introducing Breadth-First Search

    Lecture 7: Superhero Degrees of Separation: Accumulators, and Implementing BFS in Spark

    Lecture 8: [Activity] Superhero Degrees of Separation: Review the code, and run it!

    Lecture 9: Item-Based Collaborative Filtering in Spark, cache(), and persist()

    Lecture 10: [Activity] Running the Similar Movies Script using Spark's Cluster Manager

    Lecture 11: [Exercise] Improve the Quality of Similar Movies

    Chapter 6: Running Spark on a Cluster

    Lecture 1: [Activity] Using spark-submit to run Spark driver scripts

    Lecture 2: [Activity] Packaging driver scripts with SBT

    Lecture 3: [Exercise] Package a Script with SBT and Run it Locally with spark-submit

    Lecture 4: Exercise solution: Using SBT and spark-submit

    Lecture 5: Introducing Amazon Elastic MapReduce

    Lecture 6: Creating Similar Movies from One Million Ratings on EMR

    Lecture 7: Partitioning

    Lecture 8: Best Practices for Running on a Cluster

    Lecture 9: Troubleshooting, and Managing Dependencies

    Chapter 7: Machine Learning with Spark ML

    Lecture 1: Introducing MLLib

    Lecture 2: [Activity] Using MLLib to Produce Movie Recommendations

    Lecture 3: Linear Regression with MLLib

    Lecture 4: [Activity] Running a Linear Regression with Spark

    Lecture 5: [Exercise] Predict Real Estate Values with Decision Trees in Spark

    Lecture 6: Exercise Solution: Predicting Real Estate with Decision Trees in Spark

    Chapter 8: Intro to Spark Streaming

    Lecture 1: The DStream API for Spark Streaming

    Lecture 2: [Activity] Real-time Monitoring of the Most Popular Hashtags on Twitter

    Lecture 3: Structured Streaming

    Lecture 4: [Activity] Using Structured Streaming for real-time log analysis

    Lecture 5: [Exercise] Windowed Operations with Structured Streaming

    Lecture 6: Exercise Solution: Top URLs in a 30-second Window

    Chapter 9: Intro to GraphX

    Lecture 1: GraphX, Pregel, and Breadth-First-Search with Pregel.

    Lecture 2: Using the Pregel API with Spark GraphX

    Lecture 3: [Activity] Superhero Degrees of Separation using GraphX

    Chapter 10: You Made It! Where to Go from Here.

    Lecture 1: Learning More, and Career Tips

    Lecture 2: Bonus Lecture: More courses to explore!

    Instructors

  • Sundog Education by Frank Kane
    Join over 800K students learning ML, AI, AWS, and Data Eng.
  • Frank Kane
    Ex-Amazon Sr. Engineer and Sr. Manager, CEO Sundog Education
  • Sundog Education Team
Rating Distribution

  • 1 star: 150 votes
  • 2 stars: 241 votes
  • 3 stars: 1429 votes
  • 4 stars: 6366 votes
  • 5 stars: 9617 votes
Frequently Asked Questions

    How long do I have access to the course materials?

    You can view and review the lecture materials indefinitely, like an on-demand channel.

    Can I take my courses with me wherever I go?

    Definitely! If you have an internet connection, courses on Udemy are available on any device at any time. If you don’t have an internet connection, some instructors also let their students download course lectures. That’s up to the instructor though, so make sure you get on their good side!