Big Data Hadoop From Yes-M Systems LLC
With Interview Preparation, Resume Preparation, and Marketing Help
Student Location: Open to students from around the world
Delivery Method: Instructor-Led – Live online Training
Big Data Hadoop Training Online
Master Big Data and Hadoop with the Yes-M Systems course and learn industry-popular technologies such as Spark, RDD, Scala, SQL, Machine Learning, Hive, Oozie, and much more! Grow your knowledge of Big Data and the Hadoop ecosystem tools with our student-focused course. Learn on the move, now!
Key Features
Module I. Introduction to Big Data and Hadoop
* What is Big Data?
* What are the challenges for processing big data?
* What technologies support big data?
* The 3 V's of Big Data (Volume, Velocity, Variety) and beyond
* What is Hadoop?
* Why Hadoop and its Use cases
* History of Hadoop
* Different Ecosystems of Hadoop.
* Advantages and Disadvantages of Hadoop
* Real Life Use Cases
Module II. HDFS (Hadoop Distributed File System)
* HDFS architecture
* Features of HDFS
* Where HDFS fits and where it doesn't
* HDFS daemons and their functionalities
* Name Node and its functionality
* Data Node and its functionality
* Secondary Name Node and its functionality
* Data Storage in HDFS
* Introduction about Blocks
* Data replication
* Accessing HDFS
* CLI(Command Line Interface) and admin commands
* Java Based Approach
* Hadoop Administration
* Hadoop Configuration Files
* Configuring Hadoop Domains
* Precedence of Hadoop Configuration
* Diving into Hadoop Configuration
* Scheduler
* Rack Awareness
* Cluster Administration Utilities
* Rebalancing HDFS data
* Copying large amounts of data from HDFS
* FSImage and the edits log
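To make the storage topics above concrete, here is a simplified, local Python sketch of how HDFS splits a file into blocks and assigns replicas to Data Nodes. The block size and replication factor are the HDFS defaults; the round-robin placement is an illustrative stand-in for the Name Node's rack-aware placement policy, not the real algorithm.

```python
# Conceptual sketch of HDFS block splitting and replica placement.
# Real HDFS does this inside the Name Node, with rack awareness
# guiding where replicas land; this round-robin is a simplification.

BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size (128 MB)
REPLICATION = 3                 # default replication factor

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes of the blocks a file of file_size bytes occupies."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        blocks.append(min(remaining, block_size))
        remaining -= block_size
    return blocks

def place_replicas(block_id, datanodes, replication=REPLICATION):
    """Pick `replication` distinct Data Nodes for one block."""
    start = block_id % len(datanodes)
    return [datanodes[(start + i) % len(datanodes)] for i in range(replication)]

# A 300 MB file becomes two full 128 MB blocks plus one 44 MB block.
sizes = split_into_blocks(300 * 1024 * 1024)
print([s // (1024 * 1024) for s in sizes])  # [128, 128, 44]

nodes = ["dn1", "dn2", "dn3", "dn4"]
print(place_replicas(0, nodes))  # ['dn1', 'dn2', 'dn3']
```

Note how the last block is smaller than 128 MB: HDFS blocks only occupy as much space as the data they hold.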
Module III. MAPREDUCE
* Map Reduce architecture
* JobTracker, TaskTracker, and their functionality
* Job execution flow
* Configuring development environment using Eclipse
* Map Reduce Programming Model
* How to write basic Map Reduce jobs
* Running the Map Reduce jobs in local mode and distributed mode
* Different Data types in Map Reduce
* How to use Input Formatters and Output Formatters in Map Reduce jobs
* Input Formatters and their associated Record Readers, with examples
* Text Input Formatter
* Key Value Text Input Formatter
* Sequence File Input Formatter
* How to write custom Input Formatters and their Record Readers
* Output Formatters and their associated Record Writers, with examples
* Text Output Formatter
* Sequence File Output Formatter
* How to write custom Output Formatters and their Record Writers
* How to write Combiners and Partitioners, and when to use them
* Importance of Distributed Cache
* Importance of Counters and how to use them
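The Map Reduce programming model covered in this module boils down to three phases: map, shuffle, and reduce. The classic word-count example can be sketched locally in plain Python; on a real cluster, Hadoop runs these same phases in parallel across many machines (and in Java rather than Python).

```python
# A minimal, local sketch of the Map Reduce programming model:
# map emits (word, 1) pairs, the shuffle groups values by key,
# and reduce sums each group.
from collections import defaultdict

def map_phase(line):
    """Mapper: emit (word, 1) for every word in the input line."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle & sort: group all values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reducer: sum the counts for one word."""
    return (key, sum(values))

lines = ["Hadoop stores data in HDFS", "Map Reduce processes data"]
pairs = [kv for line in lines for kv in map_phase(line)]
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(result["data"])  # 2
```

A Combiner, covered above, is essentially this same reduce function run on each mapper's local output before the shuffle, to cut network traffic.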
Module IV. Advanced MapReduce Programming
* Joins – Map Side and Reduce Side
* Use of Secondary Sorting
* Importance of the Writable and WritableComparable APIs
* How to write Map Reduce Keys and Values
* Use of Compression techniques
* Snappy, LZO and Zip
* How to debug Map Reduce jobs in Local and Pseudo-Distributed Mode
* Introduction to Map Reduce Streaming and Pipes with examples
* Job Submission
* Job Initialization
* Task Assignment
* Task Execution
* Progress and status updates
* Job Completion
* Failures
* Task Failure
* TaskTracker failure
* JobTracker failure
* Job Scheduling
* Shuffle & Sort in depth
* Diving into Shuffle and Sort
* Dive into Input Splits
* Dive into Buffer Concepts
* Dive into Configuration Tuning
* Dive into Task Execution
* The Task assignment Environment
* Speculative Execution
* Output Committers
* Task JVM Reuse
* Multiple Inputs & Multiple Outputs
* Built-in Counters
* Dive into Counters – Job Counters & User Defined Counters
* SQL operations using Java MapReduce
* Introduction to YARN (Next Generation Map Reduce)
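Of the advanced techniques listed above, secondary sorting is often the hardest to picture. The idea: to have values arrive at a reducer already sorted, the job uses a composite key (natural key + value), sorts on the full composite key, but partitions and groups on the natural key alone. This local Python sketch simulates what the sort and grouping comparators achieve; sample years and values are invented for illustration.

```python
# Sketch of secondary sorting. In a real job, a WritableComparable
# composite key plus custom sort/grouping comparators do this work.
from itertools import groupby

records = [("2024", 31), ("2023", 45), ("2024", 12), ("2023", 7)]

# Sort on the composite key (natural key, value) -- the framework's
# sort comparator would compare composite keys the same way.
records.sort(key=lambda kv: (kv[0], kv[1]))

# Group on the natural key only, as a grouping comparator would,
# so each "reducer" call sees its values in ascending order.
grouped = {year: [v for _, v in vals]
           for year, vals in groupby(records, key=lambda kv: kv[0])}
print(grouped)  # {'2023': [7, 45], '2024': [12, 31]}
```

Without the composite key, the values for each year would reach the reducer in arbitrary order and the reducer would have to buffer and sort them itself.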
Module V. Apache HIVE
* Hive Introduction
* Hive architecture
* Driver
* Compiler
* Semantic Analyzer
* Hive Integration with Hadoop
* Hive Query Language (HiveQL)
* SQL vs HiveQL
* Hive Installation and Configuration
* Hive, Map-Reduce and Local-Mode
* Hive DDL and DML Operations
* Hive Services
* CLI
* Schema Design
* Views
* Indexes
* Hiveserver
* Metastore
* Embedded metastore configuration
* External metastore configuration
* Transformations in Hive
* UDFs in Hive
* How to write simple Hive queries
* Usage
* Tuning
* Hive with HBASE Integration
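The "SQL vs HiveQL" comparison above rests on the fact that HiveQL deliberately mirrors standard SQL: a statement like the GROUP BY below is valid in both, but Hive compiles it into Map Reduce jobs over HDFS data instead of executing it in a database engine. As an illustration of the query semantics only (SQLite stands in for Hive here; the table and data are invented), the same statement runs locally:

```python
# The GROUP BY below is plain SQL that HiveQL shares; in Hive it
# would compile to a Map Reduce job. SQLite is used here purely to
# demonstrate what the query computes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (user_id TEXT, page TEXT)")
conn.executemany("INSERT INTO page_views VALUES (?, ?)",
                 [("u1", "home"), ("u2", "home"), ("u1", "about")])

rows = conn.execute(
    "SELECT page, COUNT(*) FROM page_views GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # [('about', 1), ('home', 2)]
```

Where the two diverge is in what the course covers next: Hive adds HDFS-oriented features such as partitioned tables, external tables, and custom UDFs.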
Module VI. Apache PIG
* Introduction to Apache Pig
* Map Reduce vs Apache Pig
* SQL vs Apache Pig
* Different data types in Pig
* Modes of Execution in Pig
* Local Mode
* Map Reduce Mode
* Execution Mechanism
* Grunt Shell
* Script
* Embedded
* Transformations in Pig
* How to write a simple Pig script
* UDFs in Pig
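Pig Latin expresses a job as a pipeline of named transformations rather than a single query. To illustrate the semantics of the core operators, each comment below shows a Pig Latin statement next to a plain-Python equivalent over an invented in-memory dataset (on a cluster, Pig compiles these statements into Map Reduce jobs):

```python
# Pig Latin statements (comments) and their local Python equivalents.
from collections import defaultdict

# records = LOAD 'sales' AS (region, amount);
records = [("east", 100), ("west", 250), ("east", 75), ("west", 50)]

# big = FILTER records BY amount > 60;
big = [r for r in records if r[1] > 60]

# by_region = GROUP big BY region;
by_region = defaultdict(list)
for region, amount in big:
    by_region[region].append(amount)

# totals = FOREACH by_region GENERATE group, SUM(big.amount);
totals = {region: sum(amounts) for region, amounts in by_region.items()}
print(totals)  # {'east': 175, 'west': 250}
```

Each intermediate relation (records, big, by_region) can be inspected in the Grunt shell with DUMP, which is how the course's hands-on sessions debug scripts step by step.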
Module VII. Apache SQOOP
* Introduction to Sqoop
* MySQL client and Server Installation
* How to connect to Relational Database using Sqoop
* Sqoop commands, with examples of the import and export commands
* Transferring an Entire Table
* Specifying a Target Directory
* Importing only a Subset of data
* Protecting your password
* Using a file format other than CSV
* Compressing Imported Data
* Speeding up Transfers
* Overriding Type Mapping
* Controlling Parallelism
* Encoding Null Values
* Importing all your tables
* Incremental Import
* Importing only new data
* Incrementally Importing Mutable Data
* Preserving the last imported value
* Storing Password in the Metastore
* Overriding arguments to a saved job
* Sharing the Metastore between Sqoop Clients
* Importing data from two tables
* Using Custom Boundary Queries
* Renaming Sqoop Job instances
* Importing Queries with duplicate columns
* Transferring data from Hadoop
* Inserting Data in Batches
* Updating or Inserting at the same time
* Exporting Corrupted Data
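Several of the topics above (incremental import, importing only new data, preserving the last imported value) revolve around one mechanism: each run imports only rows whose check column exceeds the last value seen, then records that value for the next run — Sqoop's saved jobs keep it in the metastore. This Python sketch simulates that append-mode logic with an in-memory table; the table contents and function name are invented for illustration.

```python
# Simulation of Sqoop-style incremental import in append mode.
# The "last value" plays the role of Sqoop's --last-value, which
# saved jobs persist in the metastore between runs.

def incremental_import(source_rows, last_value):
    """Import rows with id > last_value; return (imported, new last_value)."""
    imported = [row for row in source_rows if row["id"] > last_value]
    new_last = max((row["id"] for row in imported), default=last_value)
    return imported, new_last

table = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
batch1, last = incremental_import(table, last_value=0)   # first run: ids 1, 2

table.append({"id": 3, "name": "c"})                     # a new row arrives
batch2, last = incremental_import(table, last_value=last)
print([row["id"] for row in batch2])  # [3] -- only the new data
```

The same bookkeeping is why "Preserving the last imported value" and "Storing Password in the Metastore" appear together in the outline: both are state that a scheduled, repeated import has to carry across runs.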
Module VIII. Apache FLUME
* Introduction to flume
* Flume agent usage
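A Flume agent is defined entirely by a properties file naming its sources, channels, and sinks. The single-node example below follows the pattern in the Flume user guide: a netcat source feeding a logger sink through a memory channel (agent name a1, port 44444 are the conventional example values, not requirements).

```properties
# example.conf: single-node Flume agent -- netcat source, memory
# channel, logger sink. Start with:
#   flume-ng agent --conf conf --conf-file example.conf --name a1

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Source: read events from a netcat socket
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Sink: log events to the console
a1.sinks.k1.type = logger

# Channel: buffer events in memory between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```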
Module IX. Apache OOZIE
* Introduction to Oozie
* Executing workflow jobs
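An Oozie workflow is an XML document (workflow.xml) describing a control-flow graph of actions. The minimal sketch below shows the standard shape — start, one map-reduce action, ok/error transitions, kill, end; the mapper class name is a hypothetical placeholder, and ${jobTracker}/${nameNode} are parameters supplied at submission time via the job properties file.

```xml
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.mapper.class</name>
                    <!-- placeholder class name for illustration -->
                    <value>org.example.WordCountMapper</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Job failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```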
FAQs
Do I get any discount on the course?
Yes, you get two kinds of discounts: a group discount and a referral discount. The group discount is offered when you join as a group, and the referral discount is offered when you are referred by someone who has already enrolled in our training.
Who will provide the environment to execute the Practicals ?
The trainer will give server access to course attendees, and we make sure you get practical hands-on training by providing every utility needed for your understanding of the course.
What is the qualification of the trainer?
The trainer is a certified consultant and has a significant amount of experience working with the technology.
Does MyyesM accept the course fees in installments?
Yes, we accept payments in two installments.
How does MyyesM Refund Policy work?
If you are enrolled in classes and/or have paid fees but want to cancel your registration for any reason, you may do so within the first 2 sessions of the training. Please note that refunds will be processed within 30 days of the request.
Course Testimonials
I am thankful to Yes M Systems for their BigData Hadoop course. The training has made me interview-ready and now I have superior command over Business Analytics. The course content is carefully designed and offers real-world insights backed by industry challenges.
Sharmistha Chatterjee
The Big Data Hadoop course is driven by real-time projects and around-the-clock learning. I found the course content up to the mark. Popular topics such as HDFS, Apache HIVE, and PIG are thoroughly covered by the instructors. The Yes M Systems team is amazing, thank you.
Amrita Misra
Yes M Systems' Big Data Hadoop certification has helped me master the concepts of the Hadoop framework and fit into the Big Data processing lifecycle. The course thoroughly covers concepts such as Big Data systems, ETL, HBase, Apache Spark, and other popular frameworks.
Harjinder Singh
Sales Manager

Disclaimer: Yes-M Systems and/or their instructors reserve the right to make any changes to the syllabus as deemed necessary to best fulfill the course objectives. Students registered for this course will be made aware of any changes in a timely fashion using reasonable means.