Request a Call

What will you get?

3+ hours of high-quality video content

One-on-one sessions with a personal mentor

37+ hours of labs and projects

Certification and job assistance

No constraints of time or place

Curriculum

  • Introduction to Big Data and Hadoop
      Introduction to Big Data
    • What is Big Data?
    • What is the Big Data problem?
    • Why should you not ignore Big Data?
    • The 4 V's
      Introduction to Hadoop
    • What is Hadoop?
    • History of Hadoop
    • Basic Block Diagram
    • Hadoop 1.0 - Architecture, Characteristics
    • Problems with Hadoop 1.0
    • Hadoop 2.x - Architecture, Characteristics
    • Difference between Hadoop 1.0 and Hadoop 2.x
  • Introduction to HDFS and its use
    • Features
    • Architecture
    • Data Storage Unit - Block
    • Using HDFS practically
    • HDFS Fault Tolerance (Hadoop 1.0)
    • HDFS High Availability (Hadoop 2.x)
    • Scaling HDFS horizontally - HDFS Federation (Hadoop 2.x)
    • Case Study - HDFS in industry
  • Single node cluster setup
  • Introduction to MapReduce programming and example (Using Python)
  • Overview of Hadoop Ecosystem
  • Mini Projects
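The curriculum above introduces MapReduce programming with Python examples. As a taste of what that programming model looks like, here is a minimal word-count sketch in plain Python that mimics the Hadoop Streaming mapper/reducer contract; in a real cluster the two functions would be separate scripts passed to the streaming jar, so treat this as an illustration rather than course material.

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word, the way a
    streaming mapper would emit key/value lines on stdout."""
    for line in lines:
        for word in line.strip().split():
            yield (word.lower(), 1)

def reducer(pairs):
    """Reduce phase: sum the counts per word. Hadoop delivers the
    pairs to the reducer sorted by key, which sorted() imitates here."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (word, sum(count for _, count in group))

if __name__ == "__main__":
    corpus = ["Hadoop is big", "Big Data needs Hadoop"]
    print(dict(reducer(mapper(corpus))))
```

On a cluster the same logic scales because mappers run in parallel on HDFS blocks and the framework handles the sort-and-shuffle between the two phases.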

Sample Content

  • Module 1: Big Data and Hadoop

  • Module 2: Hadoop Distributed File System

  • Module 3: File systems in Hadoop

Projects

City population by sex, city and city type. Industry / Area - Demographic, Govt and Social


About project - The United Nations Statistics Division collects, compiles and disseminates official demographic and social statistics on a wide range of topics. Data have been collected since 1948 through a set of questionnaires dispatched annually to over 230 national statistical offices and have been published in the Demographic Yearbook collection. The Demographic Yearbook disseminates statistics on population size and composition, births, deaths, marriage and divorce, as well as respective rates, on an annual basis. The Demographic Yearbook census datasets cover a wide range of additional topics including economic activity, educational attainment, household characteristics, housing characteristics, ethnicity, language, foreign-born and foreign population.
Data - The data available in the file unsd-citypopulation-year-fm.csv contains city population data for various countries over a wide period, with details for both sexes. There are 11 fields in the file; the important ones are Country, Year, Area, Sex, City, City Type, Source Year and Value.
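To give a feel for the kind of aggregation this project involves, here is a small Python sketch that totals population by city and sex. The rows below are hand-made stand-ins shaped like the fields named above; the real unsd-citypopulation-year-fm.csv has 11 fields and its exact layout should be checked against the file itself.

```python
import csv
import io
from collections import defaultdict

# Hand-made sample rows, illustrative only -- not the real UNSD data.
SAMPLE = """\
Country,Year,Sex,City,Value
India,2011,Male,Mumbai,6112000
India,2011,Female,Mumbai,5930000
India,2011,Male,Delhi,5871000
"""

def population_by_city_and_sex(csv_text):
    """Sum the Value column grouped by (City, Sex)."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[(row["City"], row["Sex"])] += float(row["Value"])
    return dict(totals)

if __name__ == "__main__":
    print(population_by_city_and_sex(SAMPLE))
```

In the course project the same grouping would be expressed as a MapReduce job, with (City, Sex) as the key emitted by the mapper and the sum computed in the reducer.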

Trade and Transport data analysis. Industry / Area - Govt and Transportation


About project - UN/LOCODE, the United Nations Code for Trade and Transport Locations, is a geographic coding scheme developed and maintained by United Nations Economic Commission for Europe (UNECE). UN/LOCODE assigns codes to locations used in trade and transport with functions such as seaports, rail and road terminals, airports, Postal Exchange Office and border crossing points.
Data - The sample data used in this project is sourced from UNECE. There are five files in this data set, namely: code-list, country-codes, function-classifiers, subdivision-codes, and status-indicators. Please refer to the given link to understand the entries in the data sets: https://en.wikipedia.org/wiki/UN/LOCODE
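As a rough illustration of working with these files, the sketch below joins a toy code-list against toy country-codes to produce readable location names. The column names used here (CountryCode, CountryName, LocationCode, Name) are hypothetical stand-ins for illustration, not the actual UN/LOCODE schema; consult the linked reference for the real column layout.

```python
import csv
import io

# Toy stand-ins for the country-codes and code-list files.
# Column names are assumed for illustration only.
COUNTRY_CODES = """\
CountryCode,CountryName
IN,India
NL,Netherlands
"""

CODE_LIST = """\
CountryCode,LocationCode,Name
IN,BOM,Mumbai
NL,RTM,Rotterdam
"""

def readable_locations(country_csv, codelist_csv):
    """Join the code-list rows to country names and format each
    location as 'Name, Country (UN/LOCODE)'."""
    names = {r["CountryCode"]: r["CountryName"]
             for r in csv.DictReader(io.StringIO(country_csv))}
    return [
        f'{r["Name"]}, {names[r["CountryCode"]]} '
        f'({r["CountryCode"]}{r["LocationCode"]})'
        for r in csv.DictReader(io.StringIO(codelist_csv))
    ]

if __name__ == "__main__":
    print(readable_locations(COUNTRY_CODES, CODE_LIST))
```

The project's real data would do the same join at scale, with the smaller country-codes file typically distributed to every mapper as a lookup table.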

Research Team

You could say there are three kinds of contributors to this team. The first are computer science researchers from across the country. Meenal, who is pursuing her PhD in High Performance Computing, has led a team of more than 5 research faculty members from across the country and created the core content of the course. The second are experienced Hadoop trainers, spearheaded by Venkat, one of the early Hadoop experts in India. Venkat has reviewed, tested and improved the research team's work over the months, working closely with them, and is now a senior faculty member for live sessions and project guidance. The third, and most important, are a network of more than 50 industry professionals from across the globe who have reviewed and contributed to this course through case studies and practical problem solving. It is because of this network that our course offers the most industry-relevant projects.

How do you want to learn?

This is the most comprehensive course, covering all the topics as per the latest syllabus.
  • Self-Paced Course 5000.0

    You will get the content for the Hadoop course

    • 3+ hours of Video lectures (online & DVD)
    • Study Material in the e-books for all modules
    • Complete Doubt solving and faculty guidance via mail

  • Guided Course 9000.0

    This package includes the complete Hadoop course with the added advantage of job assistance.

    • 55+ hours of learning experience
    • 3+ hours of Video lectures (online & DVD)
    • Study Material in the e-books for all modules
    • Personal mentor faculty to guide you through the lab and project work
    • 7+ hours of one-on-one interaction time with faculty
    • 16 Labs and Mini Project
    • Course End Projects on real industry data
    • Completely flexible faculty-interaction scheduling
    • Placement guidance and Interview preparation

Why should you learn from Ufaber?

Other classes vs. the Ufaber course:
  • Video content - nothing, or screen-capture videos vs. high-production-quality graphics videos
  • Faculty-student ratio - at best 1:15 vs. 1-to-1 learning
  • Interaction in live class - at most 5% vs. 100% interactive classes
  • Pace of learning - a rigid system of scheduled classes vs. completely customized to your needs
  • Quality of projects - simple, textbook data sets vs. industry data sets, continuously updated
  • Faculty answerability - limited, freelance faculty vs. dedicated full-time researchers
  • Mentoring - not possible at all vs. continuous mentoring at every stage of the course
  • Ultimate course offering - a certificate vs. job readiness
  • Placement assistance - negligible vs. support from resume to interview preparation

Review

  • I was impressed with the depth of the material while still keeping in mind the target audience like myself. I have a wide range of overall IT experience but I am quite new to the Big Data and Hadoop realms. Very informative for people who want to get into the discipline and go deeper in understanding and proficiency.

    Philip Vaillancourt

Frequently Asked Questions

  • INDUSTRY SPECIFIC
    Is the hype around Big Data and Hadoop justified?
    Hadoop is no longer just hype; it is a reality. It is touted to become a 50-billion market within a few years, which would make it the fastest any industry has ever grown. Every industry that has data in any part of its operations will sooner or later move to Big Data capability.
  • How is Hadoop currently used in the industry?
    Currently, Hadoop is adopted by companies in two broad kinds of scenarios.
    a) Where Big Data is a problem: these are companies where the volume of data itself is huge, or the data is very unstructured, so the associated costs were always a problem. Hadoop allows such companies either to scale up their data capability in a cost-effective manner, or to crunch and clean the data so that only relevant data is stored.
    b) Where Big Data brings opportunities: these are companies that deal with massive data and now want to use that data to benefit their business. They could use Hadoop installations for customer analytics, predictive modeling, recommendation engines, process optimization, etc.
  • What kind of industries and sectors are using Hadoop?
    • Finance - Customer Risk Analysis; Fraud Detection; Market Risk Modeling; Trade Performance Analytics
    • Energy and Sciences - Genome Sequencing; Weather Analysis and Prediction; Utilities and Power Grid; Biodiversity Indexing; Network Failures; Seismic Data Analysis
    • Retail and Manufacturing - Customer Churn; Brand and Sentiment Analysis; Point of Sale; Pricing Models; Customer Loyalty; Targeted Offers
    • Web, Ecommerce, Social Networking - Online Media; Mobile; Online Gaming; Search Quality
  • COURSE SPECIFIC
    Is this an online video-conferencing course?
    Live sessions through video conferencing are one of the many methods used in this course.
  • What other methods are used in this course? How does the course work?
    This course has four kinds of learning resources: high-quality concept videos, live online lectures, lab sessions and project work.
  • Are the live lectures one on one?
    Yes, live lectures are one-on-one interactions between you and the faculty.
  • What is the frequency of live lectures?
    Live lectures are scheduled as per the needs of the student. They can be scheduled after a theory video, to guide you through a lab session, or to help you whenever you get stuck while doing a project.
  • Can the videos be referred at a later stage?
    Yes! All lectures, interactions and lab sessions are available for later viewing.
  • How are lab sessions handled in this course?
    A lab is where you run code on your own Hadoop cluster. You will be given recorded video guides for the lab assignments, and then a new data set to work on in your own Hadoop cluster. During the lab session, your faculty mentor will have discussions with you whenever needed.
  • How is Project work handled?
    For a project, you will be given a real data set and a project problem statement, and told the various stages of the project. While working on the project, you will submit outputs at every stage for evaluation. Wherever needed, the faculty will hold a live session for discussion.
  • What is different about the Projects in this course?
    This course has taken industry case studies and very practical problems as the project statements. Most of them have been contributed by industry experts from across the world. While you are reading this, our faculty team is continuously working to find better case studies and projects for you from professional networks.
  • How is this course better in terms of practical experience?
    The projects you will do with us are never simple textbook problems. The data sets are very recent, collated from professional networks, and continuously updated. We give you the most challenging and relevant exposure to Hadoop, just what your recruiter would want you to know.
  • What kinds of profiles can take this course?
    CSE graduates, Senior Architects, Data Warehousing professionals and Java Developers.
  • Does this course need any prior knowledge of Java?
    This course can be done with equal ease by people with knowledge of Java or Python. The MapReduce layer of Hadoop needs a different programming paradigm called functional programming, which is nothing like the regular OOP in Java, so it is a level playing field for everybody. If any extra reading is required, we will provide you with additional material.
  • Can a person with no prior knowledge about NoSQL go for this course?
    Yes. Our expert would cover essential parts of NoSQL and other database types when necessary.
  • Can this course be taken by someone who wants to pursue purely the admin domain?
    Yes! This is a complete professional Hadoop Engineer course. It covers all the fundamentals essential for both Administrator and Developer profiles; for Administrators some modules will matter more than others, and vice versa for Developers.
  • Is there a certification provided in this course?
    Yes. You will be given a certificate of achievement by Ufaber after successfully completing the course. However, our focus is more on making you job-ready and showcasing your experience through our projects, which we have curated from across the world.
  • JOB SPECIFIC
    What kinds of jobs are available in Big Data and Hadoop?
    Currently, industries are looking for the following profiles: Hadoop Admin, Hadoop Architect, MapReduce Developer and Data Scientist.
  • Is there any job assistance provided after this course?
    Ufaber provides complete job support: CV review and feedback, job alerts and openings, and interview preparation.
  • What are the companies in India that have Hadoop teams and departments?
    Software companies with offices in India have all developed functional Hadoop departments and are expanding them at a rapid rate; in addition, all the big, data-intensive companies in the telecom, banking and retail sectors have started doing the same. Examples include Infosys, Wipro, IBM, TCS, Tech Mahindra, Microsoft, Google and Amazon; Airtel, Vodafone and Reliance Telecom; Future Group, Tata Retail and Reliance Retail; as well as banks, analytics companies, and finance and insurance companies.
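The FAQ above notes that MapReduce relies on a functional rather than an object-oriented style. As a toy illustration (not Hadoop code), the snippet below pushes data through stateless map, filter and reduce steps, the same pattern MapReduce borrows: each stage is a pure function over the data stream, with no mutable object state.

```python
from functools import reduce

values = [3, 10, 7, 24, 5, 18]

# map: square each value; filter: keep only the even squares;
# reduce: fold the survivors into a single sum.
even_square_sum = reduce(
    lambda acc, x: acc + x,
    filter(lambda x: x % 2 == 0, map(lambda x: x * x, values)),
    0,
)
print(even_square_sum)  # prints 1000 (100 + 576 + 324)
```

Because each stage depends only on its input, the map step can be split across many machines and the reduce step combined afterwards, which is exactly what makes the paradigm a natural fit for distributed processing.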