Cloudera Certified Developer for Apache Hadoop CDH3 (CCD-333)
Please note: this test will be discontinued on December 31, 2012
Please read the transition FAQ for more information
To earn the CCDH (Cloudera Certified Developer for Apache Hadoop CDH3, CCD-333) certification, candidates must pass the following test:
Test Name: Cloudera Certified Developer for Apache Hadoop CDH3 (CCD-333)
Number of Questions: 60
Time Limit: 90 minutes
Passing Score: 67%
Languages: English, Japanese
Price: USD 295, AUD 285, EUR 225, GBP 185, JPY 25,500
The Cloudera Certified Developer for Apache Hadoop (CCD-333) test is designed to assess a candidate’s fluency with the concepts and skills required in the following areas:
Core Hadoop Concepts
Recognize and identify Apache Hadoop daemons and how they function in both data storage and processing. Understand how Apache Hadoop exploits data locality. Given a big data scenario, determine the challenges it poses to large-scale computational models and how distributed systems attempt to overcome them.
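As an illustration (not part of the official exam outline), a CDH3 node running MRv1 in pseudo-distributed mode hosts five daemons, which can be listed with the JDK's jps tool; the process IDs below are hypothetical:

    $ jps
    4529 NameNode
    4598 SecondaryNameNode
    4683 DataNode
    4771 JobTracker
    4852 TaskTracker

The NameNode, SecondaryNameNode, and DataNode handle storage (HDFS); the JobTracker and TaskTracker handle processing (MapReduce). Data locality is exploited by scheduling map tasks, where possible, on the TaskTracker running on the same node as the DataNode that holds the input block.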
Storing Files in Hadoop
Analyze the benefits and challenges of the HDFS architecture, including how HDFS handles file sizes, block sizes, and block abstraction. Understand default replication values and storage requirements for replication. Determine how HDFS stores, reads, and writes files. Given a sample architecture, determine how HDFS handles hardware failure.
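For illustration, a minimal sketch (not from the exam materials) of reading a file through the HDFS client API; the path is hypothetical, and the Configuration is assumed to point at a running cluster:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsRead {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);       // client handle to the default HDFS
        Path file = new Path("/user/example/input.txt");  // hypothetical path
        BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(file)));
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line);   // blocks stream directly from the DataNodes
        }
        in.close();
      }
    }

The client asks the NameNode for block locations (64 MB blocks by default in CDH3) and then streams each block directly from a DataNode; if a DataNode fails mid-read, the client falls back to another replica, of which there are three per block by default.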
Job Configuration and Submission
Construct proper job configuration parameters, including using JobConf and appropriate properties. Identify the correct procedures for MapReduce job submission, and understand how to use the relevant commands (e.g., “hadoop jar”) to submit a job.
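For illustration, a minimal driver sketch using the JobConf API; WordCountMapper and WordCountReducer are hypothetical classes, sketched under "Job Execution Environment" below:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCountDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);            // key type the reducer emits
        conf.setOutputValueClass(IntWritable.class);   // value type the reducer emits
        conf.setMapperClass(WordCountMapper.class);    // hypothetical classes, see below
        conf.setReducerClass(WordCountReducer.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);    // submits the job and blocks until it completes
      }
    }

Packaged into a JAR, the job would be submitted with a command along the lines of: hadoop jar wordcount.jar WordCountDriver input_dir output_dir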
Job Execution Environment
Given a MapReduce job, determine the lifecycle of a Mapper and the lifecycle of a Reducer. Understand the key fault tolerance principles at work in a MapReduce job. Identify the role of Apache Hadoop classes, interfaces, and methods. Understand how speculative execution exploits differences in machine configurations and capabilities in a parallel environment, and how and when it runs.
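For illustration, a sketch of the hypothetical WordCountMapper and WordCountReducer referenced above, using the CDH3-era org.apache.hadoop.mapred API; the lifecycle hooks (configure() before the first record, one map() or reduce() call per record or key, close() after the last) are marked in comments:

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

      private final IntWritable one = new IntWritable(1);
      private final Text word = new Text();

      @Override
      public void configure(JobConf job) {
        // Lifecycle step 1: called once per task attempt, before any map() call.
      }

      @Override
      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        // Lifecycle step 2: called once per input record in this task's split.
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          output.collect(word, one);   // emit (word, 1) for each token
        }
      }

      @Override
      public void close() throws IOException {
        // Lifecycle step 3: called once after the last map() call.
      }
    }

    // In its own file; a Reducer follows the same configure/reduce/close lifecycle.
    public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

      public void reduce(Text key, Iterator<IntWritable> values,
                         OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        int sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();  // all values for this key, grouped by the shuffle
        }
        output.collect(key, new IntWritable(sum));
      }
    }

If a task attempt fails, the framework reschedules it on another TaskTracker; speculative execution similarly launches duplicate attempts of unusually slow tasks and keeps whichever finishes first.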
Input and Output
Given a sample job, analyze and determine the correct InputFormat and OutputFormat to select based on job requirements. Understand the role of the RecordReader, and of sequence files and compression.
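For illustration, continuing the hypothetical driver above (imports from org.apache.hadoop.mapred, org.apache.hadoop.io, and org.apache.hadoop.io.compress elided), format and compression choices could be expressed along these lines:

    conf.setInputFormat(TextInputFormat.class);       // RecordReader yields (offset, line) pairs
    conf.setOutputFormat(SequenceFileOutputFormat.class);  // binary key/value container

    FileOutputFormat.setCompressOutput(conf, true);
    FileOutputFormat.setOutputCompressorClass(conf, GzipCodec.class);
    SequenceFileOutputFormat.setOutputCompressionType(conf,
        SequenceFile.CompressionType.BLOCK);          // compress runs of records, not the whole file

TextInputFormat's RecordReader turns each split into (LongWritable offset, Text line) records; sequence files store binary key/value pairs and, unlike gzipped text files, remain splittable when block-compressed.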
Analyze the order of operations in a MapReduce job, how data moves from place to place, how partitioners and combiners function, and the sort and shuffle process.
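For illustration, the default routing of map output can be read off the logic of the default HashPartitioner, reproduced here as a sketch in the old API:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Same logic as the default HashPartitioner: every occurrence of a given
    // key hashes to the same partition, so all its values meet at one reducer.
    public class WordPartitioner implements Partitioner<Text, IntWritable> {
      public void configure(JobConf job) { }
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
      }
    }

A combiner (e.g., conf.setCombinerClass(WordCountReducer.class) for the hypothetical job above) pre-aggregates map output before the shuffle; it must be a pure optimization, since the framework may run it zero or more times.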
Analyze and determine the relationship of input keys to output keys in terms of both type and number, the sorting of keys, and the sorting of values. Given sample input data, identify the number, type, and value of the keys and values emitted from the Mappers, as well as the data emitted from each Reducer and the number and contents of the output file(s).
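For a concrete (hypothetical) trace through the word-count job sketched above, run with a single reducer:

    Input split:            Map output:
      the cat                 (the,1) (cat,1)
      the hat                 (the,1) (hat,1)

    After sort/shuffle, grouped at the single reducer:
      (cat,[1])  (hat,[1])  (the,[1,1])

    Reducer output, one file (part-00000), sorted by key:
      cat  1
      hat  1
      the  2

Two input records yield four intermediate pairs but only three output records; with N reducers there would be N output files, each internally sorted by key.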
Key and Value Types
Given a scenario, analyze and determine which of Hadoop’s data types for keys and values are appropriate for the job. Understand common key and value types in the MapReduce framework and the interfaces they implement.
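For illustration, a minimal custom key sketch; the class and field names are hypothetical. Keys must implement WritableComparable (so they can be serialized and sorted), while values need only implement Writable:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    public class YearTemperatureKey implements WritableComparable<YearTemperatureKey> {

      private int year;
      private int temperature;

      public void write(DataOutput out) throws IOException {    // serialization
        out.writeInt(year);
        out.writeInt(temperature);
      }

      public void readFields(DataInput in) throws IOException { // deserialization
        year = in.readInt();
        temperature = in.readInt();
      }

      public int compareTo(YearTemperatureKey o) {              // drives the sort
        if (year != o.year) {
          return year < o.year ? -1 : 1;
        }
        return temperature < o.temperature ? -1
            : (temperature == o.temperature ? 0 : 1);
      }

      @Override
      public int hashCode() {  // used by the default HashPartitioner to route keys
        return year * 163 + temperature;
      }
    }

Built-in types such as Text, IntWritable, LongWritable, and NullWritable cover most jobs; all of them implement WritableComparable.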
Common Algorithms and Design Patterns
Evaluate whether an algorithm is well-suited for expression in MapReduce. Understand the implementation of, limitations of, and strategies for joining datasets in MapReduce. Analyze the role of the DistributedCache and Counters.
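For illustration, a sketch of a map-side join: a small lookup table, shipped to every node via the DistributedCache, is loaded into memory in configure() and probed per record, with a custom counter tracking unmatched records. All paths and field layouts are hypothetical:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class MapSideJoinMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

      private final Map<String, String> lookup = new HashMap<String, String>();

      @Override
      public void configure(JobConf job) {
        try {
          // The small table was added in the driver with
          // DistributedCache.addCacheFile(new URI("/user/example/lookup.txt"), conf);
          // here it is read once per task from the node's local copy.
          Path[] cached = DistributedCache.getLocalCacheFiles(job);
          BufferedReader in = new BufferedReader(new FileReader(cached[0].toString()));
          String line;
          while ((line = in.readLine()) != null) {
            String[] parts = line.split("\t");
            lookup.put(parts[0], parts[1]);
          }
          in.close();
        } catch (IOException e) {
          throw new RuntimeException(e);
        }
      }

      @Override
      public void map(LongWritable key, Text value,
                      OutputCollector<Text, Text> output, Reporter reporter)
          throws IOException {
        String[] fields = value.toString().split("\t");
        String match = lookup.get(fields[0]);
        if (match == null) {
          reporter.incrCounter("Join", "UnmatchedRecords", 1);  // custom counter
          return;
        }
        output.collect(new Text(fields[0]), new Text(fields[1] + "\t" + match));
      }
    }

A reduce-side join, by contrast, tags records from each dataset with their source and lets the shuffle bring matching keys together; it handles two large datasets but costs a full sort and shuffle.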
The Hadoop Ecosystem
Analyze a workflow scenario and determine how and when to leverage ecosystem projects, including Apache Hive, Apache Pig, Sqoop, and Oozie. Understand how Hadoop Streaming might apply to a job workflow.
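For illustration, a Hadoop Streaming job substitutes executables that read stdin and write stdout for the Mapper and Reducer. The streaming JAR path below matches a typical CDH3 layout but is an assumption, as are the input and output paths:

    hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming-*.jar \
      -input /user/example/in \
      -output /user/example/out \
      -mapper /bin/cat \
      -reducer /usr/bin/wc

Here cat acts as an identity mapper and wc tallies the shuffled lines, but any script (Python, Perl, shell) can fill either role, which is what lets non-Java code participate in a MapReduce workflow.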