Quiz demo: Knowledge For Cloudera

Practice quiz demo for CCD-333 (01-2014)


1. Question

You’ve written a MapReduce job that will process 500 million input records and generate 500
million key-value pairs. The data is not uniformly distributed. Your MapReduce job will create a
significant amount of intermediate data that it needs to transfer between mappers and reducers, which is a potential bottleneck. A custom implementation of which of the following interfaces is
most likely to reduce the amount of intermediate data transferred across the network?

OutputFormat

WritableComparable

Writable

InputFormat

Combiner

Partitioner

Incorrect

Users can optionally specify a combiner, via JobConf.setCombinerClass(Class), to perform local aggregation of the intermediate outputs, which helps to cut down the amount of data transferred from the Mapper to the Reducer.
Reference: Map/Reduce Tutorial, http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html (Mapper, 9th paragraph)

Explanation:
Correct answer(s):
Combiner
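As an illustration of the setCombinerClass() call mentioned above, here is a minimal driver sketch using the old (mapred) API; WordCountMapper and WordCountReducer are hypothetical classes, and any Reducer whose operation is commutative and associative could be reused as the combiner in the same way.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount-with-combiner");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(WordCountMapper.class);     // hypothetical Mapper implementation
        conf.setCombinerClass(WordCountReducer.class);  // local aggregation before the shuffle
        conf.setReducerClass(WordCountReducer.class);   // hypothetical Reducer implementation
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}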

2. Question

To process input key-value pairs, your mapper needs to load a 512 MB data file in memory. What
is the best way to accomplish this?

Place the data file in the DistributedCache and read the data into memory in the configure method of the mapper.

Place the data file in the DistributedCache and read the data into memory in the map method of the mapper.

Serialize the data file, insert it in the JobConf object, and read the data into memory in the configure method of the mapper.

Place the data file in the DataCache and read the data into memory in the configure method of the mapper.

Incorrect

Hadoop has a distributed cache mechanism to make files that may be needed by Map/Reduce jobs available locally on the task nodes.
Use case
Let's look at the use case in a bit more detail so that we can follow the code snippet. We have a key-value file that we need to use in our map jobs. For simplicity, let's say we need to replace all keywords that we encounter during parsing with some other value.
So what we need is:
A key-value file (let's use a Properties file)
The Mapper code that uses it
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DistributedCacheMapper extends Mapper<LongWritable, Text, Text, Text> {

    private Properties cache;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        super.setup(context);
        // Files registered with the DistributedCache are available as local paths here.
        Path[] localCacheFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());
        if (localCacheFiles != null) {
            // expecting only a single file here
            for (int i = 0; i < localCacheFiles.length; i++) {
                Path localCacheFile = localCacheFiles[i];
                cache = new Properties();
                cache.load(new FileReader(localCacheFile.toString()));
            }
        } else {
            // do your error handling here
        }
    }

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // use the cache here:
        // if the value contains some attribute, look it up with cache.getProperty(...)
        // and do some action or replace it with something else
    }
}
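For completeness, a driver-side sketch of how the file typically gets registered with the DistributedCache (the HDFS path and class names are hypothetical; the properties file must already exist in HDFS):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapreduce.Job;

public class DistributedCacheDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register the key-value file; it becomes a local file on every task node.
        DistributedCache.addCacheFile(new URI("/user/hadoop/cache/keywords.properties"), conf);

        Job job = new Job(conf, "distributed-cache-example");
        job.setJarByClass(DistributedCacheDriver.class);
        job.setMapperClass(DistributedCacheMapper.class);
        job.setNumReduceTasks(0);  // map-only in this example
        // set input/output formats and paths here, then submit:
        // job.waitForCompletion(true);
    }
}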
Note:
* Distribute application-specific large, read-only files efficiently.
DistributedCache is a facility provided by the Map-Reduce framework to cache files (text, archives,
jars etc.) needed by applications.
Applications specify the files, via urls (hdfs:// or http://) to be cached via the JobConf. The
DistributedCache assumes that the files specified via hdfs:// urls are already present on the
FileSystem at the path specified by the url.
Reference:Using Hadoop Distributed Cache

Explanation:
Correct answer(s):
Place the data file in the DistributedCache and read the data into memory in the map method of the mapper.

3. Question

Which of the following statements most accurately describes the relationship between MapReduce
and Pig?

Pig provides the additional capability of allowing you to control the flow of multiple MapReduce
jobs.

Pig provides no additional capabilities to MapReduce. Pig programs are executed as MapReduce jobs via the Pig interpreter.

Pig programs rely on MapReduce but are extensible, allowing developers to do special-purpose
processing not provided by MapReduce.

Pig provides additional capabilities that allow certain types of data manipulation not possible
with MapReduce.

Incorrect

In addition to providing many relational and data flow operators, Pig Latin provides ways for you to control how your jobs execute on MapReduce. It allows you to set values that control your environment and to control details of MapReduce, such as how your data is partitioned.
Reference: http://ofps.oreilly.com/titles/9781449302641/advanced_pig_latin.html (topic: controlling execution)

Explanation:
Correct answer(s):
Pig provides the additional capability of allowing you to control the flow of multiple MapReduce
jobs.

4. Question

Given a directory of files with the following structure: line number, tab character, string:
Example:
1. abialkjfjkaoasdfjksdlkjhqweroij
2. kadf jhuwqounahagtnbvaswslmnbfgy
3. kjfteiomndscxeqalkzhtopedkfslkj
You want to send each line as one record to your Mapper. Which InputFormat would you use to
complete the line: setInputFormat (________.class);

KeyValueTextInputFormat

SequenceFileInputFormat

BDBInputFormat

SequenceFileAsTextInputFormat

Incorrect

Note:
The output format for your first MR job should be SequenceFileOutputFormat – this will store the
Key/Values output from the reducer in a binary format, that can then be read back in, in your
second MR job using SequenceFileInputFormat.
Reference: http://stackoverflow.com/questions/9721754/how-to-parse-customwritable-from-text-in-hadoop (see answer 1 and then see comment #1 for it)

Explanation:
Correct answer(s):
SequenceFileInputFormat

5. Question

What is the behavior of the default partitioner?

The default partitioner assigns key value pairs to reducers based on an internal random number
generator.

The default partitioner implements a round robin strategy, shuffling the key value pairs to each
reducer in turn. This ensures an even partition of the key space.

The default partitioner computes the hash of the key and divides that value modulo the number
of reducers. The result determines the reducer assigned to process the key-value pair.

The default partitioner computes the hash of the key. Hash values between specific ranges are
associated with different buckets, and each bucket is assigned to a specific reducer.

The default partitioner computes the hash of the value and takes the mod of that value with the
number of reducers. The result determines the reducer assigned to process the key value pair.

Incorrect

The default partitioner computes a hash value for the key and assigns the partition
based on this result.
The default Partitioner implementation is called HashPartitioner. It uses the hashCode() method of
the key objects modulo the number of partitions total to determine which partition to send a given
(key, value) pair to.
In Hadoop, the default partitioner is HashPartitioner, which hashes a record's key to determine which partition (and thus which reducer) the record belongs in. The number of partitions is then equal to the number of reduce tasks for the job.
Reference:Getting Started With (Customized) Partitioning

Explanation:
Correct answer(s):
The default partitioner computes the hash of the key and divides that value modulo the number
of reducers. The result determines the reducer assigned to process the key-value pair.
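The behavior described in the explanation can be written down in a few lines; a sketch that is equivalent in behavior to the default HashPartitioner (the class name here is hypothetical):

import org.apache.hadoop.mapreduce.Partitioner;

public class HashLikePartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Hash the key, mask off the sign bit, and take it modulo the number of reducers.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}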

6. Question
You need a distributed, scalable data store that allows you random, realtime read/write access to hundreds of terabytes of data. Which of the following would you use?

Hue

Oozie

Pig

Hive

Sqoop

HBase

Flume

Incorrect

Use Apache HBase when you need random, realtime read/write access to your Big
Data.
Note:This project’s goal is the hosting of very large tables — billions of rows X millions of columns — atop
clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned,

column-oriented store modeled after Google’s Bigtable: A Distributed Storage System for
Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided
by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop
and HDFS.
Features
Linear and modular scalability.
Strictly consistent reads and writes.
Automatic and configurable sharding of tables
Automatic failover support between RegionServers.
Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.
Easy to use Java API for client access.
Block cache and Bloom Filters for real-time queries.
Query predicate push down via server side Filters
Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data
encoding options
Extensible jruby-based (JIRB) shell
Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
Reference:http://hbase.apache.org/(when would I use HBase? First sentence)

Explanation:
Correct answer(s):
HBase
7. Question

For each intermediate key, each reducer task can emit:

One final key-value pair per value associated with the key; no restrictions on the type.

One final key value pair per key; no restrictions on the type.

As many final key-value pairs as desired, as long as all the keys have the same type and all the
values have the same type.

As many final key-value pairs as desired. There are no restrictions on the types of those key-value pairs (i.e., they can be heterogeneous).

As many final key-value pairs as desired, but they must have the same type as the intermediate
key-value pairs.

Incorrect

Reducer reduces a set of intermediate values which share a key to a smaller set of
values.
Reference:Hadoop Map-Reduce Tutorial

Explanation:
Correct answer(s):
One final key value pair per key; no restrictions on the type.

8. Question

Which two of the following are valid statements? (Choose two)

HDFS is optimized for storing a large number of files smaller than the HDFS block size.

HDFS has the Characteristic of supporting a "write once, read many" data access model.

HDFS is a distributed file system that runs on top of native OS filesystems and is well suited to
storage of very large data sets.

HDFS is a distributed file system that replaces ext3 or ext4 on Linux nodes in a Hadoop cluster.

Incorrect

B: HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.
D:
* Hadoop Distributed File System: A distributed file system that provides high-throughput access to application data.
* HDFS is designed to support very large files.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers

Explanation:
Correct answer(s):
HDFS has the Characteristic of supporting a "write once, read many" data access model.
HDFS is a distributed file system that runs on top of native OS filesystems and is well suited to
storage of very large data sets.

9. Question

When is the reduce method first called in a MapReduce job?

Reducers start copying intermediate key value pairs from each Mapper as soon as it has
completed. The reduce method is called as soon as the intermediate key-value pairs start to
arrive.

Reduce methods and map methods all start at the beginning of a job, in order to provide
optimal performance for map-only or reduce-only jobs.

Reducers start copying intermediate key-value pairs from each Mapper as soon as it has
completed. The reduce method is called only after all intermediate data has been copied and
sorted.

Reducers start copying intermediate key-value pairs from each Mapper as soon as it has
completed. The programmer can configure in the job what percentage of the intermediate data
should arrive before the reduce method begins.

Incorrect

In a MapReduce job, reducers do not start executing the reduce method until all the map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The programmer-defined reduce method is called only after all the mappers have finished.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, When are the reducers started in a MapReduce job?
http://www.fromdev.com/2010/12/interview-questions-hadoop-mapreduce.html (question no. 17)

Explanation:
Correct answer(s):
Reducers start copying intermediate key-value pairs from each Mapper as soon as it has
completed. The reduce method is called only after all intermediate data has been copied and
sorted.

10. Question

Which of the following best describes the map method input and output?

It accepts a list of key-value pairs as input but can emit only one key-value pair as output.

It accepts a single key-value pair as input and can emit only one key-value pair as output.

It accepts a single key-value pair as input and emits a single key and list of corresponding
values as output

It accepts a single key-value pair as input and can emit any number of key-value pairs as
output, including zero.

Incorrect

public class Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> extends Object
Maps input key/value pairs to a set of intermediate key/value pairs.
Maps are the individual tasks which transform input records into intermediate records. The transformed intermediate records need not be of the same type as the input records. A given input pair may map to zero or many output pairs.
Reference: org.apache.hadoop.mapreduce Class Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT>

Explanation:
Correct answer(s):
It accepts a single key-value pair as input and can emit any number of key-value pairs as
output, including zero.
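A short sketch of that behavior: for each single input pair, this mapper may emit zero pairs (blank line) or many pairs (one per token). The class and token-splitting logic are illustrative, not part of the question.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString().trim();
        if (line.isEmpty()) {
            return;  // zero output pairs for this input pair
        }
        for (String token : line.split("\\s+")) {
            word.set(token);
            context.write(word, ONE);  // potentially many output pairs
        }
    }
}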

11. Question

Which of the following utilities allows you to create and run MapReduce jobs with any executable
or script as the mapper and/or the reducer?

Hadoop Streaming

Oozie

Sqoop

Flume
Incorrect

Hadoop streaming is a utility that comes with the Hadoop distribution. The utility
allows you to create and run Map/Reduce jobs with any executable or script as the mapper and/or
the reducer.
Reference:http://hadoop.apache.org/common/docs/r0.20.1/streaming.html(Hadoop
Streaming,second sentence)

Explanation:
Correct answer(s):
Hadoop Streaming
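A typical invocation looks something like the following (the streaming jar location and the input/output paths are placeholders; /bin/cat and /usr/bin/wc are just example executables):

$ hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
    -input /user/hadoop/input \
    -output /user/hadoop/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc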

12. Question

What happens in a MapReduce job when you set the number of reducers to one?

A single reducer gathers and processes all the output from all the mappers. The output is
written to a single file in HDFS.

A single reducer gathers and processes all the output from all the mappers. The output is
written in as many separate files as there are mappers.

Setting the number of reducers to one creates a processing bottleneck, and since the number
of reducers as specified by the programmer is used as a reference value only, the MapReduce
runtime provides a default setting for the number of reducers.

Setting the number of reducers to one is invalid, and an exception is thrown.

Incorrect

* It is legal to set the number of reduce tasks to zero if no reduction is desired. In this case the outputs of the map tasks go directly to the FileSystem, into the output path set by setOutputPath(Path). The framework does not sort the map outputs before writing them out to the FileSystem.
* Often, you may want to process input data using a map function only. To do this, simply set mapreduce.job.reduces to zero. The MapReduce framework will not create any reducer tasks. Rather, the outputs of the mapper tasks will be the final output of the job.

Explanation:
Correct answer(s):
A single reducer gathers and processes all the output from all the mappers. The output is
written in as many separate files as there are mappers.
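For reference, the reducer count is set on the job configuration; a fragment-style sketch using the old (mapred) API, with the zero-reducer case from the note shown for contrast:

import org.apache.hadoop.mapred.JobConf;

public class ReducerCountExample {
    public static void configure(JobConf conf) {
        conf.setNumReduceTasks(1);    // a single reduce task gathers all map output
        // conf.setNumReduceTasks(0); // zero reducers: map output is written out directly, as noted above
    }
}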

13. Question
If you run the word count MapReduce program with m mappers and r reducers, how many output
files will you get at the end of the job? And how many key-value pairs will there be in each file?
Assume k is the number of unique words in the input files.

There will be m files, each with approximately k/m key-value pairs.

There will be r files, each with approximately k/m key-value pairs.

There will be r files, each with exactly k/r key-value pairs.

There will be m files, each with exactly k/m key value pairs.

There will be r files, each with approximately k/r key-value pairs.

Incorrect

Note:
* A MapReduce job with m mappers and r reducers involves up to m*r distinct copy operations, since each mapper may have intermediate output going to every reducer.
* In the canonical example of word counting, a key-value pair is emitted for every word found. For example, if we had 1,000 words, then 1,000 key-value pairs will be emitted from the mappers to the reducer(s).

Explanation:
Correct answer(s):
There will be r files, each with exactly k/r key-value pairs.

14. Question

Can you use MapReduce to perform a relational join on two large tables sharing a key? Assume that the two tables are formatted as comma-separated files in HDFS.

Yes.

Yes, but only if one of the tables fits into memory.

Yes, so long as both tables fit into memory.

No, MapReduce cannot perform relational operations.

No, but it can be done with either Pig or Hive.

Incorrect

Note:
* Join Algorithms in MapReduce
A) Reduce-side join
B) Map-side join
C) In-memory join
/ Striped variant

/ Memcached variant
* Which join to use?
/ In-memory join > map-side join > reduce-side join
/ Limitations of each?
In-memory join: memory
Map-side join: sort order and partitioning
Reduce-side join: general purpose

Explanation:
Correct answer(s):
Yes.

15. Question

You have written a Mapper which invokes the following five calls to the OutputCollector.collect() method:
[The figure showing the five OutputCollector.collect() calls is not reproduced here.]
How many times will the Reducer's reduce method be invoked?

Incorrect

Note:
org.apache.hadoop.mapred Interface OutputCollector<K,V>
Collects the <key, value> pairs output by Mappers and Reducers. OutputCollector is the generalization of the facility provided by the Map-Reduce framework to collect data output by either the Mapper or the Reducer, i.e. intermediate outputs or the output of the job.
The reduce method is invoked once for each unique intermediate key, so the answer depends on how many distinct keys the five calls emit; here the five calls produce three distinct keys, hence three invocations.

Explanation:
Correct answer(s):
3
16. Question

Workflows expressed in Oozie can contain:

Sequences of MapReduce and Pig jobs. These are limited to linear sequences of actions with
exception handlers but no forks.

Iterative repetition of MapReduce jobs until a desired answer or state is reached.

Sequences of MapReduce jobs only; no Pig or Hive tasks or jobs. These MapReduce
sequences can be combined with forks and path joins.

Sequences of MapReduce and Pig. These sequences can be combined with other actions
including forks, decision points, and path joins.

Incorrect

Reference: http://incubator.apache.org/oozie/docs/3.1.3/docs/WorkflowFunctionalSpec.html (workflow definition, first sentence)

Explanation:
Correct answer(s):
Sequences of MapReduce and Pig. These sequences can be combined with other actions
including forks, decision points, and path joins.

17. Question

Given a Mapper, Reducer, and Driver class packaged into a jar, which is the correct way of
submitting the job to the cluster?

jar MyJar.jar MyDriverClass inputdir outputdir

jar MyJar.jar

hadoop jar class MyJar.jar MyDriverClass inputdir outputdir

hadoop jar MyJar.jar MyDriverClass inputdir outputdir

Incorrect

Example:

Run the application:


$ bin/hadoop jar /usr/joe/wordcount.jar org.myorg.WordCount /usr/joe/wordcount/input
/usr/joe/wordcount/output
Explanation:
Correct answer(s):
hadoop jar MyJar.jar MyDriverClass inputdir outputdir

18. Question

Your client application submits a MapReduce job to your Hadoop cluster. The Hadoop framework
looks for an available slot to schedule the MapReduce operations on which of the following
Hadoop computing daemons?

NameNode

TaskTracker

JobTracker

DataNode

Secondary NameNode

Incorrect

JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. There is only one JobTracker process running on any Hadoop cluster. The JobTracker runs in its own JVM process, and in a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted. The JobTracker in Hadoop performs the following actions (from the Hadoop Wiki):
Client applications submit jobs to the JobTracker.
The JobTracker talks to the NameNode to determine the location of the data.
The JobTracker locates TaskTracker nodes with available slots at or near the data.
The JobTracker submits the work to the chosen TaskTracker nodes.
The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
When the work is completed, the JobTracker updates its status.
Client applications can poll the JobTracker for information.
Reference:24 Interview Questions & Answers for Hadoop MapReduce developers,What is a
JobTracker in Hadoop? How many instances of JobTracker run on a Hadoop Cluster?

Explanation:
Correct answer(s):
JobTracker
19. Question

In the reducer, the MapReduce API provides you with an iterator over Writable values. Calling the next() method:

Returns a reference to the same writable object each time, but populated with different data.

Returns a reference to a different Writable object each time.

Returns a reference to a Writable object. The API leaves unspecified whether this is a reused
object or a new object.

Returns a reference to the same writable object if the next value is the same as the previous
value, or a new writable object otherwise.

Returns a reference to a Writable object from an object pool.

Incorrect

Calling Iterator.next() will always return the SAME EXACT instance of IntWritable,
with the contents of that instance replaced with the next value.
Reference: manipulating the iterator in MapReduce

Explanation:
Correct answer(s):
Returns a reference to the same writable object each time, but populated with different data.
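Because the iterator reuses one Writable instance, any value that must outlive the current iteration has to be copied; a small sketch (the helper class and method name are hypothetical):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.io.IntWritable;

public class CopyValuesExample {
    public static List<IntWritable> copyAll(Iterator<IntWritable> values) {
        List<IntWritable> copies = new ArrayList<IntWritable>();
        while (values.hasNext()) {
            IntWritable reused = values.next();         // same object each time, new contents
            copies.add(new IntWritable(reused.get()));  // defensive copy
            // copies.add(reused);                      // wrong: every entry would end up identical
        }
        return copies;
    }
}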

20. Question

Which MapReduce daemon runs on each slave node and participates in job execution?

TaskTracker

NameNode

JobTracker

Secondary NameNode

Incorrect

Single instance of a Task Tracker is run on each Slave node. Task tracker is run as
a separate JVM process.
Reference:24 Interview Questions & Answers for Hadoop MapReduce developers,What is
configuration of a typical slave node on Hadoop cluster? How many JVMs run on a slave node?
http://www.fromdev.com/2010/12/interview-questions-hadoop-mapreduce.html(See answer to
question no. 5)
Explanation:
Correct answer(s):
TaskTracker

21. Question

You use the hadoop fs –put command to write a 300 MB file using an HDFS block size of 64 MB.
Just after this command has finished writing 200 MB of this file, what would another user see
when trying to access this file?

They would see the current state of the file, up to the last bit written by the command.

They would see the content of the file through the last completed block.

They would see no content until the whole file is written and closed.

They would see Hadoop throw a ConcurrentFileAccessException when they try to access this file.

Incorrect

Note:
*put
Usage: hadoop fs -put <localsrc> … <dst>
Copy single src, or multiple srcs from local file system to the destination filesystem. Also reads
input from stdin and writes to destination filesystem.

Explanation:
Correct answer(s):
They would see no content until the whole file is written and closed.

22. Question

Combiners Increase the efficiency of a MapReduce program because:

They provide an optimization and reduce the total number of computations that are needed to execute an algorithm by a factor of n, where n is the number of reducers.

They aggregate intermediate map output locally on each individual machine and therefore
reduce the amount of data that needs to be shuffled across the network to the reducers.

They provide a mechanism for different mappers to communicate with each Other, thereby
reducing synchronization overhead.

They aggregate intermediate map output from a small number of nearby (i.e., rack-local)
machines and therefore reduce the amount of data that needs to be shuffled across the network to
the reducers.

Incorrect

Combiners are used to increase the efficiency of a MapReduce program. They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. The execution of a combiner is not guaranteed; Hadoop may or may not execute a combiner, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution.

Reference:24 Interview Questions & Answers for Hadoop MapReduce developers,What are
combiners? When should I use a combiner in my MapReduce Job?
http://www.fromdev.com/2010/12/interview-questions-hadoop-mapreduce.html(question no. 12)

Explanation:
Correct answer(s):
They aggregate intermediate map output locally on each individual machine and therefore
reduce the amount of data that needs to be shuffled across the network to the reducers.

23. Question

In a MapReduce job with 500 map tasks, how many map task attempts will there be?

At least 500.

Exactly 500.

At most 500.

Between 500 and 1000.

It depends on the number of reducers in the job.

Incorrect

From the Cloudera training course:
A task attempt is a particular instance of an attempt to execute a task.
– There will be at least as many task attempts as there are tasks.
– If a task attempt fails, another will be started by the JobTracker.
– Speculative execution can also result in more task attempts than completed tasks.

Explanation:
Correct answer(s):
At least 500.

24. Question

You have a large dataset of key-value pairs, where the keys are strings, and the values are
integers. For each unique key, you want to identify the largest integer. In writing a MapReduce
program to accomplish this, can you take advantage of a combiner?

No, a combiner would not be useful in this case.

Yes, as long as all the integer values that share the same key fit into memory on each node.

Yes.

Yes, but the number of unique keys must be known in advance.

Yes, as long as all the keys fit into memory on each node.

Incorrect

Correct answer(s):
Yes.
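Finding a maximum is commutative and associative, so the reducer itself can double as the combiner here; a sketch (class name hypothetical), registered with both job.setReducerClass(MaxReducer.class) and job.setCombinerClass(MaxReducer.class):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());  // keep the largest value seen for this key
        }
        result.set(max);
        context.write(key, result);
    }
}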

25. Question

The NameNode uses RAM for the following purpose:

To store the edits log that keeps track of changes in HDFS.

To manage distributed read and write locks on files in HDFS.

To store the contents of files in HDFS.

To store filenames, list of blocks and other meta information.

Incorrect

The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system, and tracks where across the cluster the file data is kept. It does not store the data of these files itself. There is only one NameNode process running on any Hadoop cluster. The NameNode runs in its own JVM process, and in a typical production cluster it runs on a separate machine. The NameNode is a single point of failure for the HDFS cluster: when the NameNode goes down, the file system goes offline. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, What is a NameNode? How many instances of NameNode run on a Hadoop Cluster?

Explanation:
Correct answer(s):
To store filenames, list of blocks and other meta information.

26. Question

In the standard word count MapReduce algorithm, why might using a combiner reduce the overall
Job running time?

Because combiners perform local aggregation of word counts, thereby allowing the mappers to
process input data faster.

Because combiners perform local aggregation of word counts, thereby reducing the number of
mappers that need to run.

Because combiners perform local aggregation of word counts, and then transfer that data to
reducers without writing the intermediate data to disk.

Because combiners perform local aggregation of word counts, thereby reducing the number of key-value pairs that need to be shuffled across the network to the reducers.

Incorrect

* Simply speaking, a combiner can be considered a "mini reducer" that will be applied potentially several times during the map phase, before sending the new (hopefully reduced) set of key/value pairs to the reducer(s). This is why a combiner must implement the Reducer interface (or extend the Reducer class as of Hadoop 0.20).
* Combiners are used to increase the efficiency of a MapReduce program. They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. The execution of a combiner is not guaranteed; Hadoop may or may not execute a combiner, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, What are combiners? When should I use a combiner in my MapReduce Job?

Explanation:
Correct answer(s):
Because combiners perform local aggregation of word counts, thereby allowing the mappers to
process input data faster.
27. Question

Which of the Following best describes the lifecycle of a Mapper?

The JobTracker calls the TaskTracker's configure() method, then its map() method and finally its close() method.

The TaskTracker spawns a new Mapper to process each key-value pair.

The TaskTracker spawns a new Mapper to process all records in a single input split.

The JobTracker spawns a new Mapper to process all records in a single file.

Incorrect

For each map instance that runs, the TaskTracker creates a new instance of your
mapper.
Note:
*The Mapper is responsible for processing Key/Value pairs obtained from the InputFormat. The
mapper may perform a number of Extraction and Transformation functions on the Key/Value pair
before ultimately outputting none, one or many Key/Value pairs of the same, or different Key/Value
type.
*With the new Hadoop API, mappers extend the org.apache.hadoop.mapreduce.Mapper class.
This class defines an ‘Identity’ map function by default – every input Key/Value pair obtained from
the InputFormat is written out.
Examining the run() method, we can see the lifecycle of the mapper:
/**
 * Expert users can override this method for more complete control over the
 * execution of the Mapper.
 * @param context
 * @throws IOException
 */
public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
}
setup(Context) – Perform any setup for the mapper. The default implementation is a no-op method.
map(Key, Value, Context) – Perform a map operation on the given Key/Value pair. The default implementation calls Context.write(Key, Value).
cleanup(Context) – Perform any cleanup for the mapper. The default implementation is a no-op method.
Reference: Hadoop/MapReduce/Mapper
Explanation:
Correct answer(s):
The TaskTracker spawns a new Mapper to process each key-value pair.

28. Question

What types of algorithms are difficult to express in MapReduce?

Relational operations on large amounts of structured and semi structured data.

Text analysis algorithms on large collections of unstructured text (e.g., Web crawls).

Algorithms that require applying the same mathematical function to large numbers of individual
binary records.

Large-scale graph algorithms that require one-step link traversal.

Algorithms that require global, shared state.

Incorrect

See 3) below.
Limitations of MapReduce – where not to use MapReduce
While very powerful and applicable to a wide variety of problems, MapReduce is not the answer to every problem. Here are some problems I found where MapReduce is not suited, and some papers that address the limitations of MapReduce.
1. Computation depends on previously computed values
If the computation of a value depends on previously computed values, then MapReduce cannot be
used. One good example is the Fibonacci series where each value is summation of the previous
two values. i.e., f(k+2) = f(k+1) + f(k). Also, if the data set is small enough to be computed on a
single machine, then it is better to do it as a single reduce(map(data)) operation rather than going
through the entire map reduce process.
2. Full-text indexing or ad hoc searching
The index generated in the Map step is one dimensional, and the Reduce step must not generate
a large amount of data or there will be a serious performance degradation. For example,
CouchDB’s MapReduce may not be a good fit for full-text indexing or ad hoc searching. This is a
problem better suited for a tool such as Lucene.
3. Algorithms depend on shared global state
Solutions to many interesting problems in text processing do not require global synchronization. As a result, they can be expressed naturally in MapReduce, since map and reduce tasks run
independently and in isolation. However, there are many examples of algorithms that depend
crucially on the existence of shared global state during processing, making them difficult to
implement in MapReduce (since the single opportunity for global synchronization in MapReduce is
the barrier between the map and reduce phases of processing)
Reference: Limitations of MapReduce – where not to use MapReduce
Explanation:
Correct answer(s):
Algorithms that require global, shared state.

29. Question

In a MapReduce job, you want each of your input files processed by a single map task. How do you configure a MapReduce job so that a single map task processes each input file regardless of how many blocks the input file occupies?

Increase the parameter that controls minimum split size in the job configuration.

Write a custom MapRunner that iterates over all key-value pairs in the entire file.

Write a custom FileInputFormat and override the method isSplittable to always return false.

Set the number of mappers equal to the number of input files you want to process.

Incorrect

Note:
*// Do not allow splitting.
protected boolean isSplittable(JobContext context, Path filename) {
return false;
}
*InputSplits: An InputSplit describes a unit of work that comprises a single map task in a
MapReduce program. A MapReduce program applied to a data set, collectively referred to as a
Job, is made up of several (possibly several hundred) tasks. Map tasks may involve reading a
whole file; they often involve reading only part of a file. By default, the FileInputFormat and its
descendants break a file up into 64 MB chunks (the same size as blocks in HDFS). You can
control this value by setting the mapred.min.split.size parameter in hadoop-site.xml, or by
overriding the parameter in the JobConf object used to submit a particular MapReduce job. By
processing a file in chunks, we allow several map tasks to operate on a single file in parallel. If the
file is very large, this can improve performance significantly through parallelism. Even more
importantly, since the various blocks that make up the file may be spread across several different
nodes in the cluster, it allows tasks to be scheduled on each of these different nodes; the

individual blocks are thus all processed locally, instead of needing to be transferred from one node
to another. Of course, while log files can be processed in this piece-wise fashion, some file
formats are not amenable to chunked processing. By writing a custom InputFormat, you can
control how the file is broken up (or is not broken up) into splits.

Explanation:
Correct answer(s):
Write a custom FileInputFormat and override the method isSplittable to always return false.
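A minimal sketch of such an input format, extending TextInputFormat from the new (mapreduce) API; note that the method in the actual Hadoop API is spelled isSplitable, with a single "t":

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class WholeFileTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;  // never split, so each input file becomes exactly one map task
    }
}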
30. Question

You need to create a GUI application to help your company’s sales people add and edit customer
information. Would HDFS be appropriate for this customer information file?

Yes, because HDFS is optimized for fast retrieval of relatively small amounts of data.

No, because HDFS is optimized for write-once, streaming access for relatively large files.

No, because HDFS can only be accessed by MapReduce applications.

Yes, because HDFS is optimized for random access writes.

Incorrect

HDFS is designed to support very large files. Applications that are compatible with
HDFS are those that deal with large data sets. These applications write their data only once but
they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS
supports write-once-read-many semantics on files.
Reference:24 Interview Questions & Answers for Hadoop MapReduce developers,What is HDFS
? How it is different from traditional file systems?

Explanation:
Correct answer(s):
No, because HDFS is optimized for write-once, streaming access for relatively large files.

31. Question

You need to import a portion of a relational database every day as files to HDFS, and generate
Java classes to interact with your imported data. Which of the following tools should you use to
accomplish this?

Hue

Pig

Sqoop

Hive

Flume

Oozie

fuse-dfs
Incorrect

Sqoop ("SQL-to-Hadoop") is a straightforward command-line tool with the following capabilities:
Imports individual tables or entire databases to files in HDFS
Generates Java classes to allow you to interact with your imported data

Provides the ability to import from SQL databases straight into your Hive data warehouse
Note:
Data Movement Between Hadoop and Relational Databases
Data can be moved between Hadoop and a relational database as a bulk data transfer, or
relational tables can be accessed from within a MapReduce map function.
Note:
*Cloudera’s Distribution for Hadoop provides a bulk data transfer tool (i.e., Sqoop) that imports
individual tables or entire databases into HDFS files. The tool also generates Java classes that
support interaction with the imported data. Sqoop supports all relational databases over JDBC,
and Quest Software provides a connector (i.e., OraOop) that has been optimized for access to
data residing in Oracle databases.
Reference:http://log.medcl.net/item/2011/08/hadoop-and-mapreduce-big-data-analyticsgartner/(Data
Movement between hadoop and relational databases, second paragraph)

Explanation:
Correct answer(s):
Sqoop
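A representative Sqoop import command (the JDBC URL, table name, and target directory are placeholders):

$ sqoop import \
    --connect jdbc:mysql://db.example.com/sales \
    --table customers \
    --target-dir /user/hadoop/customers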

32. Question

How does the NameNode detect that a DataNode has failed?

The NameNode does not need to know that a DataNode has failed.

The NameNode periodically pings the datanode. If the DataNode does not respond, the
NameNode considers the DataNode as failed.

When HDFS starts up, the NameNode tries to communicate with the DataNode and considers
the DataNode as failed if it does not respond.

When the NameNode fails to receive periodic heartbeats from the DataNode, it considers the
DataNode as failed.

Incorrect

The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode. When the NameNode notices that it has not received a heartbeat message from a DataNode after a certain amount of time, the DataNode is marked as dead. Since its blocks will now be under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates the replication of data blocks from one DataNode to another. The replication data transfer happens directly between DataNodes and the data never passes through the NameNode.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, How NameNode Handles data node failures?

Explanation:
Correct answer(s):
When the NameNode fails to receive periodic heartbeats from the DataNode, it considers the
DataNode as failed.

33. Question

Your cluster has 10 DataNodes, each with a single 1 TB hard drive. You utilize all your disk
capacity for HDFS, reserving none for MapReduce. You implement default replication settings.
What is the storage capacity of your Hadoop cluster (assuming no compression)?

about 10 TB

about 3 TB

about 11 TB

about 5 TB

Incorrect

In the default configuration there are a total of 3 copies of each data block on HDFS: 2 copies are stored on DataNodes on the same rack and the 3rd copy on a different rack. With 10 TB of raw disk capacity and a replication factor of 3, the usable storage capacity is roughly 10/3 ≈ 3.3 TB, i.e. about 3 TB.
Note: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. HDFS uses a rack-aware replica placement policy.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, How the HDFS Blocks are replicated?

Explanation:
Correct answer(s):
about 3 TB
34. Question

What is the preferred way to pass a small number of configuration parameters to a mapper or
reducer?

As key-value pairs in the jobconf object.

Using a plain text file via the Distributedcache, which each mapper or reducer reads.

Through a static variable in the MapReduce driver class (i.e., the class that submits the
MapReduce job).

As a custom input key-value pair passed to each mapper or reducer.

Incorrect

In Hadoop, it is sometimes difficult to pass arguments to mappers and reducers. If the number of arguments is huge (e.g., big arrays), DistributedCache might be a good choice. However, here we're discussing small arguments, usually a handful of configuration parameters.
In fact, the way to configure these parameters is simple. When you initialize the JobConf object to launch a MapReduce job, you can set a parameter by using the set method like:
JobConf job = (JobConf) getConf();
job.set("NumberOfDocuments", args[0]);
Here, "NumberOfDocuments" is the name of the parameter and its value is read from args[0], a command line argument.
Reference: Passing Parameters and Arguments to Mapper and Reducer in Hadoop

Explanation:
Correct answer(s):
As key-value pairs in the jobconf object.
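The read side of the same idea, sketched with the new (mapreduce) API: the task retrieves the parameter from the job configuration in setup(). The class is hypothetical and reuses the "NumberOfDocuments" parameter name from the explanation above.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ParameterAwareMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private int numberOfDocuments;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Read the value that the driver stored with job.set("NumberOfDocuments", ...).
        numberOfDocuments = context.getConfiguration().getInt("NumberOfDocuments", 0);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // ... use numberOfDocuments while processing each record ...
    }
}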

35. Question

For each input key-value pair, mappers can emit:

One intermediate key value pair, but of the same type.

As many intermediate key-value pairs as desired, but they cannot be of the same type as the
input key-value pair.

As many intermediate key-value pairs as desired. There are no restrictions on the types of
those key-value pairs (i.e., they can be heterogeneous).

As many intermediate key value pairs as desired, as long as all the keys have the same type
and all the values have the same type.

One intermediate key-value pair, of a different type.

Incorrect

Mapper maps input key/value pairs to a set of intermediate key/value pairs.
Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs.
Reference: Hadoop Map-Reduce Tutorial

Explanation:
Correct answer(s):
As many intermediate key-value pairs as desired. There are no restrictions on the types of
those key-value pairs (i.e., they can be heterogeneous).

36. Question

Which statement best describes the data path of intermediate key-value pairs (i.e., output of the mappers)?

Intermediate key-value pairs are written to HDFS. Reducers read the intermediate data from
HDFS.

Intermediate key-value pairs are written to HDFS. Reducers copy the intermediate data to the
local disks of the machines running the reduce tasks.

Intermediate key-value pairs are written to the local disks of the machines running the map
tasks, and then copied to the machine running the reduce tasks.

Intermediate key-value pairs are written to the local disks of the machines running the map
tasks, and are then copied to HDFS. Reducers read the intermediate data from HDFS.

Incorrect

The mapper output (intermediate data) is stored on the local file system (NOT HDFS) of each individual mapper node. This is typically a temporary directory location which can be set up in the configuration by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop job completes.
Note:
* Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account the processing of data transfer which is done by the reduce process, therefore the reduce progress starts showing up as soon as any intermediate key-value pair for a mapper is available to be transferred to a reducer. Though the reducer progress is updated, the programmer-defined reduce method is called only after all the mappers have finished.
* Reducer is input the grouped output of a Mapper. In this phase the framework, for each Reducer, fetches the relevant partition of the output of all the Mappers, via HTTP.
* Mapper maps input key/value pairs to a set of intermediate key/value pairs. Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs.
* All intermediate values associated with a given output key are subsequently grouped by the framework, and passed to the Reducer(s) to determine the final output.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, Where is the Mapper Output (intermediate key-value data) stored?

Explanation:
Correct answer(s):
Intermediate key-value pairs are written to the local disks of the machines running the map
tasks, and then copied to the machine running the reduce tasks.

37. Question

During the standard sort and shuffle phase of MapReduce, keys and values are passed to
reducers. Which of the following is true?

Keys are presented to a reducer in random order; values for a given key are not sorted.

Keys are presented to a reducer in sorted order; values for a given key are not sorted.

Keys are presented to a reducer in sorted order; values for a given key are sorted in ascending order.

Keys are presented to a reducer in random order; values for a given key are sorted in
ascending order.

Incorrect

Correct answer(s):
Keys are presented to a reducer in random order; values for a given key are sorted in
ascending order.

38. Question

You are running a job that will process a single InputSplit on a cluster which has no other jobs
currently running. Each node has an equal number of open Map slots. On which node will Hadoop
first attempt to run the Map task?
The node with the lowest system load

The node with the most memory

The node on which this InputSplit is stored

The node with the most free local disk space

Incorrect

The TaskTrackers send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and if not, it looks for an empty slot on a machine in the same rack.

Explanation:
Correct answer(s):
The node on which this InputSplit is stored

39. Question

In a large MapReduce job with m mappers and r reducers, how many distinct copy operations will
there be in the sort/shuffle phase?

m+r (i.e., m plus r)

m^r (i.e., m to the power of r)

m x r (i.e., m multiplied by r)

Incorrect

A MapReduce job with m mappers and r reducers involves up to m*r distinct copy operations, since each mapper may have intermediate output going to every reducer.

Explanation:
Correct answer(s):
m x r (i.e., m multiplied by r)
40. Question

What is the difference between a failed task attempt and a killed task attempt?

A failed task attempt is a task attempt that did not generate any key value pairs. A killed task
attempt is a task attempt that threw an exception, and thus killed by the execution framework.

A failed task attempt is a task attempt that completed, but with an unexpected status value. A
killed task attempt is a duplicate copy of a task attempt that was started as part of speculative
execution.

A failed task attempt is a task attempt that threw an unhandled exception. A killed task attempt
is one that was terminated by the JobTracker.

A failed task attempt is a task attempt that threw a RuntimeException (i.e., the task fails). A
killed task attempt is a task attempt that threw any other type of exception (e.g., IOException); the
execution framework catches these exceptions and reports them as killed.

Incorrect

Note:
*Hadoop uses "speculative execution." The same task may be started on multiple boxes. The first
one to finish wins, and the other copies are killed.
Failed tasks are tasks that error out.
* There are a few reasons Hadoop can kill tasks on its own:
a) Task does not report progress during timeout (default is 10 minutes)
b) FairScheduler or CapacityScheduler needs the slot for some other pool (FairScheduler) or
queue (CapacityScheduler).
c) Speculative execution causes results of task not to be needed since it has completed on other
place.
Reference:Difference failed tasks vs killed tasks

Explanation:
Correct answer(s):
A failed task attempt is a task attempt that completed, but with an unexpected status value. A
killed task attempt is a duplicate copy of a task attempt that was started as part of speculative
execution.

41. Question

Does the MapReduce programming model provide a way for reducers to communicate with each
other?

Yes, all reducers can communicate with each other by passing information through the jobconf
object.

Yes, reducers running on the same machine can communicate with each other through shared
memory, but not reducers on different machines.

Yes, reducers can communicate with each other by dispatching intermediate key-value pairs that get shuffled to another reducer.

No, each reducer runs independently and in isolation.

Incorrect

MapReduce programming model does not allow reducers to communicate with each
other. Reducers run in isolation.
Reference:24 Interview Questions & Answers for Hadoop MapReduce developers
http://www.fromdev.com/2010/12/interview-questions-hadoop-mapreduce.html(See question no.
9)

Explanation:
Correct answer(s):
No, each reducer runs independently and in isolation.

42. Question

The Hadoop framework provides a mechanism for coping with machine issues such as faulty
configuration or impending hardware failure. MapReduce detects that one or a number of
machines are performing poorly and starts more copies of a map or reduce task. All the tasks run simultaneously and the tasks that finish first are used. This is called:

IdentityMapper

IdentityReducer

Combiner

Speculative Execution

Default Partitioner

Incorrect

Speculative execution: One problem with the Hadoop system is that by dividing the
tasks across many nodes, it is possible for a few slow nodes to rate-limit the rest of the program.
For example if one node has a slow disk controller, then it may be reading its input at only 10% the
speed of all the other nodes. So when 99 map tasks are already complete, the system is still
waiting for the final map task to check in, which takes much longer than all the other nodes.
By forcing tasks to run in isolation from one another, individual tasks do not know where their
inputs come from. Tasks trust the Hadoop platform to just deliver the appropriate input. Therefore,
the same input can be processed multiple times in parallel, to exploit differences in machine
capabilities. As most of the tasks in a job are coming to a close, the Hadoop platform will schedule
redundant copies of the remaining tasks across several nodes which do not have other work to
perform. This process is known as speculative execution. When tasks complete, they announce
this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If
other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon the tasks
and discard their outputs. The Reducers then receive their inputs from whichever Mapper
completed successfully, first.
Reference:Apache Hadoop,Module 4: MapReduce

Explanation:
Correct answer(s):
Speculative Execution
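Speculative execution can be toggled per job; a sketch using the old (mapred) API setters, with illustrative choices for the flags:

import org.apache.hadoop.mapred.JobConf;

public class SpeculativeExecutionExample {
    public static void configure(JobConf conf) {
        conf.setMapSpeculativeExecution(true);      // allow redundant attempts for slow map tasks
        conf.setReduceSpeculativeExecution(false);  // disable speculative attempts for reduce tasks
    }
}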

43. Question

Which of the following describes how a client reads a file from HDFS?

The client queries the NameNode for the block location(s). The NameNode returns the block
location(s) to the client. The client reads the data directly off the DataNode(s).

The client queries all DataNodes in parallel. The DataNode that contains the requested data
responds directly to the client. The client reads the data directly off the DataNode.

The client contacts the NameNode for the block location(s). The NameNode then queries the
DataNodes for block locations. The DataNodes respond to the NameNode, and the NameNode
redirects the client to the DataNode that holds the requested data block(s). The client then reads
the data directly off the DataNode.

The client contacts the NameNode for the block location(s). The NameNode contacts
theDataNode that holds the requested data block. Data is transferred from the DataNode to the
NameNode, and then from the NameNode to the client.

Incorrect

The client communication to HDFS happens using the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file on HDFS. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives. Client applications can talk directly to a DataNode, once the NameNode has provided the location of the data.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers,How the Client
communicates with HDFS?

Explanation:
Correct answer(s):
The client contacts the NameNode for the block location(s). The NameNode then queries the
DataNodes for block locations. The DataNodes respond to the NameNode, and the NameNode
redirects the client to the DataNode that holds the requested data block(s). The client then reads
the data directly off the DataNode.

44. Question

You have an employee who is a Data Analyst and is very comfortable with SQL. He would like to run ad-hoc analysis on data in your HDFS cluster. Which of the following is a data warehousing
software built on top of Apache Hadoop that defines a simple SQL-like query language well-suited
for this kind of user?

Pig

Hue

Sqoop

Hive

Oozie

Hadoop Streaming

Flume

Incorrect

Hive defines a simple SQL-like query language, called QL, that enables users
familiar with SQL to query the data. At the same time, this language also allows programmers who
are familiar with the MapReduce framework to be able to plug in their custom mappers and
reducers to perform more sophisticated analysis that may not be supported by the built-in
capabilities of the language. QL can also be extended with custom scalar functions (UDF’s),
aggregations (UDAF’s), and table functions (UDTF’s).

Reference:https://cwiki.apache.org/Hive/(Apache Hive, first sentence and second paragraph)

Explanation:
Correct answer(s):
Hive

45. Question

Which of the following best describes the workings of TextInputFormat?

Input file splits may cross line breaks. A line that crosses file splits is read by the
RecordReaders of both splits containing the broken line.

Input file splits may cross line breaks. A line that crosses file splits is read by the RecordReader
of the split that contains the beginning of the broken line.

The input file is split exactly at the line breaks, so each Record Reader will read a series of
complete lines.

Input file splits may cross line breaks. A line that crosses file splits is ignored.

Input file splits may cross line breaks. A line that crosses file splits is read by the RecordReader
of the split that contains the end of the broken line.

Incorrect

As the Map operation is parallelized the input file set is first split to several pieces
called FileSplits. If an individual file is so large that it will affect seek time it will be split to several
Splits. The splitting does not know anything about the input file’s internal logical structure, for
example line-oriented text files are split on arbitrary byte boundaries. Then a new map task is
created per FileSplit.
When an individual map task starts it will open a new output writer per configured reduce task. It
will then proceed to read its FileSplit using the RecordReader it gets from the specified
InputFormat. InputFormat parses the input and generates key-value pairs. InputFormat must also
handle records that may be split on the FileSplit boundary. For example TextInputFormat will read
the last line of the FileSplit past the split boundary and, when reading other than the first FileSplit,
TextInputFormat ignores the content up to the first newline.
Reference:How Map and Reduce operations are actually carried out
http://wiki.apache.org/hadoop/HadoopMapReduce(Map, second paragraph)
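As a rough sketch of what this means for a job (assuming the new mapreduce API; the class name is illustrative), TextInputFormat presents each mapper with complete lines: the key is the line's starting byte offset and the value is the whole line, even when that line straddles a split boundary:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// With TextInputFormat the framework delivers whole lines: the RecordReader that
// owns the start of a line reads past its split boundary to finish the line, and
// the next split skips everything up to its first newline.
public class LineMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Emit each complete line, with its starting byte offset as the value.
        context.write(line, offset);
    }
}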

Explanation:
Correct answer(s):
Input file splits may cross line breaks. A line that crosses file splits is read by the RecordReader
of the split that contains the beginning of the broken line.

46. Question

You need to create a job that does frequency analysis on input data. You will do this by writing a
Mapper that uses TextInputFormat and splits each value (a line of text from an input file) into
individual characters. For each one of these characters, you will emit the character as a key and
an IntWritable as the value. Since this will produce proportionally more intermediate data than
input data, which resources could you expect to be likely bottlenecks?

Processor and network I/O

Processor and RAM

Processor and disk I/O


Disk I/O and network I/O

Incorrect

Correct answer(s):
Processor and disk I/O
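A rough sketch of the mapper the question describes (class name is illustrative); every input character becomes a separate (Text, IntWritable) pair, which is why the intermediate data outgrows the input:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CharFrequencyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text character = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // One output pair per input character: intermediate data grows
        // proportionally larger than the input text itself.
        for (char c : line.toString().toCharArray()) {
            character.set(String.valueOf(c));
            context.write(character, ONE);
        }
    }
}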

47. Question

Your cluster's HDFS block size is 64 MB. You have a directory containing 100 plain text files, each
of which is 100 MB in size. The InputFormat for your job is TextInputFormat. How many Mappers
will run?

200

64

100

640

Incorrect

Each file would be split into two as the block size (64 MB) is less than the file size
(100 MB), so 200 mappers would be running.
Note:
If you're not compressing the files, then Hadoop will process your large files (say 10 GB) with a
number of mappers related to the block size of the file.
Say your block size is 64 MB; then you will have ~160 mappers processing this 10 GB file (160*64 MB ~=
10 GB). Depending on how CPU intensive your mapper logic is, this might be an
acceptable block size, but if you find that your mappers are executing in sub-minute times, then
you might want to increase the work done by each mapper (by increasing the block size to 128,
256, or 512 MB – the actual size depends on how you intend to process the data).
Reference:http://stackoverflow.com/questions/11014493/hadoop-mapreduce-appropriate-inputfiles-
size(first answer, second paragraph)

Explanation:
Correct answer(s):
200

48. Question

You have the following key value pairs as output from your Map task:
(The, 1)
(Fox, 1)
(Runs, 1)
(Faster, 1)
(Than, 1)
(The, 1)
(Dog, 1)
How many keys will be passed to the reducer?

Two

Four

Three

Six

One

Five

Incorrect

The two (The, 1) pairs share the same key, so they are grouped together; the remaining five keys are distinct, so six unique keys are passed to the reducer.
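A sketch of the reduce side (a word-count style reducer, names illustrative): the framework groups pairs by key, so reduce() runs once per distinct key, i.e. six times for the data above:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Called once per distinct key: The, Fox, Runs, Faster, Than, Dog -> 6 calls.
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();   // (The, [1, 1]) sums to 2
        }
        context.write(key, new IntWritable(sum));
    }
}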

Explanation:
Correct answer(s):
Six

49. Question

In a MapReduce job, the reducer receives all values associated with the same key. Which
statement is most accurate about the ordering of these values?

The values are in sorted order.

Since the values come from mapper outputs, the reducers will receive contiguous sections of
sorted values.

The values are arbitrarily ordered, but multiple runs of the same MapReduce job will always
have the same ordering.

The values are arbitrarily ordered, and the ordering may vary from run to run of the same
MapReduce job.

Incorrect

Note: sorting in MapReduce applies to keys, not to the values within a key group; without a
secondary sort, the order of values seen by a reducer is not guaranteed.
*The Mapper outputs are sorted and then partitioned per Reducer.
*The intermediate, sorted outputs are always stored in a simple (key-len, key, value-len, value)
format.
*Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the
relevant partition of the output of all the mappers, via HTTP.
*A MapReduce job usually splits the input data-set into independent chunks which are processed
by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps,
which are then input to the reduce tasks.
*The MapReduce framework operates exclusively on <key, value> pairs, that is, the framework
views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs
as the output of the job, conceivably of different types.
The key and value classes have to be serializable by the framework and hence need to implement
the Writable interface. Additionally, the key classes have to implement the WritableComparable
interface to facilitate sorting by the framework.
Reference:MapReduce Tutorial

Explanation:
Correct answer(s):
The values are arbitrarily ordered, and the ordering may vary from run to run of the same
MapReduce job.

50. Question

What happens in a MapReduce job when you set the number of reducers to zero?

No reducer executes, but the mappers generate no output.

No reducer executes, and the output of each mapper is written to a separate file in HDFS.

No reducer executes, but the outputs of all the mappers are gathered together and written to a
single file in HDFS.

Setting the number of reducers to zero is invalid, and an exception is thrown.

Incorrect

*It is legal to set the number of reduce-tasks to zero if no reduction is desired.


In this case the outputs of the map-tasks go directly to the FileSystem, into the output path set by
setOutputPath(Path). The framework does not sort the map-outputs before writing them out to the
FileSystem.
*Often, you may want to process input data using a map function only. To do this, simply set
mapreduce.job.reduces to zero. The MapReduce framework will not create any reducer tasks.

Rather, the outputs of the mapper tasks will be the final output of the job.
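A minimal sketch of a map-only job driver (input and output paths come from the command line; the identity Mapper is used for brevity):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyJob {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "map-only example");
        job.setJarByClass(MapOnlyJob.class);
        job.setMapperClass(Mapper.class);   // identity mapper: passes (offset, line) through
        job.setNumReduceTasks(0);           // zero reducers: no shuffle, no sort
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Each map task writes its own part file directly to the output path in HDFS.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}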

Explanation:
Correct answer(s):
No reducer executes, and the output of each mapper is written to a separate file in HDFS.

51. Question

Which happens if the NameNode crashes?

HDFS becomes unavailable until the NameNode is restored.

The Secondary NameNode seamlessly takes over and there is no service interruption.

HDFS becomes temporarily unavailable until an administrator starts redirecting client requests
to the Secondary NameNode.

HDFS becomes unavailable to new MapReduce jobs, but running jobs will continue until
completion.

Incorrect

The NameNode is a Single Point of Failure for the HDFS Cluster. When the
NameNode goes down, the file system goes offline.
Reference:24 Interview Questions & Answers for Hadoop MapReduce developers,What is a
NameNode? How many instances of NameNode run on a Hadoop Cluster?

Explanation:
Correct answer(s):
HDFS becomes unavailable until the NameNode is restored.

52. Question

What is the standard configuration of slave nodes in a Hadoop cluster?

Each slave node either runs a TaskTracker or a DataNode daemon, but not both.

Each slave node runs a JobTracker and a DataNode daemon.

Each slave node runs a TaskTracker and a DataNode daemon.

Each slave node runs a TaskTracker, but only a fraction of the slave nodes run DataNode
daemons.

Each slave node runs a DataNode daemon, but only a fraction of the slave nodes run
TaskTrackers.

Incorrect
A single instance of a TaskTracker runs on each slave node, as a separate JVM process.
A single instance of a DataNode daemon runs on each slave node, also as a separate JVM process.
One or multiple task instances run on each slave node, each as a separate JVM process. The
number of task instances can be controlled by configuration; typically, a high-end machine is
configured to run more task instances.

Reference:24 Interview Questions & Answers for Hadoop MapReduce developers,What is


configuration of a typical slave node on Hadoop cluster? How many JVMs run on a slave node?

Explanation:
Correct answer(s):
Each slave node runs a TaskTracker and a DataNode daemon.

53. Question

What is a SequenceFile?

A SequenceFile contains a binary encoding of an arbitrary number of heterogeneous writable objects.

A SequenceFile contains a binary encoding of an arbitrary number of homogeneous writable objects.

A SequenceFile contains a binary encoding of an arbitrary number of key-value pairs. Each key
must be the same type. Each value must be the same type.

A SequenceFile contains a binary encoding of an arbitrary number of WritableComparable objects, in sorted order.

Incorrect

SequenceFile is a flat file consisting of binary key/value pairs.


There are 3 different SequenceFile formats:
Uncompressed key/value records.
Record compressed key/value records – only ‘values’ are compressed here.
Block compressed key/value records – both keys and values are collected in ‘blocks’ separately
and compressed. The size of the ‘block’ is configurable.
Reference:http://wiki.apache.org/hadoop/SequenceFile
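A brief sketch of creating a SequenceFile (the path is a placeholder; the older createWriter overload is assumed): the key class and value class are fixed when the file is created, and every appended record must use those types:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/user/example/data.seq"); // hypothetical path

        // The key class and value class are declared once for the whole file.
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, path, Text.class, IntWritable.class);
        try {
            writer.append(new Text("alpha"), new IntWritable(1));
            writer.append(new Text("beta"), new IntWritable(2));
        } finally {
            writer.close();
        }
    }
}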

Explanation:
Correct answer(s):
A SequenceFile contains a binary encoding of an arbitrary number of key-value pairs. Each key
must be the same type. Each value must be the same type.

54. Question

All keys used for intermediate output from mappers must do which of the following:

Override isSplitable

Be a subclass of FileInputFormat

Implement WritableComparable

Use a comparator for speedy sorting

Be compressed using a splittable compression algorithm.

Incorrect

The MapReduce framework operates exclusively on <key, value> pairs, that is, the
framework views the input to the job as a set of <key, value> pairs and produces a set of <key,
value> pairs as the output of the job, conceivably of different types.
The key and value classes have to be serializable by the framework and hence need to implement
the Writable interface. Additionally, the key classes have to implement the WritableComparable
interface to facilitate sorting by the framework.

Reference:MapReduce Tutorial
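A hedged sketch of a custom intermediate key (field names are illustrative): it implements WritableComparable, so the framework can both serialize it and sort by it:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class YearTemperatureKey implements WritableComparable<YearTemperatureKey> {
    private int year;
    private int temperature;

    public YearTemperatureKey() { }   // no-arg constructor required for deserialization

    public YearTemperatureKey(int year, int temperature) {
        this.year = year;
        this.temperature = temperature;
    }

    @Override
    public void write(DataOutput out) throws IOException {    // serialization
        out.writeInt(year);
        out.writeInt(temperature);
    }

    @Override
    public void readFields(DataInput in) throws IOException { // deserialization
        year = in.readInt();
        temperature = in.readInt();
    }

    @Override
    public int compareTo(YearTemperatureKey other) {          // used by the framework's sort
        int cmp = Integer.compare(year, other.year);
        return cmp != 0 ? cmp : Integer.compare(temperature, other.temperature);
    }
}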

Explanation:
Correct answer(s):
Implement WritableComparable

55. Question

You are developing a combiner that takes as input Text keys, IntWritable values, and emits Text
keys, IntWritable values. Which interface should your class implement?

Reducer <Text, Text, IntWritable, IntWritable>

Reducer <Text, IntWritable, Text, IntWritable>

Combiner <Text, Text, IntWritable, IntWritable>

Mapper <Text, IntWritable, Text, IntWritable>

Combiner <Text, IntWritable, Text, IntWritable>

Incorrect

Hadoop has no Combiner interface: a combiner is specified as a Reducer class (via
setCombinerClass), so its input key/value types must match the mapper's output types and its
output types must match the reducer's input types.

Explanation:
Correct answer(s):
Reducer <Text, IntWritable, Text, IntWritable>
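A sketch of such a combiner under the old mapred API (class name is illustrative); it is just a Reducer whose input types match the mapper's output and whose output types match the reducer's input, registered on the JobConf:

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// A combiner performs local, map-side aggregation before the shuffle.
public class SumCombiner extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum)); // partial sum for this map task
    }
}

// Registered in the driver (conf is a JobConf):
//   conf.setCombinerClass(SumCombiner.class);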

56. Question

Custom programmer-defined counters in MapReduce are:

Lightweight devices for bookkeeping within MapReduce programs.

Lightweight devices for ensuring the correctness of a MapReduce program. Mappers increment
counters, and reducers decrement counters. If at the end of the program the counters read zero,
then you are sure that the job completed correctly.

Lightweight devices for synchronization within MapReduce programs. You can use counters to
coordinate execution between a mapper and a reducer.

Incorrect

Counters are a useful channel for gathering statistics about the job: for quality control, or for
application-level statistics. They are also useful for problem diagnosis. Hadoop maintains some
built-in counters for every job, which report various metrics for your job.
Hadoop MapReduce also allows the user to define a set of user-defined counters that can be
incremented (or decremented by specifying a negative value as the parameter), by the driver,
mapper or the reducer.
Reference:Iterative MapReduce and Counters,Introduction to Iterative MapReduce and Counters
http://hadooptutorial.wikispaces.com/Iterative+MapReduce+and+Counters(counters, second
paragraph)
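A small sketch of a user-defined counter (new mapreduce API; the enum and class names are illustrative):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    // Custom, programmer-defined counters are usually declared as an enum.
    public enum RecordQuality { WELL_FORMED, MALFORMED }

    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        if (line.getLength() == 0) {
            // Bookkeeping only: counters are aggregated by the framework and
            // reported with the job; they do not affect the data flow.
            context.getCounter(RecordQuality.MALFORMED).increment(1);
            return;
        }
        context.getCounter(RecordQuality.WELL_FORMED).increment(1);
        context.write(line, ONE);
    }
}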

Explanation:
Correct answer(s):
Lightweight devices for bookkeeping within MapReduce programs.

57. Question

Which of the following statements best describes how a large (100 GB) file is stored in HDFS?

The file is divided into fixed-size blocks, which are stored on multiple datanodes. Each block is
replicated three times by default.HDFS guarantees that different blocks from the same file are
never on the same datanode.

The file is divided into variable size blocks, which are stored on multiple data nodes. Each block
is replicated three times by default.

The master copy of the file is stored on a single datanode. The replica copies are divided into
fixed-size blocks, which are stored on multiple datanodes.

The file is divided into fixed-size blocks, which are stored on multiple datanodes. Each block is
replicated three times by default. Multiple blocks from the same file might reside on the same
datanode.

The file is replicated three times by default. Each copy of the file is stored on a separate
datanode.

Incorrect

HDFS is designed to reliably store very large files across machines in a large
cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the
same size. The blocks of a file are replicated for fault tolerance. The block size and replication
factor are configurable per file. An application can specify the number of replicas of a file. The
replication factor can be specified at file creation time and can be changed later. Files in HDFS are
write-once and have strictly one writer at any time. The NameNode makes all decisions regarding
replication of blocks. HDFS uses a rack-aware replica placement policy. In the default configuration
there are a total of 3 copies of a data block on HDFS: 2 copies are stored on datanodes on the same
rack and the 3rd copy on a different rack. Note that these placement rules apply to replicas of the
same block; HDFS does not guarantee that different blocks of the same file land on different
datanodes.
Reference:24 Interview Questions & Answers for Hadoop MapReduce developers,How the HDFS
Blocks are replicated?
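A hedged sketch showing that block size and replication factor are per-file settings supplied when a file is created (path and values are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCreateExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/user/example/large-file.bin"); // hypothetical path

        // create(path, overwrite, bufferSize, replication, blockSize):
        // the file is stored as a sequence of fixed-size blocks, and each
        // block is replicated 'replication' times across datanodes.
        short replication = 3;
        long blockSize = 64L * 1024 * 1024; // 64 MB
        try (FSDataOutputStream out = fs.create(path, true, 4096, replication, blockSize)) {
            out.writeUTF("example payload");
        }

        // The replication factor can also be changed after creation:
        fs.setReplication(path, (short) 2);
    }
}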

Explanation:
Correct answer(s):
The file is divided into fixed-size blocks, which are stored on multiple datanodes. Each block is
replicated three times by default. Multiple blocks from the same file might reside on the same
datanode.

58. Question

What is a Writable?

Writable is an abstract class that all keys and values in MapReduce must extend. Classes
extending this abstract base class must implement methods for serializing and deserializing
themselves

Writable is an interface that all keys, but not values, in MapReduce must implement. Classes
implementing this interface must implement methods for serializing and deserializing themselves.

Writable is an interface that all keys and values in MapReduce must implement. Classes
implementing this interface must implement methods for serializing and deserializing themselves.

Writable is an abstract class that all keys, but not values, in MapReduce must extend. Classes
extending this abstract base class must implement methods for serializing and deserializing
themselves.
Incorrect

public interface Writable


A serializable object which implements a simple, efficient, serialization protocol, based on
DataInput and DataOutput.
Any key or value type in the Hadoop Map-Reduce framework implements this interface.
Implementations typically implement a static read(DataInput) method which constructs a new
instance, calls readFields(DataInput) and returns the instance.
Reference: org.apache.hadoop.io,Interface Writable
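A brief sketch of a value type implementing Writable (field names are illustrative); a key type would implement WritableComparable instead, adding compareTo:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class PageViewWritable implements Writable {
    private long timestamp;
    private String url = "";

    public PageViewWritable() { }   // no-arg constructor needed for deserialization

    @Override
    public void write(DataOutput out) throws IOException {    // serialize the fields
        out.writeLong(timestamp);
        out.writeUTF(url);
    }

    @Override
    public void readFields(DataInput in) throws IOException { // deserialize in the same order
        timestamp = in.readLong();
        url = in.readUTF();
    }
}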

Explanation:
Correct answer(s):
Writable is an interface that all keys and values in MapReduce must implement. Classes
implementing this interface must implement methods for serializing and deserializing themselves.

59. Question

MapReduce is well-suited for all of the following applications EXCEPT? (Choose one):

Text mining on a large collection of unstructured documents.

Analysis of large amounts of Web logs (queries, clicks, etc.).

Online transaction processing (OLTP) for an e-commerce Website.

Graph mining on a large social network (e.g., Facebook friends network).

Incorrect

Hadoop Map/Reduce is designed for batch-oriented workloads.


MapReduce is well suited for data warehousing (OLAP), but not for OLTP.

Explanation:
Correct answer(s):
Online transaction processing (OLTP) for an e-commerce Website.

We do not provide actual exam questions from any vendor like Microsoft, Cisco, Oracle, EMC etc.