Answer Posted / bindu
MapReduce is the data processing layer of Hadoop. Map and Reduce are the two tasks for processing data in Hadoop: the Map task transforms input records into intermediate key-value pairs, and the Reduce task aggregates those pairs to produce the final output.
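To make the two phases concrete, here is a minimal word-count sketch in plain Python. It only illustrates the Map, shuffle, and Reduce steps; a real Hadoop job would instead extend the `Mapper` and `Reducer` classes of the `org.apache.hadoop.mapreduce` Java API, and all function names below are hypothetical.

```python
from collections import defaultdict

def map_phase(line):
    # Map task: emit one intermediate (key, value) pair per word.
    return [(word, 1) for word in line.split()]

def reduce_phase(key, values):
    # Reduce task: aggregate all values collected for one key.
    return (key, sum(values))

def run_job(lines):
    # Shuffle: group the intermediate pairs by key before reducing
    # (in Hadoop this grouping happens between the two phases).
    grouped = defaultdict(list)
    for line in lines:
        for key, value in map_phase(line):
            grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

print(run_job(["big data", "big deal"]))  # {'big': 2, 'data': 1, 'deal': 1}
```

Because each `map_phase` call depends only on its own input line, and each `reduce_phase` call only on one key's values, both phases can be spread across many machines, which is the point of the model.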
How does InputSplit in MapReduce determine record boundaries correctly?
Is it important for Hadoop MapReduce jobs to be written in Java?
Is MapReduce required for Impala? Will Impala continue to work as expected if MapReduce is stopped?
What is the Hadoop MapReduce API contract for a key and value class?
Is it necessary to write a MapReduce job in Java?
Why is the output of map tasks stored (spilled) to local disk and not in HDFS?
What is the optimum number of reducers?
Explain combiners.
Where will the mapper's intermediate data be stored?
What are ‘maps’ and ‘reduces’?
Can the number of combiners be changed in MapReduce?
What is shuffling in MapReduce?
How much space will a split occupy in MapReduce?
What is a partitioner, and how can the user control which key goes to which reducer?
In which scenarios are MapReduce jobs more useful than Pig in Hadoop?