For a job in Hadoop, is it possible to change the number of mappers to be created?
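In short: yes, indirectly. Hadoop creates one map task per input split, so you cannot set a mapper count directly (in classic MapReduce, mapred.map.tasks is only a hint); instead you change the split size, e.g. via the mapreduce.input.fileinputformat.split.minsize and mapreduce.input.fileinputformat.split.maxsize properties. The sketch below is a plain-Java illustration of the arithmetic FileInputFormat uses (splitSize = max(minSize, min(maxSize, blockSize))), with hypothetical file and block sizes, not a full Hadoop job:

```java
// Minimal sketch: how split size determines the number of mappers.
// Assumption: sizes below (1 GB file, 128 MB block) are illustrative only.
public class MapperCountSketch {
    // The split-size formula used by FileInputFormat.computeSplitSize.
    static long splitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    // Number of map tasks is roughly ceil(fileSize / splitSize).
    static long mapperCount(long fileSize, long splitSize) {
        return (fileSize + splitSize - 1) / splitSize;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        long fileSize  = 1024 * mb; // a 1 GB input file
        long blockSize = 128 * mb;  // default HDFS block size

        // Defaults (minSize = 1, maxSize = Long.MAX_VALUE): split = block size.
        long defaultSplit = splitSize(blockSize, 1L, Long.MAX_VALUE);
        System.out.println(mapperCount(fileSize, defaultSplit)); // 8 mappers

        // Raising split.minsize to 256 MB doubles the split size,
        // halving the number of mappers for the same input.
        long largerSplit = splitSize(blockSize, 256 * mb, Long.MAX_VALUE);
        System.out.println(mapperCount(fileSize, largerSplit)); // 4 mappers
    }
}
```

In a real job you would set these properties on the Configuration (or with -D on the command line) before submitting; the JobTracker/ResourceManager then launches one map task per resulting split.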
Explain what combiners are and when you should use a combiner in a MapReduce job?
What are the identity mapper and identity reducer? In which cases can we use them?
Which file controls reporting in Hadoop?
What is the role of coalesce() and repartition() in MapReduce?
When the NameNode is down, what happens to the JobTracker?
What is a "reducer" in Hadoop?
When are the reducers started in a MapReduce job?
MapReduce jobs take too long. What can be done to improve the performance of the cluster?
What are the methods in the Mapper interface?
Define MapReduce.
Is the output of the mapper or the output of the partitioner written to local disk?
How do you create a custom key and a custom value in a MapReduce job?