Explain task granularity in Hadoop MapReduce.
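Task granularity is how finely a job is divided into individual tasks. In Hadoop MapReduce, map-task granularity is set by the input split size: by default there is one map task per HDFS block (e.g. 128 MB), so a job's task count follows from the file size and the split size. Finer granularity improves load balancing and limits the cost of re-running a failed task, while coarser granularity reduces per-task scheduling overhead. A minimal Python sketch (illustrative only, not Hadoop's actual code) of how `FileInputFormat` derives the split size with the formula `max(minSize, min(maxSize, blockSize))`, and how that determines the number of map tasks:

```python
import math

def split_size(block_size, min_size=1, max_size=float("inf")):
    # Mirrors Hadoop's formula: max(minSize, min(maxSize, blockSize))
    return max(min_size, min(max_size, block_size))

def num_map_tasks(file_bytes, block_size, min_size=1, max_size=float("inf")):
    # One map task per input split, rounding up for the final partial split.
    return math.ceil(file_bytes / split_size(block_size, min_size, max_size))

MB = 1024 * 1024
# A 1 GB file on 128 MB blocks -> 8 map tasks (default granularity).
print(num_map_tasks(1024 * MB, 128 * MB))                     # 8
# Capping the split size at 64 MB doubles the task count (finer granularity).
print(num_map_tasks(1024 * MB, 128 * MB, max_size=64 * MB))   # 16
```

In a real job you would adjust the same knobs via `mapreduce.input.fileinputformat.split.minsize` and `mapreduce.input.fileinputformat.split.maxsize` rather than computing splits yourself.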
How to specify more than one directory as input in the Hadoop MapReduce Program?
What is a combiner, and where should you use it?
What is the difference between an RDBMS and Hadoop?
How to set the number of mappers for a MapReduce job?
What does conf.setMapperClass() do?
Explain what combiners are and when you should use a combiner in a MapReduce job.
What are the four essential parameters of a mapper?
For a Hadoop job, how will you write a custom partitioner?
What is speculative execution in Hadoop MapReduce?
Does the Partitioner run in its own JVM or share one with another process?
What are the identity mapper and reducer in MapReduce?
Is it mandatory to set input and output type/format in MapReduce?
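Several of the questions above (combiners, custom partitioners) come down to how intermediate keys are routed to reducers. In Hadoop you write a custom partitioner by extending `org.apache.hadoop.mapreduce.Partitioner` and overriding `getPartition(key, value, numPartitions)`, then registering it with `job.setPartitionerClass(...)`. A minimal Python sketch of the idea (illustrative pseudocode of the logic, not the Java API; the vowel-routing rule is a hypothetical example):

```python
def default_partition(key, num_reducers):
    # Hadoop's default HashPartitioner computes:
    #   (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks
    # Masking keeps the hash non-negative before the modulus.
    return (hash(key) & 0x7FFFFFFF) % num_reducers

def custom_partition(key, num_reducers):
    # Hypothetical custom rule: send keys starting with a vowel to
    # reducer 0, and hash all other keys over the remaining reducers.
    if key[:1].lower() in "aeiou":
        return 0
    return 1 + (hash(key) & 0x7FFFFFFF) % (num_reducers - 1)
```

A custom partitioner is useful when related keys must land in the same reducer (e.g. all records for one customer) or when the default hash distribution would skew the load.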