Explain task granularity
No answer has been posted for this question yet.
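Task granularity in MapReduce generally refers to how finely a job is divided into tasks: the framework breaks the input into many independent splits (one map task per split, roughly one per HDFS block by default), and the job author chooses the number of reduce tasks. Running many more tasks than there are nodes improves load balancing and makes recovery from a failed or slow task cheap, because only a small unit of work has to be redone. Below is a minimal driver sketch against the standard org.apache.hadoop.mapreduce API; the class name GranularityDemo, the 64 MB split cap, and the choice of 8 reducers are illustrative assumptions, and the built-in TokenCounterMapper / IntSumReducer classes are used only to keep the example short.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class GranularityDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "granularity demo");
        job.setJarByClass(GranularityDemo.class);

        // Built-in mapper/reducer so the driver stays small: counts tokens.
        job.setMapperClass(TokenCounterMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Map-side granularity: one map task per input split. Capping the
        // split size at 64 MB forces large files to be broken into more,
        // smaller map tasks.
        FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);

        // Reduce-side granularity: the number of reduce tasks (and output
        // partitions) is chosen explicitly by the job author.
        job.setNumReduceTasks(8);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Tuning these two knobs trades scheduling overhead against parallelism: very coarse tasks under-use the cluster and make stragglers expensive, while extremely fine tasks spend a disproportionate share of their time on task startup and bookkeeping.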
What are the benefits of Spark over MapReduce?
When are the reducers started in a MapReduce job?
What does a 'MapReduce Partitioner' do?
Explain what you understand by speculative execution.
How is indexing done in HDFS?
List Hadoop's three configuration files.
What is shuffling in MapReduce?
What are the four basic parameters of a reducer? (See the sketch after this list.)
Does the MapReduce programming model provide a way for reducers to communicate with each other, i.e. can one reducer communicate with another reducer within a job?
How does Hadoop MapReduce work?
What is a Combiner in MapReduce?
Explain how MapReduce works.
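Several of the questions above touch the same org.apache.hadoop.mapreduce API: the reducer's four basic parameters are usually taken to be its generic type arguments (input key, input value, output key, output value), a Combiner is typically a Reducer class run map-side to pre-aggregate intermediate output, and a Partitioner decides which reduce task receives each intermediate key. Here is a minimal sketch of those pieces; the class names SumReducer and FirstLetterPartitioner are illustrative, not taken from any posted answer.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;

// The four generic parameters of a Reducer: input key, input value,
// output key, and output value types.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all counts seen for this key; usable both as the reducer and as
        // a map-side combiner, since summing is associative and commutative.
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

// A Partitioner decides which reduce task (partition) receives each
// intermediate key emitted by the mappers.
class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;
        }
        // Route keys by their first character, kept non-negative and in range.
        return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
    }
}
```

In a job driver these would be wired up with job.setReducerClass(SumReducer.class), job.setCombinerClass(SumReducer.class), and job.setPartitionerClass(FirstLetterPartitioner.class). Note that reducers never exchange data with one another directly; the MapReduce model deliberately provides no inter-reducer communication.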