How is reporting controlled in Hadoop?
In MapReduce, ideally how many mappers should be configured on a slave?
List the configuration parameters that have to be specified when running a MapReduce job.
What is the function of the MapReduce Partitioner?
Why Hadoop MapReduce?
What are the identity mapper and identity reducer?
What is the relationship between Job and Task in Hadoop?
How to submit extra files (jars, static files) for a MapReduce job at runtime in Hadoop?
What daemons run on the master node and slave nodes?
Explain the partitioning, shuffle and sort phases in MapReduce.
For a job in Hadoop, is it possible to change the number of mappers to be created?
What is streaming?
Is the output of the mapper or the output of the partitioner written to local disk?