What are the fundamental configuration parameters specified in MapReduce?
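The fundamental parameters every MapReduce job must specify are: the job's input and output locations in HDFS, the input and output formats, the Mapper and Reducer classes, the output key/value types, and the JAR containing the job code. A minimal driver sketch using the classic `org.apache.hadoop.mapreduce` API (the `MyMapper`/`MyReducer` classes here are hypothetical placeholders; a real job would override `map()` and `reduce()`):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class JobDriver {

    // Placeholder user classes; real jobs override map() and reduce().
    public static class MyMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> { }

    public static class MyReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> { }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "example-job");

        // 1. The JAR containing the job, located via a class inside it
        job.setJarByClass(JobDriver.class);

        // 2-3. Input and output locations in HDFS
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 4-5. Input and output formats
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // 6-7. Mapper and Reducer classes
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);

        // 8. Key/value types emitted by the reduce phase
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

This is a configuration sketch, not a complete job: it assumes the Hadoop client libraries are on the classpath and that the input/output paths are passed as command-line arguments.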
For a job in Hadoop, is it possible to change the number of mappers to be created?
How to optimize Hadoop MapReduce Job?
How to sort intermediate output based on values in MapReduce?
What main configuration parameters are specified in MapReduce?
What is SequenceFileInputFormat in Hadoop MapReduce?
When is it suggested to use a combiner in a MapReduce job?
How many Mappers run for a MapReduce job?
What are storage and compute nodes?
Explain the Reducer's reduce phase?
What does conf.setMapperClass do?
Does Partitioner run in its own JVM or shares with another process?
How to create custom key and custom value in MapReduce Job?