Why is block size large in Hadoop?
Answer / Ashutosh Kumar Jatav
The block size in HDFS is large (128 MB by default) to keep the number of blocks per file small. Fewer blocks per file means less metadata for the NameNode to hold in memory, and disk seek time stays small relative to the time spent actually transferring data, which is what gives HDFS its high streaming throughput. Large blocks also improve data locality for MapReduce jobs, since a map task can process a whole block on the node that stores it instead of pulling many small pieces over the network.
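As a minimal sketch, the default can be overridden per cluster through the dfs.blocksize property in hdfs-site.xml; the 256 MB value below is only an example, and the value is given in bytes:

<!-- hdfs-site.xml: example override of the block size.
     134217728 bytes = 128 MB (the default); 268435456 bytes = 256 MB. -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>

To see the metadata effect in rough numbers: with a 128 MB block size a 1 GB file occupies 8 blocks, while with a 1 MB block size the same file would need 1,024 blocks, each of which the NameNode must track in memory.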
When does the NameNode enter Safe Mode?
What is the difference between an input split and an HDFS block?
How to copy a file from HDFS to the local file system?
What is the difference between Hadoop and HDFS?
What is "Non DFS Used" in the HDFS web console?
Why is block size set to 128 MB in Hadoop HDFS?
How to split a single HDFS block into RDD partitions?
Why do we need HDFS?
How does HDFS achieve good throughput?
How does a client read/write data in HDFS?
What are the main features of hdfs-site.xml?
What does a heartbeat in HDFS mean?