Why is block size set to 128 MB in HDFS?
Answer / Tanzeem Akhter
The default block size in HDFS is 128 MB (it was 64 MB before Hadoop 2.x). A large block size was chosen for several reasons:
1. Seek-time amortization: with large blocks, the time spent seeking to the start of a block is a tiny fraction of the time spent transferring its data, so disk and network throughput dominate.
2. NameNode metadata overhead: the NameNode keeps the metadata for every block in memory. Fewer, larger blocks per file means fewer objects to track, so the cluster can store more data for the same NameNode memory.
3. Processing efficiency: a MapReduce map task typically processes one block. Larger blocks mean fewer tasks to schedule and less per-task startup overhead when processing large files.
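The metadata argument (reason 2) can be made concrete with a little arithmetic. This is a minimal sketch; the 10 GB file size and the `num_blocks` helper are illustrative assumptions, not part of HDFS itself:

```python
# Sketch: why a large block size keeps NameNode metadata small.
# The file size and block sizes below are illustrative assumptions.

MB = 1024 * 1024

def num_blocks(file_size_bytes, block_size_bytes):
    """Blocks needed to store a file (the last block may be partially full)."""
    return -(-file_size_bytes // block_size_bytes)  # ceiling division

file_size = 10 * 1024 * MB  # a 10 GB file

# Fewer blocks -> fewer entries the NameNode must hold in memory,
# and fewer map tasks for a MapReduce job reading this file.
blocks_small = num_blocks(file_size, 4 * 1024)   # 4 KB, a typical local-FS block
blocks_hdfs  = num_blocks(file_size, 128 * MB)   # the HDFS default

print(blocks_small)  # 2621440 blocks at 4 KB
print(blocks_hdfs)   # 80 blocks at 128 MB
```

Note that 128 MB is only a default: the `dfs.blocksize` property in `hdfs-site.xml` changes it cluster-wide, and it can also be overridden per file at write time.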