Why is block size set to 128 MB in Hadoop HDFS?
Answer / Deepak Shukla
HDFS uses a large default block size (128 MB since Hadoop 2.x) to keep the ratio of seek time to transfer time low and to limit metadata overhead on the NameNode, which tracks every block in memory. Larger blocks mean fewer blocks per file, so there is less metadata to manage and more time is spent streaming data rather than seeking. The default is not pushed even higher because fewer blocks per file would reduce the parallelism available to frameworks like MapReduce, which typically schedule one task per block. 128 MB is a practical middle ground between these concerns.
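The metadata side of this trade-off can be made concrete with a quick back-of-the-envelope calculation. The helper below is illustrative only, not part of any Hadoop API:

```python
# Back-of-the-envelope: how block size affects NameNode metadata load.
# Each block is an object the NameNode tracks in memory, so for a fixed
# amount of data, smaller blocks mean many more objects to manage.

def num_blocks(file_size_bytes, block_size_bytes):
    """Number of HDFS blocks needed for a file (the last block may be partial)."""
    return -(-file_size_bytes // block_size_bytes)  # ceiling division

MB = 1024 ** 2
GB = 1024 ** 3

one_tb = 1024 * GB
print(num_blocks(one_tb, 128 * MB))  # 8192 blocks at the 128 MB default
print(num_blocks(one_tb, 4 * MB))    # 262144 blocks at 4 MB -- 32x the metadata
```

For reference, the cluster-wide default comes from the `dfs.blocksize` property in hdfs-site.xml, and a client can override it per file at write time, e.g. `hdfs dfs -D dfs.blocksize=268435456 -put bigfile.dat /data/`.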
What are the two most commonly used commands in HDFS?
What is the difference between the MapReduce engine and an HDFS cluster?
What is the difference between the NameNode, Backup Node, and Checkpoint NameNode?
How do you create a directory in HDFS?
What is the difference between NAS (Network Attached Storage) and HDFS?
What happens if a block in Hadoop HDFS is corrupted?
What is the difference between an input split and an HDFS block?
Can you change the block size of HDFS files?
Define Hadoop archives. What is the command for archiving a group of files in HDFS?
Does HDFS allow a client to read a file that is already open for writing?
Can we change a document present in HDFS?