How is hdfs block size different from traditional file system block size?
Answer / Ajay Chaturtvedi
HDFS block size is configurable per file (the default is 128 MB in Hadoop 2.x and later, 64 MB in Hadoop 1.x), while the block size in a traditional file system is fixed by the filesystem format and is typically only a few kilobytes (e.g. 4 KB). HDFS deliberately uses much larger blocks to reduce the number of blocks per file, which cuts the metadata the NameNode must track, reduces seek overhead for large sequential reads, and improves data locality for processing frameworks like MapReduce.
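As a minimal sketch of how the block size is configured, the cluster-wide default can be overridden via the `dfs.blocksize` property in hdfs-site.xml (the 256 MB value below is just an illustrative choice, not a recommendation):

```xml
<!-- hdfs-site.xml: block size used for newly written files.
     The Hadoop 2.x default is 134217728 (128 MB);
     268435456 = 256 MB is shown here only as an example. -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
```

The setting applies only to files written after the change; existing files keep the block size they were written with. It can also be overridden per command, e.g. `hdfs dfs -D dfs.blocksize=268435456 -put file /path`.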
Why does HDFS perform replication, even though it results in data redundancy?
What is the difference between Input Split and an HDFS Block?
How does HDFS ensure the integrity of data blocks stored in HDFS?
How do you split a single HDFS block into RDD partitions?
Distinguish between an HDFS Block and an Input Split?
Replication causes data redundancy and consumes a lot of space, so why is it used in HDFS?
What is the benefit of the Distributed Cache? Why can't we just keep the file in HDFS and have the application read it from there?
Define HDFS and describe its respective components?
Why does HDFS store data on commodity hardware despite the higher chance of failures?
How to restart NameNode or all the daemons in Hadoop HDFS?
What is the difference between MapReduce engine and HDFS cluster?
What are the main properties of hdfs-site.xml file?