What is a block in HDFS? What is the default block size in Hadoop 1 and Hadoop 2? Can we change the block size?
Answer / Awanish Kumar Jaiswal
A block in HDFS is the smallest unit of data that the file system stores and manages: large files are split into fixed-size blocks, which are distributed and replicated across DataNodes. The default block size is 64 MB in Hadoop 1 and 128 MB in Hadoop 2. Yes, the block size can be changed according to your requirements, either cluster-wide via the dfs.blocksize property in hdfs-site.xml or per file at write time. Note that only the last block of a file may be smaller than the configured size.
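As a sketch of the cluster-wide approach (assuming a standard Hadoop 2 deployment where dfs.blocksize is the governing property; the 256 MB value here is only an illustration):

```xml
<!-- hdfs-site.xml: set the default block size to 256 MB (value is in bytes) -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
```

The same property can also be overridden for a single file at write time, e.g. `hdfs dfs -D dfs.blocksize=268435456 -put largefile /data/`. For the arithmetic: with a 128 MB block size, a 300 MB file occupies three blocks of 128 MB, 128 MB, and 44 MB, since only the final block is allowed to be partial.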
What is the HDFS block size?
HDFS stores data on commodity hardware, which has a higher chance of failure. So how does HDFS ensure the fault tolerance of the system?
How can one format Hadoop HDFS?
What is the difference between HDFS and NAS?
Explain the process that overrides the replication factor in HDFS?
Explain NameNode and DataNode in HDFS?
How does HDFS ensure the data integrity of data blocks stored in Hadoop HDFS?
Since the data is replicated thrice in HDFS, does it mean that any calculation done on one node will also be replicated on the other two?
Can multiple clients write into a Hadoop HDFS file concurrently?
What do you mean by metadata in HDFS? Where is it stored in Hadoop?
What do you mean by the high availability of a NameNode? How is it achieved?
Define HDFS and talk about its respective components?