HDFS stores data on commodity hardware, which has a higher chance of failure. So how does HDFS ensure the fault tolerance of the system?
Answer / Anil Kumar Verma
HDFS ensures fault tolerance by replicating each block across multiple DataNodes. By default it keeps three copies of every data block (the replication factor), so the loss of a single DataNode does not cause data loss; when the NameNode detects a dead DataNode through missed heartbeats, it schedules re-replication of the affected blocks onto healthy nodes. For its own metadata, the NameNode records every file system change in an edit log (journal) and replays this log against its last fsimage checkpoint to recover after a failure.
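For reference, the default replication factor described above is controlled by the `dfs.replication` property in `hdfs-site.xml` (a sketch; `3` is the stock default, and the path shown is illustrative):

```xml
<!-- hdfs-site.xml: number of replicas kept for each data block -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```

The factor can also be changed per file after the fact, e.g. `hdfs dfs -setrep -w 2 /path/to/file`, where `-w` waits until the new replication level is actually reached.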
Why is 'reading' done in parallel in HDFS while 'writing' is not?
Can you change the block size of HDFS files?
How are files stored in HDFS?
How do file permissions work in HDFS?
How is indexing done in Hadoop HDFS?
What is Block in HDFS?
What do you mean by metadata in HDFS? List the files associated with metadata.
Write the command to copy a file from Linux to HDFS.
Define data integrity. How does HDFS ensure the data integrity of data blocks stored in HDFS?