What happens if the block in HDFS is corrupted?
Answer / Ashish Kumar Pankaj
If a block in HDFS is corrupted, the DataNode detects the corruption during a client read or via its periodic block scanner (checksum verification) and reports it to the NameNode. The NameNode marks that replica as corrupt and stops directing clients to it. Because HDFS keeps multiple replicas of each block (three by default), the NameNode schedules re-replication of the block from a healthy replica onto another DataNode, and the corrupt replica is deleted once the replication factor is restored. Data is lost only if every replica of a block becomes corrupt or unavailable.
| Is This Answer Correct ? | 0 Yes | 0 No |
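As a hedged illustration of how this looks in practice (standard Hadoop CLI commands; the file path is a hypothetical placeholder), corrupt blocks can be found and inspected with `hdfs fsck`:

```shell
# Scan the whole namespace and list files that currently have corrupt blocks
hdfs fsck / -list-corruptfileblocks

# Show block-level detail (block IDs and DataNode locations) for one path
hdfs fsck /user/data/events.log -files -blocks -locations

# If every replica of a block is lost, the file cannot be recovered from HDFS:
# -delete removes the corrupted file; -move moves it to /lost+found instead
hdfs fsck /user/data/events.log -delete
```

Note that `fsck` here only reports and cleans up; unlike a traditional filesystem fsck, it does not repair blocks itself, since re-replication from healthy replicas is handled automatically by the NameNode.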
How is HDFS block size different from the traditional file system block size?
What is throughput? How does HDFS get a good throughput?
Compare hbase vs hdfs?
Does the HDFS go wrong? If so, how?
Why does HDFS store data on commodity hardware despite the higher chance of failures?
How do you delete a directory from HDFS?
How are file systems checked in HDFS?
What is the module in HDFS?
What do you mean by meta data in hdfs? List the files associated with metadata.
Why is block size set to 128 MB in HDFS?
How does HDFS ensure the integrity of data blocks stored in HDFS?
What is NameNode and DataNode in HDFS?