What happens if the block on Hadoop HDFS is corrupted?
Answer / Akshya Dixit
Every block replica in HDFS (Hadoop Distributed File System) is stored together with checksums, and corruption is detected in two ways: a client verifies checksums whenever it reads a block, and each DataNode runs a background block scanner that periodically re-verifies its replicas. When a mismatch is found, the client (or the scanner) reports the bad replica to the NameNode. The NameNode marks that replica as corrupt, stops serving reads from it, and schedules re-replication: a healthy replica of the same block is copied from another DataNode to a new DataNode until the file's replication factor is restored, after which the corrupt replica is deleted. Only if every replica of a block is corrupt is the data actually lost; such blocks are reported by `hdfs fsck`. This checksum-and-re-replicate mechanism is what preserves data integrity and reliability in HDFS.
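The detection step can be illustrated with a small Python sketch (an illustration only, not actual Hadoop code): HDFS stores a CRC32 checksum for every fixed-size chunk of a block (512 bytes by default, per `dfs.bytes-per-checksum`) and re-verifies those checksums on read; any mismatch marks the replica corrupt.

```python
import zlib

CHUNK = 512  # bytes per checksum, analogous to HDFS's dfs.bytes-per-checksum default

def checksums(data: bytes) -> list[int]:
    """Compute a CRC32 per 512-byte chunk, as a DataNode does when a block is written."""
    return [zlib.crc32(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def find_corrupt_chunks(data: bytes, expected: list[int]) -> list[int]:
    """Re-verify chunks against stored checksums; return indices of chunks that fail."""
    return [i for i, c in enumerate(checksums(data)) if c != expected[i]]

block = bytes(range(256)) * 8        # a healthy 2 KB "block replica"
stored = checksums(block)            # checksums recorded at write time

corrupted = bytearray(block)
corrupted[700] ^= 0xFF               # flip bits in chunk 1 to simulate disk corruption

print(find_corrupt_chunks(block, stored))             # healthy replica: []
print(find_corrupt_chunks(bytes(corrupted), stored))  # corrupt replica: [1]
```

In a real cluster you do not run this yourself; the DataNode and client do the verification, and `hdfs fsck / -list-corruptfileblocks` lists any blocks whose replicas are all corrupt.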
Does HDFS allow a client to read a file which is already opened for writing?
What do you mean by block scanner in HDFS?
How is data or a file written into Hadoop HDFS?
What is "non-DFS used" in the HDFS web console?
Can multiple clients write into an HDFS file concurrently?
What are the different file permissions in HDFS at the file or directory level?
How is data or a file read in HDFS?
How can one format Hadoop HDFS?
What do you mean by meta information in HDFS?
How to create users in Hadoop HDFS?
How does HDFS provide good throughput?
How to delete a directory from HDFS?