How does HDFS ensure the integrity of data blocks stored in HDFS?
Answer / Vishwasana
HDFS ensures data block integrity by using checksums. When a client writes a file, it computes a checksum for each chunk of data (512 bytes by default, set by dfs.bytes-per-checksum), and each DataNode stores these checksums in a hidden metadata file alongside the block. When a client reads a block, it recomputes the checksums and compares them with the stored values; if they do not match, the client reports the corrupt replica to the NameNode and reads the block from another DataNode holding a healthy replica. The NameNode then schedules re-replication of the block from a good copy and removal of the corrupt one. In addition, each DataNode runs a background block scanner that periodically verifies the checksums of the blocks it stores, so silent disk corruption is detected even for rarely read data.
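The chunk-level checksum scheme described above can be sketched in a few lines. This is a simplified illustration, not HDFS code: real HDFS uses CRC32C over 512-byte chunks and stores the checksums in a separate .meta file on the DataNode, while here plain CRC32 from Python's standard library stands in, and the "block" is just an in-memory byte string.

```python
import zlib

BYTES_PER_CHECKSUM = 512  # HDFS default, configured via dfs.bytes-per-checksum

def compute_checksums(block: bytes) -> list[int]:
    """Compute a CRC32 checksum for each 512-byte chunk of the block,
    as the HDFS client does when writing a file."""
    return [zlib.crc32(block[i:i + BYTES_PER_CHECKSUM])
            for i in range(0, len(block), BYTES_PER_CHECKSUM)]

def verify(block: bytes, stored: list[int]) -> bool:
    """Re-compute checksums on read and compare with the stored ones.
    A mismatch means the replica is corrupt and another replica
    should be read instead."""
    return compute_checksums(block) == stored

# Write path: compute and "store" checksums alongside the block.
block = b"some block data " * 1000
stored_checksums = compute_checksums(block)

# Read path: an intact replica verifies cleanly.
assert verify(block, stored_checksums)

# Flip one byte to simulate silent disk corruption: verification fails,
# which in HDFS would trigger a read from a different replica.
corrupted = block[:10] + b"X" + block[11:]
assert not verify(corrupted, stored_checksums)
```

The key design point this illustrates is that corruption is detected at read time (and by the periodic scanner), not prevented; recovery relies on replication, since a corrupt replica can always be replaced from a healthy copy.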
Is the hdfs block size reduced to achieve faster query results?
Explain what is a difference between an input split and hdfs block?
What happens when two users try to access the same file in HDFS?
What is a block in HDFS, why block size 64MB?
What is the difference between nas (network attached storage) and hdfs?
Explain what is heartbeat in hdfs?
Mention what is the difference between hdfs and nas?
What are problems with small files and hdfs?
How can one change the replication factor when data is already stored in HDFS?
Compare hbase vs hdfs?
Does HDFS allow a client to read a file which is already opened for writing in hadoop?
Explain how HDFS communicates with Linux native file system?