How does HDFS ensure data integrity of data blocks stored in Hadoop HDFS?
Answer / Nawal Kishore
HDFS ensures data integrity primarily with checksums. When a client writes a block, it computes a checksum (CRC32C by default) for every 512-byte chunk of data, and each DataNode stores these checksums in a separate metadata file alongside the block. When a client reads the block, it recomputes the checksums and compares them against the stored values; a mismatch marks that replica as corrupt, and the client transparently falls back to another replica. Replication then restores redundancy: the NameNode schedules a new copy of the block from a healthy replica and the corrupt one is deleted. In addition, each DataNode runs a background block scanner that periodically re-verifies the checksums of its stored blocks, which catches silent disk corruption even for data that is rarely read.
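The per-chunk checksum idea can be sketched in plain Python. This is an illustration only, not HDFS code: real HDFS uses CRC32C over chunks of `dfs.bytes-per-checksum` bytes (512 by default) and stores the sums in a block metadata file, but the verify-on-read logic is the same in spirit.

```python
import zlib

BYTES_PER_CHECKSUM = 512  # mirrors the HDFS default dfs.bytes-per-checksum


def compute_checksums(data: bytes) -> list[int]:
    """Compute one CRC32 checksum per 512-byte chunk (done on write)."""
    return [zlib.crc32(data[i:i + BYTES_PER_CHECKSUM])
            for i in range(0, len(data), BYTES_PER_CHECKSUM)]


def verify(data: bytes, checksums: list[int]) -> bool:
    """Recompute checksums on read and compare; a mismatch means corruption."""
    return compute_checksums(data) == checksums


block = b"example block data " * 100
sums = compute_checksums(block)      # stored alongside the block on write

assert verify(block, sums)           # clean read: checksums match

corrupted = b"X" + block[1:]         # simulate a flipped byte on disk
assert not verify(corrupted, sums)   # reader detects the corrupt replica
```

In real HDFS, the failed verification would cause the client to report the bad replica to the NameNode and retry the read against another DataNode.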
Write the command to copy a file from Linux to HDFS.
Who divides the file into blocks while storing it in HDFS in Hadoop?
How is data or a file read in Hadoop HDFS?
What are file permissions in HDFS, and how does HDFS check permissions for a file or directory?
How does HDFS achieve good throughput?
What is Hadoop Distributed File System- HDFS?
What does HDFS mean?
What is Fault Tolerance in HDFS?
Clarify the difference between NAS and HDFS.
List the various HDFS daemons in an HDFS cluster.
What are the key features of HDFS?