Why does HDFS perform replication, even though it results in data redundancy in Hadoop?
Answer / Umesh Kumar Chourasia
HDFS replicates data to ensure high availability and fault tolerance. By default, each block is stored on three DataNodes (controlled by the dfs.replication setting), so the cluster can keep serving data even if a DataNode fails. When the NameNode detects a failed DataNode, it schedules re-replication of the lost blocks from the surviving copies. The storage redundancy is the deliberate price paid for durability and availability.
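The replication factor is configurable rather than fixed. A minimal sketch of the cluster-wide default in hdfs-site.xml (the standard Hadoop property; 3 is the shipped default):

```xml
<!-- hdfs-site.xml: cluster-wide default replication factor -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```

It can also be changed per file after the fact with the shell, e.g. `hdfs dfs -setrep -w 2 /user/example/file.txt` (the `-w` flag waits for replication to complete; the path here is only illustrative).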
What do you mean by block scanner in HDFS?
What are the two most commonly used commands in HDFS?
What is the difference between an input split and an HDFS block?
How is indexing done in HDFS?
How does HDFS index data blocks? Explain.
How do you split a single HDFS block into RDD partitions?
Write the command to copy a file from Linux to HDFS.
While processing data from HDFS, does it execute code near the data?
What is throughput? How does HDFS provide good throughput?
Describe HDFS Federation.
How is indexing done in Hadoop HDFS?
What are the key features of HDFS?