Explain the process that overrides the replication factor in HDFS?
Answer / Mukesh Kumar Chaudhary
In HDFS, the replication factor is set per file. The cluster-wide default comes from the 'dfs.replication' property in hdfs-site.xml, and you can override it for individual files or directories with the 'hdfs dfs -setrep' command. When you increase the replication factor, the NameNode schedules new replicas to be created on other DataNodes; when you decrease it, the NameNode selects the excess replicas and instructs the DataNodes holding them to delete them.
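For illustration, here is a minimal sketch of overriding the replication factor from the command line. The paths (/data, /data/sample.txt) are hypothetical examples, not from the original answer; the cluster must already have these files for the commands to succeed.

```shell
# Set the replication factor of a single (hypothetical) file to 2;
# -w waits until the replication target is actually reached
hdfs dfs -setrep -w 2 /data/sample.txt

# Apply a new replication factor recursively to a directory
hdfs dfs -setrep -R 3 /data

# Verify: the second column of -ls output shows the replication factor
hdfs dfs -ls /data/sample.txt
```

Note that `-setrep` changes existing files only; files written afterwards still pick up the client's `dfs.replication` default unless it is overridden at write time.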
How to Delete file from HDFS?
What is Fault Tolerance in Hadoop HDFS?
How to Delete directory and files recursively from HDFS?
How data or file is written into Hadoop HDFS?
What is hdfs in big data?
Why does HDFS store data on commodity hardware despite the higher chance of failures?
What are tools available to send the streaming data to hdfs?
Define data integrity. How does HDFS ensure the integrity of the data blocks it stores?
Why is block size large in Hadoop?
What do you mean by metadata in Hadoop?
What is a difference between an input split and hdfs block?
Explain how indexing in hdfs is done?