How are files kept in HDFS?
Answer / Anil Singh Dariyal
Files in HDFS are stored as blocks that are automatically replicated across multiple DataNodes to ensure data availability. By default, each block is replicated three times (a replication factor of 3), but this factor can be configured cluster-wide or per file according to the needs of your specific use case.
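As a minimal sketch of how the default replication factor is configured, the `dfs.replication` property in `hdfs-site.xml` sets the cluster-wide default (the value and surrounding cluster setup here are illustrative assumptions):

```xml
<!-- hdfs-site.xml: cluster-wide default replication factor.
     Assumes a cluster with at least this many DataNodes; a value
     higher than the DataNode count leaves blocks under-replicated. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

The factor can also be changed per file after the fact with `hdfs dfs -setrep -w 2 /path/to/file` (the `-w` flag waits for replication to complete; the path is a placeholder), and the current factor of a file can be checked with `hdfs dfs -stat %r /path/to/file`.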
Distinguish HDFS Block and Input Unit?
What is a job tracker?
What is throughput? How does HDFS provide good throughput?
What is the difference between MapReduce engine and HDFS cluster?
What happens if the block in HDFS is corrupted?
How to Delete directory from HDFS?
What is the difference between Input Split and an HDFS Block?
How is HDFS different from traditional file systems?
How can one format Hadoop HDFS?
Describe HDFS Federation?
In HDFS, how does the NameNode determine which DataNode to write to?
How to Delete file from HDFS?