Why does HDFS store data on commodity hardware despite the higher chance of failures?
Answer / Sarover Singh
Hadoop's HDFS uses commodity hardware because it is designed to be fault-tolerant and scalable rather than to depend on expensive, highly reliable machines. Because the hardware is inexpensive and off-the-shelf, it is economical to store multiple replicas of every data block across different nodes, so when a node inevitably fails, the system recovers automatically from the surviving copies. Treating failure as the normal case and handling it through replication and automatic failover makes the overall system highly reliable at a much lower cost per terabyte than specialized storage.
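As a rough illustration of the replication idea, here is a minimal sketch using the Hadoop Java client to write a file with the default replication factor of 3. The class name and file path are made up for the example, and it assumes the Hadoop client libraries and a reachable cluster configuration are on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask for 3 replicas per block (the HDFS default): any two
        // commodity nodes can fail without losing the data.
        conf.setInt("dfs.replication", 3);

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/replication-demo.txt"); // illustrative path

        try (FSDataOutputStream out = fs.create(file)) {
            out.writeUTF("HDFS stores several copies of every block.");
        }

        // Confirm the replication factor recorded for the file.
        short replicas = fs.getFileStatus(file).getReplication();
        System.out.println("Replication factor: " + replicas);
    }
}

If a DataNode holding one of the replicas dies, the NameNode notices the missed heartbeats and schedules fresh copies on healthy nodes, which is the automatic failover the answer refers to.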
More HDFS interview questions:
Explain the difference between the MapReduce engine and the HDFS cluster?
What is the HDFS block size?
Define data integrity. How does HDFS ensure the integrity of data blocks stored in HDFS?
What is the benefit of the distributed cache? Why can't we just have the file in HDFS and have the application read it?
What is a Block Scanner in HDFS?
What is a block in Hadoop HDFS? What should the block size be to get optimum performance from the Hadoop cluster?
List the files associated with metadata in HDFS?
Can you change the block size of HDFS files?
Can you modify a file present in HDFS?
Why is the rack awareness algorithm used in Hadoop?
Explain the HDFS “write once, read many” pattern?