How are file systems checked in HDFS?
Answer / Mohit Vendra Sharma
In HDFS, file system checks are performed with the `hdfs fsck` command. It reads the NameNode's block metadata and reports problems such as missing, corrupt, under-replicated, or mis-replicated blocks. Unlike a traditional Unix fsck, it only reports issues; it does not attempt to repair them (the NameNode itself re-replicates under-replicated blocks automatically).
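A typical invocation looks like the following sketch. The paths are illustrative examples, not part of the original answer; substitute paths from your own cluster.

```shell
# Check the health of the entire namespace, printing per-file status,
# block IDs, and the DataNodes each block is stored on.
hdfs fsck / -files -blocks -locations

# Check one directory and list only files with corrupt blocks.
hdfs fsck /user/data -list-corruptfileblocks
```

Because fsck runs against the NameNode's metadata rather than scanning disks, it is cheap enough to run routinely as a cluster health check.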
Explain the difference between the MapReduce engine and an HDFS cluster?
Explain the difference between an input split and an HDFS block?
How do you delete directories and files recursively from HDFS?
How does the NameNode handle DataNode failures in HDFS?
How do you copy a file into HDFS with a block size different from the configured default?
Explain how indexing is done in HDFS?
Why is the default HDFS block size 64 MB?
What is Hadoop HDFS (Hadoop Distributed File System)?
What happens if a block on Hadoop HDFS is corrupted?
What is a block?
What happens if, during a PUT operation, an HDFS block is assigned a replication factor of 1 instead of the default value of 3?
If a file is 50 MB, will the HDFS block still consume the default size of 64 MB?