What is hdfs block size?
Answer / Ashok Anand
HDFS block size refers to the size of the individual data blocks into which files are split in the Hadoop Distributed File System. The default block size is 128 MB in Hadoop 2.x and later (64 MB in Hadoop 1.x), but it can be configured to suit specific requirements via the dfs.blocksize property.
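As a minimal sketch of how the block size is configured (the 256 MB value below is purely illustrative, not a recommendation), a cluster-wide override goes in hdfs-site.xml:

```xml
<!-- hdfs-site.xml: cluster-wide default block size (illustrative value) -->
<property>
  <name>dfs.blocksize</name>
  <!-- 256 MB in bytes; Hadoop 2.x+ also accepts suffixed values such as 256m -->
  <value>268435456</value>
</property>
```

The effective value can be checked with `hdfs getconf -confKey dfs.blocksize`, and it can also be overridden per file at write time, e.g. `hdfs dfs -D dfs.blocksize=268435456 -put localfile /data/`.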
Which classes are used by Hive to read and write HDFS files?
What do you mean by meta data in hdfs? List the files associated with metadata.
Explain what is heartbeat in hdfs?
What is throughput? How does HDFS provide good throughput?
What is "Non DFS Used" in the HDFS web console?
How to create directory in HDFS?
Why is HDFS used for applications with large data sets rather than many small files?
What should be the HDFS Block size to get maximum performance from Hadoop cluster?
Mention what is the difference between hdfs and nas?
What happens if the block on Hadoop HDFS is corrupted?
Who divides the file into blocks while storing it in HDFS in Hadoop?
What is the problem in having lots of small files in hdfs?