Answer Posted / Amar Agrawal
The optimal block size in HDFS depends on factors such as typical file size, data access patterns, network bandwidth, and cluster capacity. The default block size is 128 MB in Hadoop 2.x and later (64 MB in Hadoop 1.x), and it can be changed via the dfs.blocksize property to suit specific requirements. A smaller block size increases overhead for the NameNode, since more blocks means more metadata to track, and it also produces more map tasks per file. A very large block size, on the other hand, reduces parallelism, because each block is typically processed by a single task.
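As a rough sketch of how this is usually configured (assuming the standard Hadoop Java client; the path /data/example.txt and the 256 MB value are just illustrative), the block size can be overridden per client session or per individual file:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Cluster-wide default comes from dfs.blocksize in hdfs-site.xml;
            // this overrides it for files created by this client (256 MB).
            conf.setLong("dfs.blocksize", 256L * 1024 * 1024);

            FileSystem fs = FileSystem.get(conf);

            // The block size can also be set per file at creation time:
            // create(path, overwrite, bufferSize, replication, blockSize)
            Path path = new Path("/data/example.txt");  // illustrative path
            try (FSDataOutputStream out = fs.create(
                    path, true, 4096, (short) 3, 256L * 1024 * 1024)) {
                out.writeUTF("hello hdfs");
            }
            fs.close();
        }
    }

The same override also works from the command line through Hadoop's generic options, e.g. hdfs dfs -D dfs.blocksize=268435456 -put localfile /data/, so no cluster-wide change is needed for a one-off upload.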