Answer Posted / Tabinda Iram
Data is written into HDFS through the Hadoop FileSystem API. The client splits the file into fixed-size blocks (128 MB by default). For each block, the client asks the NameNode for a list of target DataNodes; the NameNode only stores metadata (it tracks where each block lives and never handles the data itself) and returns as many DataNodes as the Replication Factor (RF, 3 by default). The client then streams the block to the first DataNode, which forwards it to the second, and so on, forming a write pipeline that replicates the block across the cluster.
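The flow above can be sketched as a toy simulation. This is illustrative only, not the real HDFS implementation (which lives in the Java NameNode/DataNode daemons): the DataNode names, file size, and the naive pipeline-selection rule are all made up for the example.

```python
# Toy simulation of the HDFS write path (illustrative only).
BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size: 128 MB
REPLICATION_FACTOR = 3          # default replication factor (RF)

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the byte ranges the client would write as separate blocks."""
    return [(start, min(start + block_size, file_size))
            for start in range(0, file_size, block_size)]

def choose_pipeline(datanodes, rf=REPLICATION_FACTOR):
    """Pick RF DataNodes to form a write pipeline.
    (Real HDFS placement is rack-aware; this just takes the first RF nodes.)"""
    return datanodes[:rf]

datanodes = ["dn1", "dn2", "dn3", "dn4"]          # hypothetical cluster
blocks = split_into_blocks(300 * 1024 * 1024)      # a hypothetical 300 MB file
for i, (start, end) in enumerate(blocks):
    pipeline = choose_pipeline(datanodes)
    print(f"block {i}: bytes {start}-{end} -> pipeline {pipeline}")
```

A 300 MB file yields three blocks (128 MB, 128 MB, 44 MB), each assigned to an RF-sized pipeline of DataNodes.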