Answer Posted / Dharmendra Kumar Kannaujiya
A file is written into HDFS (Hadoop Distributed File System) in the following steps:

1. The client asks the NameNode to create the file; the NameNode records it in the filesystem namespace and allocates blocks for it.
2. As the client writes, the data stream is split into blocks, 128 MB by default (configurable via `dfs.blocksize`).
3. Each block is streamed to a pipeline of DataNodes, so every block is replicated (three copies by default) before the write is acknowledged.
4. The NameNode keeps track of which blocks belong to which file and on which DataNodes each replica is stored.
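The block-splitting step can be illustrated with a minimal sketch. This is not the HDFS client itself, just an assumed helper that shows how a file of a given size maps onto 128 MB blocks (the `split_into_blocks` name and the hypothetical 300 MB file are illustrative, not part of Hadoop's API):

```python
BLOCK_SIZE = 128 * 1024 * 1024  # HDFS default block size (dfs.blocksize)

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the byte sizes of the HDFS blocks a file of file_size bytes occupies."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        # Every block is full-sized except possibly the last one,
        # which holds whatever data remains.
        blocks.append(min(block_size, remaining))
        remaining -= block_size
    return blocks

# A 300 MB file occupies two full 128 MB blocks plus one 44 MB block.
sizes = split_into_blocks(300 * 1024 * 1024)
```

Note that a file smaller than the block size still occupies only one block of its actual size; HDFS blocks do not pad to 128 MB on disk.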