Explain the HDFS “write once, read many” pattern?
Answer / Deep Prakash
HDFS follows a write-once-read-many (WORM) access model: a file, once created, written, and closed, cannot be modified in place. It can only be appended to (supported from Hadoop 2.x onward) or deleted and rewritten as a whole. This design simplifies data coherency, since readers never see a partially updated block, and it enables high-throughput streaming reads, which suits workloads such as MapReduce that write a dataset once and then read it many times. It also makes the model a natural fit for archival data, where accidental modification should be prevented.
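The semantics above can be sketched with a small toy class. This is purely illustrative and is not the HDFS client API: the class name, methods, and error type are assumptions chosen to mirror the behavior described, namely that writes are allowed only while the file is open, and reads succeed any number of times after close.

```python
class WormFile:
    """Toy model of HDFS write-once-read-many semantics.

    Illustrative only -- not the real HDFS client API.
    """

    def __init__(self):
        self._data = bytearray()
        self._closed = False

    def write(self, chunk: bytes) -> None:
        # Writing is permitted only while the file is still open for
        # create/append; HDFS files cannot be modified in place.
        if self._closed:
            raise PermissionError("file is closed: in-place modification is not allowed")
        self._data.extend(chunk)

    def close(self) -> None:
        self._closed = True

    def read(self) -> bytes:
        # Reads are always allowed, any number of times.
        return bytes(self._data)


f = WormFile()
f.write(b"log line 1\n")
f.close()
print(f.read())        # reads succeed repeatedly
try:
    f.write(b"edit")   # modification after close is rejected
except PermissionError as e:
    print("rejected:", e)
```

The key point the sketch captures is that the restriction applies to mutation, not access: once closed, the file becomes immutable, but remains readable by any number of concurrent readers.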
Explain the difference between NAS and HDFS?
How to access HDFS?
Why is reading done in parallel in HDFS, but not writing?
Define HDFS and describe its components?
Explain the difference between the MapReduce engine and an HDFS cluster?
How to copy a file into HDFS with a block size different from the configured default block size?
What is the difference between an HDFS block and an input split?
If the source data gets updated every now and then, how will you synchronize the data in HDFS that is imported by Sqoop?
Can we set a different replication factor for existing files in HDFS?
What is the difference between NAS (network attached storage) and HDFS?
What do you mean by metadata in Hadoop?
Suppose there is a file of size 514 MB stored in HDFS (Hadoop 2.x) using the default block size configuration and default replication factor. How many blocks will be created in total, and what will be the size of each block?