

Explain the HDFS “write once, read many” pattern?




Answer / Deep Prakash

HDFS follows a write-once-read-many (WORM) access model: a file, once created, written, and closed, cannot be modified in place. It can only be read (any number of times), appended to (in Hadoop 2.x and later), or deleted and rewritten as a whole. This assumption greatly simplifies data coherency, since readers never see a block change underneath them, and it enables the high-throughput streaming reads that HDFS is optimized for, which suits batch analytics and archival data well.
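As an illustrative sketch only (this is not real HDFS code, and the `WormFile` class is hypothetical), the write-once-read-many contract described above can be modeled in a few lines:

```python
# Toy model of HDFS's write-once-read-many contract (not real HDFS code).
class WormFile:
    def __init__(self, data: bytes):
        # The file's content is fixed when it is created and closed (write once)...
        self._data = data

    def read(self) -> bytes:
        # ...but it may be read any number of times (read many).
        return self._data

    def write(self, data: bytes):
        # In-place modification is not allowed once the file is closed.
        raise PermissionError("file is write-once; cannot modify in place")

f = WormFile(b"log records")
assert f.read() == b"log records"   # repeated reads always succeed
try:
    f.write(b"edit")                # any rewrite attempt is rejected
except PermissionError as e:
    print(e)
```

On a real cluster the same contract shows up as the absence of any "overwrite in place" operation: you re-read a file freely, but changing it means deleting and rewriting it (or appending, where enabled).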


More Apache HDFS Hadoop Distributed File System Interview Questions

Explain the difference between NAS and HDFS?

How to access HDFS?

Why is reading done in parallel but writing is not in HDFS?

Define HDFS and describe its components?

Explain the difference between the MapReduce engine and an HDFS cluster?

How to copy a file into HDFS with a block size different from the existing block size configuration?

What is the difference between an HDFS block and an input split?

If the source data gets updated every now and then, how will you synchronize the data in HDFS that is imported by Sqoop?

Can we have a different replication factor for existing files in HDFS?

What is the difference between NAS (network-attached storage) and HDFS?

What do you mean by metadata in Hadoop?

Suppose there is a file of size 514 MB stored in HDFS (Hadoop 2.x) using the default block size configuration and the default replication factor. How many blocks will be created in total, and what will be the size of each block?
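The last question above can be worked through numerically. A minimal sketch, assuming the Hadoop 2.x default block size of 128 MB (the default replication factor of 3 controls how many copies of each block exist, not how many logical blocks there are):

```python
# Toy calculation of how HDFS splits a file into blocks.
# Assumes the Hadoop 2.x default block size of 128 MB.
BLOCK_SIZE_MB = 128

def hdfs_blocks(file_size_mb):
    """Return the sizes (in MB) of the logical blocks a file is split into."""
    full, rem = divmod(file_size_mb, BLOCK_SIZE_MB)
    blocks = [BLOCK_SIZE_MB] * full
    if rem:
        # The last block occupies only its actual size, not a full 128 MB.
        blocks.append(rem)
    return blocks

print(hdfs_blocks(514))  # [128, 128, 128, 128, 2] -> 5 blocks
```

So a 514 MB file yields five blocks: four of 128 MB and a final block of 2 MB.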

