How is NFS different from HDFS?
Answer / Vandna
NFS (Network File System) is a distributed file system protocol that lets a server share its directory tree over a network, so clients can mount it and use ordinary POSIX file operations. HDFS (Hadoop Distributed File System), by contrast, is designed for big data workloads: it stores files as large blocks replicated across many machines in a cluster, giving scalability, fault tolerance, and high-throughput sequential access to very large data sets. NFS typically relies on a single server with no built-in replication, so it suits general-purpose file sharing rather than large-scale data processing.
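The contrast in access models can be sketched from the command line. This is a minimal illustration, not a setup guide; the hostnames (`nfs-server`, the NameNode implied by the `hdfs` client), export path, and user directory are all hypothetical and depend on your own NFS server and Hadoop cluster configuration.

```shell
# NFS: mount a remote export as a local directory; files are then read and
# written through normal POSIX calls. One server, no block replication.
sudo mount -t nfs nfs-server:/exports/data /mnt/data   # hypothetical host/export
ls /mnt/data

# HDFS: access goes through the hdfs CLI (or the Java API); each file is
# split into large blocks and replicated across DataNodes in the cluster.
hdfs dfs -put localfile.csv /user/vandna/localfile.csv
hdfs dfs -ls /user/vandna

# Inspect how HDFS laid the file out: block count, replicas, and locations.
hdfs fsck /user/vandna/localfile.csv -files -blocks -locations
```

The `fsck` output makes the architectural difference visible: an NFS file lives whole on one server, while the HDFS file is reported as a set of replicated blocks spread over DataNodes.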
While processing data from HDFS, does it execute code near the data?
Replication causes data redundancy, so why is it pursued in HDFS?
Which classes are used by Hive to read and write HDFS files?
How does HDFS ensure the data integrity of data blocks stored in Hadoop HDFS?
Can you define a block and block scanner in HDFS?
How can one set space quota in Hadoop (HDFS) directory?
What do you mean by the High Availability of a NameNode in Hadoop HDFS?
What is throughput? How does HDFS provide high throughput?
How is the HDFS block size different from a traditional file system block size?
What should be the HDFS Block size to get maximum performance from Hadoop cluster?
How is data or a file written into HDFS?