How does Spark use Hadoop?
Answer / Ritu Rani
Spark can leverage Hadoop in two ways: for storage and for resource management. For storage, Spark commonly uses the Hadoop Distributed File System (HDFS) as its data store, since Spark itself has no distributed storage layer. For resource management, Spark can run on Hadoop's YARN resource manager, which allocates cluster resources and schedules Spark applications alongside other workloads.
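Both integrations show up when submitting a job. A minimal sketch, assuming a cluster where HDFS and YARN are already configured; the script name, namenode host, and paths below are hypothetical:

```shell
# Submit a Spark application to a Hadoop cluster.
# --master yarn hands resource management to Hadoop's YARN ResourceManager;
# the hdfs:// URI makes Spark read its input from HDFS storage.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  my_job.py hdfs://namenode:8020/data/input.txt
```

With `--master yarn`, Spark asks YARN for executor containers instead of managing its own standalone workers, and any `hdfs://` path is resolved through the Hadoop client libraries on the classpath.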
What is the difference between client mode and cluster mode in Spark?
How to create an RDD?
What is the command to start and stop Spark in an interactive shell?
Can you explain how to minimize data transfers while working with Spark?
What is the difference between Hadoop and Spark?
State the difference between Spark SQL and HQL
How does one create RDDs in Spark?
What is RDD Lineage?
Explain the cogroup() operation in Spark
What is PageRank in GraphX?
Is it possible to run Apache Spark on Apache Mesos?
What is Speculative Execution in Apache Spark?