Do you need to install Spark on all nodes of Yarn cluster while running Spark on Yarn?
Answer / Naveen Kumar Sonia
No. When running Spark on YARN (Yet Another Resource Negotiator), you only need to install Spark on the client (gateway) nodes from which jobs are submitted. YARN distributes the Spark runtime to the worker containers when an application launches, and the resource manager (YARN) and the Hadoop Distributed File System (HDFS) are already available on the cluster.
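For illustration, here is a minimal Scala sketch of a Spark application submitted to YARN. The application name and the HDFS input path are hypothetical; the point is that only the client node running spark-submit needs a Spark installation.

```scala
import org.apache.spark.sql.SparkSession

object YarnWordCount {
  def main(args: Array[String]): Unit = {
    // Spark only needs to be installed on the client (gateway) node that
    // submits this job; YARN ships the Spark runtime to the worker containers.
    val spark = SparkSession.builder()
      .appName("YarnWordCount")
      // "yarn" tells Spark to request containers from the YARN ResourceManager.
      // The deploy mode (client or cluster) is usually chosen via spark-submit.
      .master("yarn")
      .getOrCreate()

    // Read from HDFS, which is already available on the cluster.
    // hdfs:///data/input.txt is a hypothetical path for this sketch.
    val counts = spark.sparkContext
      .textFile("hdfs:///data/input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    spark.stop()
  }
}
```

From the client node this would typically be launched with spark-submit --master yarn; Spark then uploads its jars to an HDFS staging directory (or reuses a pre-staged archive via the spark.yarn.archive setting), so the worker nodes themselves need no local Spark installation.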
Name three companies that use Spark Streaming services.
What file systems does Spark support?
What is a map-side join?
How does Spark achieve fault tolerance?
What is the difference between map and flatMap in Spark?
Can you define YARN?
What is the full form of RDD?
How is a DAG created in Spark?
What is data skew in Spark?
What is the use of SparkContext?
Name the languages supported by Apache Spark. Which one is the most popular?
Explain how RDDs work with Scala in Spark.