Should I install spark on all nodes of yarn cluster?
Answer Posted / Shailly
No, you do not need to install Spark on every node. When running Apache Spark on a Hadoop YARN cluster, Spark only needs to be installed on the machine from which you submit applications (the client or gateway node). The daemons running on the cluster (NameNode, ResourceManager, NodeManagers, history servers) are Hadoop/YARN services, not Spark daemons. At submission time, YARN distributes the Spark runtime jars to the NodeManagers, which then launch Spark executors inside YARN containers, so the worker nodes themselves require no separate Spark installation.
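As a minimal sketch, here is how a job might be submitted to YARN from the gateway node. The HDFS path, example class, and jar filename below are illustrative assumptions, not fixed values; `spark.yarn.jars` is an optional setting that points executors at a pre-staged copy of the Spark runtime on HDFS so it is not re-uploaded on every submission.

```shell
# Assumption: Spark is installed only on this gateway node and
# $SPARK_HOME points at it; the worker NodeManagers have no Spark install.

# Optional one-time step: stage the Spark runtime jars on HDFS so YARN
# localizes them into each container instead of uploading them per job.
#   hdfs dfs -mkdir -p /spark/jars
#   hdfs dfs -put "$SPARK_HOME"/jars/* /spark/jars/

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.jars="hdfs:///spark/jars/*.jar" \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples_2.12-3.5.0.jar 100
```

In `cluster` deploy mode the driver itself also runs inside a YARN container on some NodeManager, which is why none of the worker nodes need Spark installed locally.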