Answer Posted / Nikhil Agarwal
Spark supports three types of cluster managers:
1) Standalone: a simple cluster manager that ships with Spark itself. A master process schedules resources, and worker processes running on each node launch executors for the application; no external resource manager is required.
2) Apache Mesos: a general-purpose distributed systems kernel that can share cluster resources across multiple frameworks such as Spark and Hadoop. (Mesos support has been deprecated in recent Spark releases.)
3) YARN (Yet Another Resource Negotiator): the resource manager introduced in Hadoop 2.0. It provides a shared pool of cluster resources that Spark and other big-data frameworks can draw from.
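As a minimal sketch, the choice of cluster manager shows up in the `--master` option of `spark-submit`. The host names, ports, class name, and jar below are placeholders, not values from the answer above:

```shell
# Standalone: point --master at the Spark master's URL (host/port are placeholders)
spark-submit --master spark://master-host:7077 --class com.example.App app.jar

# Mesos: use a mesos:// URL for the Mesos master (placeholder host)
spark-submit --master mesos://mesos-host:5050 --class com.example.App app.jar

# YARN: the cluster is located via the Hadoop configuration (HADOOP_CONF_DIR),
# so no master URL is needed, only the deploy mode
spark-submit --master yarn --deploy-mode cluster --class com.example.App app.jar
```

In all three cases the application code stays the same; only the submission command (and the surrounding cluster configuration) changes.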