Answer Posted / Anjali Nayak
"Spark's runtime architecture consists of a Driver Program (Java or Scala code) that submits tasks to the Cluster Manager (YARN, Mesos, or Standalone). The Cluster Manager then allocates resources for executors. Each Executor runs multiple tasks concurrently and manages memory and storage through RDDs (Resilient Distributed Datasets). Spark uses DAG (Directed Acyclic Graph) scheduler to optimize task execution."