Answer Posted / Pankaj Kumar Tripathi
When a Spark job is submitted, the driver program builds a plan of operations, splits it into stages at shuffle boundaries, and breaks each stage into smaller tasks that are distributed across the cluster. Each task runs inside an executor process on a worker node. The driver coordinates execution: it schedules tasks onto executors, tracks their progress, handles failures by rescheduling tasks, and manages all communication between itself and the executors.
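To make the driver/executor split concrete, here is a minimal local analogy in plain Python (this is not Spark's API; `driver`, `run_task`, and the partition count are hypothetical names for illustration): a "driver" splits the data into partitions, submits one task per partition to a pool of "executor" workers, and merges the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(partition):
    # Each task processes one partition independently, like a Spark
    # task running inside an executor on a worker node.
    return sum(x * x for x in partition)

def driver(data, num_partitions=4):
    # The "driver" splits the data into roughly equal partitions,
    # creating one task per partition.
    size = -(-len(data) // num_partitions)  # ceiling division
    partitions = [data[i:i + size] for i in range(0, len(data), size)]
    # Distribute the tasks to the worker pool and combine the partial
    # results, mirroring how the driver coordinates executors.
    with ThreadPoolExecutor(max_workers=num_partitions) as pool:
        return sum(pool.map(run_task, partitions))

print(driver(list(range(10))))  # sum of squares 0..9 = 285
```

The key idea the analogy preserves is that the driver never processes partitions itself; it only partitions the work, hands tasks out, and aggregates what the workers return.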