Answer Posted / Dineshkumar
Tasks in Apache Spark are created based on data partitioning: for each stage of a job, Spark launches one task per partition of the data that stage processes. (Tasks are the smallest unit of execution; they are not further divided into subtasks.) The scheduler distributes these tasks across the nodes of the cluster, and the executor running on each node carries them out in parallel.
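The one-task-per-partition idea can be sketched without a Spark cluster. The following is a hypothetical, stdlib-only illustration (not the Spark API): a dataset is split into partitions, and a thread pool plays the role of executors, running exactly one task per partition.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy dataset and a chosen partition count (assumption for illustration).
data = list(range(10))
num_partitions = 4

# Split the data into roughly equal partitions (round-robin slicing).
partitions = [data[i::num_partitions] for i in range(num_partitions)]

def task(partition):
    # Each task processes exactly one partition; here it sums the elements,
    # standing in for whatever transformation the stage would apply.
    return sum(partition)

# The thread pool stands in for executors: tasks run in parallel,
# one per partition, and their results are collected back.
with ThreadPoolExecutor(max_workers=num_partitions) as pool:
    results = list(pool.map(task, partitions))

print(len(partitions), results)
```

In real Spark you would control the same quantity with `repartition()` / `coalesce()` or the number of input splits; the number of partitions directly determines the number of tasks in a stage.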