Answer Posted / Ashok Anand
Shuffle in Spark is the process of redistributing data across partitions (and therefore across nodes) so that operations such as join, groupBy, and sortBy can bring related records together. Shuffles are resource-intensive: data must be serialized, written to local disk by the map-side tasks, and transferred over the network to the reduce-side tasks, so they consume significant network, disk, and memory resources.
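To illustrate the idea (a minimal plain-Python sketch, not Spark's actual implementation), a shuffle for a group-by can be modeled as hash-partitioning records by key so that every record with the same key lands in the same partition, after which each partition can group its keys locally:

```python
from collections import defaultdict

def shuffle(records, num_partitions):
    """Redistribute (key, value) records so that all records sharing a
    key end up in the same partition (hash partitioning)."""
    partitions = [[] for _ in range(num_partitions)]
    for key, value in records:
        partitions[hash(key) % num_partitions].append((key, value))
    return partitions

def group_by_key(partition):
    """After the shuffle, each partition groups its keys locally,
    with no further cross-partition communication needed."""
    grouped = defaultdict(list)
    for key, value in partition:
        grouped[key].append(value)
    return dict(grouped)

# Records spread across tasks before the shuffle.
records = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
partitions = shuffle(records, num_partitions=2)
groups = [group_by_key(p) for p in partitions]
```

In real Spark, the expensive part is exactly this redistribution step: each map task writes its partitioned output to disk and reduce tasks fetch their partition over the network.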