Answer Posted / Yashwant Singh Rana
Spark transformations (such as map(), filter(), and groupBy()) are lazy because they do not immediately execute on the underlying data. Instead, each transformation returns a new RDD or DataFrame that merely records the operation in a lineage (execution plan). Nothing runs until an action such as collect(), count(), or save() is called; at that point Spark can batch the chained transformations together and optimize the whole plan (for example, pipelining map and filter steps) before performing any computation.
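To make the idea concrete without needing a Spark cluster, here is a toy Python sketch of the same pattern: transformations only append to a recorded plan, and the work happens once, inside the action. The `LazyDataset` class and its methods are illustrative inventions, not Spark's API.

```python
# Toy model of Spark-style lazy evaluation: map()/filter() only record
# a plan; nothing executes until the action collect() is called.
class LazyDataset:
    def __init__(self, data, ops=None):
        self.data = data
        self.ops = ops or []  # recorded transformation plan (lineage)

    def map(self, f):
        # Return a NEW dataset with the op appended; no computation yet.
        return LazyDataset(self.data, self.ops + [("map", f)])

    def filter(self, p):
        return LazyDataset(self.data, self.ops + [("filter", p)])

    def collect(self):
        # Action: only now is the whole recorded pipeline executed.
        out = list(self.data)
        for kind, fn in self.ops:
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

ds = LazyDataset(range(6)).map(lambda x: x * x).filter(lambda x: x > 4)
# At this point no squaring or filtering has happened; only the plan exists.
print(ds.collect())  # [9, 16, 25]
```

Because the full plan is visible before execution, an engine like Spark can fuse the map and filter into a single pass over the data instead of materializing an intermediate collection after each step.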