Answer Posted / Bramha Prasad Chaturvedi
Apache Spark provides a variety of transformations that can be applied to RDDs, including map(), filter(), reduceByKey(), join(), groupByKey(), and sortBy(). (Note that reduce() is an action, not a transformation.) Transformations in Spark are lazily evaluated: each one returns a new RDD describing the computation without running it immediately. Execution occurs only when an action (e.g., count(), collect()) is triggered.