How to process data using Transformation operations in Spark?
Answer Posted / Kuldeep Kumar Singh
Data processing using Transformation operations in Apache Spark involves creating a new RDD, Dataset, or DataFrame from an existing one. Transformations are lazy: they are not executed immediately but are recorded in a lineage graph (DAG) of pending operations. Common transformation functions include map(), filter(), and groupBy(). An action such as collect(), count(), or save() is required to trigger the actual execution of these transformations.