Answer Posted / Sapna Kumari
"In Apache Spark, persistence refers to the process of keeping RDDs in memory across multiple actions. When an RDD is persisted, Spark maintains it in memory for future use, allowing subsequent transformations to be executed more efficiently as they avoid recomputation."n