Answer Posted / Akansha Singh
In Spark, 'persist' and 'cache' are methods used to retain an RDD (or DataFrame) across actions so it is not recomputed on reuse. The difference is control over the storage level: 'cache' always uses the default level (MEMORY_ONLY for RDDs), whereas 'persist' lets you specify a custom StorageLevel such as MEMORY_ONLY, MEMORY_AND_DISK, or DISK_ONLY. In other words, calling 'cache' on an RDD is equivalent to calling 'persist' with the default level.