Answer Posted / Ravi Prakash Mourya
Spark can run on top of a Hadoop cluster (using HDFS for storage and YARN for resource management), but it is not MapReduce itself and does not use MapReduce as its execution engine. Spark provides the RDD (Resilient Distributed Dataset) API, which allows for more flexibility, faster execution, and in-memory computing compared to MapReduce's disk-based, two-stage model.
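To illustrate what the RDD API's lazy, in-memory style looks like, here is a toy pure-Python sketch (not Spark's actual implementation, and the `ToyRDD` class is invented for illustration): transformations like `map` and `filter` are deferred, `cache()` keeps a computed result in memory for reuse, and only an action like `collect()` triggers evaluation.

```python
class ToyRDD:
    """Toy stand-in for Spark's RDD: transformations are lazy,
    and cache() keeps a computed result in memory for reuse."""

    def __init__(self, data_fn):
        self._data_fn = data_fn   # deferred computation, run only on demand
        self._cached = None

    def map(self, f):
        # Lazy transformation: returns a new RDD, computes nothing yet
        return ToyRDD(lambda: [f(x) for x in self._materialize()])

    def filter(self, pred):
        return ToyRDD(lambda: [x for x in self._materialize() if pred(x)])

    def cache(self):
        # Materialize once and keep the result in memory
        self._cached = self._materialize()
        return self

    def _materialize(self):
        return self._cached if self._cached is not None else self._data_fn()

    def collect(self):
        # Action: this is what actually triggers evaluation
        return self._materialize()


nums = ToyRDD(lambda: [1, 2, 3, 4, 5])
evens_squared = nums.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(evens_squared.collect())  # [4, 16]
```

In real Spark the chained transformations form a lineage graph that is optimized and executed across the cluster, with intermediate data held in memory rather than written to disk between stages as in MapReduce.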