Hadoop uses replication to achieve fault tolerance. How is this achieved in Apache Spark?
Answer posted by Akhilesh Kumari
Apache Spark does not rely on data replication for fault tolerance the way HDFS does. Instead, it uses Resilient Distributed Datasets (RDDs): immutable, partitioned collections that record their lineage, i.e. the graph of deterministic transformations that produced them from other datasets. If a node fails and a partition is lost, Spark recomputes just that partition by replaying its lineage, rather than restoring it from a replica. (Replication is still available as an option, e.g. replicated storage levels such as MEMORY_ONLY_2, and checkpointing to reliable storage can be used to truncate long lineage chains.)
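To make the lineage idea concrete, here is a minimal pure-Python sketch (this is illustrative only, not the real Spark API; the class and method names are invented for this example). Each "RDD" remembers its parent and the transformation that produced it, so lost data can be rebuilt by re-applying the lineage instead of reading a replica:

```python
# Toy model of lineage-based fault tolerance (NOT the Spark API).
class MiniRDD:
    def __init__(self, data=None, parent=None, transform=None):
        self.parent = parent        # lineage: the upstream dataset
        self.transform = transform  # lineage: how this RDD is derived from it
        self._data = data           # computed partitions (may be lost)

    def map(self, fn):
        # Lazily record the transformation; nothing is computed yet.
        return MiniRDD(parent=self,
                       transform=lambda rows: [fn(r) for r in rows])

    def filter(self, pred):
        return MiniRDD(parent=self,
                       transform=lambda rows: [r for r in rows if pred(r)])

    def collect(self):
        if self._data is None:
            # Recompute from lineage: walk up to the source,
            # then reapply the recorded transformations.
            self._data = self.transform(self.parent.collect())
        return self._data

    def simulate_failure(self):
        # Drop computed data, as if the node holding it crashed.
        if self.parent is not None:
            self._data = None


source = MiniRDD(data=[1, 2, 3, 4, 5])
squares = source.map(lambda x: x * x).filter(lambda x: x > 4)

print(squares.collect())    # [9, 16, 25]
squares.simulate_failure()  # lose the computed partitions
print(squares.collect())    # [9, 16, 25] again, rebuilt from lineage
```

Real Spark works the same way in spirit: transformations are lazy, the scheduler keeps the lineage graph (visible via `rdd.toDebugString()`), and only the lost partitions are recomputed on failure.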