Answer Posted / Om Prakash Divakar
No. Apache Spark is not dependent on Hadoop MapReduce; it can run standalone or on Hadoop YARN (Yet Another Resource Negotiator). However, running Spark on a Hadoop cluster lets you leverage HDFS, Hadoop's distributed file system, and work with data that is already stored there.
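A minimal sketch of the two deployment modes via `spark-submit` (the application name `my_app.py` and the NameNode address are illustrative placeholders, not from the answer above):

```shell
# Standalone / local mode: no Hadoop components needed at all.
# "local[*]" runs Spark on the current machine using all available cores.
spark-submit --master "local[*]" my_app.py

# YARN mode: submit the same application to a Hadoop cluster.
# The job can then read input directly from HDFS, e.g. a path like
# hdfs://namenode:8020/data/input.csv inside the application code.
spark-submit --master yarn --deploy-mode cluster my_app.py
```

The application code itself is unchanged between the two invocations; only the `--master` setting (and, for YARN, the `--deploy-mode`) selects where the executors run.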