Answer Posted / Chandrabhan Kushwaha
Apache Spark integrates with Hadoop in two main ways: it uses the Hadoop Distributed File System (HDFS) for data storage, and it uses YARN as the resource manager to schedule Spark applications on a Hadoop cluster. Spark reads from and writes to HDFS directly, and it can run alongside (or in place of) MapReduce as the batch processing engine on the same cluster.
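As a minimal sketch of this integration, a Spark job can be submitted to a Hadoop cluster with `spark-submit --master yarn`, reading its input from an HDFS path. The script name, HDFS paths, and queue name below are hypothetical placeholders; the flags themselves are standard `spark-submit` options.

```shell
# Submit a Spark application to a Hadoop cluster via YARN.
# --master yarn        : use Hadoop's YARN as the resource manager
# --deploy-mode cluster: run the driver inside the cluster
# wordcount.py and the hdfs:// paths are illustrative examples.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 2g \
  wordcount.py \
  hdfs:///user/data/input.txt \
  hdfs:///user/data/output
```

Inside the application, Spark addresses HDFS like any other filesystem, e.g. `spark.read.text("hdfs:///user/data/input.txt")`, so no separate MapReduce job is needed to move the data.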