Is Spark a MapReduce?
Answer / Ravi Prakash Mourya
No. Spark is not an implementation of MapReduce, although it can run on a Hadoop cluster (via YARN) and read data from HDFS. Spark's RDD (Resilient Distributed Dataset) API supports map and reduce operations among many other transformations, and it keeps intermediate results in memory rather than writing them to disk between stages, which makes it more flexible and typically much faster than Hadoop MapReduce.
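To make the difference concrete, here is a minimal, Spark-free Python sketch of the classic word count. The first half follows the rigid MapReduce shape (map, shuffle, reduce as separate phases); the second half mimics Spark's chained RDD transformations. The real PySpark calls (`flatMap`, `map`, `reduceByKey`) are shown in a comment; everything else is plain Python written for illustration.

```python
from collections import defaultdict

lines = ["spark is fast", "spark is not mapreduce"]

# --- MapReduce style: fixed map -> shuffle -> reduce pipeline ---
# Map phase: emit (word, 1) pairs.
mapped = [(word, 1) for line in lines for word in line.split()]
# Shuffle phase: group all values by key.
groups = defaultdict(list)
for word, one in mapped:
    groups[word].append(one)
# Reduce phase: sum the counts for each key.
mr_counts = {word: sum(ones) for word, ones in groups.items()}

# --- Spark RDD style (mimicked): chained lazy transformations ---
# In real PySpark this pipeline would be:
#   sc.parallelize(lines).flatMap(str.split) \
#     .map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
words = [w for line in lines for w in line.split()]   # flatMap
pairs = [(w, 1) for w in words]                       # map
rdd_counts = defaultdict(int)
for w, c in pairs:                                    # reduceByKey
    rdd_counts[w] += c

print(mr_counts)  # {'spark': 2, 'is': 2, 'fast': 1, 'not': 1, 'mapreduce': 1}
assert mr_counts == dict(rdd_counts)
```

Both halves produce the same counts; the point is that Spark lets you compose arbitrary transformations in memory, while MapReduce forces every job into the single map-shuffle-reduce shape with disk I/O in between.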
What are transformations in Spark?
Define Spark SQL.
Did Edmond Berger invent the spark plug?
What is the method to create a DataFrame?
What is Spark accreditation?
Is Spark built on top of Hadoop?
Do I need to learn Scala for Spark?
What is data skew in Spark?
List some use cases where Spark outperforms Hadoop in processing.
Is Scala required for Spark?
What do you understand by Executor Memory in a Spark application?
Explain a scenario where you would use Spark Streaming.