In how many ways can we use Spark over Hadoop?
Answer / Mohd Nadeem
Spark can be used over Hadoop in three ways: Standalone mode, YARN mode, and SIMR (Spark In MapReduce). In Standalone mode, Spark manages its own cluster with its built-in cluster manager and can be deployed alongside HDFS, with Spark and MapReduce jobs running side by side on the same machines. In YARN mode, Spark applications run on a YARN (Yet Another Resource Negotiator) cluster, which handles resource allocation, so no separate Spark cluster manager is needed. In SIMR, Spark jobs are launched from within MapReduce itself, which lets users run Spark (including the Spark shell) on a Hadoop cluster without administrative rights or a separate installation.
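A minimal sketch of how a job might be submitted under each of the three modes. The host name, jar name, and class name (`master-host`, `my-app.jar`, `com.example.MyApp`) are placeholder assumptions, not from the original answer; the `spark-submit` flags shown are the standard ones.

```shell
# 1. Standalone mode: point --master at Spark's own cluster manager,
#    which listens on port 7077 by default.
spark-submit \
  --master spark://master-host:7077 \
  --class com.example.MyApp \
  my-app.jar

# 2. YARN mode: YARN negotiates resources; --deploy-mode chooses
#    whether the driver runs inside the cluster or on the client.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar

# 3. SIMR (Spark In MapReduce): a separate launcher that starts Spark
#    inside a MapReduce job; %spark_url% is substituted by SIMR at
#    runtime. (Largely historical, for clusters without YARN access.)
./simr my-app.jar com.example.MyApp %spark_url%
```

The commands are illustrative and assume Spark is already on the `PATH`; they will not run without a configured cluster.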
Is spark a mapreduce?
What do you understand by receivers in Spark Streaming ?
What is spark database?
What is a spark rdd?
What is Immutable?
How is spark different from hadoop?
Explain transformations and actions in the context of RDDs?
Why do we need rdd in spark?
What database does spark use?
Define the roles of the file system in any framework?
Explain write ahead log (journaling) in spark?
Explain Accumulator in Spark?