Can I run Apache Spark without Hadoop?
Answer / Anju
Yes, you can run Apache Spark without Hadoop. Spark can run in its own standalone cluster mode, or on Mesos, YARN, or Kubernetes, and it does not require the Hadoop Distributed File System (HDFS): it can read data from the local filesystem, Amazon S3, Cassandra, and other storage systems.
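A minimal sketch of what this looks like in practice: download a Spark distribution and launch a job against the built-in local master, with no Hadoop installation involved. The version number and file names below are illustrative assumptions, not a specific required release.

```shell
# Unpack a Spark release (version shown is an example; use a current one).
tar -xzf spark-3.5.1-bin-hadoop3.tgz
cd spark-3.5.1-bin-hadoop3

# Run the bundled SparkPi example on the local master.
# "local[*]" uses all local cores; no HDFS or YARN is touched.
./bin/spark-submit \
  --master "local[*]" \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.12-3.5.1.jar 10

# The interactive shell works the same way and reads from the
# local filesystem by default:
./bin/spark-shell --master "local[*]"
```

The same `--master` flag is how you would later point the job at a standalone, Mesos, YARN, or Kubernetes cluster; nothing in the application code has to change.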
How can I speed up Spark?
Explain the first() operation in Spark.
Does Spark store data?
List the benefits of Spark over MapReduce.
What are the executor and the driver in Spark?
What do you mean by a worker node?
How can an RDD be created in Spark?
Name three data sources available in Spark SQL.
Do you need to install Spark on all nodes of the YARN cluster when running Spark on YARN?
What are the advantages of Datasets in Spark?
What is the difference between Hadoop and Spark?
What is Spark dynamic allocation?