Can we run spark without hadoop?
Answer / Sharad Kuamr Sain
"Yes", Apache Spark can be run independently of Hadoop. However, it can also integrate with Hadoop for better performance on large data sets.
Explain benefits of lazy evaluation in RDD in Apache Spark?
What is the difference between hive and spark?
Define actions in spark.
How can we launch Spark application on YARN?
What is the function of "MLlib"?
List down the languages supported by Apache Spark?
What is pipelined rdd?
Explain partitions?
What is difference between dataset and dataframe in spark?
Does spark use yarn?
Explain the key features of Apache Spark?
Name a few commonly used spark ecosystems?