Can Spark be used without Hadoop?
Answer / Hina Madan
Yes, Apache Spark can be used independently of Hadoop. Spark does not require HDFS or YARN: it ships with its own standalone cluster manager and can also run on Apache Mesos or Kubernetes, and it can read data from sources such as the local filesystem, Amazon S3, or Cassandra instead of HDFS. Running Spark on top of Hadoop's YARN and HDFS is common in production, but it is optional.
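As a minimal sketch of Hadoop-free usage (assuming only a prebuilt Spark distribution has been downloaded and unpacked; the version number and hostname below are illustrative):

```shell
# Run the bundled Pi example in local mode -- no HDFS or YARN involved.
# "local[*]" uses all cores of the current machine as the "cluster".
./bin/spark-submit \
  --master "local[*]" \
  examples/src/main/python/pi.py 10

# Or start Spark's own standalone cluster manager instead of YARN:
./sbin/start-master.sh                       # master listens at spark://<host>:7077
./sbin/start-worker.sh spark://<host>:7077   # attach a worker to that master
```

Note that the prebuilt packages (e.g. `spark-3.5.x-bin-hadoop3`) bundle Hadoop client libraries, but that does not mean a Hadoop cluster has to be running anywhere.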
What is a "Spark Executor"?
What do you understand by worker node?
Define the term ‘Lazy Evaluation’ with reference to Apache Spark
Can you use Apache Spark to analyze and access data stored in Cassandra databases?
What are the differences between Caching and Persistence method in Apache Spark?
How can Spark be connected to Apache Mesos?
What is Spark.executor.memory in a Spark Application?
Should I install Spark on all nodes of a YARN cluster?
Is the following approach correct: is "sqrt of sum of squares" a valid reducer?
What is the advantage of a Parquet file?
What is the difference between SparkSession and SparkContext in Apache Spark?