Are Spark and Hadoop the same?
Answer / Amlesh Kumar
"No", Apache Spark and Apache Hadoop are two separate projects. While they can work together, they serve different purposes: Hadoop is a big data processing framework focused on batch jobs, and Spark is a faster in-memory cluster computing system that can handle both batch and real-time processing.
What is an RDD?
Which language is not supported by Spark?
Is it possible to run Apache Spark without Hadoop?
Can you run Spark without Hadoop?
What are the types of transformations in Spark RDD operations?
How will you connect Apache Spark with Apache Mesos?
What is the difference between Spark and Hive?
How do we create RDDs in Spark?
Is Spark used for machine learning?
What database does Spark use?
How is an RDD fault tolerant?
Does Spark use ZooKeeper?