Is spark distributed computing?
Answer / Sunil Kumar Gautam
Yes, Apache Spark is a distributed computing system that lets you process large datasets in parallel across a cluster of machines. It is not an abstraction layer over Hadoop MapReduce; rather, it is an alternative execution engine that can run on Hadoop infrastructure (YARN for scheduling, HDFS for storage) and is typically much faster than MapReduce because it keeps intermediate data in memory instead of writing it to disk between stages.
Compare Spark vs Hadoop MapReduce
What are the actions in spark?
Explain the use of broadcast variables
What is the abstraction of Spark Streaming?
Is spark written in scala?
What is Sparse Vector?
What is executor spark?
What is the difference between dataframe and dataset in spark?
Why do we need spark?
Describe the distinct(), union(), intersection() and subtract() transformations in Apache Spark RDD?
What are the great features of spark sql?
Can you define yarn?