What does Spark do during speculative execution?
Answer / Rameez Iqbal Khan
With speculative execution enabled, Spark monitors the running tasks in a stage. If a task is running much slower than the other tasks in the same stage (a "straggler," often caused by a slow or overloaded node), Spark launches a duplicate copy of that task on another executor. Whichever copy finishes first wins: Spark uses its result and kills the remaining copies. Note that Spark does not run multiple copies of every task from the start; it only re-launches tasks it detects as slow.
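Speculative execution is off by default and is controlled by the `spark.speculation.*` settings. A minimal PySpark sketch of enabling it (the values shown for interval, multiplier, and quantile are Spark's documented defaults, repeated here only to illustrate what each knob means):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("speculation-demo")
    # Turn on speculative re-launching of slow tasks (default: false).
    .config("spark.speculation", "true")
    # How often Spark checks for tasks to speculate on.
    .config("spark.speculation.interval", "100ms")
    # A task is a straggler if it runs this many times slower
    # than the median task duration in its stage.
    .config("spark.speculation.multiplier", "1.5")
    # Only start checking after this fraction of tasks has finished.
    .config("spark.speculation.quantile", "0.75")
    .getOrCreate()
)
```

The same settings can be passed on the command line via `spark-submit --conf spark.speculation=true ...`; tightening the multiplier makes speculation more aggressive at the cost of extra duplicate work.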
How can you launch Spark jobs inside Hadoop MapReduce?
Explain SparkContext in Apache Spark?
How many ways can you create an RDD in Spark?
Does Spark need Hadoop?
How is streaming implemented in Spark?
Explain the key features of Spark.
Explain SchemaRDD?
How will you calculate the number of executors required to do real-time processing using Apache Spark? What factors need to be considered for deciding on the number of nodes for real-time processing?
Explain the Catalyst framework?
What is a starvation scenario in Spark Streaming?
Explain the sortByKey() operation?
What is a "worker node"?