What do you mean by Persistence?
Answer / Sapna Kumari
"In Apache Spark, persistence is the mechanism for keeping a computed RDD in memory (or on disk) across multiple actions. By default, an RDD is recomputed from its lineage each time an action uses it; calling persist() or cache() tells Spark to keep the partitions after the first computation, so subsequent actions reuse the stored result instead of recomputing it. persist() also accepts a storage level (e.g. MEMORY_ONLY, MEMORY_AND_DISK) that controls where and how the data is kept."
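The recomputation-avoidance idea above can be sketched in plain Python. This is a toy stand-in, not Spark's actual API: ToyRDD, expensive(), and the computation counter are all invented for illustration. It mimics how a lazily evaluated dataset is recomputed on every action unless persist() has marked it to be kept in memory.

```python
# Toy illustration of Spark-style persistence (pure Python, NOT real Spark):
# a lazily computed dataset is recomputed on every action unless persist()
# has marked it to be cached after the first computation.

class ToyRDD:
    def __init__(self, compute):
        self._compute = compute    # recipe for producing the data (the "lineage")
        self._persisted = False
        self._cached = None

    def persist(self):
        # Ask for the result to be kept in memory once it is first computed.
        self._persisted = True
        return self

    def collect(self):
        # An "action": materializes the data, reusing the cache if available.
        if self._cached is not None:
            return self._cached
        data = self._compute()
        if self._persisted:
            self._cached = data
        return data

computations = []

def expensive():
    computations.append(1)           # count how often we actually compute
    return [x * x for x in range(5)]

rdd = ToyRDD(expensive)
rdd.collect()
rdd.collect()
print(len(computations))             # 2: recomputed for each action

cached = ToyRDD(expensive).persist()
cached.collect()
cached.collect()
print(len(computations))             # 3: one more computation, then reused
```

In real Spark the same pattern is `rdd.persist()` (or `rdd.cache()`) before the first action; the difference is that Spark can also spill to disk or replicate, depending on the chosen storage level.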
Can I install spark on windows?
What causes sparks?
What is a DStream?
Explain the action count() in Spark RDD?
What is broadcast variable?
What is mllib?
What is spark ml?
What is transformation in spark?
Why do we use spark?
Why did Spark come into the picture?
Explain different transformation on DStream?
Explain mapPartitions() and mapPartitionsWithIndex().