Explain the filter transformation?
Answer / Srijan Chaubey
"Filter Transformation in Apache Spark is used to create a new dataset that only contains elements from the input dataset which satisfy a specified condition. It applies the provided function to each element and returns a new RDD containing only those elements for which the function evaluates to true."
Does spark load all data in memory?
What is executor memory in a spark application?
Is java required for spark?
What is shuffle in spark?
What is spark vcores?
What is partitioner spark?
What are the common transformations in apache spark?
What is sc parallelize in spark?
What is apache spark written in?
Can you explain benefits of spark over mapreduce?
What is spark vs hadoop?
What does rdd mean?