Answer Posted / Abhishek Kumar Rai
Apache Spark works by creating a Resilient Distributed Dataset (RDD), an immutable, partitioned collection of records distributed across the cluster. The RDD is Spark's basic unit of data processing. Transformations (such as map and filter) are lazy and produce new RDDs, while actions (such as collect and count) trigger the actual computation and return results. Spark provides high-level APIs for several programming languages, including Scala, Java, Python, and R.
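The transformation-versus-action distinction can be sketched in plain Python. This is a conceptual model only, not the real Spark API (in actual PySpark the equivalent chain would be sc.parallelize([1, 2, 3, 4]).map(...).filter(...).collect()); the FakeRDD class and its behavior here are illustrative assumptions, and unlike real Spark this sketch evaluates eagerly rather than lazily.

```python
# Conceptual sketch (NOT the real Spark API): models how an RDD stays
# immutable -- transformations return NEW datasets, actions return values.
class FakeRDD:
    def __init__(self, data):
        self._data = tuple(data)  # immutable underlying collection

    # Transformations: return a new FakeRDD; the original is unchanged.
    def map(self, f):
        return FakeRDD(f(x) for x in self._data)

    def filter(self, pred):
        return FakeRDD(x for x in self._data if pred(x))

    # Actions: compute and return a plain Python value.
    def collect(self):
        return list(self._data)

    def count(self):
        return len(self._data)

rdd = FakeRDD([1, 2, 3, 4])
squares = rdd.map(lambda x: x * x)            # transformation -> new dataset
evens = squares.filter(lambda x: x % 2 == 0)  # transformation -> new dataset
print(evens.collect())  # action -> [4, 16]
print(rdd.count())      # original RDD untouched -> 4
```

Note that real Spark defers the work of transformations until an action is called, so it can optimize and distribute the whole chain; this sketch only captures the immutability and the transformation/action split.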