Answer Posted / Roshan Lal
In Apache Spark, an RDD (Resilient Distributed Dataset) can be created in several ways. The primary methods are: parallelizing an existing Scala or Java collection in the driver program (using SparkContext.parallelize), loading an external dataset such as a text file (using SparkContext.textFile) or a Hadoop sequence file (using SparkContext.sequenceFile), and transforming an existing RDD with operations like map or filter.
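The methods above can be sketched in Scala as follows. This is a minimal illustration, assuming a Spark shell session where `sc` is an already-initialized SparkContext and the file paths are placeholders for real inputs:

```scala
// 1. Parallelize an in-memory Scala collection into an RDD
val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))

// 2. Load an external text file (one record per line)
val lines = sc.textFile("data/input.txt") // hypothetical path

// 3. Load a Hadoop sequence file of (key, value) pairs
val pairs = sc.sequenceFile[String, Int]("data/pairs.seq") // hypothetical path

// 4. Derive a new RDD by transforming an existing one
val doubled = numbers.map(_ * 2)
```

All four calls return lazily evaluated RDDs; no data is read or computed until an action (such as `collect` or `count`) is invoked.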