Answer Posted / Rashi Gupta
Hadoop is a distributed framework for storing (HDFS) and processing (MapReduce) very large data sets across clusters of commodity machines, and it is best suited to batch workloads. Spark, by contrast, is a separate processing engine that can run on top of Hadoop (reading from HDFS and scheduling through YARN) but keeps intermediate data in memory instead of writing it back to disk between steps. Spark also adds higher-level APIs for streaming data, machine learning (MLlib), and graph processing (GraphX), which makes it a better fit for near-real-time analytics and iterative algorithms.
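To make the batch-processing model concrete, here is a minimal plain-Python sketch of the map/shuffle/reduce pattern that MapReduce uses, applied to a word count. This is an illustration of the pattern only, not Hadoop's actual Java API; the input lines are made-up sample data.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["spark extends hadoop", "hadoop stores data", "spark processes data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["hadoop"])  # 2
print(counts["spark"])   # 2
```

In real Hadoop each phase runs across many machines and the shuffle writes to disk; Spark performs the same logical steps but caches intermediate results in memory, which is why iterative jobs (e.g., training a model over many passes) run much faster on Spark.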