Apache Spark is another Apache project that offers parallel data processing and can work alongside Hadoop to build Big Data applications. It is a fast and general engine for large-scale data processing. Let us look at some of the features of Apache Spark one by one –
Real Time Processing
Unlike MapReduce, which is strictly batch-oriented, Spark can also handle real-time data: its Spark Streaming library processes live data streams in small batches as they arrive.
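As a quick illustration, here is a minimal Spark Streaming sketch in Scala that counts words arriving over a TCP socket. The host and port ("localhost", 9999) and the 1-second batch interval are placeholder assumptions for a local test:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
    // Process the incoming stream in 1-second micro-batches
    val ssc = new StreamingContext(conf, Seconds(1))

    // Listen on a TCP socket for lines of text (assumed test source)
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" "))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```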
Faster Processing
With Apache Spark, you can run programs up to 100x faster than Hadoop MapReduce when data fits in memory, or up to 10x faster on disk. It achieves this by reducing the number of read and write operations to disk: intermediate data is kept in memory instead of being written out between stages. Spark uses the concept of a Resilient Distributed Dataset (RDD), which allows it to store data in memory and persist it until it is needed, cutting down disk reads and writes.
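Here is a minimal sketch of how RDD persistence looks in practice. The HDFS path and log contents are hypothetical, but the persist call is the standard RDD API:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object CachingExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CachingExample").setMaster("local[*]"))

    // "hdfs:///logs/access.log" is a placeholder path
    val errors = sc.textFile("hdfs:///logs/access.log")
                   .filter(_.contains("ERROR"))

    // Keep the filtered RDD in memory so later actions reuse it
    // instead of re-reading from disk
    errors.persist(StorageLevel.MEMORY_ONLY)

    val total = errors.count()                                   // first action: reads from disk, then caches
    val timeouts = errors.filter(_.contains("timeout")).count()  // served from memory

    println(s"errors=$total, timeouts=$timeouts")
    sc.stop()
  }
}
```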
Generality
In addition to Map and Reduce operations, Spark supports SQL queries, streaming data, and complex analytics. It powers a stack of libraries including Spark SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. You can combine these libraries seamlessly in the same application.
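As a rough sketch of this combination, the snippet below builds a DataFrame from a plain RDD and queries it with SQL in the same program. The sample data and column names are made up for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object CombinedExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CombinedExample").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Start from an ordinary RDD of tuples (hypothetical sample data)...
    val people = sc.parallelize(Seq(("alice", 34), ("bob", 45), ("carol", 29)))

    // ...turn it into a DataFrame and query it with SQL in the same application
    val df = sqlContext.createDataFrame(people).toDF("name", "age")
    df.registerTempTable("people")

    sqlContext.sql("SELECT name FROM people WHERE age > 30").show()
    sc.stop()
  }
}
```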
Runs Everywhere
Spark runs on Hadoop YARN, on Apache Mesos, in its standalone cluster mode, or in the cloud (for example on EC2). It can access diverse data sources including HDFS, Cassandra, HBase, Hive, Tachyon, S3, and any Hadoop data source.
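To illustrate, in the sketch below only the master URL and the input URIs change across environments; all the host names, ports, and paths are placeholders, and in real deployments the master is usually passed to spark-submit rather than hard-coded:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RunsEverywhere {
  def main(args: Array[String]): Unit = {
    // The same application code runs against different cluster managers;
    // only the master URL changes (all values below are placeholders):
    //   standalone:  spark://master-host:7077
    //   Mesos:       mesos://mesos-host:5050
    //   YARN:        typically set via spark-submit --master yarn
    val conf = new SparkConf()
      .setAppName("RunsEverywhere")
      .setMaster("spark://master-host:7077")
    val sc = new SparkContext(conf)

    // Data sources are just URIs; these hdfs:// and s3n:// paths are placeholders
    val fromHdfs = sc.textFile("hdfs://namenode:8020/data/input.txt")
    val fromS3   = sc.textFile("s3n://my-bucket/data/input.txt")
    println(fromHdfs.count() + fromS3.count())

    sc.stop()
  }
}
```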
Ease of Use
Spark supports many languages: applications can be written in Java, Scala, Python, or R. It offers over 80 high-level operators that make it easy to build parallel apps, and you can use it interactively from the Scala, Python, and R shells.
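For example, a word count takes only a few lines at the interactive Scala shell (bin/spark-shell), where the SparkContext `sc` is already provided. The README.md input path is just an assumed sample file:

```scala
// Entered line by line in the interactive shell; `sc` is predefined
val lines = sc.textFile("README.md")

// High-level operators like flatMap, map, and reduceByKey replace
// hand-written MapReduce jobs
val wordCounts = lines.flatMap(_.split(" "))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)

wordCounts.take(10).foreach(println)
```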