===Running a Spark application involves the following steps===
*1. The SparkContext requests computing resources from the Cluster Manager.
*2. The Cluster Manager receives the request and starts allocating resources: it creates and activates Executors on the worker nodes.
*3. The SparkContext sends the application code (a jar package, Python file, etc.) and the tasks to the Executors. Each Executor executes its tasks and saves or returns the results.
*4. The SparkContext collects the results (see the sketch below).
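A minimal PySpark sketch of this flow, assuming <code>pyspark</code> is installed. Here <code>local[*]</code> stands in for a real cluster manager URL (e.g. a YARN or standalone master), and the lambda passed to <code>map</code> is the task code that the SparkContext ships to the Executors:

<syntaxhighlight lang="python">
from pyspark import SparkConf, SparkContext

# Step 1: creating the SparkContext requests resources from the cluster manager.
# "local[*]" runs executors in-process; on a real cluster this would be a
# master URL such as "spark://host:7077" or "yarn".
conf = SparkConf().setAppName("StepsDemo").setMaster("local[*]")
sc = SparkContext(conf=conf)

# Step 3: the driver defines tasks; the lambda below is serialized and sent
# to the executors, which run one task per partition.
rdd = sc.parallelize(range(10), numSlices=2)
squares = rdd.map(lambda x: x * x)

# Step 4: collect() triggers execution; executors return their results
# and the SparkContext gathers them back at the driver.
print(squares.collect())  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

sc.stop()
</syntaxhighlight>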
==Apache Spark Core API==