GPU621/ApacheSpark

Team Members

Introduction

What is Apache Spark?

Apache Spark is an open-source, distributed, general-purpose cluster-computing framework for Big Data processing.
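
A minimal word-count sketch in Scala (one of the languages Spark supports) illustrates the model. The input file name and the local[*] master are illustrative placeholders; the same code runs unchanged when submitted to a cluster.

// Minimal Spark word count (sketch; file name and master are assumptions)
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    // SparkSession is the entry point; "local[*]" uses all local cores,
    // but the identical code can run on a multi-node cluster.
    val spark = SparkSession.builder()
      .appName("WordCount")
      .master("local[*]")
      .getOrCreate()

    val sc = spark.sparkContext

    // Each transformation is distributed across the executors.
    val counts = sc.textFile("input.txt")
      .flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    spark.stop()
  }
}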

History of Apache Spark

2009: started as a distributed computing framework at UC Berkeley's AMPLab by Matei Zaharia
2010: Open sourced under a BSD license
2013: The project was donated to the Apache Software Foundation and the license was changed to Apache 2.0
2014: Became an Apache Top-Level Project. Used by Databricks to set a world record in large-scale sorting in November
2014-present: continues as a next-generation framework for both real-time and batch processing

Why Apache Spark

Data has exploded in volume, velocity, and variety
The need for faster analytic results is increasingly important
Near real-time analytics are needed to answer business questions

Features

Easy to use: supports Python, Java, and Scala, with libraries for SQL, machine learning, and streaming
General-purpose: covers batch processing like MapReduce, iterative algorithms, and interactive queries and streaming that return results immediately
Speed: in-memory computations make it faster than MapReduce for complex applications that would otherwise run from disk (a short sketch follows below)
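
The sketch below, in Scala, ties these features together: a cached dataset reused across an iterative computation (in-memory speed) and an interactive-style Spark SQL query. The sample data, view name, and iteration count are illustrative assumptions, not part of the original page.

// Sketch of in-memory iteration and Spark SQL (sample data is assumed)
import org.apache.spark.sql.SparkSession

object FeaturesDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("FeaturesDemo")
      .master("local[*]")
      .getOrCreate()

    // cache() keeps the data in executor memory, so repeated passes
    // (typical of iterative algorithms) avoid re-reading from disk.
    val numbers = spark.sparkContext.parallelize(1 to 1000000).cache()

    var sum = 0L
    for (_ <- 1 to 5) {            // each iteration reuses the cached data
      sum = numbers.map(_.toLong).reduce(_ + _)
    }
    println(s"sum = $sum")

    // Spark SQL: register a DataFrame as a view and query it interactively.
    import spark.implicits._
    val df = Seq(("alice", 34), ("bob", 45)).toDF("name", "age")
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 40").show()

    spark.stop()
  }
}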

Examples