Using Apache Spark
A Team Members
- Adrian Sauvageot, All
- ...
Assignment
This assignment goes over how to do a simple word count in Scala using Spark.
Introduction To Scala
Scala is a programming language based on Java, but designed for more scalable applications. Information on Scala can be found at: http://www.scala-lang.org/
Introduction To Spark
To introduce myself to Spark I watched a series of YouTube videos that were filmed at a Spark conference. The videos can be found here: https://www.youtube.com/watch?v=nxCm-_GdTl8 The series includes 7 videos that go over what Spark is and how it is meant to be used.
In summary, Spark is built to run big data workloads across many machines. It is not meant for high volumes of small transactions, but for the analysis of data.
Spark is built around the RDD (Resilient Distributed Dataset). This means that Spark does not edit the data that is passed in, but rather uses it to perform transformations (filters, joins, maps, etc.) and then actions (reductions, counts, etc.). The results are stored in new datasets instead of altering existing ones.
RDDs are meant to be kept in memory for quick access; however, Spark is built so that, if necessary, they can be written to disk (at a reduced I/O speed).
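As a small illustration of this model, here is a minimal sketch. It assumes a SparkContext named sc, like the one created in the full program at the end of this page, and the word "apple" is just an illustrative filter, not part of the assignment:
val lines    = sc.textFile("food.txt")            // creation: load data into an RDD
val filtered = lines.filter(_.contains("apple"))  // transformation: returns a new RDD, lines is untouched
val count    = filtered.count()                   // action: this is what actually runs the computation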
As mentioned above, there are three main steps in a Spark program: creation, transformation, and action.
Creation
The creation step is the step in which the data is loaded into the first RDD. This data can be loaded from multiple sources, but for the purpose of this assignment it is loaded from a local text file. In Scala, the following line will load a text file into a variable (sc is the SparkContext, created in the full program below).
val test = sc.textFile("food.txt")
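As a hedged aside, the same SparkContext can build RDDs from other sources as well, for example (the HDFS path here is hypothetical):
val local = sc.textFile("food.txt")                                  // a local text file, as in this assignment
val hdfs  = sc.textFile("hdfs:///data/food.txt")                     // a file on HDFS (hypothetical path)
val inMem = sc.parallelize(Seq("an", "in", "memory", "collection"))  // an existing Scala collection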
Transformation
The next step is to transform the data in the first RDD into something you want to use. In this example we would like to count the number of times each word is used in a piece of text. The following code snippet will split the RDD by spaces, and map each word to a key-value pair. The value is set to 1 so that each occurrence of a word counts as 1.
test.flatMap { line =>
  line.split(" ")
}
.map { word =>
  (word, 1)
}
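To see what the resulting pairs look like, here is a small sketch using a made-up in-memory sample (collect pulls the RDD back to the driver, so it is only sensible on small data):
val sample = sc.parallelize(Seq("spark is fast", "spark scales"))
val pairs  = sample.flatMap(_.split(" ")).map(word => (word, 1))
pairs.collect().foreach(println)
// prints, in some order: (spark,1) (is,1) (fast,1) (spark,1) (scales,1)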
Action
Finally, an action is taken to reduce the RDD. In this example we want to reduce the pairs by adding up the values for each word. To do this we can use:
.reduceByKey(_ + _)
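Continuing the small made-up sample from the previous section, the reduction sums the 1s for each key, so each word ends up paired with its total count:
val counts = pairs.reduceByKey(_ + _)
counts.collect().foreach(println)
// prints, in some order: (spark,2) (is,1) (fast,1) (scales,1)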
Full Program
To run the program on Windows, the full program is as follows.
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]) = {

    // Required on Windows: point Hadoop at the winutils binaries
    System.setProperty("hadoop.home.dir", "c:\\winutil\\")

    val conf = new SparkConf()
      .setAppName("WordCount")
      .setMaster("local")

    val sc = new SparkContext(conf)

    // Creation: load the text file into the first RDD
    val test = sc.textFile("food.txt")

    // Transformation: split each line into words, pair each word with 1
    test.flatMap { line =>
      line.split(" ")
    }
    .map { word =>
      (word, 1)
    }
    // Action: sum the counts per word and write the result to disk
    .reduceByKey(_ + _)
    .saveAsTextFile("food.count.txt")

    sc.stop()
  }
}
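A note on the configuration: .setMaster("local") tells Spark to run everything inside a single local JVM, which is what makes this example convenient to run on a development machine; on a real cluster the master would normally be supplied when the job is submitted (for example through Spark's spark-submit tool) rather than hard-coded. The System.setProperty("hadoop.home.dir", ...) line points Spark's Hadoop dependency at the winutils binaries it needs on Windows.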