Avengers

From CDOT Wiki

Revision as of 14:36, 30 March 2019



Team Avengers

Team Members

  1. Bruno Alexander Cremonese de Morais, Pythagorean Triples
  2. Jaideep Sidhu, Array Processing

Progress

Assignment 1

Bruno - Pythagorean Triples

Pythagorean Triples by Violet_82

"A Pythagorean triple is a triple of positive integers a, b, and c such that a right triangle exists with legs a,b and hypotenuse c. By the Pythagorean theorem, this is equivalent to finding positive integers a, b, and c satisfying:

a^2 + b^2 = c^2.

The smallest and best-known Pythagorean triple is (a,b,c)=(3,4,5). The right triangle having these side lengths is sometimes called the 3, 4, 5 triangle. [...]

In addition, one side of every Pythagorean triple is divisible by 3, another by 4, and another by 5. One side may have two of these divisors, as in (8, 15, 17), (7, 24, 25), and (20, 21, 29), or even all three, as in (11, 60, 61)."

Weisstein, Eric W. "Pythagorean Triple." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PythagoreanTriple.html
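The divisibility property quoted above can be sanity-checked with a few lines of C++; the helper name below is ours, for illustration only:

```cpp
#include <cassert>

// Checks the quoted property: in every Pythagorean triple, some side is
// divisible by 3, some by 4, and some by 5 (one side may cover several).
// hasDivisors345 is a hypothetical helper, not from the original code.
bool hasDivisors345(int a, int b, int c) {
    auto anyDivBy = [&](int d) { return a % d == 0 || b % d == 0 || c % d == 0; };
    return anyDivBy(3) && anyDivBy(4) && anyDivBy(5);
}
```

For example, (8, 15, 17) passes because 15 covers both 3 and 5 while 8 covers 4.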

This algorithm calculates all Pythagorean triples (not only primitive ones) up to a maximum hypotenuse value given by the user. It still needs some logical improvement to display only values within a starting range; that improvement should not be hard to implement, but the code already serves its purpose of calculating every Pythagorean triple from the simplest up to the given maximum hypotenuse. The code has been slightly modified to receive command-line parameters, which makes it easier to run in UNIX environments.

The flat profile generated by the program is as follows:

Flat profile:

Each sample counts as 0.01 seconds.

 %   cumulative   self              self     total           
time   seconds   seconds    calls  Ts/call  Ts/call  name    
97.67      5.98     5.98                             Triangle::calculateDimensions(double, double, double, int)
 0.00      5.98     0.00     2842     0.00     0.00  Triangle::printDimensions(double, double, double)
 0.00      5.98     0.00        1     0.00     0.00  _GLOBAL__sub_I__ZN8TriangleC2Edddi

We can see that the function calculateDimensions, which is called only once, is responsible for almost all of the execution time. For this profiling run, the program calculated every triple from the most primitive (3, 4, 5) up to a maximum hypotenuse value of 1500. With its nested for loops calculating every possible value for each triple, the function runs in O(n^3) time, demanding a lot of computational power.

Since every loop iteration executes the same instructions, I believe parallelizing this code should be straightforward and would make it much faster. The calculations could be divided into tasks, or the offload could be done in order, since one calculation does not depend explicitly on another.

For further utility, the results could be sorted before display, since the GPU will produce values in no particular order.

Code and Execution Instructions on GitHub

Jaideep - Array Processing

In Blaise Barney's Notes on Array Processing, an example of Array Processing is discussed. The example "demonstrates calculations on 2-dimensional array element; a function is evaluated on each array element."

I used the pseudo-code provided to create a program that creates a 2-dimensional array. The purpose of the program is to create and populate a 2-dimensional array of size n (provided by the user) with random numbers. The code is available in the link below:

Link to Array Processing code

After using gprof to profile my program, a call graph is generated with this content:

Array Processing - Call Graph.jpeg

From the call graph file, it is evident that the generateRandom() function is the hotspot: it accounts for 100% of the execution time. The function consists of 2 for loops, one nested in the other, giving it a time complexity of O(n^2).

The computation for each element of the array is independent of the rest of the elements, which makes this function a strong candidate for parallelization. Additionally, the array elements can be evenly distributed into sub-arrays, with a process assigned to each sub-array.
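Based on the description above, a generateRandom() along these lines would produce that profile; the name and layout are assumptions, not the original source:

```cpp
#include <cstdlib>
#include <vector>

// Hedged sketch of a generateRandom() like the one profiled: two nested
// loops fill an n x n array, so the work grows as O(n^2), and each
// element is computed independently of the others.
std::vector<std::vector<int>> generateRandom(int n) {
    std::vector<std::vector<int>> a(n, std::vector<int>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            a[i][j] = std::rand() % 100;  // one random value per element
    return a;
}
```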

Assignment 2

For Assignment 2, we decided to parallelize the application selected by Bruno. In the code, the function that took up a significant amount of time was the calculateDimensions() function. The flat profile indicates that this function takes 97.67% of the execution time.

Identifying Parallelizable Code

calculateDimensions() has 3 nested for loops. Each for loop is used to set the value of one of the triangle sides. The inner-most for loop compares the two shorter sides of the triangle by first squaring them and then adding the squared results together. A condition is used to check if the sum of the squared side values is equivalent to the squared value of the hypotenuse. The results are printed when the condition is true.

NestedLoops.PNG

The nested for loops represent the serial way of calculating the dimensions.
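A minimal serial reconstruction of those loops might look like the following; the function name and loop bounds are assumptions based on the description, not the authors' exact code:

```cpp
#include <cstdio>

// Hedged reconstruction of the serial search: each loop fixes one side,
// and the innermost condition checks a^2 + b^2 == c^2, printing matches.
// Returns the number of triples found so the result can be checked.
int calculateDimensionsSerial(int maxHypotenuse) {
    int found = 0;
    for (int c = 1; c <= maxHypotenuse; ++c)       // hypotenuse
        for (int b = 1; b < c; ++b)                // longer leg
            for (int a = 1; a <= b; ++a)           // shorter leg
                if (a * a + b * b == c * c) {
                    printf("%d %d %d\n", a, b, c);
                    ++found;
                }
    return found;
}
```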

Offloading Process

To parallelize the code mentioned above, we did the following:

1. Use CUDA device properties to design the grid and blocks.

CudaDevProps.PNG
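The device query in step 1 presumably looks something like this sketch; the fields printed here are assumptions, and the original screenshot may use others:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hedged sketch: query the properties of device 0 and report the limits
// that matter when designing the grid and blocks.
int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("name: %s\n", prop.name);
    printf("max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("max grid size (x): %d\n", prop.maxGridSize[0]);
    return 0;
}
```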

2. Adjust the number of threads to be used in the grid depending on the value passed in by the user (maximum hypotenuse value).

AdjustNTPG.PNG
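The adjustment in step 2 can be sketched as simple host-side arithmetic; the function name is ours, for illustration:

```cpp
// Hedged sketch: when the user's maximum hypotenuse is smaller than the
// device's threads-per-block limit, use fewer threads so none sit idle.
int adjustThreadsPerBlock(int maxHypotenuse, int deviceMaxThreads) {
    return maxHypotenuse < deviceMaxThreads ? maxHypotenuse : deviceMaxThreads;
}
```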

3. Allocate 2 arrays on the device, initialized from 1 to the maximum hypotenuse (given by the user as an argument):

  • 1 array represents the hypotenuse side
  • 1 array represents one of the sides of the triangle
  • This was done using CUDA's Thrust library.

AllocInit.PNG
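Step 3 with Thrust might look like the following sketch; the vector names are assumptions:

```cuda
#include <thrust/device_vector.h>
#include <thrust/sequence.h>

// Hedged sketch: two device vectors of length maxHypotenuse, each filled
// with the sequence 1, 2, ..., maxHypotenuse via thrust::sequence.
void allocateAndInit(int maxHypotenuse) {
    thrust::device_vector<int> hypotenuses(maxHypotenuse);
    thrust::device_vector<int> sides(maxHypotenuse);
    thrust::sequence(hypotenuses.begin(), hypotenuses.end(), 1);
    thrust::sequence(sides.begin(), sides.end(), 1);
}
```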

4. Calculate the number of blocks required and iterate through the thrust vector, passing each individual element to the kernel launch along with the two previously allocated arrays.

NBandLaunch.PNG
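The block count in step 4 is presumably the usual ceiling division, sketched here rather than taken from the screenshot:

```cpp
// Hedged sketch: request enough blocks that n * threadsPerBlock covers
// every element of the vectors, rounding up for partial blocks.
int numberOfBlocks(int n, int threadsPerBlock) {
    return (n + threadsPerBlock - 1) / threadsPerBlock;
}
```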

5. The kernel contains the instructions for verifying whether the value passed in is part of a Pythagorean triple. If a Pythagorean triple is found, the values are printed out.

Kernel.PNG
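Step 5's kernel could be sketched as follows; the name findTriples, the parameter layout, and the pairing of threads with side values are all assumptions based on the description, not the actual screenshot:

```cuda
#include <cstdio>

// Hedged sketch: each launch receives one candidate hypotenuse c; each
// thread fixes one value for side a and scans side b, printing any pair
// that satisfies a^2 + b^2 == c^2.
__global__ void findTriples(const int* a, const int* b, int c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    for (int j = 0; j < n; ++j)
        if (a[i] * a[i] + b[j] * b[j] == c * c)
            printf("%d %d %d\n", a[i], b[j], c);
}
```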

Time Logging

To compare the timings of the serial version and the parallel version, we modified the original file to have 2 functions: calculateCUDA() and calculateSerial(). The execution of both of these functions was timed to see which function was quicker.

Timings.PNG

calculateSerial() contains the initial version of the application. It has 3 nested for loops and has a serialized approach to finding the Pythagorean triples. The time taken to find the triples is printed out after execution.

calculateCUDA() contains the parallelized version of the application. It sets the properties of a grid and its blocks, and launches a kernel to find the Pythagorean triples. The time taken to find the triples is printed out after execution.
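The timing itself was likely done with a wrapper along these lines; std::chrono here is our assumption, and the original may use a different clock:

```cpp
#include <chrono>

// Hedged sketch: run a callable and return the elapsed wall-clock time
// in milliseconds, usable for both calculateSerial() and calculateCUDA().
template <typename Func>
double elapsedMs(Func f) {
    auto start = std::chrono::steady_clock::now();
    f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```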


Results

Below is a graph that shows the time taken for execution of both the serial approach and the parallel approach.

(Image)

Assignment 3