Coupled with Maxwell's equations, the Navier–Stokes equations can be used to model and study magnetohydrodynamics. (Courtesy of Wikipedia: https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations)
=== Problem: Application Code to Be Parallelized ===
The bottleneck in this application lies in the main function, where the finite-difference update is calculated over the entire grid.
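Judging from the update statement shown in the code fragments later in this report, the stencil appears to be the first-order upwind discretization of the 1-D linear convection equation (my reading of the code; the formula is not stated explicitly in the original):

<math>u_i^{n} = u_i^{n-1} - c\,\frac{\Delta t}{\Delta x}\left(u_i^{n-1} - u_{i-1}^{n-1}\right)</math>

where <math>c</math> is the wave speed and <math>\Delta t</math>, <math>\Delta x</math> are the time and space steps.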
=== Initial Speed Tests Run with No Optimization on Linux ===
The system specifications were obtained by running the command '''cat /proc/cpuinfo'''.
{| class="wikitable sortable" border="1" cellpadding="5"
|+ Initial CPU Execution Times
! n !! Time (ms)
|-
||12500 x 12500 || 220198
|}
=== gprof ===
System Specifications
== Application 2: Calculating Pi ==
This application is straightforward: it calculates Pi to the decimal place specified by the user. An input of 10 versus 100,000 will calculate Pi to the 10th or the 100,000th decimal place, respectively.
=== Problem: Application Code to Be Parallelized ===
Inside the function '''calculate''', I believe the two nested for loops account for most of the program's execution time.
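The original listing is not reproduced here, but a digit-at-a-time Pi routine typically has exactly this nested-loop shape. As a purely illustrative sketch (this is the classic Rabinowitz–Wagon spigot algorithm, not the project's actual code), note how the outer loop runs once per digit while the inner loop sweeps the whole remainder array, so the total work grows roughly quadratically with the requested digit count:
'''
#include <cstdio>
#include <cstdlib>
#include <vector>

// Hypothetical stand-in for the project's calculate() function:
// the Rabinowitz-Wagon spigot prints Pi one digit at a time
// (decimal point omitted for brevity).
int main(int argc, char* argv[])
{
    int n = (argc > 1) ? std::atoi(argv[1]) : 50; // digits requested
    int len = n * 10 / 3 + 1;                     // mixed-radix digits needed
    std::vector<int> a(len, 2);                   // pi = 2 + 1/3(2 + 2/5(2 + ...))

    int predigit = 0, nines = 0;
    for (int j = 0; j < n; j++)                   // outer loop: one pass per digit
    {
        int carry = 0;
        for (int i = len - 1; i >= 1; i--)        // inner loop: full array sweep
        {
            int x = 10 * a[i] + carry;
            a[i] = x % (2 * i + 1);
            carry = (x / (2 * i + 1)) * i;
        }
        int x = 10 * a[0] + carry;
        a[0] = x % 10;
        int q = x / 10;                           // tentative next digit

        if (q == 9) { nines++; }                  // hold 9s: a later carry may bump them
        else if (q == 10)                         // carry ripples into the held digits
        {
            std::printf("%d", predigit + 1);
            while (nines > 0) { std::printf("0"); nines--; }
            predigit = 0;
        }
        else
        {
            if (j > 0) std::printf("%d", predigit); // first predigit is a dummy
            while (nines > 0) { std::printf("9"); nines--; }
            predigit = q;
        }
    }
    std::printf("%d\n", predigit);
    return 0;
}
'''
If the project's '''calculate''' has this shape, the inner sweep is the natural target for parallelization.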
=== Initial Speed Tests Run with No Optimization on Linux ===
For this test, the Linux VM has the following specifications:
{| class="wikitable sortable" border="1" cellpadding="5"
|+ Initial CPU Execution Times
! n !! Time (ms)
|-
||500000 || 671163
|}
=== gprof ===
'''
for (int i = 0; i <= nx - 1; i++)
{
    // apply the upwind update only where 0.5 <= i*dx <= 1
    if (i*dx >= 0.5 && i*dx <= 1)
        u[i][it] = un[i][it-1] - c*dt/dx * (un[i][it-1] - un[i-1][it-1]);
}
'''
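For the GPU version the 2D arrays are flattened to 1D, so an element u[i][it] becomes u[i * nx + it], as the next fragment shows. A minimal sketch of the mapping (my own illustration, with hypothetical row/col names):
'''
// Illustration only: mapping a 2D index (row, col) onto the flattened
// row-major layout used below, where each row is nx elements wide.
inline float& at(float* u, int nx, int row, int col)
{
    return u[row * nx + col];   // u[row][col] becomes u[row * nx + col]
}
'''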
'''
u[k * nt + 0] = 1;   // first element of row k set to 1 (flattened indexing)
}
// ... (enclosing loop over rows, index m, not shown in this fragment)
for (int it = 1; it <= nx - 1; it++)
{
    // same upwind update, now using flattened 1D indexing: u[row * nx + col]
    u[m * nx + it] = un[m * nx + it - 1] - c*dt / dx * (un[m * nx + it - 1] - un[(m - 1) * nx + it - 1]);
}
'''
After these changes, testing the code produced the same results as the original program, which confirms that we can proceed to optimizing the code on the GPU.
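One way such a confirmation can be performed (a sketch of my own, not the project's actual test harness) is an element-wise comparison of the two result grids within a small tolerance:
'''
#include <cmath>

// Hypothetical helper: returns true when every element of two nx*nt
// result grids agrees within eps. Not part of the original project.
bool sameResults(const float* a, const float* b, int nx, int nt, float eps = 1e-5f)
{
    for (int i = 0; i < nx * nt; i++)
        if (std::fabs(a[i] - b[i]) > eps)
            return false;
    return true;
}
'''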
== Optimizing Problems: Parallelizing with 2 Kernels ==

The kernels have been launched as a 2D grid: '''dim3 dGrid(nbx, nbx);''' and '''dim3 dBlock(ntpb, ntpb);'''

In the first kernel I have replaced the for loop statement. The goal of that statement was to set the first value in each column to either 1 or 2, based on the condition in the if statement, so the for loop is not needed.

=== INITIALIZE KERNEL ===
'''
__global__ void Initalize(float* u, float* un, int nx, int nt, float dx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < nx && j < nx)
    {
        // first value of each row: 2 inside the interval [0.5, 1], 1 elsewhere
        if (i*dx >= 0.5 && i*dx <= 1)
            u[i * nx] = 2;
        else
            u[i * nx] = 1;
    }
}
'''

=== CALCULATE WAVE KERNEL ===

This was the tricky part in converting the original code into the kernel. I have removed the two inner for loops but kept the outer loop. The program takes two arrays; let us say the X's represent the arrays below.
'''
__global__ void Calculate(float* u, float* un, int nx, int c, float dx, float dt)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    // hoisted out of the loop so it is not recomputed nx times
    float total = c*dt / dx;
    if (i < nx && j < nx)
    {
        for (int it = 1; it <= nx - 1; it++)
        {
            if (i != 0)   // row 0 holds boundary values and is not updated
            {
                un[i * nx + it - 1] = u[i * nx + it - 1];
                __syncthreads();
                u[it] = un[1 * nx + it - 1];
                __syncthreads();
                u[i * nx + it] = un[i * nx + it - 1] - total * (un[i * nx + it - 1] - un[(i - 1) * nx + it - 1]);
                __syncthreads();
            }
        }
    }
}
'''

==== HOW THE ALGORITHM WORKS ====

This is focusing on the algorithm inside the CALCULATE kernel only.

1. We begin with 2 arrays. [[File:2Arrazs.png]]

2. The first column of the First array is initialized by the INITIALIZE kernel. [[File:Initialize.png]]

3. The Second array copies the values from the first column of the First array. [[File:Copy1stColumn.png]]

4. The First array copies a single value from the Second array. [[File:2ndCall.png]]

5. The remaining values for the 2nd column of the First array are calculated through the Second array as follows.
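For context, here is a hypothetical host-side driver for the two kernels above (a sketch of my own: the '''runSimulation''' wrapper, buffer names and lack of error handling are assumptions, and the project's actual main() is not shown):
'''
#include <cuda_runtime.h>

// Hypothetical host-side setup: allocate the flattened nx*nx grids,
// launch the two kernels, and copy the finished grid back to h_u.
void runSimulation(float* h_u, int nx, int nt, int c, float dx, float dt,
                   int nbx, int ntpb)
{
    float *d_u = nullptr, *d_un = nullptr;
    cudaMalloc(&d_u,  nx * nx * sizeof(float));
    cudaMalloc(&d_un, nx * nx * sizeof(float));

    dim3 dGrid(nbx, nbx);
    dim3 dBlock(ntpb, ntpb);

    Initalize<<<dGrid, dBlock>>>(d_u, d_un, nx, nt, dx);    // set first column
    Calculate<<<dGrid, dBlock>>>(d_u, d_un, nx, c, dx, dt); // march in time
    cudaDeviceSynchronize();

    cudaMemcpy(h_u, d_u, nx * nx * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_u);
    cudaFree(d_un);
}
'''
Note that '''dGrid''' and '''dBlock''' match the 2D launch configuration described at the top of this section.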
Executing the program again with a problem size of 2000 x 2000 (4,000,000 elements) yields the following results:
 First for loop - took - 0 millisecs
 2nd for loop - took - 0 millisecs
 Press any key to continue . . .
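The timing code itself is not shown in this report; one way such per-kernel numbers can be collected is with CUDA events, as in this sketch (the '''timeCalculate''' wrapper and its parameters are my own assumptions):
'''
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical timing harness using CUDA events; assumes the device
// buffers and launch configuration from the sketch above.
void timeCalculate(float* d_u, float* d_un, int nx, int c, float dx, float dt,
                   dim3 dGrid, dim3 dBlock)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    Calculate<<<dGrid, dBlock>>>(d_u, d_un, nx, c, dx, dt);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed time in milliseconds
    std::printf("Calculate kernel - took - %.0f millisecs\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
}
'''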
The optimized INITIALIZE kernel now uses a 1D thread index, zeroes both arrays, and sets the first element of each row:
'''
__global__ void Initalize(float* u, float* un, int nx, int nt, float dx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nx)
    {
        for (int it = 0; it < nx; it++)
        {
            u[i * nx + it] = 0;
            un[i * nx + it] = 0;
        }
        if (i*dx >= 0.5 && i*dx <= 1)
            u[i * nx] = 2;
        else
            u[i * nx] = 1;
    }
}
'''

== POST OPTIMIZATION - Execution Comparison Times ==

If you have not, please take a look at section 3.1.1.1 (just above), as it shows how the first iteration of optimization has been delivered. Below is a comparison of times from the original CPU version to the newly optimized kernel execution. These comparison times cover the WHOLE execution of the program, not just parts; they include memory transfers, allocation, de-allocation and calculations.

{| class="wikitable sortable" border="1" cellpadding="5"
|+ Execution Comparison Times (milliseconds)
! N !! Linux !! Visual No Parallel !! Parallelized !! Optimized_A
|-
||2000 x 2000 || 1160 || 20520 || 6749 || 971
|-
||5000 x 5000 || 28787 || 127373 || n/a || 1417
|-
||10000 x 10000 || 124179 || 522576 || n/a || 3054
|}

[[File:ParallelizedVSOptimized.png]]

== SECOND OPTIMIZATION ==

=== Shared Memory ===

In order to speed up the execution time I will incorporate shared memory into the Calculate kernel. The problem I am facing is determining in what way to use shared memory. As I outlined above in section 2.2.2 regarding how the calculation on each array is performed, the program calculates column by column, not row by row; however, it also moves between rows after calculating each column. I can only allocate a static array, not a dynamic one, so my shared memory will be the same size as my predefined ntpb variable, which represents the threads I use per block. As of writing this, my ntpb variable is 32, therefore each shared array will be 128 bytes (32 floats). I cannot copy the whole array into shared memory, and I cannot copy the array row by row, so we will need to copy the array column by column into shared memory.

As for the second array, it has become clear that it is no longer needed: we can simply use the shared-memory array to perform the calculations of each column, save the results in the original array's next column, then copy that column into the shared array and repeat the calculations.

=== SHARED MEMORY KERNEL ===
'''
// kernel
__global__ void Calculate(float* u, float* un, int nx, int c, float dx, float dt)
{
    __shared__ float s[ntpb];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int t = threadIdx.x;
    float total = c*dt / dx;
    if (i < nx && i != 0 && t != 0)
    {
        for (int it = 1; it <= nx - 1; it++)
        {
            s[t - 1] = u[(i - 1) * nx + it - 1];
            u[it] = s[1];
            __syncthreads();
            u[i * nx + it] = s[t] - total * (s[t] - s[t - 1]);
            __syncthreads();
        }
    }
}
'''

=== EXECUTION COMPARISON BETWEEN OPTIMIZED AND SHARED KERNELS ===

Below, in milliseconds, are the execution times for the former kernel and the new shared kernel.

{| class="wikitable sortable" border="1" cellpadding="5"
|+ Time Comparison
! n !! Optimized !! Shared
|-
||2000 x 2000 ||971|| 661
|-
||5000 x 5000 ||1417|| 936
|-
||10000 x 10000 ||3054|| 2329
|}

== THIRD OPTIMIZATION ==

=== SAVING TRAVEL COSTS BY REMOVING THE UNNECESSARY ARRAY ===

As we discovered above, the second array is not necessary once all the calculations are performed in shared memory, as seen in section 3.3.2. This gives us the ability to further optimize our kernel by reducing the amount of time spent transferring data across the PCI bus. Below is an image of the data transfer times for the CALCULATE kernel.
Since the second of the original arrays is not needed in the final kernel solution, we can save 50% of our transfer time across the PCI bus by removing it.

[[File:MEmCpy10000.png]]

=== GETTING 100% OCCUPANCY PER MULTIPROCESSOR ===

'''Occupancy Calculator: The CUDA Toolkit includes a spreadsheet that accepts as parameters the compute capability, the number of threads per block, the number of registers per thread and the shared memory per block. This spreadsheet evaluates these parameters against the resource limitations of the specified compute capability. This spreadsheet is named CUDA_Occupancy_Calculator.xls and stored under the ../tools/ sub-directory of the installed Toolkit.'''

Source: https://scs.senecac.on.ca/~gpu610/pages/content/resou.html

With the existing CALCULATE kernel, the CUDA Occupancy Calculator provides the statistics shown below.

[[File:OriginalCalculator.png]]

The current CALCULATE kernel is only utilizing 50% of the multiprocessor, as shown above. If the threads per block are switched from 32 to 512, we will achieve 100% occupancy, as shown below.

[[File:100Calculator.png]]

=== CALCULATE KERNEL ===

Here is the final CALCULATE kernel for the application. The changes include removal of the second array.
'''
// kernel
__global__ void Calculate(float* u, int nx, int c, float dx, float dt)
{
    __shared__ float s[ntpb];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int t = threadIdx.x;
    float total = c*dt / dx;
    if (i < nx && i != 0 && t != 0)
    {
        for (int it = 1; it <= nx - 1; it++)
        {
            s[t - 1] = u[(i - 1) * nx + it - 1];
            u[it] = s[1];
            __syncthreads();
            u[i * nx + it] = s[t] - total * (s[t] - s[t - 1]);
            __syncthreads();
        }
    }
}
'''

=== OPTIMIZATION TIME COMPARISONS ===

Below is a graph comparing times between optimizations, illustrating the amount of execution time saved in each iteration. The times are listed in milliseconds.

[[File:OPTIMIZATIONCOMPARISON.png]]

= CONCLUSIONS =

== OVERALL TIME COMPARISONS ==
Upon completion, the application creates a file containing the output of the algorithm. The image below compares that output for the original program and the parallelized program.
[[File:outputs.png]]