BetaT

== SECOND OPTIMIZATION ==
In order to speed up the execution time, I will incorporate shared memory into the Calculate kernel. The problem I am facing is determining how best to use it.
As I outlined above in section 2.2.2, where I described how the calculation on each array is performed, the program calculates column by column rather than row by row, although it also moves between rows after finishing each column. I can only allocate a static shared array, not a dynamic one, so my shared memory array will have the same size as my predefined ntpb variable, which represents the number of threads per block. As of writing this, ntpb is 32, therefore each shared array will be 128 bytes. I cannot copy the whole array into shared memory, and I cannot copy it row by row, so the array will need to be copied into shared memory column by column. As for the second array, it has become clear that it is no longer needed: we can simply use the shared memory array to perform the calculations for each column, save the results in the next column of the original array, then copy that column into the shared array and repeat the calculations.

=== SHARED MEMORY KERNEL ===

 // kernel
 __global__ void Calculate(float* u, float* un, int nx, int c, float dx, float dt) {
     __shared__ float s[ntpb];                          // holds one column's worth of u; ntpb = threads per block
     int i = blockIdx.x * blockDim.x + threadIdx.x;     // global row index
     int t = threadIdx.x;                               // row index within the block
     float total = c * dt / dx;                         // precompute c*dt/dx
     if (i < nx && i != 0 && t != 0) {
         for (int it = 1; it <= nx - 1; it++) {         // march across the columns
             s[t - 1] = u[(i - 1) * nx + it - 1];       // stage the previous column (it - 1) in shared memory
             u[it] = s[1];                              // write s[1] into row 0 of column it
             __syncthreads();
             u[i * nx + it] = s[t] - total * (s[t] - s[t - 1]); // compute column it from the staged values
             __syncthreads();
         }
     }
 }
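For context, here is a minimal sketch of how this kernel might be launched from the host. Only ntpb = 32 comes from the text above; the grid size nx, the values of c, dx, and dt, the initial condition, and the one-thread-per-row launch configuration are illustrative assumptions rather than the actual BetaT host code.

 // Hypothetical host-side launch for the shared-memory Calculate kernel.
 // Only ntpb = 32 is taken from the text; nx, c, dx, dt and the initial
 // condition are placeholder values for illustration.
 #include <cuda_runtime.h>
 #include <vector>
 
 const int ntpb = 32;                            // threads per block (from the text)
 
 // the shared-memory kernel shown above (definition omitted here)
 __global__ void Calculate(float* u, float* un, int nx, int c, float dx, float dt);
 
 int main() {
     int   nx = 1024;                            // assumed problem size (nx x nx grid)
     int   c  = 1;                               // assumed wave speed
     float dx = 2.0f / (nx - 1);                 // assumed spatial step
     float dt = 0.0025f;                         // assumed time step
 
     std::vector<float> h_u(nx * nx, 1.0f);      // host array with a placeholder initial condition
 
     float* d_u = nullptr;
     cudaMalloc(&d_u, nx * nx * sizeof(float));
     cudaMemcpy(d_u, h_u.data(), nx * nx * sizeof(float), cudaMemcpyHostToDevice);
 
     // one thread per row; the second array (un) is no longer needed, so nullptr is passed
     int nblocks = (nx + ntpb - 1) / ntpb;
     Calculate<<<nblocks, ntpb>>>(d_u, nullptr, nx, c, dx, dt);
     cudaDeviceSynchronize();
 
     cudaMemcpy(h_u.data(), d_u, nx * nx * sizeof(float), cudaMemcpyDeviceToHost);
     cudaFree(d_u);
     return 0;
 }

Because the shared array is statically sized to ntpb, the block size and the shared-memory tile always match, which is why ntpb must be a compile-time constant.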