
BarraCUDA Boiz

1,067 bytes removed, 00:53, 14 April 2017
== Progress ==
==== Problem ====
After surveying the original code, we found three major hot-spots of heavy CPU usage.

This block of code reshapes the input pixels into a set of samples for classification.
 const int N = width * height;
 const int dim = img.channels();
 cv::Mat samples = cv::Mat(N, dim, CV_32FC1);
 for (int x = 0; x < width; x++) {
     for (int y = 0; y < height; y++) {
         for (int d = 0; d < dim; d++) {
             int index = y * width + x;
             samples.at<float>(index, d) = (float)img.at<uchar>(y, x*dim + d);
         }
     }
 }

[[File:SetSamplesSerial.png]]

This block of code computes the distances between sampled centers and other input samples.

[[File:CalculateDistanceSerial.png|550px]]

This block of code generates the output image.

[[File:GenerateImageSerial.png|550px]]
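The reshaping above can be sketched with plain arrays in place of cv::Mat (the function name and flat array layout here are illustrative, not the project's code): each pixel (x, y) becomes row y * width + x of an N x dim sample matrix.

```cpp
#include <cassert>
#include <vector>

// Serial sketch of the sample-reshaping hot-spot: an interleaved
// width x height image with `dim` channels becomes an N x dim matrix
// where row (y * width + x) holds the channels of pixel (x, y).
std::vector<float> setSamplesSerial(const std::vector<unsigned char>& img,
                                    int width, int height, int dim) {
    const int N = width * height;
    std::vector<float> samples(N * dim);
    for (int x = 0; x < width; ++x)
        for (int y = 0; y < height; ++y)
            for (int d = 0; d < dim; ++d) {
                int index = y * width + x;  // row of this pixel's sample
                samples[index * dim + d] =
                    static_cast<float>(img[y * width * dim + x * dim + d]);
            }
    return samples;
}
```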
==== Analysis ====
You can find the new parallelized KmeansPlusPlus code [https://github.com/MajinBui/kmeansplusplusCUDA here]. Here are the kernels that we programmed.

Set Samples kernel

[[File:SetSamplesKernel.png|550px]]

Calculate Distance kernel

[[File:CalculateDistanceKernel.png|550px]]

Generate Image kernel

[[File:GenerateImageKernel.png|550px]]

==== Conclusion ====
By comparing the run-times of the serial KmeansPlusPlus and the parallelized version, we can see that the performance of the program has improved.

[[File:GraphAssignment2.png|900px]]

The performance improvement is not significant for smaller cluster counts and iteration counts, but it is clear for the larger test cases.

=== Assignment 3 ===
For Assignment 3, we optimized the kernels by allocating the correct amounts of grids and blocks for each kernel. Previously, we allocated 32 threads by 32 blocks for every kernel call, even when a kernel did not require it. After these adjustments, we found significant improvements for many of the kernels.
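The grid-sizing arithmetic behind this change can be sketched in host-side C++. The 32 x 32 figures and the 100 x 100 image size below are illustrative assumptions, not measurements from the project:

```cpp
#include <cassert>

// Ceiling division: the smallest number of blocks whose combined
// threads still cover n elements (the kernel's bounds guard
// discards the overshoot).
int gridBlocks(int n, int ntpb) {
    return (n + ntpb - 1) / ntpb;
}

// Threads launched by a fixed 32x32 grid of 32x32 blocks
// (assumed pre-optimization configuration).
long long fixedThreads() {
    return 32LL * 32 * 32 * 32;
}

// Threads launched when the grid is sized to the actual image.
long long sizedThreads(int rows, int cols, int ntpb) {
    return 1LL * gridBlocks(rows, ntpb) * gridBlocks(cols, ntpb) * ntpb * ntpb;
}
```

For a hypothetical 100 x 100 image this launches 16 blocks of 1,024 threads (16,384 threads) instead of 1,024 blocks (1,048,576 threads), so far fewer idle threads pass through the bounds check.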
Here is the kernel that we programmed:

 __global__ void setCenter(float* d_center, float* d_sample, int n, int dim, int randi) {
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     int j = blockIdx.y * blockDim.y + threadIdx.y;
     if (i < n && j < n)
         d_center[j * n + i] = d_sample[j * randi + i];
 }

Launching the kernel:

 int nb = (n + ntpb - 1) / ntpb;
 dim3 dGrid(nb, nb, 1);
 dim3 dBlock(ntpb, ntpb, 1);
 float* d_center = nullptr;
 cudaMalloc((void**)&d_center, centers.rows * centers.cols * sizeof(float));
 cudaMemcpy(d_center, (float*)centers.data, centers.rows * centers.cols * sizeof(float), cudaMemcpyHostToDevice);
 check(cudaGetLastError());
 float* d_sample = nullptr;
 cudaMalloc((void**)&d_sample, samples.rows * samples.cols * sizeof(float));
 // Copy size must match the samples buffer, not the centers buffer.
 cudaMemcpy(d_sample, (float*)samples.data, samples.rows * samples.cols * sizeof(float), cudaMemcpyHostToDevice);
 int rand = genrand_int31() % n;
 setCenter<<<dGrid, dBlock>>>(d_center, d_sample, N, dim, rand);
 cudaDeviceSynchronize();

==== Runtime of program ====
Here we see that the program was improved by the optimizations of threads per block. For larger images, we found that the program improved more and more as the number of clusters and iterations increased.
[[File:Big Image.png]]
For medium images, we found more inconsistent results.

[[File:Med Image.png]]

The kernels:

setSamples - goes through the entire image and collects samples from the image (the current pixel and the next x number of pixels).

 __global__ void setSamples(cv::cuda::PtrStepSz<float> samples, cv::cuda::PtrStepSz<uchar> img, int dimC) {
     int i = blockIdx.y*blockDim.y + threadIdx.y;
     int j = blockIdx.x*blockDim.x + threadIdx.x;
     if (i >= img.rows || j >= img.cols) return;
     int index = i * img.cols + j;
     for (int d = 0; d < dimC; d++) {
         samples(index, d) = (float)img(i, j * dimC + d);
     }
 }
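As a sanity check on this launch geometry, the same (block, thread) to (row, column) mapping can be walked serially to confirm every pixel is handled exactly once. This is a CPU sketch of the indexing only, not code from the project:

```cpp
#include <cassert>
#include <vector>

// CPU simulation of the 2D launch geometry used by a kernel like
// setSamples: each (block, thread) pair maps to one (i, j) pixel,
// and out-of-range pairs are rejected by the same guard the kernel
// uses. Returns the number of pixels visited.
int coveredPixels(int rows, int cols, int ntpb) {
    int gx = (cols + ntpb - 1) / ntpb;  // blocks along x (columns)
    int gy = (rows + ntpb - 1) / ntpb;  // blocks along y (rows)
    std::vector<int> hits(rows * cols, 0);
    for (int by = 0; by < gy; ++by)
        for (int bx = 0; bx < gx; ++bx)
            for (int ty = 0; ty < ntpb; ++ty)
                for (int tx = 0; tx < ntpb; ++tx) {
                    int i = by * ntpb + ty;  // row, as in the kernel
                    int j = bx * ntpb + tx;  // column
                    if (i >= rows || j >= cols) continue;  // kernel's guard
                    ++hits[i * cols + j];
                }
    int total = 0;
    for (int h : hits) { assert(h == 1); total += h; }  // each pixel once
    return total;
}
```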
For small images, we found the most inconsistent results after optimizations.

[[File:Small Image.png]]

calculateDistance - goes through the image and computes the difference between the samples and the centers from the input image.

 __global__ void calculateDistance(cv::cuda::PtrStepSz<float> centers, cv::cuda::PtrStepSz<float> samples, int k, int N, int dim, double* minval, float* D) {
     // Compute distances between already sampled centers and other input samples.
     // Update nearest distance if it is smaller than previous ones.
     int col = blockIdx.x * blockDim.x + threadIdx.x;
     int row = blockIdx.y * blockDim.y + threadIdx.y;
     int i = col + row * N;
     if (i >= N) return;
     double dist = 0.0;
     for (int d = 0; d < dim; d++) {
         double diff = centers(k - 1, d) - samples(i, d);
         dist += diff * diff;
     }
     if (dist < minval[i]) {
         minval[i] = dist;
     }
     *D += minval[i];
 }
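The serial logic this kernel parallelizes can be sketched as follows (an illustrative CPU version using nested vectors in place of cv::Mat; note that the kernel's unguarded `*D += minval[i]` is a concurrent update from many threads and would generally need an atomic add or a reduction to be safe):

```cpp
#include <cassert>
#include <vector>

// Serial equivalent of the calculateDistance step: for the newest
// center (row k-1), update each sample's nearest-center squared
// distance, and accumulate the total D used for k-means++ weighting.
double updateDistances(const std::vector<std::vector<float>>& centers,
                       const std::vector<std::vector<float>>& samples,
                       int k, std::vector<double>& minval) {
    double D = 0.0;
    const int dim = static_cast<int>(samples[0].size());
    for (std::size_t i = 0; i < samples.size(); ++i) {
        double dist = 0.0;
        for (int d = 0; d < dim; ++d) {
            double diff = centers[k - 1][d] - samples[i][d];
            dist += diff * diff;  // squared Euclidean distance
        }
        if (dist < minval[i]) minval[i] = dist;  // keep nearest center
        D += minval[i];
    }
    return D;
}
```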
The larger the image, the more efficient the kernel.

generateImage - takes the modified image and then writes it to the file using the function "out()".

 __global__ void generateImage(cv::cuda::PtrStepSz<uchar> out, cv::cuda::PtrStepSz<int> indices, cv::cuda::PtrStepSz<float> centers, int dim) {
     // Generate output image
     int i = blockIdx.y*blockDim.y + threadIdx.y;
     int j = blockIdx.x*blockDim.x + threadIdx.x;
     if (i >= out.rows || j >= out.cols) return;
     int index = i * out.cols + j;
     int ci = indices(index, 0);
     for (int d = 0; d < dim; d++) {
         out(i, j*dim + d) = (uchar)centers(ci, d);
     }
 }

==== Runtime of each kernel ====
After programming these kernels, we noticed an improvement in performance. Each kernel individually saw significant or marginal improvements after adjusting the thread/block sizes.
Set samples found small improvements on average.

[[File:Set Samples.png]]

Here we moved the calculation of y_index outside of the inner loop.

[[File:SetSamplesKernelOptimized.png|550px]]

Calculate distance found significant improvements.

[[File:Calculate Distance Kernel.png]]

The biggest change was the thread/block size.

[[File:CalculateDistanceKernelOptimized.png|550px]]

Generate image found improvements as well, since image sizes varied; changing the thread/block size to match the number of pixels enabled better usage of memory.

[[File:Generate Image Kernel.png]]

The biggest change was again the thread/block size.

[[File:GenerateImageKernelOptimized.png|550px]]

==== Conclusion ====
By comparing the run-times of the serial KmeansPlusPlus and the parallelized version, we can see that the performance of the program has improved slightly. This program can be further improved by off-loading more operations from the CPU to the GPU, but that will require more time and research.
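The y_index change described above for the set-samples kernel is loop-invariant code motion: the row offset depends only on the outer loop, so it can be computed once per row instead of once per element. A minimal illustrative sketch (not the project's code):

```cpp
#include <cassert>
#include <vector>

// Loop-invariant hoist: y_index is computed once per row, not once
// per element, yet the result is identical to recomputing it inside
// the inner loop.
long long rowSumHoisted(const std::vector<int>& img, int rows, int cols) {
    long long total = 0;
    for (int y = 0; y < rows; ++y) {
        int y_index = y * cols;            // hoisted: invariant in x
        for (int x = 0; x < cols; ++x)
            total += img[y_index + x];
    }
    return total;
}
```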