CDOT Wiki β

BLAStoise

1,641 bytes added, 23:50, 5 April 2017
=== Assignment 2 ===
<h4>Parallelized Oil Painting Program</h4>
Despite our initial decision to parallelize the Sudoku solver from Assignment 1, we concluded that parallelizing the oil painting program was better suited for us. We were able to grasp the logic behind the oil painting program, whereas we had a lot of trouble working out the logic behind solving Sudoku puzzles.

<h4>The Code</h4>
[[Media: A2-Blastoise.zip]] This zip file contains our full code and an executable version. To create your own Visual Studio project with this code, you will need to download OpenCV and apply the following property settings to your project:
* add the OpenCV include path to VC++ Directories -> Include Directories;
* add opencv_world320d.lib to Linker -> Input;
* add a post-build command to copy the OpenCV dll files into your project directory (this last step can also be done by copying the OpenCV bin files into the debug directory of your project).

In the serial program, the oil painting worked by processing one pixel at a time in a double for loop going across the height and width of the image.
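The serial pass described above can be sketched roughly as follows. This is our simplified grayscale reconstruction with hypothetical names (the original program worked on OpenCV color images), not the code from the zip file: for each pixel, the neighbourhood intensities are binned into a small number of levels and the pixel is replaced by the average of the most populated bin.

```cpp
#include <algorithm>
#include <vector>

// Simplified serial oil-paint pass (grayscale sketch; names are ours).
// The double for loop over height and width is the one the text describes.
std::vector<unsigned char> oilPaintSerial(const std::vector<unsigned char>& src,
                                          int width, int height,
                                          int radius = 2, int levels = 20) {
    std::vector<unsigned char> dst(src.size());
    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            int count[256] = {0};
            long long sum[256] = {0};
            // Histogram the neighbourhood intensities into `levels` bins.
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int x = std::min(std::max(i + dx, 0), width - 1);
                    int y = std::min(std::max(j + dy, 0), height - 1);
                    unsigned char v = src[y * width + x];
                    int bin = v * levels / 256;
                    ++count[bin];
                    sum[bin] += v;
                }
            }
            // The output pixel is the average of the most populated bin,
            // which gives the flat "brush stroke" patches of an oil painting.
            int best = 0;
            for (int b = 1; b < levels; ++b)
                if (count[b] > count[best]) best = b;
            dst[j * width + i] =
                static_cast<unsigned char>(sum[best] / count[best]);
        }
    }
    return dst;
}
```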
We were able to remove the need for this loop by using a kernel. In the main function, we created the following block and grid and called our oil painting kernel. The ceil function was used to calculate the grid dimensions so that, given the block size, the entire image is covered by the grid.
<pre>
const dim3 block(ntpb, ntpb, 1); // ntpb was calculated based on the device property maxThreadsDim.
const dim3 grid(ceil((float)width / block.x), ceil((float)height / block.y), 1);
oilPaint<<<grid, block>>>(gpu_src, gpu_dst, width, height);
</pre>
In our kernel, the built-in index variables let each thread determine the exact position of its pixel in the 2D array. Now, instead of iterating through every pixel in the image, each thread in the grid adjusts the intensity of its own pixel. The overall logic of the code stayed the same: we moved the body of the double for loop into the kernel and used i and j to locate the pixel.
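The thread-to-pixel mapping can be illustrated with a small host-side sketch (our illustration, not the original file). In the kernel, each thread computes i = blockIdx.x * blockDim.x + threadIdx.x and j = blockIdx.y * blockDim.y + threadIdx.y; because the ceil-based grid can overshoot the image edge, the kernel must also guard with something like `if (i >= width || j >= height) return;`.

```cpp
#include <cmath>

// Minimal stand-in for CUDA's dim3, so the grid math can run on the host.
struct Dim3 { int x, y, z; };

// Grid dimensions computed exactly as in the launch code above:
// enough blocks in each direction to cover the whole image.
Dim3 makeGrid(int width, int height, Dim3 block) {
    return { (int)std::ceil((float)width / block.x),
             (int)std::ceil((float)height / block.y), 1 };
}

// Column a thread at (blockIdx.x, threadIdx.x) would process; the row
// is computed the same way from the .y components.
int pixelColumn(int blockIdxX, int blockDimX, int threadIdxX) {
    return blockIdxX * blockDimX + threadIdxX;
}
```

For a 100x50 image with 16x16 blocks this yields a 7x4 grid, which covers 112x64 threads; the threads past the right and bottom edges are the ones the boundary guard masks off.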
<h4>The Results</h4>
The first graph compares the execution time of the original version and the parallelized version. There was a considerable speedup: the original seems to have a growth rate that is exponential, while the parallelized version is practically logarithmic. The second graph shows the time spent in the kernel in milliseconds, while also showing the percentage of time spent on the device versus the host. The time spent in the kernel increases with the problem size, as expected. You can see that the time spent on the device also has a logarithmic growth rate; this means that for extremely small problem sizes the parallel version might not give a great speedup. (This is caused by the overhead of the CUDA API calls.)

[[File:A2-Result.PNG]]
=== Assignment 3 ===