CDOT Wiki β


5,463 bytes added, 21:02, 15 February 2019
=== Assignment 1 ===
The majority of time is spent in the compress function, and within it the hashtable accounts for most of the run time because it is constantly being manipulated and read. It looks like parallelizing the hashtable and the compress function would affect about 90% of the run time. The big-O for the application should be O(n), so run time grows linearly with file size; however, the hashtable's share of the time grew faster than the compress function's as the file got larger. This application is not a good candidate for parallelization because of the dictionary hashtable. Since the hashtable (the compress dictionary) needs to be globally accessible and constantly modified and read, multiple threads would pose problems, especially because modifying and reading the table must happen sequentially for efficient compression. Threading the compress function could therefore introduce compression errors, making this program difficult to parallelize.
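As a sketch of why the dictionary forces sequential processing (this is a minimal LZW-style loop written for illustration, not the actual code that was profiled), note that every iteration both reads the dictionary and inserts into it, so the code emitted for later input depends on entries created while handling earlier input:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Minimal LZW-style compressor. Each iteration looks up the current
// phrase AND inserts a new dictionary entry, so codes for later bytes
// depend on entries created for earlier bytes -- the loop cannot be
// split across threads without changing the output.
std::vector<int> lzwCompress(const std::string& input) {
    std::unordered_map<std::string, int> dict;
    for (int i = 0; i < 256; ++i)                    // seed with single bytes
        dict[std::string(1, static_cast<char>(i))] = i;

    std::vector<int> codes;
    std::string w;                                   // longest matched phrase
    int nextCode = 256;
    for (char c : input) {
        std::string wc = w + c;
        if (dict.count(wc)) {
            w = wc;                                  // keep extending the match
        } else {
            codes.push_back(dict[w]);                // emit code for longest match
            dict[wc] = nextCode++;                   // dictionary mutation: the
            w = std::string(1, c);                   // sequential bottleneck
        }
    }
    if (!w.empty()) codes.push_back(dict[w]);
    return codes;
}
```

Compressing "ABABAB" with this sketch yields the codes 65, 66, 256, 256: the third and fourth codes reuse entry 256 ("AB"), which was only added while processing the first two bytes.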
 
 
 
'''Application 2 - Image Processing'''
 
 
'''Description:'''
The code in focus was borrowed from user "cwginac" at DreamInCode:
[https://www.dreamincode.net/forums/topic/76816-image-processing-tutorial/ cwginac - Image Processing Tutorial]
 
I stumbled across this code while searching for a program written in C++ whose purpose is processing images and which can be deployed on Linux without too many library issues. I began my research looking for open-source image processing programs, which led me to a number of libraries and sources (including CLMG). However, those sources were mainly in Java. Concerned that I didn't truly understand the process, I redefined my focus to understanding image processing itself and searched for "image processing tutorials in C++". The DreamInCode website was one of those listed on the front page.
The code uses standard libraries to handle images in the PGM format. It is a fairly straightforward program that takes a number of images as command-line arguments, the first being the image to edit, and provides the user options to process the image, outputting the result to one of the provided image paths. The one thing I noticed the program lacks is a method for converting images to PGM, even though it requires that format. Therefore, for my testing, I took a JPEG image and converted it to PGM using an online tool found here: [https://www.files-conversion.com/image/pgm Dan's Tools - Convert Files]. Providing just the one image as a command-line argument yielded only 4 options, limited to getting/setting pixel values and retrieving other information. Looking through the code I knew there was more to it: the method to actually process an image requires 2 arguments, the first being the original image and the second the output image. With these provided, the program allows the user to rotate, invert/reflect, enlarge, shrink, crop, translate, and negate.
 
The code is found on the site, near the end of the article. To run it, I made a Makefile. The code downloaded from the site is stored as text files, so I renamed them to .cpp and .h files within the Linux environment. Here is the Makefile:
 
#Makefile for A1 - Image Processing
#
GCC_VERSION = 8.2.0
PREFIX = /usr/local/gcc/${GCC_VERSION}/bin/
CC = ${PREFIX}gcc
CPP = ${PREFIX}g++
main: main.o
	$(CPP) -pg -o main main.o
main.o: main.cpp
	$(CPP) -c -O2 -g -pg -std=c++17 main.cpp image.cpp
clean:
	rm *.o
 
From my test, I used a 768 KB image borrowed from the web, enlarged it a couple times, shrank it, rotated it, and negated it. The result was an 18.7 MB image. The time it took to run the program was:
 
real 1m33.427s
user 0m0.431s
sys 0m0.493s
 
The generated FLAT profile of the program revealed:
 
   %   cumulative   self              self     total
  time   seconds   seconds    calls  ms/call  ms/call  name
 34.46      0.21      0.21        7    30.03    30.03  Image::operator=(Image const&)
 26.26      0.37      0.16        6    26.69    26.69  Image::Image(int, int, int)
 13.13      0.45      0.08                             Image::rotateImage(int, Image&)
 11.49      0.52      0.07                             writeImage(char*, Image&)
  9.85      0.58      0.06                             Image::enlargeImage(int, Image&)
  1.64      0.59      0.01                             readImage(char*, Image&)
  1.64      0.60      0.01                             Image::negateImage(Image&)
  1.64      0.61      0.01                             Image::shrinkImage(int, Image&)
  0.00      0.61      0.00        7     0.00     0.00  Image::~Image()
  0.00      0.61      0.00        1     0.00     0.00  _GLOBAL__sub_I__ZN5ImageC2Ev
  0.00      0.61      0.00        1     0.00     0.00  Image::Image(Image const&)
 
 
'''Conclusion:'''
 
The majority of time is spent in the assignment operator and the class constructor, most likely because the image is constantly being manipulated, read from, and copied to and from temporary storage for ease of use and object safety. Other than the basic functions (like read/write), the rotate and enlarge functions take a larger amount of time, which suggests that parallelizing them could positively affect the run time. My estimate of the big-O notation for the rotate function is O(n^2), a quadratic growth rate, whereas the enlarge function is O(n^3) or greater. The rotate function's longer run time could be due to the fact that I enlarged the image before rotating it, but the notations don't lie. Personally, I'd say that this application is not the best candidate for parallelization because of its simplicity in handling the images, but I can definitely see how one or more of the functions in the program could be parallelized. The main issue in making the program parallel is that the image needs to be accessible to every function and, since it is being processed, it is constantly modified and read. In simple terms, if multiple threads were running to speed up the program, unsynchronized computation on the image could lead to processing errors resulting in a corrupted image, distortions, and things of the sort. I may be wrong in this thought, but, to my knowledge, not being able to avoid such issues makes this program somewhat difficult to safely parallelize.
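To make the rotate estimate concrete, here is a sketch of a 90-degree rotation over a hypothetical square, row-major greyscale buffer (this stands in for cwginac's Image class, which I am not reproducing here): the two nested loops visit every pixel once, giving the O(n^2) growth, and because each output row depends on exactly one input column, disjoint row ranges could in principle be handed to separate threads without the writes colliding.

```cpp
#include <vector>

// Hypothetical pixel buffer standing in for the tutorial's Image
// class: a square, row-major greyscale image.
using Gray = std::vector<std::vector<int>>;

// Rotate 90 degrees clockwise. The nested loops touch each of the
// n*n pixels once -- the O(n^2) behaviour estimated for rotateImage.
// Each iteration writes a distinct destination pixel, so the outer
// loop's row ranges could be split across threads safely.
Gray rotate90(const Gray& src) {
    int n = static_cast<int>(src.size());
    Gray dst(n, std::vector<int>(n));
    for (int r = 0; r < n; ++r)
        for (int c = 0; c < n; ++c)
            dst[c][n - 1 - r] = src[r][c];   // row r becomes column n-1-r
    return dst;
}
```

For a 2x2 image {{1,2},{3,4}}, this produces {{3,1},{4,2}}, i.e. the left column becomes the top row.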
=== Assignment 2 ===
=== Assignment 3 ===