=== Assignment 1 ===
Most of the time is spent in the compress function and the hashtable, and the hashtable takes up most of that time because it is constantly being read from and written to. It looks like if the hashtable and the compress function were parallelized, about 90% of the run time would be affected. The big-O for the application should be O(n), so there is a linear increase in time based on file size; however, the hashtable's share of the time grew faster than the compress function's as the file got larger. This application is not a good candidate for parallelization because of the dictionary hashtable. Since the hashtable (the compress dictionary) needs to be globally accessible and constantly modified and read, running multiple threads could pose issues, especially since modifying and reading the table must be done sequentially for efficient compression. Threading the compress function could therefore lead to errors in the compression, making this difficult to parallelize.
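As a rough illustration of that sequential dependency, here is a minimal LZW-style compress loop (a hypothetical sketch, not the profiled application's actual code): each iteration both reads the shared dictionary and may insert into it, and the next iteration depends on the result, so the table cannot simply be split across threads.
<source lang="cpp">
// Hypothetical LZW-style compress loop illustrating the sequential
// dictionary dependency described above (not the profiled app's code).
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

std::vector<int> compress(const std::string& input) {
    std::unordered_map<std::string, int> dict;
    for (int i = 0; i < 256; i++)               // seed with single bytes
        dict[std::string(1, (char)i)] = i;

    std::vector<int> out;
    std::string cur;
    int nextCode = 256;
    for (char c : input) {
        std::string ext = cur + c;
        if (dict.count(ext)) {
            cur = ext;                          // read: extend current match
        } else {
            out.push_back(dict[cur]);           // read: emit code for match
            dict[ext] = nextCode++;             // write: new entry; later
            cur = std::string(1, c);            // iterations depend on it
        }
    }
    if (!cur.empty()) out.push_back(dict[cur]);
    return out;
}

int main() {
    for (int code : compress("abababab")) std::cout << code << ' ';
    std::cout << '\n';
}
</source>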
 
----
'''Application 3 - Sorting Algorithms'''
'''Description:'''
I decided to select an option from the suggested projects – Sorting Algorithms. The sorting algorithms I included were: Bubble, Insertion, Selection, Merge, Heap, and Quicksort. In order to profile all of the sorting algorithms at once and get a better comparison of which algorithm uses the most time, I created a single module with functions that perform the sorting algorithms. I allocated memory for all of the arrays and populated a single array of size ''n'' with randomly generated data. I then copied the memory from the populated array to all of the other arrays to ensure consistency in the comparison of the sorting algorithms. I then passed each array to its respective sorting function, which returned the sorted array using pass-by-reference. One thing to keep in mind is that as ''n'' (the size of the array being sorted) increases, the time increases. I have included 3 different profiles where ''n'' equals 50000, 100000, and lastly 500000.
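A minimal sketch of the harness described above (the names, signatures, and the single O(n^2) example here are assumptions for illustration; see the GitHub repository for the actual code):
<source lang="cpp">
// Sketch of the profiling harness: one master array is filled with random
// data and copied into a work array before each sort, so every algorithm
// receives identical input.
#include <algorithm>
#include <cstdlib>
#include <cstring>
#include <ctime>

// One O(n^2) example; the real module has Bubble, Insertion, Selection,
// Merge, Heap, and Quicksort.
void bubbleSort(int* a, int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) std::swap(a[j], a[j + 1]);
}

int main() {
    const int n = 50000;                 // 50000, 100000, 500000 in the profiles
    int* master = new int[n];
    int* work   = new int[n];

    std::srand((unsigned)std::time(nullptr));
    for (int i = 0; i < n; i++)
        master[i] = std::rand();         // populate once with random data

    std::memcpy(work, master, n * sizeof(int));
    bubbleSort(work, n);                 // array is modified in place

    std::memcpy(work, master, n * sizeof(int));
    std::sort(work, work + n);           // stand-in for the O(n log n) sorts

    delete[] master;
    delete[] work;
    return 0;
}
</source>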
'''Source Code:'''
[[File:SortingAlgorithms.cpp.txt]] The link to the source code AND the makefile can also be found on GitHub at: [https://github.com/Akshat55/Sorting-Algorithms-Comparison github.com/Akshat55/Sorting-Algorithms-Comparison]
'''Conclusion:'''
Based on the given results from profiling 3 different sizes of arrays, we can assume that the majority of the time taken to sort the arrays is spent in the O(n^2) algorithms (Bubble, Insertion, and Selection). However, the other sorting algorithms (Heap, Merge, Quick), which are O(n log(n)), remain extremely fast even when the arrays are large. As observed from the profiles, the elapsed time increased as the size of the array increased: the entire program went from approximately 8 seconds to 13 minutes of execution, which is consistent with the O(n^2) algorithms dominating, since a 10x increase in ''n'' costs them roughly 100x the time.
----
'''Final Selection: Sorting Algorithms'''
Based on the profiled applications above, we think that the sorting algorithms would benefit a lot from offloading to the GPU. It would also be quite interesting to see if we can speed up the O(n^2) sorting algorithms to potentially match the sorting speed of the O(n log n) algorithms, since there won't be much change for the latter as they are already fast.
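One well-known direction for parallelizing an O(n^2) sort (a sketch of a possible approach, not the team's committed plan) is odd-even transposition sort, a bubble-sort variant whose comparisons are reorganized into alternating phases: within one phase every compare-swap touches a disjoint pair, so each pair could map to its own GPU thread. A serial C++ illustration of the phase structure:
<source lang="cpp">
// Odd-even transposition sort: a bubble-sort variant whose phases are
// data-parallel. Within a phase, every compare-swap touches a disjoint
// pair of elements, so each could run on its own GPU thread.
#include <algorithm>
#include <iostream>
#include <vector>

void oddEvenSort(std::vector<int>& a) {
    const int n = (int)a.size();
    for (int phase = 0; phase < n; phase++) {
        // pairs (i, i+1) start at index 0 or 1 depending on phase parity;
        // all iterations of this inner loop are independent of each other
        for (int i = phase % 2; i + 1 < n; i += 2)
            if (a[i] > a[i + 1]) std::swap(a[i], a[i + 1]);
    }
}

int main() {
    std::vector<int> v{5, 2, 9, 1, 7, 3};
    oddEvenSort(v);
    for (int x : v) std::cout << x << ' ';   // prints: 1 2 3 5 7 9
    std::cout << '\n';
}
</source>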
=== Assignment 2 ===
=== Assignment 3 ===