The strange thing is that after these parallelisations the algorithm performed a lot worse. I think this has to do with the granularity of the parallelism: the serial version can solve a 500-city problem around 10 times faster than the parallel version can.
Another approach would have been to parallelise whole nodes instead of the operations inside them, but this is a little difficult because a node can only exist once its parent has been evaluated. It would take some fairly complex node-management logic to decide which nodes to spawn and which not to spawn, when to halt evaluation on certain nodes, and that sort of thing.
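To make the granularity difference concrete, here is a minimal sketch of the two placements of the spawn. The <code>Node</code> type and the helper names are hypothetical, chosen only to show where the Cilk keywords would go; they are not the names from my solution:
<pre>
#include <cilk/cilk.h>
#include <vector>

// Hypothetical node type and helpers, only to show where the spawns would go.
struct Node { std::vector<Node*> children; };
void computeLowerBound(Node*) { /* ... */ }
void computeRemainingCost(Node*) { /* ... */ }
void evaluate(Node*) { /* ... */ }

// Fine-grained (what I did): parallelise the operations inside a single node.
// Each spawned task is tiny, so the spawn/steal overhead swamps the work.
void evaluateNode(Node* n) {
    cilk_spawn computeLowerBound(n);
    computeRemainingCost(n);
    cilk_sync;
}

// Coarse-grained alternative: evaluate the parent first, then spawn whole
// subtrees. This is what needs the node-management logic described above.
void evaluateSubtree(Node* n) {
    evaluate(n);
    for (Node* c : n->children)
        cilk_spawn evaluateSubtree(c);
    cilk_sync;
}
</pre>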
Learn from my mistake and think long and hard about the granularity of what you want to parallelise, because in this case parallelising the individual operations in the Node made my algorithm significantly worse. It took a lot of programmer effort and gave me no reward in terms of speed. I did learn a lot about C++ writing this algorithm, so all was not lost. I left the cilk/cilk_stub.h include in the solution below, because the serial solution was the best working one I found. If you want to try the parallel version, just remove the stub.
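For reference, the stub header simply compiles the Cilk keywords away, so the same source builds as an ordinary serial program. A minimal illustration using the classic fib example (not code from my solution):
<pre>
// With the stub header included before cilk.h, cilk_spawn and cilk_sync
// expand to nothing and cilk_for becomes a plain for, so this runs serially.
#include <cilk/cilk_stub.h>
#include <cilk/cilk.h>
#include <cstdio>

static long fib(long n) {
    if (n < 2) return n;
    long a = cilk_spawn fib(n - 1); // an ordinary call once stubbed
    long b = fib(n - 2);
    cilk_sync;                      // a no-op in the stubbed build
    return a + b;
}

int main() {
    std::printf("%ld\n", fib(30));
    return 0;
}
</pre>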
==Source==
The program takes its input from a text file I generated in the same folder as the source code. The file contains the cities for the problem as two columns of integers and looks like this:
<pre>
8128 4970
8387 3949
9937 6215
4525 1874
1112 4960
4032 8333
</pre>
The first column is read as the x value of the city and the second column is the y value of the city. You can adjust the number of cities to be taken from the file in the source code at the top of the main section.
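For illustration, a minimal sketch of the kind of loader this format implies; the <code>City</code> and <code>loadCities</code> names are mine, not taken from the listing below:
<pre>
#include <cstddef>
#include <fstream>
#include <vector>

struct City { int x, y; };

// Reads up to maxCities "x y" integer pairs from the given file.
std::vector<City> loadCities(const char* path, std::size_t maxCities) {
    std::vector<City> cities;
    std::ifstream in(path);
    City c;
    while (cities.size() < maxCities && in >> c.x >> c.y)
        cities.push_back(c);
    return cities;
}
</pre>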
#include <vector>
#include <cilk/reducer_min.h>
#include <cilk/cilk_stub.h> // stubs the Cilk keywords out: this is the serial build

// A partial selection of cities, with a lower bound on any tour extending it.
struct Selection {
    int lowerBound;
    Selection(int lb) { lowerBound = lb; }
};

// The branch-and-bound search tree of Nodes.
class Tree {
    // ...
};

// Drops nodes whose lower bound already exceeds the best complete tour found.
void pruneNodeList(std::vector<Node>& nodeList, int upperBound) {