Team Hortons

17:13, 29 October 2017
''Results:''
[[File:http://public.hcoelho.com/images/blog/pstl_count_if.png]]
These results look a lot more like what we were expecting.
''Results:''
[[File:http://public.hcoelho.com/images/blog/pstl_for_each.png]]
Even though we had what looked like a race condition, we still got good results with the parallel execution policies for the array and vector. Again, this process could not be vectorized, and this is why the vectorization policy did not do well. For the **list**, we see the same pattern as before: since slicing the collection is costly, it seems that either the compiler did not parallelize it, or the parallel version was just as slow as the serial version.
''Results:''
[[File:http://public.hcoelho.com/images/blog/pstl_reduce.png]]
The results here are very similar to the **for_each** algorithm - it seems that the race condition I introduced with the "*sum*" in the previous test was not really a problem for the algorithm.
''Results:''
[[File:http://public.hcoelho.com/images/blog/pstl_sort.png]]
Probably the most dramatic results so far. Sorting involves data-dependent comparisons and swaps, so it is probably not something that can be vectorized, and it makes sense that the vectorized policies did not yield a good result. The parallel versions, however, showed a very dramatic improvement. It seems that this **std::sort** implementation is really optimised for parallel execution!
''Results:''
[[File:http://public.hcoelho.com/images/blog/pstl_transform.png]]
For the list, what happened here seems to be similar to what happened before: it is too costly to parallelize or vectorize the operations, so the execution policies did not perform well. For the array, we also saw a similar result as before, with the parallel version working better than the other ones.
