GPU621/Threadless Horsemen

== OpenMP vs Julia Code ==
More code here: https://github.com/tsarkarsc/parallel_prog
 
* If you don't need explicit multi-threading, Julia has macros which look similar to OpenMP's parallel constructs
* OpenMP uses multi-threading, whereas Julia's macros use Tasks, which is what Julia calls its coroutines / fibers; a thread-based Julia sketch is also shown after the table for comparison
* The following is a comparison of parallel reduction in OpenMP and Julia; a complete, runnable version of the Julia example follows the table
 
{| class="wikitable"
|-
! OpenMP
! Julia
|-
|
<source lang="cpp">
template <typename T>
T reduce(
    const T* in,   // points to the data set
    int n,         // number of elements in the data set
    T identity     // initial value
) {
    T accum = identity;
    // each thread accumulates into a private copy of accum;
    // the copies are summed at the end of the parallel region
    #pragma omp parallel for reduction(+:accum)
    for (int i = 0; i < n; i++)
        accum += in[i];
    return accum;
}
</source>
|
<source lang="julia">
using Distributed   # the @distributed macro lives in the Distributed standard library

a = randn(1000)
# some_func is a placeholder for the per-element work; (+) tells @distributed
# to sum the results from all workers and return the total
@distributed (+) for i = 1:100000
    some_func(a[rand(1:end)])
end
</source>
|}
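
* To run the Julia example end to end, worker processes have to be started and the work function defined on each of them. A minimal sketch follows, assuming 4 local workers and a hypothetical square function standing in for some_func:

<source lang="julia">
using Distributed
addprocs(4)                  # start 4 local worker processes (assumed count)

@everywhere square(x) = x^2  # define the work function on every worker

a = randn(1000)
# with the (+) reduction operator, @distributed waits for all workers
# and returns the combined result
total = @distributed (+) for i = 1:100000
    square(a[rand(1:end)])
end
</source>

* Without a reduction operator, @distributed returns immediately and the loop runs asynchronously, so @sync is needed to wait for completion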
 
 
* Note: Julia used to have an @parallel macro (e.g. @parallel for), but it was deprecated in favour of @distributed
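
* Julia also supports shared-memory threading, which is closer to OpenMP's model. The sketch below is an assumption (not code from the course repo); it uses Threads.@spawn, which needs Julia 1.3+ and a session started with multiple threads (e.g. JULIA_NUM_THREADS=4):

<source lang="julia">
# split the array into one chunk per thread, sum each chunk on its own
# task, then combine the partial sums on the main thread
function threaded_sum(a)
    chunks = Iterators.partition(a, cld(length(a), Threads.nthreads()))
    tasks = map(chunks) do chunk
        Threads.@spawn sum(chunk)
    end
    return sum(fetch.(tasks))
end

a = randn(1000)
threaded_sum(a)   # ≈ sum(a), up to floating-point rounding order
</source>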
 
https://scs.senecac.on.ca/~gpu621/pages/content/omp_3.html
 
https://docs.julialang.org/en/v1/stdlib/Distributed/#Distributed.@distributed
 
https://docs.julialang.org/en/v1/manual/parallel-computing/index.html
== OpenMP vs Julia Results ==