= Project Name Goes here =
== Team Members ==
# [mailto:bdigiuseppecardosode@senecacollege.ca?subject=gpu610 Bruno Di Giuseppe]
# [mailto:chdeandradesilva@senecacollege.ca?subject=gpu610 Carlos Silva]
[mailto:bdigiuseppecardosode@senecacollege.ca;chdeandradesilva@senecacollege.ca?subject=dps901-gpu610 Email All]
== Progress ==
=== Assignment 1 ===
====Bruno's Findings====
I started off with a simple heat equation solver.
It's interesting because the calculation takes the average of each matrix element's four neighbours (north, south, east and west) and repeats this over and over until the result is precise enough: when the biggest difference between a newly calculated element and its old value is smaller than the defined epsilon error margin.
This way you get a nice heat-dispersion calculation.
I was worried about data dependency since, as I said, the elements depend on each other to be calculated. But this solution uses two matrices, one old and one new: the new matrix receives the averaged values of the old matrix, and if the biggest difference is still larger than epsilon, the old matrix receives the values of the new matrix and the whole iteration happens again, with the new matrix once more receiving the averages of the old matrix, which now holds the most recent values.
So this is a good candidate for parallelization: each element's average can be computed by a different GPU thread, and since it is a simple average calculation the GPU will handle it well.
Running with a 1000x1000 matrix and an epsilon error margin of 0.001, the program took almost 5 minutes to run completely, and 99% of that time was spent in the getHeat() method (the heat calculation core).
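The two-matrix averaging scheme can be sketched roughly as below. This is only an illustrative CPU version under my own naming (relax() and its grid layout are assumptions, not the original getHeat() code): each interior cell of the new grid gets the average of its four neighbours in the old grid, and the sweep repeats until the largest per-cell change drops below epsilon.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Jacobi-style heat relaxation on an n x n grid: every interior cell
// becomes the average of its north, south, east and west neighbours,
// and the whole sweep repeats until the biggest change in one sweep
// is smaller than epsilon. Boundary cells are left untouched.
double relax(std::vector<std::vector<double>>& grid, double epsilon) {
    const std::size_t n = grid.size();
    std::vector<std::vector<double>> next = grid;   // "new" matrix
    double maxDiff;
    do {
        maxDiff = 0.0;
        for (std::size_t i = 1; i + 1 < n; ++i) {
            for (std::size_t j = 1; j + 1 < n; ++j) {
                next[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                     grid[i][j - 1] + grid[i][j + 1]);
                maxDiff = std::max(maxDiff,
                                   std::fabs(next[i][j] - grid[i][j]));
            }
        }
        grid.swap(next);   // old matrix takes the new values for the next sweep
    } while (maxDiff > epsilon);
    return maxDiff;
}
```

Because each cell of the new matrix is computed only from the old matrix, the two inner loops have no dependencies between iterations, which is exactly why one GPU thread per cell works.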
=== Assignment 2 ===
=== Assignment 3 ===